
Potential improvements for the integrated test infra #5527

Closed

attekei opened this issue Jul 20, 2017 · 5 comments

attekei commented Jul 20, 2017

I thought it would be good to collect the current needs and desires into one issue. The idea is that instead of one monolithic improvement PR, these can be implemented incrementally as part of fixing bugs and developing features.

Integrated test API

Helpers for making the creation of testable dashboards easy

See #5527 (comment)

Helper function for covering all click scenarios at once

Implemented in #5732

I often find that I waste time because I forget that clicking router links or submitting forms doesn't work with the normal simulate("click").

We could implement a click(enzymeWrapper) helper function that abstracts away the differences between different kinds of clicks. It should cover normal clicks, clicks on router links, and form submissions.

Ideally it would also warn if the current enzyme-wrapped element doesn’t seem clickable but that might be a little risky (i.e. it might give false negatives which would not be cool).
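
A minimal sketch of what such a helper could look like (illustrative only; the actual helper landed in #5732 and may differ):

    // click.js – hypothetical sketch, not the #5732 implementation
    export function click(wrapper) {
        const target = wrapper.first();
        if (target.closest("a").exists()) {
            // react-router links ignore simulated clicks unless a left
            // mouse button is indicated
            target.simulate("click", { button: 0 });
        } else if (target.is('[type="submit"]')) {
            // form submission needs a "submit" event on the <form>,
            // not a "click" on the button
            target.closest("form").simulate("submit");
        } else {
            target.simulate("click");
        }
    }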

Helper function for changing field values

Implemented in #5732

Basically just a convenience shorthand for simulate('change', { target: { value } }). It could additionally check that the field value actually changed (I think this can pretty easily be done for plain input elements).
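
A sketch of that shorthand (the helper name and the verification step are illustrative, not the actual #5732 code):

    // setInputValue.js – hypothetical sketch
    export function setInputValue(inputWrapper, value) {
        inputWrapper.simulate("change", { target: { value } });
        // for plain <input> elements we can verify the change took effect
        const node = inputWrapper.getDOMNode();
        if (node instanceof HTMLInputElement && node.value !== String(value)) {
            throw new Error(`Field value is "${node.value}", expected "${value}"`);
        }
    }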

Ability to dismiss popovers, modals etc

This need appeared for the first time when working on the remapping admin page.

Currently it is impossible because OnClickOutsideWrapper uses document.addEventListener and window.addEventListener instead of React's synthetic event system.

I think the easiest fix is to mock document.addEventListener and window.addEventListener as suggested in "window.addEventListener not triggered by simulated events" (airbnb/enzyme#426).

Helpful formatting and coloring to error messages

Implemented in #5732

This is more of a micro-optimization. For instance the store.waitForActions and HTTP query error messages could use line breaks and coloring to make them easier to scan.
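
For illustration, a hypothetical formatter using the chalk package (the function name and message shape are assumptions):

    import chalk from "chalk";

    // split a waitForActions timeout into colored, scannable lines
    function formatWaitForActionsError(expectedActions, dispatchedActions) {
        return [
            chalk.red("store.waitForActions timed out."),
            chalk.bold("Expected actions:"),
            ...expectedActions.map((type) => `  • ${type}`),
            chalk.bold("Dispatched actions:"),
            ...dispatchedActions.map((action) => `  • ${action.type}`),
        ].join("\n");
    }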

Stuff that would be nice but maybe too hard/complicated

Using real popovers, tooltips and modals instead of test mocks

Those elements render their content to document.body using React's DOM API, which means they are outside the reach of Enzyme. I think we should somehow maintain a reference to those rendered React component instances inside Popover, Tooltip etc. and make them available to Enzyme. I currently have no clue how exactly to do that.

We would also need to attach the component instance to the JSDom document body in every Enzyme mount using attachTo (by default mount(...) doesn't touch the global DOM):

    mount(<Component />, { attachTo: document.body })
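
One conceivable shape for the reference-keeping, sketched as a module-level registry that Popover and Tooltip would write to (entirely hypothetical, not an implemented API):

    // popover-registry.js – hypothetical sketch
    const mountedPopovers = new Set();

    // Popover/Tooltip would call these from componentDidMount/componentWillUnmount
    export function registerPopover(element) {
        mountedPopovers.add(element);
    }
    export function unregisterPopover(element) {
        mountedPopovers.delete(element);
    }

    // tests could then inspect body-mounted content directly
    export function getMountedPopovers() {
        return Array.from(mountedPopovers);
    }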

attekei commented Jul 24, 2017

I feel that writing integrated tests for dashboards is currently hard, and there is a strong pull for making it easy.

Stuff that could be retroactively covered by integrated dashboard tests includes, for instance:

I could just start by writing small helpers for stuff like programmatically creating a dashboard and adding dashcards and filters to it (see the sketch at the end of this comment). Maybe there should also be some kind of monolithic "default dashboard" that can be used across multiple tests (although each test suite should save and remove it in isolation).

Some of these helpers could later be replaced by dashboard-related classes in metabase-lib. The work done on the helpers could also help in designing the interface of those classes.
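
A rough sketch of what the first helpers could look like, hitting the REST API directly (the helper names are made up and the endpoint shapes are assumptions based on the current API):

    // dashboard-helpers.js – hypothetical sketch
    export async function createTestDashboard(name = "Test Dashboard") {
        const response = await fetch("/api/dashboard", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ name }),
        });
        return response.json();
    }

    export async function addDashcard(dashboard, cardId) {
        const response = await fetch(`/api/dashboard/${dashboard.id}/cards`, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ cardId }),
        });
        return response.json();
    }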


attekei commented Aug 9, 2017

Wanted to capture my current thoughts regarding our frontend testing story and felt that it's most convenient to put them in this issue for the time being:

Stuff that I'm happy about

  • Some of the basic operations of the query builder (creating a question, filtering, adding an aggregation, adding a breakout, using drill-through in tables, using the action widget) are currently covered by simple behavioral tests, which is cool.
  • A big part of the remapping feature – both the admin side and the query builder – is covered by behavioral tests. The same applies to binning. This level of coverage could and should be the new normal for all future features.

Stuff I'm a little scared about

  • Visualization settings are complex, and many knobs are hidden behind multiple clicks, which makes them an easy target for oversights when modifying visualization logic.

Stuff I'm really scared about

Addressed in #5489

If you ask me, these should definitely be addressed already in 0.26.

  • Our dashboards have practically zero test coverage although they are maybe our most central user-facing feature. As a first step, I can convert the old (currently disabled) E2E tests to Jest/Enzyme tests. In addition, dashboard filters should have proper coverage (field values & their search, filters with different attached field combinations).

  • Public questions and dashboards have zero coverage too. We have some old disabled E2E tests that I'm able to convert to Jest/Enzyme. We should properly test how the shared stuff looks to anonymous users. Also, testing embedding with an actual simulated embedded app (in NodeJS, obviously!) would be really helpful – see the sketch after this list.

  • Parametrized SQL queries have practically zero test coverage although we regularly encounter new regressions related to them – both in the query builder and in dashboard filters.
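
For the simulated embedded app, a minimal sketch assuming the jsonwebtoken package and Metabase's signed-embedding URL scheme (the site URL and the secret-key variable are placeholders):

    // embed-app.js – hypothetical sketch of a NodeJS embedding app
    const jwt = require("jsonwebtoken");

    const METABASE_SITE_URL = "http://localhost:3000";        // placeholder
    const METABASE_SECRET_KEY = process.env.MB_EMBED_SECRET;  // placeholder

    // build the signed URL an embedding app would put in an iframe
    function embeddedDashboardUrl(dashboardId, params = {}) {
        const token = jwt.sign(
            { resource: { dashboard: dashboardId }, params },
            METABASE_SECRET_KEY
        );
        return `${METABASE_SITE_URL}/embed/dashboard/${token}`;
    }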


attekei commented Aug 24, 2017

These were addressed in #5834

What I encounter way too often is that Jest crashes after the tests have finished and the JSDom environment has been cleared. The error is usually something like this:

    /Users/atte/dev/mb/metabase/node_modules/jsdom/lib/jsdom/browser/Window.js:148
          return idlUtils.wrapperForImpl(idlUtils.implForWrapper(window._document)._location);

I found a similar issue here: https://stackoverflow.com/questions/44075567/cannot-read-property-location-of-null-when-using-react-apollo-in-a-jest-test

I don't know of any other fix than adding a long delay to the end of the test, which makes sure that no pending JavaScript is executing anymore:

    // delay isn't a Jest built-in; something like
    // const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
    afterAll(async () => {
        await delay(2000);
    });

But that is not a flexible fix. The issue is distracting because currently the integration test script quits as well when Jest quits, and you have to run yarn run test-integrated again, which spins up the backends from scratch.

I think that the integrated test runner should say something like:

    Jest process exited. Press [ctrl-c] to quit the integrated test runner or [Enter] to restart Jest.
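
A sketch of how the runner could do that with Node's readline module (purely illustrative; the actual runner script may be structured differently):

    const readline = require("readline");

    // after the Jest child process exits, wait for input instead of quitting
    function promptRestart(restartJest, quitRunner) {
        console.log(
            "Jest process exited. Press [ctrl-c] to quit the integrated test runner or [Enter] to restart Jest."
        );
        const rl = readline.createInterface({ input: process.stdin });
        rl.once("line", () => { rl.close(); restartJest(); });   // [Enter]
        rl.once("SIGINT", () => { rl.close(); quitRunner(); });  // [ctrl-c]
    }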


An unrelated note: when you run yarn run test-integrated, the script should tell you to run ./bin/build version uberjar first.


attekei commented Sep 19, 2017

@senior in Slack about CI failures related to integration tests (in container 6):

"I've been seeing occasional failures there, but the output is too big to be viewed in the Circle UI, so I usually just restart it without looking at what the real problem is."

That is something we have talked about earlier with @kdoh as well. Currently we mix the server logs into the Jest output, and the whole test suite produces a TON of server logs. That makes debugging test failures in CircleCI inconvenient.

We should maybe show only the Jest output in stdout and create a test result artefact file containing the current mixture of backend and test runner logs.
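
A sketch of how the test runner could split the streams (the file path and the spawn command are assumptions):

    const fs = require("fs");
    const { spawn } = require("child_process");

    // send backend output to an artefact file instead of mixing it into stdout
    fs.mkdirSync("test-logs", { recursive: true });
    const backendLog = fs.createWriteStream("test-logs/backend.log", { flags: "a" });

    const backend = spawn("java", ["-jar", "target/uberjar/metabase.jar"]);
    backend.stdout.pipe(backendLog);
    backend.stderr.pipe(backendLog);
    // Jest then owns stdout, and CI can store test-logs/ as an artefact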


flamber commented Nov 8, 2021

Closing, as Enzyme was removed in #13657.

flamber closed this as completed Nov 8, 2021