feat: add tests for happy paths in critical workflows #1110
Conversation
The following dependencies were installed:
- `msw` to mock backend API responses
- `@testing-library/react` for idiomatic React testing APIs
- `@testing-library/jest-dom` for idiomatic DOM assertion APIs
- `@testing-library/user-event` for idiomatic DOM user event APIs
- `jest-canvas-mock` to mock Canvas for Lottie, as jsdom doesn't implement it
- `@peculiar/webcrypto`, as jsdom doesn't implement SubtleCrypto

Also install the necessary TypeScript definitions.
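A minimal sketch of how these could be wired together in a Jest setup file, assuming Create React App's `src/setupTests.ts` convention (the repo's actual wiring may differ):

```ts
// src/setupTests.ts — a sketch, not the repo's actual setup file.
import '@testing-library/jest-dom' // DOM assertion matchers, e.g. toBeInTheDocument()
import 'jest-canvas-mock' // stubs the Canvas API that Lottie animations need
import { Crypto } from '@peculiar/webcrypto'

// jsdom doesn't implement SubtleCrypto, so substitute a pure-JS implementation.
Object.defineProperty(window, 'crypto', { value: new Crypto() })
```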
The purpose is to pass in the testMode option during tests. Note: I've also moved the declaration of initialChecks() into the useEffect() call where it is used, mainly to silence the React Hooks ESLint warning about missing dependencies in the useEffect() dependency list.
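A sketch of the pattern described; the `testMode` prop and `initialChecks()` names come from the commit message, but their bodies here are assumptions:

```tsx
import React, { useEffect, useState } from 'react'

const App = ({ testMode = false }: { testMode?: boolean }) => {
  const [ready, setReady] = useState(false)

  useEffect(() => {
    // Declared inside the effect, so the dependency list below stays complete
    // without tripping the react-hooks/exhaustive-deps lint rule.
    async function initialChecks() {
      if (!testMode) {
        // ...perform real startup checks against the backend here...
      }
      setReady(true)
    }
    initialChecks()
  }, [testMode])

  return ready ? <div>app</div> : null
}

export default App
```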
This allows us to specify the initial modal content when setting up test environments.
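A hypothetical sketch of such a provider; the context and prop names here are illustrative, not the repo's actual ones:

```tsx
import React, { createContext, ReactNode, useState } from 'react'

export const ModalContext = createContext<{
  modalContent: ReactNode
  setModalContent: (content: ReactNode) => void
}>({ modalContent: null, setModalContent: () => {} })

export const ModalContextProvider = ({
  initialModalContent = null, // tests can start with a modal already open
  children,
}: {
  initialModalContent?: ReactNode
  children: ReactNode
}) => {
  const [modalContent, setModalContent] = useState<ReactNode>(initialModalContent)
  return (
    <ModalContext.Provider value={{ modalContent, setModalContent }}>
      {children}
    </ModalContext.Provider>
  )
}
```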
We don't care about these environment variables in tests. Also, setting the axios baseURL for tests will result in flakiness in tests that involve API responses.
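A minimal sketch of the conditional this implies: leave the baseURL unset under test so msw can intercept relative request paths deterministically. The environment variable name is an assumption:

```ts
import axios from 'axios'

// Only configure a real backend URL outside of tests.
if (process.env.NODE_ENV !== 'test') {
  axios.defaults.baseURL = process.env.REACT_APP_BACKEND_URL
}
```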
This is mainly an issue in test environments: React treats 'muted' as a property and sets it after the test environment has been torn down, resulting in console warnings [1]. Fix this by mocking the setter property to do nothing, which poses no issue in test environments since no test depends on the muted behaviour. [1]: testing-library/react-testing-library#470
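A sketch of the workaround along the lines of the linked issue, placed in the Jest setup file:

```ts
// Turn the 'muted' setter into a no-op so React's late property write
// after teardown can't trigger a console warning.
jest
  .spyOn(window.HTMLMediaElement.prototype, 'muted', 'set')
  .mockImplementation(() => {})
```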
Make sure to set CI=true explicitly, as Amplify doesn't set it by default. With CI=true, all the tests run exactly once and then exit. Also compile translations before running the tests.
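An illustrative `amplify.yml` sketch of this setup; the repo's actual build spec and npm script names (`compile`, `test`, `build`) are assumptions:

```yaml
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - export CI=true   # Amplify doesn't set CI; this makes the tests run once and exit
        - npm run compile  # compile translations before the test run
        - npm test
        - npm run build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
```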
We want to render App.tsx in a test environment with separate providers. Hence, move the providers to index.tsx. Also move the `import 'locales'` statement to the top of index.tsx.
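A sketch of the resulting `index.tsx` layout; the provider names and import paths are placeholders for whichever contexts the app actually uses:

```tsx
// src/index.tsx — locales imported first, providers living here rather than
// inside App.tsx, so tests can render <App /> with their own providers.
import 'locales'

import React from 'react'
import ReactDOM from 'react-dom'

import App from './App'
import { AuthContextProvider } from './contexts/auth.context' // hypothetical path
import { ModalContextProvider } from './contexts/modal.context' // hypothetical path

ReactDOM.render(
  <AuthContextProvider>
    <ModalContextProvider>
      <App />
    </ModalContextProvider>
  </AuthContextProvider>,
  document.getElementById('root')
)
```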
Since the Dashboard component is actually a collection of routes, I was thinking the campaign creation tests could fall under the respective channel Create components, e.g. EmailCreate or any other component specific to the channel.
Yup, we could do that. However, one issue is that there is logic in the common […] component. Specifically for these tests, I thought that it would be better if they were complete integration tests, which include all the components used in the campaign creation workflow. Hence, I thought it would be better if we also included the […]. Going along the same line of thought, I also considered placing these tests in the […].
Yup, I think these tests are integration tests rather than individual component unit tests. Perhaps you can create a new folder to house integration tests similar to this?
Sounds good. I guess the main goal is to make the intent of the tests clearer. I'm not very sure what would be a good way to do this though. My initial thought would be to split the tests up into smaller files like this: […]

I feel like it is quite an inadequate solution though. How would you suggest we do this?
1. Rename `templates` to `{email|sms|telegram}_templates` to match the actual API response
2. Convert `params` to an array to match the actual API
3. Add `has_credential` to match the actual API
4. Add a LOGGED job to the `job_queue` once a campaign is sent to better mimic the API
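A sketch of mock handlers shaped by these four points, using msw's pre-1.0 `rest` API; the routes and field shapes are inferences from this comment, not the repo's actual API contract:

```ts
import { rest } from 'msw'
import { setupServer } from 'msw/node'

const jobQueue: Array<{ status: string }> = []

export const server = setupServer(
  // 1–3: template response renamed, params as an array, has_credential added
  rest.get('/templates/email', (_req, res, ctx) =>
    res(
      ctx.json({
        email_templates: [{ id: 1, body: 'Hello {{name}}', params: ['name'] }],
        has_credential: true,
      })
    )
  ),
  // 4: sending a campaign enqueues a LOGGED job
  rest.post('/campaign/:campaignId/send', (_req, res, ctx) => {
    jobQueue.push({ status: 'LOGGED' })
    return res(ctx.status(200), ctx.json({ campaign_id: 1 }))
  })
)
```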
I think it's fine to place it as you suggested for now; I can't think of a strong reason to place it elsewhere. We can always move around our tests as needed in the future!
I've split the Dashboard integration tests into separate files.
Thanks for the review!
* develop:
  - refactor: use shared function to initialize models (#1172)
  - chore: setup scaffolding for backend tests (#940)
  - 1.23.0
  - fix(frontend): fix frontend test flakiness (#1162)
  - fix: update successful delivery status only if error does not exist (#1150)
  - chore: upgrade dependencies (#1153)
  - feat: add unit tests for error states in critical workflows (#1118)
  - feat: support whitelisting domains through `agencies` table (#1141)
  - feat: add tests for happy paths in critical workflows (#1110)
  - fix: prevent campaign names from causing dashboard rows to overflow (#1147)
  - fix(email): Fix SendGrid fallback integration (#1026)

* develop:
  - feat: refactor msg template components; add telegram character limit (#1148)
  - refactor: use shared function to initialize models (#1172)
  - chore: setup scaffolding for backend tests (#940)
  - 1.23.0
  - fix(frontend): fix frontend test flakiness (#1162)
  - fix: update successful delivery status only if error does not exist (#1150)
  - chore: upgrade dependencies (#1153)
  - feat: add unit tests for error states in critical workflows (#1118)
  - feat: support whitelisting domains through `agencies` table (#1141)
  - feat: add tests for happy paths in critical workflows (#1110)
  - fix: prevent campaign names from causing dashboard rows to overflow (#1147)
Problem
The following critical workflows are not tested. If one of them breaks, it would be difficult to notice without going through the tedium of testing each one manually. Hence, let's add automated tests for them.
Workflows:
Partially resolves issue #1079
Solution
Features:
Before & After Screenshots
There are no visual changes.
Tests
These tests have been verified on Travis and Amplify.
Deploy Notes
The aforementioned tests have been added and should be run during every deployment on Travis and Amplify. If any test fails, the deployment should not go through.
Specifically, Amplify should not publish the new build if any test fails.