
fix e2e tests with next.js #4313

Closed
wants to merge 17 commits

Conversation

ryanjduffy
Contributor

No description provided.

@vercel

vercel bot commented Nov 5, 2021

This pull request is being automatically deployed with Vercel (learn more).
To see the status of your deployment, click below or on the icon next to each commit.

🔍 Inspect: https://vercel.com/recordreplay/devtools/3493jhYhj4EwXjg7Fs8MR7LygSHX
✅ Preview: https://devtools-git-fork-ryanjduffy-ryan-fix-e2e-t-972586-recordreplay.vercel.app

@ryanjduffy ryanjduffy closed this Nov 5, 2021
@ryanjduffy ryanjduffy deleted the ryan/fix-e2e-tests-next branch November 5, 2021 16:55
jcmorrow pushed a commit to jcmorrow/devtools that referenced this pull request Nov 9, 2021
- In order to run either the mock tests or the end-to-end tests, we need a dev server running in CI. Previously, we spawned it manually from inside a script; now, the action that does the testing starts it. This means that if you want to run the tests locally, you'll also need your local dev server running (which was already true).
- Add necessary parts of the `test` folder to public. This means that we will be able to run against preview branches as well.
- Add a GitHub Action for setting up Next.js with a cache, so we only have to copy one line everywhere instead of an entire step. It would be nice if we could further refactor out the test stripes, because a lot of things are repeated 9 times.
- Remove the `dev-server.js` file. If we're going to use Next.js, then we should try to keep our config as out-of-the-box as possible, in order to allow us to stay up-to-date. The fact that the `test` folder has to be available on the development server for the tests to run at all seems like the real problem here. A good follow-on task would be to sort out the best way to handle those files in the test/dev environment.
- Go back to Node 14 for the lock file. I didn't realize when I changed this that we were stuck on such an outdated Node version. We should fix this ASAP, but until then, let's stick with what was working, just to minimize the factors that have gone into resolving all of the failing tests.
- Add a `waitingFor` field to the `waitUntil` method used in tests. It's really frustrating when a test fails and all it says is `timed out`. Like... timed out while waiting for *what*? Test failures should guide you to the place of failure as quickly as possible.
- By default, wait for 10 seconds. If something doesn't appear in the DOM in 10 seconds, that should be considered a failure.
- Log more from Playwright tests. This could be annoying, and maybe we should tune it, but it's nice to know what's going on in these tests, and we're only going to look at the output when something is failing anyway, so it may as well be verbose.
- Log the time that tests take. We started with a lot of timeouts. I fixed them, but sometimes it was hard to see what tests were timing out. Now, when a test completes, we emit some test stats. Eventually, it might be nice to just like... use a real test runner... but this is fine for now.
- Mock test URLs like http://localhost:8080/recording/24c0bc00-fdad-4e5e-b2ac-23193ba3ac6b?mock=1 show a gray screen after a few seconds. If you remove the `mock` query param then the screen shows up totally normally. I'm not sure of the exact interactions here. Let's figure this out in a follow-on task.
- Add `test/run` and `test/mock/run` scripts for running the test runners locally, taking care of making sure that the files get shuffled around correctly before and after, that the dev server is running, etc.
- Add README instructions about `.env` files and new test run scripts.
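Two of the bullets above (the `waitingFor` message on `waitUntil`, and the per-test timing stats) can be sketched roughly like this. The helper names and signatures here are illustrative, not the PR's actual code:

```typescript
// Hypothetical sketch of the `waitUntil` helper: a `waitingFor` message
// and a 10-second default timeout, so a failure says *what* it was
// waiting for instead of just "timed out".
async function waitUntil(
  predicate: () => boolean | Promise<boolean>,
  options: { waitingFor: string; timeout?: number; interval?: number }
): Promise<void> {
  const { waitingFor, timeout = 10_000, interval = 100 } = options;
  const start = Date.now();
  while (Date.now() - start < timeout) {
    if (await predicate()) return;
    await new Promise(resolve => setTimeout(resolve, interval));
  }
  throw new Error(`Timed out after ${timeout}ms waiting for ${waitingFor}`);
}

// Hypothetical wrapper that emits simple duration stats when a test
// completes, so slow or timing-out tests are easy to spot in CI logs.
async function timed(name: string, body: () => Promise<void>): Promise<void> {
  const start = Date.now();
  try {
    await body();
  } finally {
    console.log(`${name} finished in ${Date.now() - start}ms`);
  }
}
```

A call site might then look like `await waitUntil(() => !!document.querySelector("#app"), { waitingFor: "#app to appear" })`, where the selector is made up for illustration.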

That's true! We still have flaky, intermittent tests. The goal of this PR was to get things to *run*; getting them to pass is a future-us problem!

I'm going to put these into a Notion doc as well, but documenting here couldn't hurt. A few things that tripped me up on this PR:

- *any* merge conflicts mean actions *will not run at all*. If you have a branch and it's unclear why actions aren't kicking off, make sure you check for and resolve merge conflicts.
- When using actions of type `javascript`, your `inputs` get automatically turned into environment variables, which is convenient. However, **when using `composite` actions this won't happen**. You have to manually pass the environment variables in with an [expression](https://docs.github.com/en/actions/learn-github-actions/expressions).
- The `checkout` action that we use all the time performs a strict `clean` of the directory that it's about to clone into! So, if you are trying to do some sort of caching or otherwise unloading assets into a dir that you also need to do `git checkout XXXX` in (with the checkout action), you either need to checkout *first*, or you can use [`clean: false`](https://github.com/actions/checkout#usage), I think, though I haven't actually tried this.
- Actions are somewhat limited. More details here: actions/runner#646
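As a sketch of the composite-action and checkout gotchas above, here is a hypothetical workflow fragment; the action path and input name are made up for illustration:

```yaml
# .github/actions/setup-next/action.yml (hypothetical composite action)
inputs:
  node-version:
    required: true
runs:
  using: composite
  steps:
    - run: echo "Using Node $NODE_VERSION"
      shell: bash
      env:
        # Inputs are NOT auto-exported as env vars for composite actions;
        # they must be forwarded explicitly with an expression.
        NODE_VERSION: ${{ inputs.node-version }}

# And in a workflow, skipping checkout's strict clean of the target dir
# (untested, per the note above):
# - uses: actions/checkout@v2
#   with:
#     clean: false
```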

Fixes replayio#4313
jcmorrow pushed a commit to jcmorrow/devtools that referenced this pull request Nov 9, 2021
- In order to run either the mock tests or the end-to-end tests, we need
  a dev server running in CI. Previously, we spawned it manually from
  inside a script; now, the action that does the testing starts it. This
  means that if you want to run the tests locally, you'll also need your
  local dev server running (which was already true).
- Add the `test/examples` and `test/scripts` folder to public. This
  means that we will be able to run against preview branches as well.
- Add a GitHub Action for setting up Next.js with a cache, so we only
  have to copy one line everywhere instead of an entire step. It would
  be nice if we could further refactor out the test stripes, because a
  lot of things are repeated 9 times.
- Remove the `dev-server.js` file. If we're going to use Next.js, then
  we should try to keep our config as out-of-the-box as possible, in
  order to allow us to stay up-to-date. The fact that the `test` folder
  has to be available on the development server for the tests to run at
  all seems like the real problem here. A good follow-on task would be
  to sort out the best way to handle those files in the test/dev
  environment.
- Go back to Node 14 for the lock file. I didn't realize when I changed
  this that we were stuck on such an outdated Node version. We should fix
  this ASAP, but until then, let's stick with what was working, just to
  minimize the factors that have gone into resolving all of the failing
  tests.
- Log more from Playwright tests. This could be annoying, and maybe we
  should tune it, but it's nice to know what's going on in these tests,
  and we're only going to look at the output when something is failing
  anyway, so it may as well be verbose.
- Disable the worst of the failing tests.

That's true! We still have flaky, intermittent tests. The goal of this
PR was to get things to *run*; getting them to pass is a future-us
problem! I have been unable to get the mock tests to work to any extent
on CI, although they do pass intermittently locally.

There are also tests (it doesn't seem to be consistent which ones) where
the browser starts up and just never seems to navigate to the page
properly. I haven't been able to figure out where it's getting stuck,
but when tests time out now, it's usually for this reason.

I'm going to put these into a Notion doc as well, but documenting here
couldn't hurt. Three things that tripped me up on this PR:

- *any* merge conflicts mean actions *will not run at all*. If you have
  a branch and it's unclear why actions aren't kicking off, make sure
  you check for and resolve merge conflicts.
- When using actions of type `javascript`, your `inputs` get
  automatically turned into environment variables, which is convenient.
  However **when using `composite` actions this won't happen**. You have
  to manually pass the environment variables in with an
  [expression](https://docs.github.com/en/actions/learn-github-actions/expressions).
- The `checkout` action that we use all the time performs a strict
  `clean` of the directory that it's about to clone into! So, if you are
  trying to do some sort of caching or otherwise unloading assets into a
  dir that you also need to do `git checkout XXXX` in (with the checkout
  action), you either need to checkout *first*, or you can use
  [`clean: false`](https://github.com/actions/checkout#usage), I think,
  though I haven't actually tried this.

Fixes replayio#4313