Store e2e test artifacts between test runs to catch regressions #1164
-
I like the idea. @holzeis and I brainstormed about this as well. A few thoughts we had are:
-
I guess there is a semantic difference between a cache and storing test data to prevent regressions. Caching, imo, is purely about speeding something up, i.e. it is transparent and doesn't affect the outcome of the tests at all. Storing test data has its own application in regression testing (almost like guideline testing or snapshot testing with known-difficult snapshots). Hence, it tests something distinct.
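To make that distinction concrete, here is a minimal sketch of what a regression test over stored databases could look like, assuming the stored files are SQLite databases and using the rusqlite crate purely for illustration (the `e2e_artifacts` directory name is made up):

```rust
// Regression test sketch: iterate over databases stored from previous
// e2e runs and assert that the current code can still open and read them.
// NOTE: the `e2e_artifacts/` path and the SQLite/rusqlite assumption are illustrative only.
use std::fs;

use rusqlite::Connection;

#[test]
fn previously_stored_databases_are_still_readable() {
    let artifact_dir = "e2e_artifacts"; // hypothetical location of stored runs

    for entry in fs::read_dir(artifact_dir).expect("artifact dir exists") {
        let path = entry.expect("readable dir entry").path();
        if path.extension().and_then(|e| e.to_str()) != Some("sqlite") {
            continue;
        }

        // Opening the old database with the current code is the actual test:
        // a schema or serialization change that breaks old data fails here.
        let conn = Connection::open(&path).expect("old database still opens");

        let integrity: String = conn
            .query_row("PRAGMA integrity_check", [], |row| row.get(0))
            .expect("integrity check runs");
        assert_eq!(integrity, "ok", "stored database {path:?} is unreadable");
    }
}
```

Unlike a cache, deleting these files changes what actually gets tested, which is exactly the "tests something distinct" point.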
-
A few questions / comments that arose from our discussion; hopefully we can get more aligned on this @bonomat :) Please feel free to respond / clarify / add more requirements!
We can currently test these kinds of regressions locally, if you run
-
I've been thinking about how we could solve the following problems that we sometimes encounter:
I realised that there might be quite a simple solution for this: storing the databases from e2e test runs.
This way we would know whether we're about to introduce a regression in maker, app, or coordinator.
An unsolved problem is how to deal with the case where we decide that it's OK to break things - perhaps manually clearing the action cache via the CLI or web interface would be enough in such a situation.
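To illustrate the storing half, a rough sketch of how an e2e test teardown could collect the databases into a per-run directory that CI then persists and restores at the start of the next run (feeding it into a regression test like the one sketched above). All paths, file names, and the helper itself are assumptions, not existing code in the repo:

```rust
// Sketch of collecting databases after an e2e run so CI can persist them.
// All paths, directory names, and file names here are hypothetical.
use std::fs;
use std::path::Path;

fn store_e2e_databases(run_id: &str) -> std::io::Result<()> {
    // Databases written by the services during the e2e run (assumed locations).
    let databases = [
        ("coordinator", "data/coordinator/coordinator.sqlite"),
        ("maker", "data/maker/maker.sqlite"),
        ("app", "data/app/app.sqlite"),
    ];

    // One sub-directory per run (e.g. the git SHA), so old runs accumulate
    // and later runs can use them as regression fixtures.
    let target = Path::new("e2e_artifacts").join(run_id);
    fs::create_dir_all(&target)?;

    for (name, source) in databases {
        if Path::new(source).exists() {
            fs::copy(source, target.join(format!("{name}.sqlite")))?;
        }
    }
    Ok(())
}
```

Deliberately breaking changes would then be handled by clearing that stored artifact set, as suggested above.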