Ability to run spec files in a specific order #390
Comments
No, you shouldn't need to run tests in any specific order. That is usually indicative of a testing anti-pattern, whereby you are relying on state being built up between all the tests. Can you explain your situation / use case?
For example: before my other tests, I need to create a user. But what I really want is https://docs.cypress.io/docs/configuration#section-global
Thanks @brian-mann!
Ordering tests could be convenient if you want to run the often-failing tests first. It saves you some time in CI.
@brian-mann I have a use case for why it is really important to be able to run tests in order. We are using Cypress for functional live tests. When running live tests we are dealing with real customer data that is somewhat dependent on state; in order to run our tests we need to hit real endpoints and perform actual CRUD operations. If we cannot control the order in which test packages are called, the tests will fail, since clearing out and creating data in order is a big part of live testing. I believe this is somewhat related to this issue: #263
Right now we are resorting to labelling our test folders
As @digitaldavenyc points out, full-blown web apps often test complex system interactions. Should every test be able to run independently and in a vacuum? Sure, in theory. Real-world and theoretical science do not always get along. It may be an anti-pattern, but coloring 100% within "the pattern" lines means my test scripts take 27 hours to complete instead of 27 minutes.

Use case: my app allows users to add locations to a data set on the back end. On the front end it renders a map of locations. Front-end test: make sure 10 locations are returned from Charleston, SC. This is heavily dependent on a data set that includes two primary elements: location data plus user-set options such as "default radius to be searched".

I could write the test to first pull the options data (default: 5-mile radius), THEN pull all location data (5k entries), THEN let the script loop through all 5,000 to calculate the distance to create the baseline of valid results, THEN run the front-end query and make sure it returns the same list my test calculated. I've now created another point of failure, as my test script code is more complex and prone to errors. Not to mention it takes a LOT longer to run by not being able to make data assumptions.

OR I could write a test that ASSUMES my "load_5k_locations_spec.js" has executed and passed. The test is now: pull the option data, load a "valid_5_miles_from_charleston.json" fixture, run the front-end query, and compare it against the displayed location divs. An order of magnitude faster AND far less complex test script code.

NOW... take the above and run the test for five different radius options. I'd rather pull a valid_5_mile.json, valid_10_mile.json, etc. and compare against an assumed set of data that can ONLY be a valid assumption if I am certain my "load_5k_locations_spec.js" ran BEFORE all my "5miles_from_charleston.js" and "10miles_from_charleston.js" scripts.
Bonus points: have a Cy.passed("load_5k_locations_spec") command that returns true|false if the specified test passed on the prior run -- makes it easy to completely skip a test if a prior test run failed. No, these are not perfect "by the pattern" rulebook implementations, but in the real world people have deadlines. Making tools that are malleable enough to meet users' needs, versus doing things "strictly by the book", is what makes them powerful. I'm fairly certain chainsaws were not designed to make ice sculptures, but people do it. If you try to do the same and cut your arm off because you don't have the skills to use the tool in a manner it was not intended, consider it a "learning curve".
I strongly suggest NOT going down the path of "the real world requires running spec files in order". It is just JavaScript, and you can easily set state before each test the way you want, without relying on the previous test. In fact, we are working on automatic test balancing, so different specs will be running on different CI machines and in a different order (if you use parallelization). Instead, split tests into functions and just call these functions to set everything up.
I agree with @cristopher-rodrigues that running tests in order has major advantages, speed-wise. You can save time by not reloading the login page (or doing requests for it) each time.
How do I run the login spec before the create-user spec?
It is, indeed, a huge pain to run e2e tests with an SSO redirect (which can't be removed) if you cannot make sure your login procedure runs as the first test to set a valid token. Otherwise, you have to write logic in every single test file to handle that, which seems silly.
@RadhikaVytla you could run the login_spec first as an isolated job (CI), and make all other test spec jobs depend on that job being successful. Could save you some time if login isn't working.
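The gate-job idea above could be sketched in CI config. This is a hypothetical GitHub Actions fragment (job names, spec paths, and the checkout/install steps are all assumptions for illustration):

```yaml
# Hypothetical sketch: run the login spec as a gate job, and only start
# the rest of the suite if it succeeds.
jobs:
  login-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx cypress run --spec "cypress/e2e/login_spec.js"
  remaining-specs:
    needs: login-check   # skipped entirely if login-check fails
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx cypress run
```

The trade-off is an extra machine spin-up for the gate job, in exchange for failing fast when login is broken.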
@araymer can you set the token through an auth API call? If so, just use a beforeEach (write it once) and this way each test can use its own separate auth token.
@araymer You can also move this to the support file within a
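The API-login-per-test approach described above might look roughly like this in a support file. The endpoint, credential env vars, response shape, and localStorage key are all assumptions, not part of the thread:

```js
// Hypothetical sketch for cypress/support/e2e.js: log in via the API
// before each test instead of driving the SSO UI, so every test gets
// its own fresh token and no ordering is required.
beforeEach(() => {
  cy.request('POST', '/api/login', {
    username: Cypress.env('USER'),
    password: Cypress.env('PASS'),
  }).then(({ body }) => {
    // Where the token lives is app-specific; localStorage is one option.
    cy.window().then((win) => win.localStorage.setItem('authToken', body.token))
  })
})
```

Because the login happens over HTTP rather than the UI, it is typically fast enough to run before every test without a noticeable cost.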
Being able to specify the order could be very useful when testing a process composed of several steps. I'd like each step to be a standalone test, but they need to be executed in a specified order. Currently I can only follow the advice that naming them with
Just found an easy hack: add a number before your file or folder and it will run those in that order. For example, @id-dan you can do something like 01-SignUp/01-SignUp.feature/, 01-SignUp/02-Login.feature/.....05-Account/... and so on. Cypress should pick up the files in that order and run them.
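The prefix hack works because spec files are picked up in lexicographic (string) order, which is worth keeping in mind once you have ten or more files. A quick sketch of the pitfall, using made-up file names:

```javascript
// Numeric prefixes only keep the intended order if they are zero-padded:
// plain string sorting puts "10-" before "2-".
const padded = ['02-Login', '01-SignUp', '10-Account'].sort()
const unpadded = ['2-Login', '1-SignUp', '10-Account'].sort()

console.log(padded.join(','))   // zero-padded names sort as intended
console.log(unpadded.join(','))// "10-" jumps ahead of "2-"
```

So if the suite may ever grow past nine ordered specs, starting with `01-`, `02-`, ... avoids a surprise reshuffle later.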
Thx @loxator, your suggestion solved my problem! =)
This would be a great feature. I currently have two test files that take a lot longer to run and are "more flakey" than others, resulting in occasional failures for non-legitimate reasons. Due to the file names (
Any news on this, @brian-mann? Cy's best practice of each test running standalone is excellent and perfectly architected, but there's definitely an added convenience to being able to order tests, and I'm curious if you and your guru coders are considering it?
To reiterate, this feature would be really useful in order to run often-failing/fragile tests first, thereby ensuring that failures are flagged as quickly as possible. Some workarounds for this have been described in this issue thread (such as numbering tests in intended order), but this isn't really a solution, as it relies on a cumbersome manual renaming of test files. If test ordering could be specified by some configuration file, this file could be generated and updated by subsequent test runs, putting frequently-failing tests first. I would love to get an update on whether this feature is being considered.
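The "generated ordering file" idea above could be sketched as a small Node helper. The failure-count record and spec paths here are hypothetical; in practice the stats would be written out by previous CI runs:

```javascript
// Sketch: derive a spec order from recorded failure counts,
// most-failing first, so fragile specs give feedback earliest.
const failureStats = {
  'cypress/e2e/checkout.cy.js': 7,
  'cypress/e2e/login.cy.js': 0,
  'cypress/e2e/search.cy.js': 2,
}

function orderByFailures(stats) {
  // Sort spec paths by descending failure count.
  return Object.keys(stats).sort((a, b) => stats[b] - stats[a])
}

const specOrder = orderByFailures(failureStats)
// checkout (7 failures) comes first, then search (2), then login (0)
```

An ordered array like this could then be fed to whatever mechanism runs the specs, and the stats file updated after each run.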
That's correct for unit tests; however, integration and end-to-end tests are a different beast. I appreciate the additional power Cypress gives us for injecting state directly into the page, and the cy.request feature to avoid having to do things like ordered tests. But sometimes you just need ordered tests, and the whole "tests should be able to run in any order" is simply a cargo-cult mindset without considering the context.
I stumbled upon this feature request searching for a solution for an adjacent use case. I have tests that are heavily dependent upon an API responding in ways that I expect. I'd like to be able to write a test that's sort of like a pre-flight checklist. I think this is also related to #518, so I'd need both the ability to specify that the pre-flight check runs first and that the entire run aborts if it fails.
@ddehart You know, that's a great point. Just today, we had a meeting about using Lighthouse to test our app for accessibility. There is a lighthouse/cypress plugin I was interested in using. If I go this route, I want my lighthouse tests to be completely separate from my integration testing, but I still want to use Cypress to do it. I would like a way of running integration tests first, followed by lighthouse tests. A way of doing this might be to follow the Ansible pattern. In Ansible, you can apply tags at the playbook or task level. I could tag all my integration tests as I think that would work smoothly with Cypress.
@craig-dae I would place different types of tests in different subfolders and then run them using
Note: there is also https://github.com/bahmutov/cypress-select-tests but it relies on rewriting tests, and in general I would not rely on it
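The subfolder approach discussed above can be expressed as two separate CLI invocations, one per phase. The folder names here are made up for illustration:

```shell
# Run the integration phase first; only start the lighthouse phase
# if the first invocation exits successfully.
npx cypress run --spec "cypress/e2e/integration/**/*.cy.js" && \
npx cypress run --spec "cypress/e2e/lighthouse/**/*.cy.js"
```

This keeps each phase's specs order-independent internally, while the phases themselves stay strictly ordered.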
Just a little piece of history as it relates to fitting users' needs. Intel used to dictate to its customers the products it provided. Those products would normally fit 70-90% of a customer's needs. Intel executives figured that because they produced a superior product, it didn't matter if customers had to implement the other 10-30% themselves. Competitors decided to listen to customers and give them 95-100% of their needs, since it took only minor tweaks. Intel's market share plummeted, forcing them to change. They still haven't recovered the market loss.

The point I am trying to make is that while Cypress does a lot of things really well, there is still a need from users to have some structured order for certain testing scenarios. For instance, there are some tests that I have that need to be run in order (some elements of integration where I check record creation and modification), but most of mine I would want run in parallel. To be against the idea of having some test execution in a specific order is to also acknowledge that, while Cypress is a great design, it is not designed for long-term usage, because it is inflexible to the needs of its users. Relying on plugins is not a good route, as sooner or later they cease being maintained, which means a loss of efficiency in test execution or, worse, incompatibility due to structural changes that are likely to occur in Cypress.
@jwetter I mean, while that all makes sense, @bahmutov literally just gave you a solution in the previous comment for your outlier scenario. Cypress is opinionated about how testing should work, rather than being checkbox heroes. This is why I like them. Their opinions have frustrated me, mostly because they have constantly ended up being correct. They've steered me to far better testing practices than I'd be engaging in if I were able to force them to bend to my incorrect ways of doing things. But again, your use-case sounds like it is solvable pretty trivially, by organizing the phases of your tests by top-level directories in
@craig-dae and @jwetter using the wildcard pattern and Dashboard also works - just use the
@craig-dae and @bahmutov To be clear, I only posted because I think this tool has great promise to become an industry standard. The reason Selenium is still an industry standard is because they changed their software to meet the needs of their users where reasonable. Cypress could beat Selenium but only if it is meeting the needs of the users. |
With this issue surpassing the 4-year mark, and given that users keep providing sensible use cases, it would be all-around beneficial to begin solving it by formalizing the existing workaround. The following tasks would accomplish that goal:
P.S. I originally had the intention of doing the tasks myself and submitting a PR, but I haven't had the time. So by submitting this reply, please do not think I expect the Cypress team to complete the tasks; it is truly an invitation to any colleague and/or hobbyist to take the lead in making sure that the "unofficial" fix (which has earned me 200+ points on Stack Overflow so far 😄) becomes a properly recognized feature 🚀
Not all ordered tests are an anti-pattern. We have a very valid use case. We distribute and balance specs across workers, and we use run duration and passing status to update the balancing after each pass (very much like Cypress Dashboard). The value in doing so is that we place failing tests first on subsequent runs, for faster feedback for developers when running under CI.

Very often tests that pass locally might fail in CI, so devs might have to wait for many other specs to run before receiving feedback on a test they are trying to fix. This is a huge waste of time.

Perhaps provide a flag to force Cypress to respect ordering when not using Cypress Dashboard tooling, so it doesn't interfere with all the

In other words, something like an
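The duration-based balancing described above amounts to a classic greedy bin-packing. A minimal sketch, with hypothetical spec names and durations (in seconds):

```javascript
// Sketch: assign specs (longest-running first) to the currently
// least-loaded worker, so total runtime per worker stays balanced.
function balanceSpecs(durations, workerCount) {
  const workers = Array.from({ length: workerCount }, () => ({ specs: [], total: 0 }))
  // Longest specs first: the greedy heuristic works best this way.
  const specs = Object.entries(durations).sort((a, b) => b[1] - a[1])
  for (const [spec, duration] of specs) {
    const lightest = workers.reduce((min, w) => (w.total < min.total ? w : min))
    lightest.specs.push(spec)
    lightest.total += duration
  }
  return workers
}

const plan = balanceSpecs(
  { 'a.cy.js': 90, 'b.cy.js': 60, 'c.cy.js': 45, 'd.cy.js': 30 },
  2
)
// worker 0 gets a + d (120s), worker 1 gets b + c (105s)
```

Sorting by recent failures instead of (or before) duration gives the "failing tests first" variant the commenter describes.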
I very much would like to echo @diggabyte's point that any tests that have failed or been flaky recently should be run first, followed by the order Cypress currently uses (longest first, I think?).

Cypress is great. This causes me to use it more. The more I use it, the more I rely on it. The longer my tests get. The more parallel boxes I run it on (8 now). The more impatiently I wait to see if my tests pass. The more I beat my head against my keyboard when a frequently-failing test gets run in the bottom half of my list of tests. If Cypress just did this by default (flaky and failing tests first), @diggabyte would not have to manually order his tests (for the reason he stated). Seems like a relatively cheap LOE to add a lot of value.

Update: actually, it turns out that this is already a feature if you get the Business-level subscription. You don't get it on the Team subscription, which is what we have. Makes sense, I guess. They gotta get paid, and if you're using Cypress THAT much, $300/mo is not unreasonable.
Is there currently a way to define that if a test fails, then the whole run should fail? It could be a good addition to ordering — I'd place general (bird's-eye-view) tests first, and if they passed — move on to more detailed ones. Currently our full set takes about an hour, which basically means that if something fails — we'll only know about it the next day (still better than hearing about it from our end, so I'm thankful for what Cypress allows us to do). The ability to order tests AND configure Cypress to stop on first error (or specify which tests are “critical” — there may already be a way to do it?) would help improve the responsiveness of this process.
@NPC check out cypress-fast-fail; however, sometimes it doesn't play nice with other plugins. Tests can be ordered in folders like this:
@NPC As part of Cypress's pricing, we included the ability to cancel test runs when a test fails. This setting is accessible from the Dashboard for organizations starting at the Business Plan. This offers a solution for those running tests using the Cypress Dashboard, and also ensures a parallelized run is cancelled so that all parallel running specs will also be cancelled to save time on the run. To get this feature, you will need to update to Cypress 6.8.0 and also be a member of an organization subscribed to a Business Plan.

This feature was implemented with parallelized runs in the Dashboard in mind, since this was the hardest use case to address. We had to build this feature specifically to continue to receive all of the tests in a cancelled run to ensure proper reporting. Now that we have a mechanism to cancel runs across these channels of communication, we can consider a way to initiate cancelling test runs when a test fails from the Test Runner when not recording to the Dashboard. (Likely this would be implemented by some CLI flag or config specifying `cancelOnFailures: true`.) See this issue for cancelling test runs when a test fails from the Test Runner when not recording to the Dashboard.
I would also appreciate being able to do this without having to resort to the workaround of changing my test file names, so that I can run the tests that fail most often first. |
Been wanting a
For Cypress v10, just list the specs in the order you want them to run:

```js
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    // baseUrl, etc
    supportFile: false,
    fixturesFolder: false,
    setupNodeEvents(on, config) {
      config.specPattern = [
        'cypress/e2e/spec2.cy.js',
        'cypress/e2e/spec3.cy.js',
        'cypress/e2e/spec1.cy.js',
      ]
      return config
    },
  },
})
```

Of course, if you want to parallelize them using Cypress Dashboard, it will change the order based on timings / failed tests first / new specs first.
Is there a way I can run a particular spec file twice?
With the above config, spec1.cy.js runs only once. Is there any way to run the same file twice or more?
Has there been any update on this for an official test ordering solution since last conversation over a year ago? We have a need for this on our project as well. @jennifer-shehane is it still CY's official stance that this will not be implemented? |
@skiKrumbRob why is the solution with explicit spec order not working for you? If you need to run specs in specific order in parallel and have concrete requirements and example, open an issue in https://github.com/bahmutov/cypress-split and it would be simple to implement |
@bahmutov Forgive my learning brain here, but I digested this full thread yesterday trying to figure out if/how ordering can be done and perhaps my eyes glazed over that bit by the end of this monster thread. Pretty new to Cypress and learning on the fly. Is there any documentation for the explicit spec order that I could look through to get a better grip on where to set it up etc.? |
How can I run all tests in a custom order, without renaming the files with a number prefix to determine the execution order?
Is there another way to run the tests in a custom order?
For now, I'm renaming the files using the prefixes 1_xxx.js, 2_xxx.js