Screenshots can be very slow #3069
I eventually got it to reproduce with logs - I think it's related to the file globbing. I didn't see anything fishy with the unlocking, so please ignore that. I think that if the timing is right, the globbing happens during all the async calls in the screenshot, and the globbing taking time plus the screenshot taking time dramatically prolongs the screenshot. If it's lucky, the globbing occurs just before the screenshot and it's able to complete with only a single globbing run in between.
@jennifer-shehane @chrisbreiding server/open_project polls server/utils/specs every 2500ms. Because we have our Cypress spec files inline with the rest of our code, we have thousands of directories. On my top-of-the-range machine with an SSD, the glob takes 950ms to complete; on my colleagues' machines it takes 2-3 times that. The solution would be to use a watcher to watch for any file changes and to re-run the spec glob only when that watcher sees changes. I see you already use chokidar for watching files, but the poll option is true. I'm not sure whether that negates the performance benefit of watching (I don't know why you have that, since the default is to rely on OS events now). If the code were in JavaScript rather than CoffeeScript I'd happily look into making a PR, but I find CoffeeScript too annoying as someone who has never written it.
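A minimal sketch of the watcher-based approach being suggested, not Cypress's implementation: `specPattern` and `onSpecsChanged` are made-up names, and it assumes the classic `chokidar`/`glob` CommonJS APIs with `usePolling` left at its default of `false` (native OS events).

```js
// Sketch only: re-run the spec glob when the file system changes,
// instead of on a fixed 2500ms timer.
const chokidar = require('chokidar')
const glob = require('glob')

const specPattern = 'cypress/integration/**/*.spec.js' // assumed pattern

let specs = glob.sync(specPattern)

// usePolling defaults to false, so this relies on native OS events.
const watcher = chokidar.watch(specPattern, { ignoreInitial: true })

watcher.on('all', () => {
  // The expensive directory walk only happens when something actually changed.
  specs = glob.sync(specPattern)
  onSpecsChanged(specs) // hypothetical callback consuming the new spec list
})
```

The point of the design is that the cost of globbing thousands of directories is paid only on real changes, so an idle project keeps the server process idle too.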
The code for this is done in cypress-io/cypress#4038, but has yet to be released.
Released in |
Current behavior:
Taking a screenshot takes a long time. I have a high-DPI monitor and it can take ~21 seconds per screenshot.
I suspect the extreme time taken is down to high DPI, so the bytes are 4x, but even at normal DPI it takes several seconds.
If you take 2-3 screenshots per test and have 200 tests, it adds up to a significant amount of time.
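For concreteness: at ~21 seconds per screenshot, even 2 screenshots × 200 tests is about 8,400 seconds, i.e. well over two hours of the run spent on screenshots alone.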
Problem:
Logs:
I tried using `pngjs` directly to load the image - it took 1s to load, not explaining the 7s above. I added some logging... and suddenly it was quick. I ran it again with full logs and this time it was slow.
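For reference, a standalone decode can be timed like this; the file name is just a placeholder and this is only the kind of check described, not the exact code used.

```js
// Time a single PNG decode outside of Cypress, assuming a local screenshot.png.
const fs = require('fs')
const { PNG } = require('pngjs')

console.time('pngjs decode')
const png = PNG.sync.read(fs.readFileSync('screenshot.png'))
console.timeEnd('pngjs decode')

console.log(`decoded ${png.width}x${png.height} pixels`)
```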
In addition, I noticed that the Cypress server process is basically at 100% CPU.
What logs are in between?
So it seems that I intermittently get the above going round in a loop.
There are a number of things here.
`get:project:status`: the spec file globbing is done every 2.5 seconds, and because of our setup I assumed this was why the screenshot was slow. It certainly seems to be the cause of the constant CPU usage; ideally it would listen for file system changes.
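Roughly the shape of the polling behavior being described (not Cypress's actual code; the pattern and interval are taken from the observations above):

```js
// A full glob of the project on a fixed 2.5 s timer, whether or not anything
// changed. With thousands of directories the walk alone takes 1-3 s,
// so the process never really goes idle.
const glob = require('glob')

setInterval(() => {
  const specs = glob.sync('**/*.spec.js', { ignore: '**/node_modules/**' })
  console.log(`found ${specs.length} specs`)
}, 2500)
```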
There were a couple of problems that led to this:

- The `testFiles` option is not documented (https://docs.cypress.io/guides/references/configuration.html#Global), which led my colleague to specify all files and then use `ignoreTestFiles`.
- `ignoreTestFiles` is less efficient because it uses `minimatch` per file, instead of using the `minimatch` filter combined with a leading negate, so converting the glob to a regex happens for every file instead of once (see the sketch after this list).

However, this wasn't the solution to the problem, as I then reproduced the situation where screenshots were fast. The only difference is that there are no big time gaps in the "attempt to get lock" log messages.
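A rough sketch of the per-file vs. filter difference described above, assuming the classic CommonJS `minimatch` API; the ignore pattern and file list are placeholders and this is not Cypress's actual code.

```js
const minimatch = require('minimatch')

const ignorePattern = '**/__snapshots__/**' // made-up ignore glob
const files = ['a.spec.js', '__snapshots__/a.snap', 'b.spec.js']

// Style 1: call minimatch() once per file against the ignore glob.
const keptPerFile = files.filter((f) => !minimatch(f, ignorePattern))

// Style 2: the filter helper with a leading negate, as suggested above.
const keptFiltered = files.filter(minimatch.filter('!' + ignorePattern))

// Style 3: if the goal is to parse the glob exactly once, the Minimatch
// class parses the pattern in its constructor and can be reused per file.
const mm = new minimatch.Minimatch('!' + ignorePattern)
const keptReused = files.filter((f) => mm.match(f))

console.log(keptPerFile, keptFiltered, keptReused)
// all three: [ 'a.spec.js', 'b.spec.js' ]
```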
And right now... I can't reproduce the massive delays any more. Looking at the code, I suspect something to do with the lock file ends up synchronously locking the system for a couple of seconds every time it is called.
I wonder if it's connected to getting the lock and reading the file taking longer than the debounce time, so that you end up with multiple attempts to lock/unlock at once and some kind of deadlock. I need to do more debugging if I reproduce this again.
I don't think it's connected with the slow glob, as going back to the changes from the beginning of the investigation I still can't reproduce it.