Mac: High CPU usage for large folders #447
The physical size of the files shouldn't have an impact, but a deeply nested directory tree with many entries would have an impact on the initial scan of the file structure. It should subside after the initial scan completes (i.e. once the ready event has fired).
@es128 Interesting, I was not aware of the initial scan. Is there a reason why you need to scan all files on startup? Does this scan ignore folders that are excluded with the exclude setting?
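For context, a minimal sketch of how exclusions can be handed to chokidar via its documented ignored option so that matching folders are skipped during the initial scan; the workspace path and glob patterns below are placeholders, not VS Code's actual configuration:

```js
const chokidar = require('chokidar');

const watcher = chokidar.watch('/path/to/workspace', { // placeholder path
  // Anything matched by 'ignored' is skipped during the initial scan
  // and not watched afterwards.
  ignored: ['**/node_modules/**', '**/.git/**']
});

// 'ready' fires once the initial scan of the tree has completed;
// the scan-related CPU usage should subside after this point.
watcher.on('ready', function () {
  console.log('initial scan done');
});
```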
Skimmed through the linked issue. Switching versions of node with nvm will break the compiled fsevents binding.
@es128 I think this issue is independent and only shows up when running Code from its source code form. When running the bundled product, the environment is the node environment Electron provides, which is stable regardless of the node version installed.
You and I have discussed the initial scan before. The paths chokidar is keeping track of can now be exposed with the getWatched() method. Just FYI - there's been some movement toward making the scan more efficient (#412), but I'll need to focus more time and attention on it at some point to work toward landing those improvements.
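For reference, a minimal sketch of inspecting what chokidar is tracking via its getWatched() method; the path is a placeholder and the counting logic is just illustrative:

```js
const chokidar = require('chokidar');
const watcher = chokidar.watch('/path/to/workspace'); // placeholder path

// After the initial scan, getWatched() returns an object mapping each
// watched directory to the array of entry names chokidar tracks inside it.
watcher.on('ready', function () {
  const watched = watcher.getWatched();
  const dirs = Object.keys(watched);
  const entries = dirs.reduce(function (n, dir) { return n + watched[dir].length; }, 0);
  console.log('Tracking ' + dirs.length + ' directories, ' + entries + ' entries');
});
```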
But I don't think the scan has much to do with the original issue report. The symptoms described in this comment make it pretty clear to me that, for whatever reason, polling mode is being used in that case.
@es128 Yes, that was my theory as well, but microsoft/vscode#3222 (comment) made me think otherwise.
Ah, yes, sorry - I was focusing on the comment about how CPU usage stays up after 30 minutes, but in the prior paragraph the description does point more toward the initial recursive scan of the file tree. But you've had trouble reproducing with a similarly large file tree on a Mac? Not sure how to explain the discrepancy; I suppose hardware differences could have an impact.
@es128 Well, I have to take that back: when I now open a folder with the Chrome sources I see the CPU go up for maybe 10 seconds. I realize this is the initial file scan that you guys do, but afterwards the CPU stays low. I am thinking about introducing a way to exclude large folders from file watching in VS Code, but it feels like a workaround for something that ideally gets fixed on the watcher layer. I find #412 a good start for speeding up this initial scan. Unfortunately my own experiments have shown that a disk scan always eats CPU cycles, simply because modern disks are so fast that they keep the CPU busy for a while...
@es128 We found the cause of the issue and it seems #412 would fix it. Here is what happens:
We also verified that chokidar was not falling back to polling. My current fix is:
I am wondering if VS Code should configure chokidar differently if it detects that it fails N times. For example, we could put chokidar into polling mode with a very long polling interval. Or do you have other ideas?
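To make that suggestion concrete, a hypothetical sketch: usePolling, interval and binaryInterval are chokidar's documented options, while the failure counting and the threshold are invented here for illustration and are not anything VS Code actually does.

```js
const chokidar = require('chokidar');

// Hypothetical fallback: after the normal watcher has failed N times,
// recreate it in polling mode with long intervals to keep CPU usage down.
function createWatcher(workspacePath, failureCount) {
  const usePolling = failureCount >= 3; // threshold chosen arbitrarily
  return chokidar.watch(workspacePath, {
    usePolling: usePolling,
    interval: usePolling ? 10000 : 100,       // poll regular files every 10s
    binaryInterval: usePolling ? 30000 : 300  // poll binary files every 30s
  });
}
```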
Polling will only make it worse: the readdirp scan process stays the same, and then you eat even more CPU by installing all those polling watchers, even if they run at long intervals. Looks like we really need to drive #412 home; I would appreciate help with that. Have you actually used it in the same situation where you were able to reproduce the problem, and seen that it eliminated it? Any idea what the relative limits of the new file walker are? Is it possible to add enough files that it falters too? Do you have any concerns related to your feedback that you'd want addressed before we move toward making this change?
Thanks, I also think polling is the worst of all solutions. I did not test chokidar with the suggested changes yet but can dedicate some time after our GA (end of March) to look into this. One issue I am already seeing is that chokidar grew in size from 1.5 MB (1.0.5) to 5.5 MB (1.4.3), which might block us from upgrading because we are sensitive to download size. I think my feedback is valid for the case of protecting against endless loops due to cyclic links in the file system. I did some benchmarks with fs.readdir vs. fs.lstat and found that eager fs.readdir calls are not good. However, I think this feedback applies more to readdirp-filtered than to chokidar. I like how simple the change for chokidar will be because it is just a drop-in replacement without many changes.
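For context on the cyclic-link concern, a minimal sketch (not readdirp-filtered's or chokidar's actual code) of a recursive walk that uses lstat and realpath to avoid descending into a directory it has already visited:

```js
const fs = require('fs');
const path = require('path');

// Walk a directory tree without looping forever on cyclic symlinks:
// remember the real path of every directory we have entered.
function walk(dir, seen) {
  seen = seen || new Set();
  const real = fs.realpathSync(dir);
  if (seen.has(real)) return;            // already visited via a cyclic link
  seen.add(real);
  fs.readdirSync(dir).forEach(function (name) {
    const full = path.join(dir, name);
    const stats = fs.lstatSync(full);    // lstat does not follow symlinks
    if (stats.isDirectory()) {
      walk(full, seen);
    }
    // Symlinked directories could be followed here after the same
    // realpath/seen check; omitted in this sketch for brevity.
  });
}

walk('/path/to/workspace'); // placeholder path
```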
Regarding size, perhaps we should resume the discussion at fsevents/fsevents#93 (comment). Don't want to derail this issue with it, though.
Makes sense.
This is probably because fsevents is not loaded and the node watchers are really slow. You can check whether it is loaded:

```js
const chokidar = require('chokidar');

let watcher = chokidar.watch(...);
// On macOS, chokidar should be using the native fsevents library;
// if it is not, file watching falls back to a much less efficient mode.
if (process.platform === 'darwin' && !watcher.options.useFsEvents) {
  console.error('Watcher is not using the native fsevents library and is falling back to inefficient polling.');
}
```

and fix it by compiling it for Electron: electron.atom.io/docs/tutorial/using-native-node-modules/
@bumpmann
This is NOT because fsevents is not loaded.
@bpasero in my case a simple ...
I think this can be closed; the most recent versions should have improved this.
This is more a question than a bug report with concrete steps. VS Code (using chokidar 1.0.5) has numerous bug reports where Mac users report high CPU usage for our file watcher. The only thing these reports have in common is that the users open VS Code on a very large folder (> 2 GB), which seems to cause the high CPU spikes.
Is there any known issue with chokidar or fsevents where CPU usage would go crazy on a Mac when a folder is large? I would assume that the folder size has no impact on how chokidar/fsevents behaves, but maybe there is something I am missing?
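One way to check whether such a spike lines up with the initial scan (rather than ongoing watching) is to time chokidar's ready event on the folder in question; the path below is a placeholder:

```js
const chokidar = require('chokidar');

const start = Date.now();
const watcher = chokidar.watch('/path/to/large/folder'); // placeholder path

// If CPU usage drops once this fires, the spike is the initial scan,
// not the ongoing watching.
watcher.on('ready', function () {
  console.log('initial scan finished after ' + (Date.now() - start) + ' ms');
});
```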