Issues with watchify@2.1.1 on OSX #106
Just started hitting this today.
Yes, I guess we should revert this then. /cc @es128
cc @pipobscure The errors are actually coming out of the fsevents module, but this is the first time I've seen it look like that. Nonetheless, @hughsk I'd appreciate it if you could try installing my fork of fsevents at https://github.com/es128/fsevents to see if the issue goes away with it. If so, I may publish my fork to npm and switch chokidar to it while I'm waiting for the last change to be reviewed/published by @pipobscure.
FWIW I could not reproduce the issue using webgl-workshop@1.0.4 on OS X 10.9.5
I'm having a similar issue with 2.1.1 on OSX when watchifying a heavy load of separate bundles. When I was on 2.1.0 I would get this error:
gulp + bundler config
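The error output and the gulp + bundler config referenced in this comment did not survive in this rendering of the thread. As context only, here is a minimal sketch of this kind of multi-bundle gulp + watchify setup; the entry paths and output locations are illustrative, not the poster's actual configuration.

```js
var gulp = require('gulp');
var browserify = require('browserify');
var watchify = require('watchify');
var source = require('vinyl-source-stream');

// Illustrative entry points; the real setup reportedly had 20+ of these.
var entries = ['./src/home.js', './src/about.js', './src/store.js'];

entries.forEach(function (entry) {
  // cache/packageCache are required by watchify for incremental rebuilds.
  var b = watchify(browserify(entry, { cache: {}, packageCache: {} }));

  function bundle() {
    return b.bundle()
      .on('error', function (err) { console.error(err.message); })
      .pipe(source(entry.replace('./src/', '')))
      .pipe(gulp.dest('./dist'));
  }

  b.on('update', bundle); // rebuild whenever a watched dependency changes
  bundle();
});
```

Note that each watchify instance in a setup like this opens its own set of file watchers, which is what runs into the fsevents limit discussed below.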
Ok, now I've been able to reproduce the issue with webgl-workshop 1.0.4 on both Yosemite and Mavericks. It doesn't happen until you click a link in the UI. It seems to have something to do with the number of simultaneous fsevents watchers being opened, so it only impacts specific use cases. I'm starting to explore ways to mitigate the problem.
Resolves gh-167 browserify/watchify#106 pipobscure/fsevents#41
Should be mostly resolved with chokidar 0.10.5
Not seeing any difference on my end with chokidar 0.10.5
Are you running more than one instance of watchify simultaneously by any chance? The fix prevents a single process instance from exceeding the system limit of fsevents listeners. Do you have the same problem if you run webgl-workshop@1.0.4? Another change that could be made would be to expose the relevant chokidar option in watchify so that users can adjust it when needed.
I'm running 20+ instances of watchify. Don't know enough about node to understand what the other changes you are proposing mean.
@builtbylane If you hack the source of watchify in your local install, you can set this directly. I was suggesting that watchify should expose an option to allow users to set this when needed, without resorting to such a hacky workaround.
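As a sketch of what exposing such an option could look like: later watchify releases document a `poll` option that is forwarded to chokidar, switching it from fsevents to stat polling. The exact option under discussion in this thread was lost from the comment above, so treat the flag name here as an assumption and check the README of your installed watchify version.

```js
var browserify = require('browserify');
var watchify = require('watchify');

var b = watchify(
  browserify('./src/app.js', { cache: {}, packageCache: {} }),
  {
    // Assumed option: fall back to stat polling instead of fsevents,
    // sidestepping the OS X fsevents watcher limit at the cost of some CPU.
    poll: true
  }
);

b.on('update', function () {
  b.bundle().pipe(process.stdout); // rebundle on change
});
b.bundle().pipe(process.stdout); // initial bundle; watchify starts watching files as they are resolved
```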
Thanks @es128 for taking the time. I changed this option, but now I get another error (the same one I was getting in earlier versions of watchify).
Are you watching the same files with your 20+ instances of watchify? Perhaps you are triggering a lot of redundant watching that you could consolidate. As a quick workaround, I suppose you could try something like raising the max-listeners limit on the watchify instance.
Actually it isn't chainable like that, so it would have to be done as a separate, non-chained call.
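The exact snippets were cut off from the two comments above. As a hedged guess at the kind of workaround being described, using Node's standard EventEmitter API on the watchify instance (not a watchify-specific feature):

```js
var browserify = require('browserify');
var watchify = require('watchify');

var b = watchify(browserify('./src/app.js', { cache: {}, packageCache: {} }));

// Raise the cap that triggers the "possible EventEmitter memory leak detected"
// warning (0 = unlimited). In the Node of that era setMaxListeners() did not
// return the emitter, so it could not be chained onto the constructor call.
b.setMaxListeners(0);

b.on('update', function () {
  b.bundle().pipe(process.stdout);
});
b.bundle().pipe(process.stdout);
```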
I tried adjusting max listeners to no avail. Sure, there's some redundant watching down the chain, but at the top level there are 20+ JS files for 20+ different site sections or so, with about 30-50 modules that they share. I don't just get that error once, btw; I get about 30 of them, and there's definitely a memory leak. On one level this seems like an issue with node itself? nodejs/node-v0.x-archive#5463 On another, do you or anyone else out there have any recommended strategies for handling this situation?
The suggestion would be to set one watcher over the file tree you need watched and execute actions conditionally based on the path of the changed file. Can you share a repo that represents your project well enough to make the error reproducible?
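A minimal sketch of that suggestion, using chokidar directly with one watcher over the source tree; the paths and rebuild helpers are hypothetical placeholders, not code from the thread.

```js
var chokidar = require('chokidar');
var path = require('path');

// Hypothetical placeholders for whatever rebuild logic the project needs.
function rebuildAll() { console.log('shared module changed: rebuild every bundle'); }
function rebuildBundleFor(p) { console.log('rebuild the bundle that owns', p); }

// One watcher for the whole tree instead of one per bundle/file, which keeps
// the number of underlying fsevents watchers small.
var watcher = chokidar.watch('./src', { ignored: /node_modules/, persistent: true });

watcher.on('change', function (changedPath) {
  if (changedPath.indexOf(path.join('src', 'shared')) !== -1) {
    rebuildAll();
  } else {
    rebuildBundleFor(changedPath);
  }
});
```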
You rock! I'm creating a repo now, I'll invite you in a few.
The issue @builtbylane has been observing did come down to running two separate gulp processes which use watchify simultaneously, which bypasses the chokidar 0.10.5 fix since the fsevents limit is system-wide, and I haven't found a way to automatically detect when that limit has been reached. I'm not sure how common this sort of usage is, or whether it's more of an edge-case situation. If https://github.com/pipobscure/fsevents/issues/42 got resolved, a much more elegant solution would be possible.
@es128 I have a single gulp process that uses three watchify bundles and am seeing this error. Just thought I'd add that signal in terms of how common the usage is.
The error shouldn't occur within one process, even with separate watchify bundles, unless there are other fsevents watchers active on the system taking substantially from the overall limit. Again, it's very unfortunate that there doesn't seem to be a way to detect that. The limit within a single node process using chokidar is currently set to 400, while the system-wide limit is apparently 450.
I just released chokidar 0.10.6 which further addresses this issue by reusing watcher instances, despite watchify asking for a separate one for each file. In the @builtbylane use-case, it drops the number of fsevents watchers created from hundreds to just one, and on webgl-workshop there ends up being around 100 after loading every section of the workshop (originally there were thousands). @hughsk @mattdesl @builtbylane @scamden Please try it out and let us know if you're still experiencing any problems.
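For readers curious what "reusing watcher instances" means in practice, here is a rough sketch of the general pattern: one shared directory-level watcher serving many per-file watch requests. It is an illustration of the idea, not chokidar's actual internals.

```js
var fs = require('fs');
var path = require('path');

// Map of directory -> shared watcher state, so watching many files in the
// same directory only opens one underlying OS-level watcher.
var shared = {};

function watchFile(file, onChange) {
  var dir = path.dirname(file);
  var entry = shared[dir];
  if (!entry) {
    entry = shared[dir] = { listeners: {}, watcher: null };
    entry.watcher = fs.watch(dir, function (event, filename) {
      // filename can be null on some platforms; guard before dispatching.
      var cb = filename && entry.listeners[filename];
      if (cb) cb(event, path.join(dir, filename));
    });
  }
  entry.listeners[path.basename(file)] = onChange;
}

// Both calls below share a single watcher on ./src:
watchFile('./src/a.js', function () { console.log('a.js changed'); });
watchFile('./src/b.js', function () { console.log('b.js changed'); });
```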
FIXED!!! @es128 you are a hero 🍤 🍤 🍤
@es128 I'm so grateful. Fixed my issue. You are the man.
Happy to help. @mafintosh @substack I think this issue can be closed.
Thanks for the fix upstream! Closing.
Which I believe can be attributed to the recent bump of chokidar to ^0.10.5. For a reproducible case, try installing webgl-workshop@1.0.4 and compare it to the now fixed webgl-workshop@1.0.5. The change was generating a lot of log output, and while it was running on my machine, it was causing page loads to hang indefinitely on my friend's machine. Sample output: https://gist.github.com/cabbibo/617f20241fe876453a88
This is more of an issue relevant to chokidar, but just a heads up considering that this change came in as a patch update – it would probably be worth reverting, and then publishing with a minor or major bump when the issue is resolved.
Thanks!