Linter blocks on large files + high CPU/memory usage #755
Comments
Did some digging into this. From what I can tell, eslint is generating ~170MB of lint error data from this file. The actual linting does indeed take less than 1 second on the worker child process. The slowness seems to come from the transfer of that 170MB from the worker to the main process. The giant string in the heap snapshot seems to confirm this suspicion, as it slowly approaches 170MB. I measured how long the whole thing took once:
Which is, uh, not right. IPC shouldn't be that slow. I'll try measuring how long node usually takes to transfer data by IPC. |
Possible cause: nodejs/node#3145 |
Great investigating. I've also noticed slow response times even for moderately large files with lots of linting errors. I wonder if one possible "solution" here would be to truncate the linting errors after a reasonably high number. I can't see any real benefit to showing more than a few hundred linting errors in a single file; that probably just indicates that you need to add that file to your `.eslintignore`. @Arcanemagus, what do you think about only showing the first few hundred (maybe 500?) linting errors? We could then throw up a notification that further linting messages have been suppressed. At least this way we won't lock up editors for a quarter of an hour. 🤦♂️ |
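Truncation along these lines could be a small post-processing step in the worker. The sketch below is hypothetical (the 500-message cap and the `truncateMessages` helper are assumptions, not actual linter-eslint code):

```javascript
// Hypothetical post-processing step: cap the number of lint messages sent
// back from the worker, and report how many were dropped so the UI can
// show a "further messages suppressed" notification.
const MAX_MESSAGES = 500; // assumed cap, per the suggestion above

function truncateMessages(messages, max = MAX_MESSAGES) {
  if (messages.length <= max) {
    return { messages, truncated: 0 };
  }
  return {
    messages: messages.slice(0, max),
    truncated: messages.length - max,
  };
}

// Example: 100000 fake messages get cut down to the cap.
const fake = Array.from({ length: 100000 }, (_, i) => ({ line: i, text: 'err' }));
const result = truncateMessages(fake);
console.log(result.messages.length, 'shown,', result.truncated, 'suppressed');
```

The `truncated` count is what the notification would surface to the user.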
Another option may be to apply some compression scheme on sufficiently large results before transferring the data between processes. Could be helpful if we find the time saved transferring less data greatly exceeds the time spent compressing/decompressing. A lot of the data is fairly repetitive, so it might work. A cursory google finds libraries like JSONH that do this, though there may be something better. On the flip side, it's added complexity for a bug that should arguably be fixed in node core if possible. |
Would you be willing to spike out some compression and see if it helps? |
Maybe. I'll try to investigate the issue in node core first, but failing that I can see if compression works. |
UPDATE: looks like someone has already submitted a PR to fix that issue in node: nodejs/node#10557. Once that makes it into Electron and Atom updates to that Electron version, this issue should become much less annoying. However, large enough lint results will still take several seconds to transfer, so we probably still need to truncate the results. |
That's awesome, thanks for all the investigation work. Judging by how long it took to get the last node bump in Atom, I have a feeling this may take quite some time to trickle down, and agree that it might make sense to take some mitigating actions within linter-eslint in the meantime. I'd like to hear what @Arcanemagus thinks, though. |
It's looking like this will land in node |
The current Electron version in Atom doesn't include that fix yet. It's looking like it is due to land around Atom v1.19.0, so beyond the compression idea thrown around above, we are likely stuck waiting on that for any progress to be made here. |
Is this still an issue? |
Issue Type
Bug
Issue Description
I've encountered a large file that stalls linter-eslint for around 10-15 minutes.
I've created a minimal reproduction of the issue (the code itself is obfuscated but the issue is still there).
Steps to reproduce w/ above files

1. `atom .`
2. Open `test.js` and write some code that has linting errors. I've enabled comma spacing, so something like `var abc = [1,2,3]` should show errors. Everything is working at this point.
3. Open `constants.js` and `cmd+s` to save, to ensure the linter is triggered. At this point the issue occurs.
4. Go back to the `test.js` tab and edit the file to change the linter errors. You'll notice the linter doesn't update the errors. You'll also notice the Atom Helper process for eslint starts taking significantly large amounts of memory and CPU time, slowing down the main editor. If you take heap snapshots you'll find a large string; subsequent snapshots will show that string increasing in size.

After like 10-15m everything returns to normal. This kind of slowdown isn't due to eslint, since the CLI can lint the two files in less than a second.
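For anyone reproducing this, a minimal config enabling the rule mentioned in the steps might look like the following (an assumed `.eslintrc.js`; the original repro's exact config isn't shown):

```javascript
// .eslintrc.js — assumed minimal config for the repro; enables comma-spacing
// so that `var abc = [1,2,3]` reports errors on each missing space.
module.exports = {
  rules: {
    'comma-spacing': ['error', { before: false, after: true }],
  },
};
```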
Bug Checklist

- The `eslint` CLI gives the proper result, while `linter-eslint` does not
- Output of the `Linter Eslint: Debug` command from the Command Palette below

The message doesn't show up until after the 10-15m it takes for the linter to run. I assume this means it's blocked the whole time.