CPU usage increases over time on RasPi 3 B by using .windowTime() #2052
Comments
@olivermue does this happen with any other time-based operators (such as `auditTime` or `throttleTime`)? Knowing that can help us determine whether we should start looking at the async scheduler, or just the `windowTime` operator.
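For reference, a rough sketch of how the repro client could be parameterised to compare the operators; the `source$` Observable, the `makeClient` helper, and the variant names are illustrative and not taken from the attached script:

```js
const Rx = require('rxjs/Rx'); // RxJS 5.x with operators patched onto Observable.prototype

// `source$` stands in for the Observable wrapping the socket.io events.
function makeClient(source$, variant) {
  switch (variant) {
    case 'window':   // the originally failing case
      return source$.windowTime(1000).subscribe(() => { /* do nothing */ });
    case 'audit':
      return source$.auditTime(1000).subscribe(() => { /* do nothing */ });
    case 'throttle':
      return source$.throttleTime(1000).subscribe(() => { /* do nothing */ });
    default:         // no time-based operator at all
      return source$.subscribe(() => { /* do nothing */ });
  }
}
```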
Just ran some tests: by using […]
I ran further tests on different hardware and the problem occurs there too for `.windowTime()`. The other hardware was also ARM based: […]
The node version used was v4.2.6. So I can reproduce the same error on different hardware (but also ARM based) and a different node version.
@olivermue thanks for testing further. Glad to know it's probably not an issue with the Scheduler architecture! I'll look into windowTime when I get some time next week.
When trying to reproduce on Ubuntu 15, can you also confirm whether or not you were using node 6.9.1 exactly? As you can guess, it's pretty unusual for JavaScript code to cause this sort of disparity in CPU climb in the same node version but differing OS/hardware, so I want to double check it's not just some issue with that node version on that hardware/OS. If that is the case, we can still look into what triggers it and see if we can work around it. It's also gonna be hard to reproduce since most of us probably don't have a Raspberry Pi. 😄 So we may have to have you try some things for us, assuming it's not also reproducible on our desktops.
It's probably still a good idea to grab a Raspi 3 and reproduce it there in order to isolate the culprit. Do you need a donation?
Tested on a Raspberry Pi 3 with Node.js 7.0 with similar results. I'm seeing about 200% CPU scattered across all 4 CPUs. Working on getting some flame graphs of processor usage with perf, which hopefully can turn up some clues.
@daixtrose Thanks for the offer! @chriscareycode is a co-worker of ours who has some Pi 3s (obviously), so we don't need a donation anymore.
@chriscareycode I am so curious what triggers this bug, so please also report how you try to catch it. Many things to learn from this issue...
Looking over the windowTime operator, it seems like if there's only a `windowTimeSpan` argument, a closed window never gets removed from the list of windows the operator is tracking.
I'd bet that the window not being removed from the windows list is the source of the CPU usage issues, because that means the list it has to loop over is going to grow over time. It's still going to try to next into those dead observers, which executes some code. I'll add some fixes for this and see if that resolves the issue.
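To make the suspected mechanism concrete, here is an illustrative sketch — not the actual RxJS source — of what a leaking windowTime subscriber would look like if closed windows were never removed from its internal list:

```js
const Rx = require('rxjs/Rx');

// Illustration only: if closed windows are never spliced out of `windows`,
// every incoming value still loops over (and nexts into) an ever-growing list.
class LeakyWindowSubscriber {
  constructor(destination) {
    this.destination = destination;
    this.windows = []; // grows without bound if cleanup is missing
  }

  openWindow() {
    const window = new Rx.Subject();
    this.windows.push(window);
    this.destination.next(window);
    return window;
  }

  closeWindow(window) {
    window.complete();
    // Suspected missing cleanup, e.g.:
    // this.windows.splice(this.windows.indexOf(window), 1);
  }

  next(value) {
    // Cost per incoming message is O(windows.length), which keeps
    // increasing over time -> CPU slowly climbs.
    for (const window of this.windows) {
      window.next(value);
    }
  }
}
```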
@Blesh Any update on this issue?
@daixtrose @olivermue I believe #2278 fixes issues when using `windowTime()`.
I have a Raspberry Pi 3 here with the cpu-issue code to reproduce still on it. I re-ran the original client and server code and saw the node process increase up to 99% CPU after 5 to 10 minutes of running. Next I copied the cpu-issue folder, updated RxJS to 5.1.0, and ran the test again. After 10 minutes of running, the node processes are each hovering below 50% CPU and are not increasing. So far so good.
Cool. @chriscareycode, could you please […]?
Did not see a difference in memory consumption. The new code ran for about 2 hours and did not increase above 50% CPU.
I'm going to close the issue, as I believe this specific issue has been fixed with the latest release.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
RxJS version: 5.0.0-beta.10 - 5.0.0-rc.1
Code to reproduce:
(see attachment)
The line that causes the issue: the `.windowTime()` call (see the description below).
Expected behavior:
The CPU usage stays stable.
Actual behavior:
The CPU usage slowly climbs up to 200%.
Additional information:
Okay, let's explain a little more. 😉
I'm running Node.js 6.9.1 on my RasPi 3 B and have a node instance that gets data from outside through socketcan or socket.io. This incoming data is pushed into an Observable and then further consumed. But I've seen that after a while (10 minutes to 3 hours) the CPU usage constantly climbs until it reaches 200% and everything gets really slow. So I stripped the code down to find the root cause, and attached you'll find the result of these investigations.
The attached script is around 90 lines and can be started in four modes, depending on whether the given command-line argument is `server`, `client-empty`, `client-rx`, or `client-rx-window`. To reproduce the problem, the script must be started once with the parameter `server` and once with the parameter `client-rx-window`. Afterwards you can examine the CPU usage through `top`.
When the scripts start, the server runs with around 30% CPU usage and the client around 40-50%. After a while you'll see the client slowly climbing (reaching 60% and beyond), and if you wait long enough it will hit 200%.
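For readers who don't want to open the attachment, the `client-rx-window` mode boils down to roughly the following sketch; the channel name, server address, and window duration here are assumptions rather than the exact values from app.zip:

```js
const Rx = require('rxjs/Rx');
const io = require('socket.io-client');

const channelName = 'data';                 // assumed channel name
const socket = io('http://localhost:3000'); // assumed server address

// Push every incoming socket.io message into an Observable ...
const observable = new Rx.Observable(observer => {
  socket.on(channelName, values => observer.next(values));
});

// ... and consume it through windowTime(); this is the variant whose
// CPU usage slowly climbs on the Pi.
observable
  .windowTime(1000)
  .subscribe(() => { /* do nothing */ });
```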
Just to prove that the problem really comes from `.windowTime()`, the other client modes are there as well. The `client-empty` mode doesn't use RxJS at all; it just receives the data from socket.io and calls an empty lambda: `socket.on(channelName, () => { });`. The `client-rx` mode subscribes directly to the observable with an empty lambda, `observable.subscribe(values => { /* do nothing */ });`, without putting `.windowTime()` in between.

I also tested the same scripts on Ubuntu 15 inside a VMware image to check whether the problem really comes from the library, and guess what: there the problem does not occur. 😟

So even though I cannot reproduce this problem on different hardware (like a desktop PC), I want to inform you about this issue. Maybe someone has a clue why using `.windowTime()` leads to such problems when the `observer.next()` call is triggered by an external event like incoming data through socket.io or socketcan.

Attachment: app.zip