Pub / Sub Subscriber: CPU Usage eventually spikes to 100% #4600
@anorth2 Would you mind trying with the patched wheel?
Testing it out now; I'll edit this comment once I have it deployed. @dhermes Having issues with the wheel, getting this:
Fixed by #4642.
Hi @dhermes, I am still having problems with high CPU usage; after around one hour, CPU usage is at 100%. I am using google-api-core (0.1.3).
@arindamchoudhury Can you confirm that your Python shell is in the same environment where those packages are installed? Also, could you share some code so I might reproduce the CPU spike?
I have just now run this with …
Hi @dhermes, if I install …, it works fine. Publishing also causes high CPU usage.
As suggested [1], this actually stopped the spinlock bug. [1]: googleapis/google-cloud-python#4600 (comment)
I filed grpc/grpc#13906 to note the difference in behavior between the source dist and the binary wheels. (Thanks @arindamchoudhury!) Also note that this issue must remain open since the spinlock fix has been reverted.
I posted this comment grpc/grpc#13906 (comment), but it might be worth posting here directly as well: our Linux binary wheels target the manylinux1 platform, which forces gRPC onto an older polling code path. You can try forcing a build from source instead.
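A quick way to check whether the running platform exposes epoll at all. Note that this is only a sanity check on the host: it says nothing about how a particular grpcio wheel was compiled, which is the actual problem described above.

```python
import select

# The stdlib exposes select.epoll only on Linux kernels that support epoll.
# A manylinux1-built grpcio wheel could still lack epoll support even on an
# epoll-capable host, because the choice was baked in at compile time.
print(hasattr(select, "epoll"))  # True on Linux, False on macOS/Windows
```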
@mehrdada Thanks for stopping by, I really appreciate it! The real issue this is tracking is what grpc/grpc#13665 tried to fix, but it's all valuable information.
I'm seeing constant 100% usage with …
@explicitcall Is that when installing …
@dhermes just a plain …
I'm also confused if I should still install from source when this issue is closed already grpc/grpc#13906 |
Yes, you should install from source.
Thanks for the clarification. Is there an issue to track or a certain grpc release to wait for until this wheel/source issue is resolved after grpc/grpc#13906 is closed? |
/cc @mehrdada Can you weigh in? |
Is there a known old version that is not affected by this bug or alternatively a Dockerfile with a workaround? |
@explicitcall @dhermes Please note that this is a limitation enforced by PyPI: we simply have no way to publish wheels built against anything newer than the manylinux1 platform. That said, it is my understanding that the CPU issue was resolved (or at least a resolution was attempted) in 1.8.4.
@mehrdada The old … As for the fix that works with …: @jonparrott, is it worth discussing alternate hosting for Ubuntu wheels (and other Linux platforms)?
That experience is really bad, but it's worth us discussing some alternatives here, as the sentence …
@dhermes Agreed. To be clear, I was addressing the source/binary distinction. My understanding is that a bug fix was attempted in 1.8.4 (including the poll code path). Are you still encountering the same bug on 1.8.4? If yes, please ping the relevant issue on the gRPC tracker again.
@jonparrott We are actively trying to improve the situation on the binary side (grpc/grpc#14041), but as far as PyPI packages are concerned, some of those limitations are imposed by PyPI. Perhaps the Python community should work on a newer manylinux standard.
Moving this to an internal thread for now. :) Please check email?
Shared binaries are like the waltzing bear: it's not how well he waltzes, but that he waltzes at all. We shouldn't be surprised that the …
According to @mehrdada and team, this appears to be resolved by grpcio 1.8.4. Closing, but if it reappears we can ofc re-open. |
Yes, please file an issue on the gRPC issue tracker (or reopen the existing one) with a new repro if you encounter this again. FYI, since it is relevant to the previous discussions on this thread, we modified the epoll1 I/O manager code path in core to rely on epoll_create followed by fcntl when compiled on manylinux1 (instead of epoll_create1), so our binary packages should have epoll support enabled by default from 1.9.0rc1. That should be helpful too.
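The epoll_create-plus-fcntl pattern mentioned above is the standard way to emulate epoll_create1(EPOLL_CLOEXEC) when building against headers too old to declare epoll_create1. A minimal sketch of the fcntl half of that emulation, demonstrated on an ordinary pipe descriptor so it runs on any Unix (this is an illustration, not gRPC's actual C code):

```python
import fcntl
import os

# epoll_create1(EPOLL_CLOEXEC) ~= epoll_create() followed by
# fcntl(fd, F_SETFD, FD_CLOEXEC). Shown here on a pipe fd:
r, w = os.pipe()
flags = fcntl.fcntl(r, fcntl.F_GETFD)
fcntl.fcntl(r, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)  # close fd across exec()
assert fcntl.fcntl(r, fcntl.F_GETFD) & fcntl.FD_CLOEXEC
os.close(r)
os.close(w)
```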
Due to CPU usage eventually spiking to 100% (see googleapis/google-cloud-python#4600):
- apply client.projects().subscriptions().pull() to pull messages
- change the message object to a message dictionary (hmessage.py)
- update message-handling statements
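The commit message above describes sidestepping the streaming subscriber by calling the REST pull() method of a discovery-based client. A hypothetical sketch of what that looks like (field names are from the Pub/Sub v1 REST API; the client construction is commented out because it needs credentials, and the project/subscription names are placeholders):

```python
def make_pull_body(max_messages=10):
    # Request body for projects.subscriptions.pull in the Pub/Sub v1 REST API.
    return {"returnImmediately": False, "maxMessages": max_messages}

# Hypothetical usage with google-api-python-client (requires credentials):
# from googleapiclient import discovery
# client = discovery.build("pubsub", "v1")
# resp = client.projects().subscriptions().pull(
#     subscription="projects/MY_PROJECT/subscriptions/MY_SUB",
#     body=make_pull_body(),
# ).execute()
# for received in resp.get("receivedMessages", []):
#     ...  # handle the message, then acknowledge it
```

Because each pull is a plain blocking HTTP request, this avoids the gRPC streaming code path where the spinlock lives.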
This issue is a distillation of reports from many other issues. Large thanks to @anorth2 and @dmontag for reporting and helping refine the issue.
Issues that have been partially resolved (except for the CPU spike) are being collapsed into this one:
Core issue: there is a spinlock bug in gRPC, present in all recent versions (1.6.x, 1.7.x, 1.8.1). A "bandaid" fix exists but has not been merged (as of noon Pacific on December 15, 2017). Update: the fix was rolled back, but will be rolled forward (grpc/grpc#13918).
Potential workaround: compile grpcio from source (e.g.) while including the bandaid fix, or just use the 64-bit manylinux wheel that I already created. (I may be open to creating Mac OS X wheels; not sure about Windows.)
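For readers unfamiliar with the failure mode: a spinlock-style wait re-checks a condition in a tight loop instead of blocking in the kernel, which is why an otherwise idle subscriber pegs a core at 100%. An illustrative toy contrasting the two (this is not gRPC's actual code):

```python
import threading
import time

def spin_wait(event, timeout):
    """Busy-wait: hammer the flag in a loop, burning ~100% of one core."""
    deadline = time.monotonic() + timeout
    iterations = 0
    while not event.is_set() and time.monotonic() < deadline:
        iterations += 1
    return iterations

def blocking_wait(event, timeout):
    """Blocking wait: the thread sleeps in the kernel until signalled (~0% CPU)."""
    return event.wait(timeout)

never_set = threading.Event()
print(spin_wait(never_set, 0.05) > 1000)  # the busy loop iterates thousands of times
print(blocking_wait(never_set, 0.05))     # sleeps quietly, then returns False
```

Both calls wait 50 ms for an event that never fires; only the first one keeps the CPU busy the whole time.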