paho Rust slower than paho Python? #63
Comments
Ouch! Yes, I'm with you. The Rust version should be more efficient. I really do want to get a set of standard measurements for the Paho libraries so that we can get side-by-side comparisons of the performance and requirements of each: messages per second, memory use, CPU use, etc. The one performance issue that I'm aware of is that there is more memory copying than might be necessary on the border between Rust and the underlying C library. Sometimes a buffer is copied in order to ensure Rust lifetime guarantees, but there might be places to improve on this. Still, I wouldn't imagine that it degrades performance to what you report. The only other thing I can think of is that some bugs have been filed against the C library recently reporting that it is "spinning" and using up a lot of CPU in some instances. That could be related.
According to the flamegraph, a lot of time is spent in WebSocket_getch().
Ah. (Sorry, I didn't have much time this morning to dig into the graph.)
I need to look at the WebSocket implementation, or someone does. It works to the extent that basic functionality operates, but there are issues that need addressing. Also remember that the Python implementation has no disk persistence. You can turn that off in the C library if you want.
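For anyone reproducing the comparison, here is a minimal sketch of disabling disk persistence when creating the Rust client. It assumes the paho-mqtt crate's CreateOptionsBuilder and PersistenceType::None; the broker URI is a placeholder and the exact builder methods may differ slightly by crate version.

```rust
// Sketch: create an async client with disk persistence turned off, so the
// setup matches the Python client, which has no persistence at all.
// Assumes the paho-mqtt crate's builder API; the URI is a placeholder.
use paho_mqtt as mqtt;

fn main() -> Result<(), mqtt::Error> {
    let create_opts = mqtt::CreateOptionsBuilder::new()
        .server_uri("tcp://localhost:1883")        // placeholder broker
        .persistence(mqtt::PersistenceType::None)  // no file persistence
        .finalize();

    let _cli = mqtt::AsyncClient::new(create_opts)?;
    Ok(())
}
```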
The code is not actually using websockets. Looks like WebSocket_getch() is on the read path for plain TCP connections too.
Ok. WebSocket_getch() is where we wait for the next incoming packet to be delivered (the first byte of the MQTT packet). WebSocket_getdata() is where the rest of the packet will be read in. So I'd be surprised if the getch() call is using a lot of CPU time. Elapsed time?
Should be CPU time. Created using cargo-flamegraph.
I remeasured using the https://github.com/eclipse/paho.mqtt.rust/blob/master/examples/async_subscribe.rs example, and it is faster, almost as fast as the Python PyPy version.
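For reference, the receive loop in that example looks roughly like the following trimmed sketch. This is not the exact example code; the broker URI, topic, and QoS are placeholders.

```rust
// Trimmed sketch of an async subscriber, loosely following
// examples/async_subscribe.rs. URI, topic, and QoS are placeholders.
use futures::{executor::block_on, stream::StreamExt};
use paho_mqtt as mqtt;

fn main() -> Result<(), mqtt::Error> {
    let mut cli = mqtt::AsyncClient::new("tcp://localhost:1883")?;

    // Incoming messages arrive on an async stream; get it before connecting.
    let mut strm = cli.get_stream(25);

    block_on(async {
        cli.connect(mqtt::ConnectOptions::new()).await?;
        cli.subscribe("some/topic", 1).await?;

        // A `None` item on the stream signals a lost connection.
        while let Some(msg_opt) = strm.next().await {
            match msg_opt {
                Some(msg) => println!("{}", msg),
                None => {
                    eprintln!("Lost connection.");
                    break;
                }
            }
        }
        Ok::<(), mqtt::Error>(())
    })?;
    Ok(())
}
```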
There are also some logs triggered by the C lib that it may be possible to disable (21% of CPU time with StackTrace enabled).
A build of the C library with its PAHO_HIGH_PERFORMANCE option (which compiles out the tracing) would probably help.
Agreed. I pushed out v0.7 based on what had been sitting in the repo for months waiting on the upstream bug fixes. But I'm immediately jumping on the next release and will start testing this. I was assuming I would just enable this in the build. I didn't imagine not wanting to use it, but I suppose I can add an inverted feature to turn it off, just in case.
This is in the develop branch now.
Released in v0.8.
I have an application implemented in both Rust and Python, using the paho MQTT libraries for each language.
The app is receiving around 800 MQTT messages per second and then triggering HTTP calls for a few of the messages based on some simple parsing.
The Rust version is using the futures API with tokio 0.2. The Python version is using PyPy3.6 v7.3.0.
For some reason the Rust version is using 50% more CPU than the Python version (running on an AWS T3 instance). This was a bit surprising to me, as I expected the Rust version to consume fewer resources.
Attachment: flamegraph.svg.gz