FFI client can spawn tasks faster than they can be executed, leading to memory build-up #135
Hey guys!

We are actively using the C++ bindings with your library and are very happy with it. When we recently expanded our software to communicate with many devices at once, we noticed a significant steady increase in memory usage until the software crashes. I ran our binary with the valgrind tool and got this as part of the result: a significant amount of memory is still occupied by the rodbus library although it is not needed anymore. To me it looks like the tokio runtime tasks never get deleted.

At the time of testing, we sent about 5000 Modbus requests per second. We also pass some arguments into the ModbusCallbacks; as those are just pointers or ints, this shouldn't be the cause of the problem. Other than that, I don't see anything we are doing differently from the examples. Our timeout is set to 1 second and max_queued_requests is set to 10.

Is this an issue on our end, or does the rodbus library need to be adjusted? Thank you!

Comments
We already found that when a modbus command function (like `read_input_registers()`) is called on the client while it is still in the connecting state, the modbus callback doesn't get deleted. Also, in our load test we have seen that thousands of requests and callbacks are created without being deleted, even when we are making sure the clients are in the connected state.
@xlukem Thanks for this report. I'll have a look and see what I can come up with.
Can you confirm that you're using the latest release (1.3.1) and also tell me what OS you're using?
Yes, I can confirm that we are using the 1.3.1 release in a Debian-based Docker container on the Torizon OS platform. We are using the C++ bindings.
Good. Should be easy for me to replicate and figure out.
> We already found that when a modbus command function (like `read_input_registers()`) is called on the client while it is still in the connecting state, the modbus callback doesn't get deleted.

Inspecting the code, I do see that when connecting, all queue operations are deferred until after the connect succeeds or fails. With the statement about the callback not getting deleted, do you mean "the callback doesn't get called right away", or do you literally mean that the callback object is never destroyed?

I am going to do a release that makes requests fail while the connect operation is processing to see how this changes things.
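To illustrate the direction, here is a minimal sketch (invented names, not the actual rodbus internals) of failing a request immediately while connecting, so the callback is invoked and dropped right away rather than deferred:

```rust
use std::sync::{Arc, Mutex};

#[derive(Clone, Copy, PartialEq)]
enum ChannelState {
    Connecting,
    Connected,
}

#[derive(Debug)]
enum RequestError {
    NoConnection,
}

struct Client {
    state: Arc<Mutex<ChannelState>>,
}

impl Client {
    fn read_input_registers<F>(&self, callback: F)
    where
        F: FnOnce(Result<Vec<u16>, RequestError>),
    {
        // Fail fast: invoke (and drop) the callback immediately instead of
        // deferring the request until the connect attempt resolves.
        if *self.state.lock().unwrap() != ChannelState::Connected {
            callback(Err(RequestError::NoConnection));
            return;
        }
        // ...otherwise enqueue the request on the bounded request queue...
    }
}

fn main() {
    let client = Client {
        state: Arc::new(Mutex::new(ChannelState::Connecting)),
    };
    client.read_input_registers(|r| println!("result: {:?}", r));
}
```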
Yes, I have now seen that this part of the queue does work properly. I think our problem might be limited to the scenario under load: I would assume that more modbus commands get created than our device is able to send. It doesn't look like any changes to the max_queued_requests setting make a difference.
It's a bit complicated. There are actually 2 queues:

1) the bounded request queue that the client task services (sized by max_queued_requests), and
2) the tasks the FFI layer spawns for each request, which wait for room in 1).
That said, I think the issue is that we're spawning requests faster than the tasks are being failed. I believe this is because the client task is currently not servicing 1) during a connection retry delay.

May I ask what you have the connection retry delays set to?
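A self-contained sketch of that failure mode in plain tokio (assumed structure, not the actual rodbus code): each caller-side spawn parks on a bounded channel, so if the consumer stalls, the parked tasks accumulate without bound:

```rust
use std::time::Duration;
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Bounded request channel, analogous to max_queued_requests = 10.
    let (tx, mut rx) = mpsc::channel::<u32>(10);

    // The FFI side: issue requests much faster than they are serviced.
    for i in 0..1_000u32 {
        let tx = tx.clone();
        // `send().await` parks the task until capacity frees up, so every
        // spawn keeps its task (and request) alive while it waits.
        tokio::spawn(async move {
            let _ = tx.send(i).await;
        });
        // A fail-fast alternative bounds memory at the channel capacity:
        // if tx.try_send(i).is_err() { /* report queue-full to the caller */ }
    }
    drop(tx); // only the spawned senders remain

    // The client task drains slowly, or not at all during a retry delay.
    let mut served = 0u32;
    while let Some(_req) = rx.recv().await {
        tokio::time::sleep(Duration::from_millis(1)).await;
        served += 1;
    }
    println!("served {served} requests");
}
```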
Yeah, that might be it. In the worst case, we will just have to make sure not to overload the queue. I use the standard values for the connection retry delays.
I once tried setting them both to 100ms with no notable difference.
Do you: A) schedule your polls on some timer in the main C++ thread, or B) schedule the next poll in the callback from the previous poll?

I guess if the network is slow, the scheduling would pile up with the way the bindings currently work. I believe I'd like to change this so that the async queue also respects the max_queued_requests limit.
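The difference between the two strategies is the key point; a minimal sketch in plain tokio (not the bindings' API):

```rust
use std::time::Duration;
use tokio::time::{interval, sleep};

// Stand-in for one Modbus poll; a slow network makes this take longer.
async fn poll_once() {
    sleep(Duration::from_millis(250)).await;
}

// (A) Fixed-rate timer: nothing stops new polls from outpacing completions,
// so on a slow network the outstanding requests pile up.
async fn strategy_a(polls: u32) {
    let mut ticker = interval(Duration::from_millis(100));
    for _ in 0..polls {
        ticker.tick().await;
        tokio::spawn(poll_once()); // fire-and-forget
    }
}

// (B) Chain on completion: the next poll is only scheduled once the previous
// one has finished, so at most one request is ever outstanding.
async fn strategy_b(polls: u32) {
    for _ in 0..polls {
        poll_once().await;
        sleep(Duration::from_millis(100)).await;
    }
}

#[tokio::main]
async fn main() {
    strategy_a(10).await;
    strategy_b(10).await;
}
```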
I think this also leads to an API that I'd like to add: one in which the library schedules the polls at a fixed frequency on your behalf whenever you are connected.
@xlukem Can you try out this milestone release? https://github.com/stepfunc/rodbus/releases/tag/1.4.0-M1

All of the methods on the client will now instantly fail with an error while the channel is still connecting.
Yes, that's what we did, and that's why the queue piled up pretty quickly. We have already tried out the newest release and it works like a charm. Thank you very much!
Yes. Requests are now immediately failed when the client is connecting. I will adjust the release notes to explain this behavior change as well.
Fixed in #136, which will be incorporated into the 1.4.0 release.
Great work! We really appreciate your support for the library. We encountered one last thing we would like to discuss: with the current implementation we never know how full the queue is, and we want to make sure that certain messages are guaranteed to be executed at a certain point in time. Being able to look up the queue length/status would enable us to dynamically fill the queue and always keep room for new messages without provoking errors. Is this a feature that could fit into the library?
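For illustration, this is the kind of status lookup we have in mind; tokio's bounded sender already tracks this internally (the wrapper names here are just a sketch, not an existing API):

```rust
use tokio::sync::mpsc;

// Stand-in for a queued Modbus request.
struct Request;

struct QueueHandle {
    tx: mpsc::Sender<Request>,
}

impl QueueHandle {
    /// How many more requests can be queued right now.
    fn free_slots(&self) -> usize {
        self.tx.capacity()
    }

    /// The configured queue size (i.e. max_queued_requests).
    fn max_slots(&self) -> usize {
        self.tx.max_capacity()
    }
}

#[tokio::main]
async fn main() {
    let (tx, _rx) = mpsc::channel::<Request>(10);
    let queue = QueueHandle { tx };
    queue.tx.send(Request).await.unwrap();
    // Prints "free: 9/10" while one request sits in the queue.
    println!("free: {}/{}", queue.free_slots(), queue.max_slots());
}
```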
I don't believe so. Another option would be to make the operations blocking: https://docs.rs/tokio/latest/tokio/sync/mpsc/struct.Sender.html#method.blocking_send

The problem with that is you could never call those methods from a callback, because you can't call them from an async context. I still think that the best thing to do in a following release is to have an API where you let the library schedule the polls on your behalf.
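For reference, a minimal sketch of that trade-off using plain tokio (not the bindings' code): `blocking_send` gives backpressure from an ordinary thread, but it panics if called from inside the runtime, which is exactly where the bindings' callbacks run:

```rust
use tokio::sync::mpsc;

fn main() {
    let rt = tokio::runtime::Runtime::new().unwrap();
    let (tx, mut rx) = mpsc::channel::<u32>(2);

    // Fine from a plain thread: blocks (backpressure) when the queue is full.
    let producer = std::thread::spawn(move || {
        for i in 0..10 {
            tx.blocking_send(i).unwrap();
        }
        // tx is dropped here, which ends the receive loop below.
    });

    // From async code, e.g. inside a request callback, `blocking_send`
    // panics; only `send().await` or `try_send` are usable there.
    rt.block_on(async move {
        while let Some(v) = rx.recv().await {
            println!("got {v}");
        }
    });

    producer.join().unwrap();
}
```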
@xlukem FYI these changes are now in the 1.4.0 stable release.