tokio::sync::Lock leaks memory #2237
Comments
I've ported the repro to Tokio 0.2, and I cannot reproduce the leak. Valgrind: [output omitted]
Running the 0.1 repro (blue line is RSS): [graph omitted] AFAICT, this issue only exists in 0.1. I suspect we can fix it fairly easily by backporting the 0.2 impl to 0.1.
I have a branch (https://github.com/tokio-rs/tokio/tree/eliza/0.2-semaphore) that backports the 0.2 semaphore implementation to 0.1.
I'm surprised that the RSS graphs still look similar to the ones for the unpatched 0.1 repro, but that may be due to […]. @carllerche, any thoughts? Shall I go ahead and open a PR against v0.1.x to backport the 0.2 semaphore impl?
An update on this: we believe that an issue still exists in the `tokio::sync::Semaphore`; the linkerd2-proxy commit message below describes it.
As described in tokio-rs/tokio#2237, the `tokio::sync::Semaphore` can hold unbounded memory, especially when the semaphore is contended and consumers drop interest. Unfortunately, this use case is common in the proxy, especially when a destination service is unavailable and the proxy is timing out requests.

This change reimplements the Lock middleware without using `tokio::sync::Semaphore`. The new implementation is in some ways more naive and inefficient, but it appears better suited to the proxy's needs. Specifically, waiters are stored in a LIFO stack, which optimizes for minimizing latency. Under certain high-load scenarios, this Lock could be forced to grow its waiters set without cleaning up expired waiters. If this becomes a more serious concern, we could change the implementation to use a FIFO queue of waiters.
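For illustration, here is a minimal sketch of the LIFO-waiter approach the commit message describes. The names (`StackLock`, `Acquire`, `Guard`) and the details are assumptions for this sketch, not the actual linkerd2-proxy code: pending tasks' wakers are pushed onto a `Vec` used as a stack, and a release naively wakes every waiter so that a waker whose task has since been dropped (e.g. by a timeout) cannot stall the lock.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

// Shared lock state: a flag plus a LIFO stack of pending wakers.
struct State {
    locked: bool,
    waiters: Vec<Waker>, // pushed in arrival order; drained on release
}

#[derive(Clone)]
pub struct StackLock {
    state: Arc<Mutex<State>>,
}

pub struct Guard {
    state: Arc<Mutex<State>>,
}

impl StackLock {
    pub fn new() -> Self {
        StackLock {
            state: Arc::new(Mutex::new(State { locked: false, waiters: Vec::new() })),
        }
    }

    pub fn acquire(&self) -> Acquire {
        Acquire { lock: self.clone() }
    }
}

pub struct Acquire {
    lock: StackLock,
}

impl Future for Acquire {
    type Output = Guard;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Guard> {
        let mut s = self.lock.state.lock().unwrap();
        if !s.locked {
            s.locked = true;
            Poll::Ready(Guard { state: self.lock.state.clone() })
        } else {
            // Contended: park this task's waker on top of the stack. If this
            // future is dropped (e.g. by a timeout), the stale waker stays
            // behind, but it is drained on the next release, so the waiters
            // set cannot grow unboundedly from dropped interest alone.
            s.waiters.push(cx.waker().clone());
            Poll::Pending
        }
    }
}

impl Drop for Guard {
    fn drop(&mut self) {
        let waiters = {
            let mut s = self.state.lock().unwrap();
            s.locked = false;
            std::mem::take(&mut s.waiters)
        };
        // Naively wake everyone, most recent first (LIFO). Live waiters race
        // to re-acquire and the losers re-register; wakers whose tasks were
        // dropped are simply no-ops.
        for w in waiters.into_iter().rev() {
            w.wake();
        }
    }
}
```

Because the stack is drained on every release, dropped waiters cannot pin memory indefinitely; the cost is a thundering herd under contention, which matches the "naive and inefficient" trade-off the commit message accepts.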
Version: 0.1.22
Platform:
Subcrates: tokio_sync

Description
Under concurrency, `tokio_sync::Lock` leaks memory when acquired locks are dropped. We have a reliable repro for this. We believe that this leak may be present in 0.2.x versions as well, though; @hawkw is investigating.
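The repro itself is not included above, but its general shape follows from the description: one task holds the lock while many contenders time out and drop their acquire futures while still queued. The following is a rough sketch of that pattern (assumed code, not the actual repro), written against tokio 0.1 / futures 0.1 and tokio-sync 0.1's `Lock::poll_lock` API; under 0.1, RSS grows without bound as it runs.

```rust
// Cargo.toml (assumed): futures = "0.1", tokio = "0.1", tokio-sync = "0.1"
use futures::{Async, Future, Poll, Stream};
use std::time::{Duration, Instant};
use tokio_sync::lock::Lock;

// A future that tries to take the lock and releases it immediately.
struct Acquire(Lock<()>);

impl Future for Acquire {
    type Item = ();
    type Error = ();

    fn poll(&mut self) -> Poll<(), ()> {
        match self.0.poll_lock() {
            Async::Ready(_guard) => Ok(Async::Ready(())), // guard dropped here
            Async::NotReady => Ok(Async::NotReady),
        }
    }
}

fn main() {
    let lock = Lock::new(());

    tokio::run(futures::lazy(move || {
        // One task takes the lock and never releases it, so every other
        // acquirer queues as a waiter.
        let mut held = lock.clone();
        tokio::spawn(futures::future::poll_fn(move || -> Poll<(), ()> {
            match held.poll_lock() {
                Async::Ready(guard) => {
                    std::mem::forget(guard); // hold the lock forever (demo only)
                    Ok(Async::Ready(()))
                }
                Async::NotReady => Ok(Async::NotReady),
            }
        }));

        // Spawn a steady stream of contenders whose acquire futures are
        // dropped by a timeout while still queued; the dropped waiters'
        // state is what accumulates under 0.1.
        tokio::spawn(
            tokio::timer::Interval::new(Instant::now(), Duration::from_micros(100))
                .map_err(|_| ())
                .for_each(move |_| {
                    let acquire = Acquire(lock.clone());
                    tokio::spawn(
                        tokio::timer::Timeout::new(acquire, Duration::from_millis(1))
                            .map_err(|_| ()), // the timeout drops the queued waiter
                    );
                    Ok(())
                }),
        );
        Ok::<(), ()>(())
    }));
}
```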