Apparent memory leak of spawned tasks #38
If you modify Tokio to print when

Thanks for the report. I will try to look later today. My guess, w/o looking, is that it is a similar problem as the old semaphore had. If that is the case, it will need a similar rework as tokio-rs/tokio#2325

@carllerche do you think

Open PR here: tokio-rs/tokio#2509

I have been running the reproduction using the PR linked above, and memory is stable for me.

Fixed w/ the
As Tokio version 0.2.21 contains a fix in the broadcast channel that removes a memory leak in mini-redis, I don't think 0.2.20 should be considered the minimum supported version, even if the codebase compiles with that version. Refs: #38
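If the minimum supported version is bumped as suggested, the corresponding `Cargo.toml` dependency line might look like the sketch below (the exact line is an assumption; this issue does not show the manifest):

```toml
[dependencies]
# 0.2.21 is the first release containing the broadcast-channel fix
# referenced above; earlier 0.2.x versions can reintroduce the leak.
tokio = "0.2.21"
```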
I believe there is some form of memory leak, and this was discussed some in the Discord channel `#tokio-users` recently. This graph should illustrate it pretty well: the orange 6.2 MB is a single allocation by `tracing-subscriber`, the green are `RawTask`s that are allocated via `spawn`, and the blue are something in `broadcast`.

The workload is just running `while true; do target/release/cli get k; done` for a while (perhaps after setting a value for `k`, but I believe it doesn't actually matter).

As far as I can tell, the spawned tasks have actually completed. A different implementation (using `watch` instead of `broadcast`) fixes it. Another project using `broadcast` does not have a similar issue. My hunch is that there is some kind of reference cycle, maybe in part because the shutdown coordination is bidirectional, but I have not looked more closely yet.
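To make the reference-cycle hunch concrete, here is a minimal stdlib-only Rust sketch. The `Peer` type and `cycle_strong_counts` helper are illustrative inventions, not code from mini-redis or Tokio; the sketch only shows the general mechanism by which bidirectional strong handles keep both sides alive:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical sketch of the reference-cycle hunch: two halves of a
// bidirectional shutdown handshake each hold a strong handle to the
// other, so neither side's refcount can ever reach zero.
struct Peer {
    other: RefCell<Option<Rc<Peer>>>,
}

fn cycle_strong_counts() -> (usize, usize) {
    let a = Rc::new(Peer { other: RefCell::new(None) });
    let b = Rc::new(Peer { other: RefCell::new(None) });
    *a.other.borrow_mut() = Some(Rc::clone(&b));
    *b.other.borrow_mut() = Some(Rc::clone(&a));
    // Each Peer is kept alive by our local handle plus the other Peer's
    // handle; once the locals drop, the cross-references remain and leak.
    (Rc::strong_count(&a), Rc::strong_count(&b))
}

fn main() {
    assert_eq!(cycle_strong_counts(), (2, 2));
}
```

Breaking one direction of the cycle, for example by downgrading one handle to `std::rc::Weak` (or `std::sync::Weak` for `Arc`), lets the allocations drop when the last external handle goes away; that may be why the `watch`-based implementation mentioned above does not exhibit the leak.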