Memory leak going from 3.2.2 to 3.3.0 #612
Thanks for reporting this. Off the top of my head I don't know what could cause this, though we were investigating a (not yet fixed) leak in redis-rb. The change from 3.2.2 to 3.3.0 was rather big. We're investigating that.
Cool. Let me know if there's anything I can do to help. It's easy enough for me to redeploy this test server and see the issue if you need it. :-)
I can confirm we're experiencing the same issue after upgrading to 3.3.0. We are using Sidekiq as well, and although it is not heavily loaded, its memory usage climbs well above what we usually see. I just rolled back to 3.2.2 and can confirm in a couple of hours whether the issue really goes away.
I'm not a Sidekiq user; can you gist a runnable, self-contained example so I can reproduce it?
@kamen-hursev, @smeyfroi: Are you using redis-rb in combination with hiredis-rb or without?
I'm not using hiredis.
That confirms my suspicion. I ran a very tiny script through memory_profiler: gist. If possible, can either of you run with hiredis?
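The gist itself isn't reproduced in this thread, but a minimal script along these lines is enough for this kind of check. This is only a sketch, assuming a local Redis server and the memory_profiler gem; the key name and iteration count are arbitrary:

```ruby
# Hypothetical stand-in for the gist above: measure allocations made by
# redis-rb (pure-Ruby driver by default) while issuing many commands.
require "redis"
require "memory_profiler"

redis = Redis.new # defaults to localhost:6379

report = MemoryProfiler.report do
  10_000.times { redis.set("leak-check", "x") }
end

# Retained objects in the report are allocations that outlive the block,
# i.e. candidates for a leak.
report.pretty_print
```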
I won't be able to test that in production.
No problem then. I don't expect you to test in production. :)
Hi, I can test that out on my test server. It might be morning (UK time) though.
@smeyfroi Thanks!
OK, I just redeployed with redis 3.3.0 and hiredis 0.6.1. The graphs look good so far, but I'll let it run for a while and report back. Is hiredis recommended underneath redis-rb for performance? (I never understood this part of the ecosystem.)
I don't have benchmarks at hand, but yeah, performance is the one reason you might want to use hiredis.
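For anyone following along, opting into hiredis is a small change once the gem is in the bundle. A sketch, assuming redis-rb 3.x (where the connection driver is selected via the :driver option) and the hiredis gem installed:

```ruby
# Gemfile needs both gems:
#   gem "redis"
#   gem "hiredis"
require "redis"

# Use the hiredis C-based connection driver instead of the default
# pure-Ruby one (the code path implicated in this leak).
redis = Redis.new(driver: :hiredis)

redis.ping # => "PONG" if the connection works
```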
Great! Well done tracking that down.
Quick note to say that my test server running redis 3.3.0 and hiredis 0.6.1 looks completely normal at the end of the day here. I guess that helps confirm your forthcoming fix!
Hello, any update on this issue?
@fertobar If you use hiredis-rb you should be fine. This has been lying around for far too long already :/
@badboy Is there any news regarding the memory leak in 3.3.x? Thanks!
I finally merged the fix now, though I can't release the gem on rubygems right now. I'll push it out once I get access.
So, is this fixed in release v3.3.1?
Yes, it should be.
Thanks a lot, much appreciated!
If someone can confirm this, it would be appreciated. If anything else comes up, please report back; until then I'm going to close this ticket.
Looks better, thanks!
Version 3.3.1 resolves the issue, thanks! (Memory graphs attached for comparison: v3.3.0 vs. v3.3.1.)
Our app uses Sidekiq Enterprise (latest version) with the redis-rb driver. It also accesses Redis directly via redis-rb, using Redis as a datastore.
After updating gems last week, I saw the active memory on the Rails server increase quickly and eventually kill the box.
See attached: the section before the big dip (redeploy) is what happens with redis 3.3.0, where the green [active memory] line eventually maxes out and crashes the server. The section after the redeploy is what happens with redis 3.2.2. The 3.2.2 'active memory' line tends to level out over time roughly at the level where I've taken this screenshot.
Here's another screenshot that shows the difference a bit more clearly. Here, the sections up to a redeploy early on 22/4 are 'normal', with active memory sitting reasonably steady around 1.17GB. Then, after the update to redis-rb, you can see the green 'active memory' line climb quickly until catastrophic failure, when the kernel kills the process and the cycle starts again.
I think this is due to the work that Sidekiq does polling work queues in the background, because the memory-chomping continued even after I'd stopped most of our Sidekiq workers from doing any work by putting an early `return` in their `perform` methods. So this effect may be visible with Sidekiq just polling in the background. We also use Sidekiq Enterprise's unique jobs functionality, which polls Redis in the background. I'm not sure how to help debug this, but I want to raise it because it's probably an important issue, unless I'm doing something really weird in my app (which is entirely possible).
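For illustration, the "stop the workers but keep the polling" change looked roughly like this; the worker name and body here are made up, not the app's actual code:

```ruby
require "sidekiq"

# Hypothetical worker showing the early-return trick described above:
# Sidekiq still polls Redis and fetches the job, but no application
# work is done, yet memory still grew on redis-rb 3.3.0.
class ExampleWorker
  include Sidekiq::Worker

  def perform(*_args)
    return # short-circuit while investigating the leak
    # ...original job logic would normally go here...
  end
end
```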
Downgrading to 3.2.2 definitely fixes the issue with no other changes (I'm watching the graphs now: they look completely normal again).
Are you aware of any changes that may have introduced a problem like this? Can I help you debug it?