ConnectionManager peer HighWater configuration not honored (# peers spike up => OOM on VPS) #4718
Comments
Your expectations are pretty much correct. We close open connections at most once every 10 seconds, and only close connections that have been open longer than the grace period (20s), but you shouldn't be spiking up to ~200 peers in a couple of minutes (unless they're all connecting within a 20-second window...). Regardless, this is definitely a bug.
A couple more observations: I did manage to see the "endgame": after 8 hours it was at 1.2GB memory usage (RAM plus swap), but there were only 97 peers. So it seems that memory grows, but not in proportion to the number of peers (a leak?). I also tried running the server with
The current release has an issue where it:
This should be fixed in the next release (it has already been fixed in master), but we're trying to iron out a few bugs first.
I did a source install from master (0.4.14-dev). Memory usage decreased, though it is unclear at this point whether it grows indefinitely or not. However, I still see too many peers connected (oscillating between 100 and 115 at the moment).
After a couple of days running 0.4.14-dev, I can report the following:
Late to the party... The issue here is that we don't have any "MaxConns" hard limit. You were probably noticing a different memory leak.
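To illustrate the distinction: a hard "MaxConns" limit would refuse new connections up front, whereas the watermark-based manager admits them and only trims later. A minimal sketch of such a gate (the `connGate` type and its methods are hypothetical, not go-ipfs API):

```go
package main

import "fmt"

// connGate enforces a hard upper bound on simultaneous connections,
// unlike a watermark connection manager, which trims after the fact.
type connGate struct {
	slots chan struct{} // buffered channel used as a counting semaphore
}

func newConnGate(maxConns int) *connGate {
	return &connGate{slots: make(chan struct{}, maxConns)}
}

// tryAccept returns false when the cap is reached; the dial/accept
// would then be refused outright instead of admitted and trimmed later.
func (g *connGate) tryAccept() bool {
	select {
	case g.slots <- struct{}{}:
		return true
	default:
		return false
	}
}

// release frees a slot when a connection closes.
func (g *connGate) release() { <-g.slots }

func main() {
	g := newConnGate(2)
	fmt.Println(g.tryAccept()) // true
	fmt.Println(g.tryAccept()) // true
	fmt.Println(g.tryAccept()) // false: hard limit reached
	g.release()
	fmt.Println(g.tryAccept()) // true again after a slot frees up
}
```

With a gate like this the peer count can never exceed the cap, at the cost of refusing potentially useful connections outright.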
Actually, reading through this issue, it appears that it is really about other per-peer memory leaks. (Sorry for the noise.)
Version information:
go-ipfs version: 0.4.13-
Repo version: 6
System version: amd64/linux
Golang version: go1.9.2
Type:
Bug
Description:
Context: on my system, ipfs memory usage grows until the process is OOM-killed. (I run ipfs on a VPS (Linode) with 1 vCPU and 1GB of RAM.)
I tried all the suggestions I found in similar past issues, and stumbled upon the recommendation to limit the number of peers.
However, it seems that the connection manager's configuration switches are not honored.
I use this configuration:
I would expect the number of peers to be bounded by 20 (or 20 plus a small margin), but the peer count averages ~60 after a couple of minutes and spikes up to 200 some time later. I suspect it gets even higher, but at that point the instance becomes unresponsive and I cannot access it anymore. Later, when I manage to log in, I find that ipfs was OOM-killed.
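For reference, the connection manager is configured under `Swarm.ConnMgr` in the ipfs config file; a section of the following shape would match the ~20-peer expectation above (the exact values I used are not reproduced here; these are illustrative):

```json
"Swarm": {
  "ConnMgr": {
    "Type": "basic",
    "LowWater": 15,
    "HighWater": 20,
    "GracePeriod": "20s"
  }
}
```

It can be set with, e.g., `ipfs config --json Swarm.ConnMgr.HighWater 20` followed by a daemon restart.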