
Limit number of connections #14

Closed
mrusme opened this issue Dec 28, 2021 · 11 comments
Comments

@mrusme
Owner

mrusme commented Dec 28, 2021

It has been reported that the number of connections IPFS creates makes it tricky to run Superhighway84, especially on older hardware.

It was mentioned that this is apparently a known issue in IPFS:

The libp2p team is currently refactoring the "dialer" system in a way that'll make it easy for us to configure a maximum number of outbound connections. Unfortunately, there's really nothing we can do about inbound connections except kill them as soon as we can. On the other hand, having too many connections usually comes from dialing.

I couldn't find out what the current status of the libp2p refactoring is, though. However, people mention that disabling QUIC in the IPFS repository has helped a bit with the issue, e.g.:

... ipfs init --profile server, and (3) I removed the lines of quic under Addresses.Swarm in ~/.ipfs/config.

and

I just followed the advice here of disabling QUIC
ipfs config --json Swarm.Transports.Network.QUIC false
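
The tweak from the first quote, removing the quic listeners, would look roughly like this on a stock repository (the multiaddresses below are the usual defaults and may differ on your machine, so treat this as a sketch):

# Keep only the TCP listeners, i.e. drop the /udp/4001/quic entries from Addresses.Swarm
ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4001", "/ip6/::/tcp/4001"]'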

Another idea would be to constrain Superhighway84 with a set of iptables rules that artificially limit its available bandwidth and number of connections. I haven't investigated whether that really works, but other users have apparently done that for IPFS.
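
To illustrate that idea (untested, and the dedicated user name is purely an assumption about how one would run it), the connection-limiting part could look something like this; actual bandwidth shaping would additionally need tc:

# Assuming Superhighway84 runs as its own user "sh84", throttle how fast it may open new outbound TCP connections
iptables -A OUTPUT -m owner --uid-owner sh84 -p tcp -m conntrack --ctstate NEW -m limit --limit 30/minute --limit-burst 10 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner sh84 -p tcp -m conntrack --ctstate NEW -j REJECT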

mrusme added the enhancement label Dec 28, 2021
@Jay4242

Jay4242 commented Dec 28, 2021

It's a pretty serious problem. Anything over 2,000 peers makes it unusable on some hosts.
Why does it seem to ignore the HighWater setting so much?
I have it set to 10, yet it climbs to 2,000.
This is also the only time I see #3 and other errors.
The OOM killer comes hunting it at that point.

@mrusme
Owner Author

mrusme commented Dec 28, 2021

Currently digging through the IPFS godocs to see if I can find anything. Right now I'm looking at Swarm.ConnMgr and trying to figure out whether for some reason "none" might be set for Superhighway84 - which I don't believe, though, as the default is "basic" and I don't remember having messed with it.
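
For reference, the connection manager settings can be read straight from the repo with the CLI; as far as I know the stock defaults are roughly the commented values below, but treat them as an assumption:

# Show the connection manager section of the repo config
ipfs config Swarm.ConnMgr
# Stock defaults (roughly): {"Type": "basic", "LowWater": 600, "HighWater": 900, "GracePeriod": "20s"}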

@mrusme
Owner Author

mrusme commented Dec 28, 2021

@Jay4242 just an idea, but can you try running the official IPFS daemon on the very same repo and see how it behaves over a longer period of time? E.g. whether the OOM killer also triggers at some point and how connections behave there.
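
Something like this should point the stock daemon at the same repository (the path is just a placeholder for wherever your Superhighway84 IPFS repo actually lives):

# Run the official daemon against the existing repo via IPFS_PATH
IPFS_PATH=~/path/to/superhighway84-ipfs-repo ipfs daemon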

@Jay4242

Jay4242 commented Dec 29, 2021

It did calm down a bit: there was an initial spike up to 2k, and if it survived that, it went lower, to ~200-600 peers.
When I run the ipfs daemon in that path and run ipfs swarm peers | wc -l in a loop, it does spike into the hundreds initially, then drops down to ~10-20.
Maybe a separate issue, but a screen re-draw key combination could be handy for clearing errors, although reading an article and exiting also clears the screen.
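
The peer-count loop is basically just something like:

# Print the number of connected swarm peers every 10 seconds
while true; do ipfs swarm peers | wc -l; sleep 10; done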

@mrusme
Owner Author

mrusme commented Dec 29, 2021

Interesting. I've been running 0.0.4 on a VPS for the whole day now and it neither crashed nor reported any "too many open files" issues.

Maybe a separate issue, but a screen re-draw key combination could be handy for clearing errors, although reading an article and exiting also clears the screen.

Yeah, unfortunately running tview's Application.Redraw() can lead to the whole application freezing, for reasons I haven't bothered to investigate yet.

mrusme mentioned this issue Dec 29, 2021
mrusme added a commit that referenced this issue Dec 29, 2021
@mrusme
Owner Author

mrusme commented Dec 29, 2021

Connections seem to be down to a little over 300 now. Maybe that's due to the latest change in master, or it's simply a time when not many peers are online.

[screenshot_2021-12-29-004829]

CPU and network usage are also very decent right now.

[screenshot_2021-12-29-004905]

@mrusme
Owner Author

mrusme commented Dec 29, 2021

@Jay4242 please try setting your IPFS repository to the lowpower profile and test with that:

ipfs config profile apply lowpower

See this for more info.
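
If I understand it correctly, the profile mainly tightens the connection manager and disables some background services; you can check what it actually set, and tighten further per key if needed (the values below are only an example, not a recommendation):

# Verify what the profile applied
ipfs config Swarm.ConnMgr
# Optionally tighten the limits further
ipfs config --json Swarm.ConnMgr.LowWater 20
ipfs config --json Swarm.ConnMgr.HighWater 40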

mrusme self-assigned this Dec 29, 2021
@Jay4242

Jay4242 commented Dec 29, 2021

Seems much better now. I did have it on lowpower before, even setting it lower. That's why I was so baffled that it was hitting as high as it was.
Thanks for the updates!

@mrusme
Owner Author

mrusme commented Dec 29, 2021

I will close this issue for now, but feel free to comment if things start getting worse again. I also talked to the folks over at ipfs on Matrix and it seems that upgrading IPFS from the currently used 0.9.1 version to the latest might also help with performance. However, since a few interfaces have changed, that would require me to update go-orbit-db, which is a bigger task.

mrusme closed this as completed Dec 29, 2021
@Winterhuman

@mrusme Btw, also check the value of GracePeriod: during that period the low and high water limits are ignored so that connections can build up; consider lowering it to reduce the amount of time in which new connections can form unchecked.
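
A minimal sketch of that, assuming the repo-level config is what applies here (the 10s value is only an example):

# Shorten the grace period during which new connections are exempt from trimming
ipfs config --json Swarm.ConnMgr.GracePeriod '"10s"'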
