Rootless: slirp4netns slow initial input to container #4537
I think it depends on the MTU value we set for slirp4netns. If I try without the MTU setting, the initial slowness doesn't show, but then slirp4netns performs significantly worse. @AkihiroSuda FYI
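Newer Podman releases expose the slirp4netns MTU as a per-container network option, which makes this trade-off easy to experiment with. A hedged sketch (option syntax per the podman-run documentation; the image, port, and MTU value are illustrative):

```shell
# Run an iperf3 server in a rootless container with a lowered slirp4netns MTU
# (the slirp4netns default is 65520; 1500 matches typical Ethernet).
# Requires a Podman version that supports slirp4netns network options.
podman run --rm --network slirp4netns:mtu=1500 -p 5201:5201 \
    docker.io/library/alpine sh -c 'apk add --no-cache iperf3 && iperf3 -s'
```

Lowering the MTU avoids the initial stall at the cost of raw throughput, which matches the trade-off described above.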
If you'd like to reproduce these results, you can build a custom version of Podman where you skip the MTU setting.
Would there be a way to balance this for all users instead? Having it basically choke on high-load TCP for all users of Podman doesn't sound like a good default, especially for web servers in containers if they ever get hit by more than a few requests per second.
If that fixes the issue, we should add this to the Podman troubleshooting page.
This actually improves the situation, a lot. However, the weirdest bug in all of this: while that says I could modify it inside the namespace, my Alpine example container sees it as a read-only filesystem, even as in-container "root", and refuses to lower it for only the container. How increasing a cache brings down performance for localhost connections is beyond me.
This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.
RootlessKit port forwarder has a lot of advantages over the slirp4netns port forwarder:

* Very high throughput. Benchmark result on Travis: socat: 5.2 Gbps, slirp4netns: 8.3 Gbps, RootlessKit: 27.3 Gbps (https://travis-ci.org/rootless-containers/rootlesskit/builds/597056377)
* Connections from the host are treated as 127.0.0.1 rather than 10.0.2.2 in the namespace. No UDP issue (containers#4586)
* No tcp_rmem issue (containers#4537)
* Probably works with IPv6. Even if not, it is trivial to support IPv6. (containers#4311)
* Easily extensible for future support of SCTP
* Easily extensible for future support of `lxc-user-nic` SUID network

The RootlessKit port forwarder has already been adopted as the default port forwarder by Rootless Docker/Moby, and no issue has been reported AFAIK. As the port forwarder is imported as a Go package, no `rootlesskit` binary is required for Podman.

Fix containers#4586
May-fix containers#4559
Fix containers#4537
May-fix containers#4311

See https://github.com/rootless-containers/rootlesskit/blob/v0.7.0/pkg/port/builtin/builtin.go

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
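In Podman versions that ship this change, the port handler can be selected per container. A sketch, assuming the `slirp4netns:port_handler` network option documented for podman-run (image and ports are illustrative):

```shell
# Use the RootlessKit port forwarder: high throughput, no tcp_rmem stall,
# host connections appear as 127.0.0.1 inside the container
podman run --rm --network slirp4netns:port_handler=rootlesskit \
    -p 8080:80 docker.io/library/nginx

# Fall back to the slirp4netns handler if your workload relies on seeing
# the 10.0.2.2 source address inside the namespace
podman run --rm --network slirp4netns:port_handler=slirp4netns \
    -p 8080:80 docker.io/library/nginx
```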
I experience the same problem with
Not true since kernel 4.15. Note that you need to nsenter the namespaces to change it.
The RootlessKit port forwarder doesn't seem to hit the problem.
Thanks for your quick answer. I made a mistake in my thinking regarding network namespaces: I was using the namespace of the slirp4netns process, which I couldn't modify without root privileges (and which didn't make sense anyway). In the bug hint, I associated "inside the namespace" with the namespace of slirp4netns itself, which is wrong. I got it working by hooking into the container processes themselves. For example, I had a php-fpm container talking to a database over TCP, where each connection using slirp4netns was "idle" for the first 5 seconds, every time. I could change the rmem setting of the main php-fpm process (it might work for any other pool process too, as the namespace should be the same) this way, without root privileges:
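A sketch of that workaround, assuming a rootless container whose main process PID can be resolved on the host (the container name and sysctl values here are illustrative, not from the original report):

```shell
# Resolve the host PID of the container's main process
# ("mycontainer" is a placeholder name)
PID=$(podman inspect --format '{{.State.Pid}}' mycontainer)

# Enter the container's user and network namespaces without root privileges,
# then lower the maximum TCP receive buffer (min / default / max, in bytes)
nsenter -U --preserve-credentials -n -t "$PID" \
    sysctl -w net.ipv4.tcp_rmem='4096 87380 1048576'
```

Entering the user namespace first (`-U --preserve-credentials`) is what makes the network namespace writable without root; plain `nsenter -n` fails with a permission error, as noted below.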
That isn't wrong, and isn't it the same as the container's namespace? 🤔
Whenever I try to `nsenter -n` with the target PID of any
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Running a rootless container and testing network throughput with iperf3 reveals that, while the container handles output perfectly fine, trying to load its input gets stuck for ~5 s. See attached images.
Preparation:
Upload becoming stuck for ~5s, then performing fine:
Reverse way working as intended:
Steps to reproduce the issue:
1. Run an alpine container using rootless `podman run -p`
2. Run `iperf3 -s` in the container on a forwarded port with slirp
3. Run `iperf3 -c` on the host and target the forwarded port
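The steps above can be sketched as follows (the image and port are illustrative; any published port shows the same behavior):

```shell
# Terminal 1: rootless container running an iperf3 server on a published port
podman run --rm -p 5201:5201 docker.io/library/alpine \
    sh -c 'apk add --no-cache iperf3 && iperf3 -s'

# Terminal 2, on the host: upload into the container
# (this direction stalls for ~5 s before ramping up)
iperf3 -c 127.0.0.1 -p 5201

# Reverse direction (container -> host) runs at full speed immediately
iperf3 -c 127.0.0.1 -p 5201 -R
```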
Describe the results you received:
Upload only manages ~5 MB, then gets stuck for about 5 seconds, then returns to the expected speeds.
Describe the results you expected:
Consistent, instant input speed to the container.
Additional information you deem important (e.g. issue happens only occasionally):
It happens across a range of hardware, and I have only tested it with slirp so far. It's one of those things where I'm really not sure where it could come from: whether it only affects iperf and would be fine for other uses, like grafana/statsd or logstash, or whether those would be equally affected. It's been months since I first noticed this and nothing has changed since, so I'm reporting it to get to the bottom of it.
Output of `podman version`:

Output of `podman info --debug`:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):

Additional environment details (AWS, VirtualBox, physical, etc.):
Happens on all kinds of machines. Works flawlessly when spawning containers as root.