Alternate port_handler that keeps the source ip for user-defined rootless networks #8193
Since the "port_handler=slirp4netns" solution from PR 6965 was implemented to address the "only 127.0.0.1" problem within rootless containers ("port_handler=rootlesskit" by default), and since that solution is not implemented for user-defined rooteless cni networks, perhaps revisiting the "only 127.0.0.1" problem with "port_handler=rootlesskit" is in order before simply extending the solution from PR 6965 to user-defined rootless cni networks. Furthermore, the default behavior of "only 127.0.0.1" within rootless containers, regardless of user-defined rootless cni networking, is confusing for users who are new to containers/podman but not new to webserver/application configuration, like myself.
Additionally, the default "only 127.0.0.1" behavior within rootless containers, regardless of user-defined rootless cni networking, breaks any remote-address-based authentication mechanism a developer might devise for rootless container services.
***** Background as I know it *****
PR 6965 allows users to specify an alternate mechanism for rootless port binding which successfully preserves the remote address of packets reaching rootless containers. PR 6965 did not fix the "only 127.0.0.1" issue stemming from the use of "port_handler=rootlesskit". Instead, PR 6965 made it possible to select "port_handler=slirp4netns", which does not exhibit the "only 127.0.0.1" behavior of "port_handler=rootlesskit" (the default).
Since the alternate "port_handler" work-around for "only 127.0.0.1" provided in PR 6965 is not implemented for user-defined rootless cni networks, perhaps a more general or clearer way to handle the "rootlesskit" vs "slirp4netns" port_handler is required. That clearer way ought to support "slirp4netns" network options when creating and/or connecting to user-defined rootless cni networks (whichever is more appropriate) to address the issues identified at the top of this post. There may also be options required for current or future underlying network subsystems; a non-convoluted mechanism to support such subsystems/options could help avoid some future headaches.
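For context, a minimal sketch of the PR 6965 work-around as it applies outside user-defined networks, assuming the slirp4netns option syntax from the podman-run documentation (container names, image, and ports here are illustrative, not taken from the original report):
$> # default port handler (rootlesskit): the server inside sees every client as 127.0.0.1
$> podman run -d --name web-default -p 8081:80 docker.io/library/nginx
$> # PR 6965 work-around, only available without a user-defined network:
$> podman run -d --name web-slirp --network slirp4netns:port_handler=slirp4netns -p 8082:80 docker.io/library/nginx
$> # from another machine, the access log of web-slirp should show that machine's real
$> # address, while web-default keeps logging 127.0.0.1
$> podman logs web-slirp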
***** User-defined rootless network limitations *****
Oddly: |
***** potentially informative "port_handler=rootlesskit" behavior *****
Unexpected IP6 address binding behavior of "port_handler=rootlesskit"
Alternate "port_handler" provided in PR 6965 helps to reveal ...
Rather Odd Observations:
I do not know what underlying IP4/IP6 Linux mechanism allows connections like $> curl http://<IP4 address>:8081 (and 8082) to reach <IP6 address>:8081 (and 8082) where "rootlesskit" opened the ports. Perhaps that is an important mechanism in the behavior observed. Still, the IP4 vs IP6 address bindings provide a clue as to why "127.0.0.1" is the source address within containers launched with the "port_handler=rootlesskit" (default) network option. Perhaps "rootlesskit" must recreate each packet it receives?
Work-arounds on other forums suggest disabling IP6 on the container host. A better approach may be to actually have "rootlesskit" bind to the container host's IP4 address, assuming that is actually part of the problem. There still remains the problem of seeing only "127.0.0.1" regardless of the IP4 vs IP6 address binding (see the "myNginx2" and "myNginx3" examples).
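A quick way to check what the port handler actually bound on the container host (assuming iproute2's ss is available; the ports match the examples above):
$> # listening TCP sockets for the published ports, with the owning process
$> ss -tlnp | grep -E ':808[12]'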
@AkihiroSuda WDYT? |
SGTM
If we do that, it will be slowed down as in slirp4netns. Probably we can just spoof the srcIP using an IP_TRANSPARENT socket, but I'm not familiar with IP_TRANSPARENT.
My apologies for the confusion - it wasn't a suggestion for rootlesskit to recreate packets. It was a hypothesis as to why 127.0.0.1 is the remote address within a container when the packet passed through "rootlesskit". I should have said "Perhaps rootlesskit recreates each packet it receives?" The subpoints under that statement in the original comment above make more sense in that clarified context.
I know I muddied the topic by hypothesizing. However, the initial bug remains. It is not possible to specify "port_handler=slirp4netns" for user-defined rootless CNI networks, which means it is not currently possible to determine the correct remote address for containers using such a network. The step-by-step section at the beginning of this thread describes how to reproduce the behavior. |
A friendly reminder that this issue had no activity for 30 days. |
I take it this is still an issue. |
Yes. Did we ever add the required fields to |
I did not. |
When I used v2ray and tproxy in podman, I encountered a similar problem.
That does not look similar to me. Please open a fresh issue with further details.
On Sun, Jan 10, 2021 at 06:29 daiaji wrote:
When I used v2ray and tproxy in podman, I encountered a similar problem.
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet/tcp: listening TCP on 127.0.0.1:15490
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet: failed to apply socket options to incoming connection > v2ray.com/core/transport/internet: failed to set IP_TRANSPARENT > operation not permitted
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet/tcp: listening TCP on 127.0.0.1:12345
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet: failed to apply socket options to incoming connection > v2ray.com/core/transport/internet: failed to set IP_TRANSPARENT > operation not permitted
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet/udp: listening UDP on 127.0.0.1:12345
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet: failed to apply socket options to incoming connection > v2ray.com/core/transport/internet: failed to set IP_TRANSPARENT > operation not permitted
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet/tcp: listening TCP on [::1]:12345
2021/01/10 11:26:55 [Info] v2ray.com/core/transport/internet: failed to apply socket options to incoming connection > v2ray.com/core/transport/internet: failed to set IP_TRANSPARENT > operation not permitted
2021/01/10 1
@mheon ok |
A friendly reminder that this issue had no activity for 30 days. |
A friendly reminder that this issue had no activity for 30 days. |
As a friendly reminder - this would be a useful feature. |
A friendly reminder that this issue had no activity for 30 days. |
A friendly reminder that this issue had no activity for 30 days. |
A friendly reminder that this issue had no activity for 30 days. |
Is there a way to use port_handler=slirp4netns and still connect to a different container somehow or is this blocked due to this issue? |
Is there any priority assigned to looking into or resolving this? My scenario: I came across #5138 (comment), which specifically states that this behavior was intentional and for "speed", so I set
[engine]
network_cmd_options = ['port_handler=slirp4netns']
Testing it, I see it works when I'm running some containers, but I didn't immediately recognise that this still wouldn't work with docker-compose, as it will create a new network. My use case doesn't need the 7 Gbps -> 28 Gbps increase linked as the reason for moving away from slirp4netns.
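For anyone trying the same thing, a sketch of where that setting lives for a rootless user; the path below is the usual per-user containers.conf location and is an assumption about your setup:
$> cat ~/.config/containers/containers.conf
[engine]
network_cmd_options = ["port_handler=slirp4netns"]
$> # per this issue, the option only takes effect for containers on the default rootless
$> # network, not for containers attached to a user-defined network (e.g. one created by
$> # docker-compose)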
I am planning to fix this soon but not with slirp4netns. |
Sounds good, but to clarify: the current rootless configuration doesn't use slirp4netns; it defaults to …
No, pasta is a slirp4netns replacement; the rootlesskit forwarder is not needed with pasta because its native port forwarding is already very fast and also keeps the correct source IP.
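For readers arriving here later, a minimal sketch of what that looks like, assuming a podman version that accepts --network=pasta (container name, image, and port are illustrative):
$> podman run -d --name web --network=pasta -p 8080:80 docker.io/library/nginx
$> # requests from another machine to <host-ip>:8080 should now appear in the access
$> # log with that machine's real address instead of 127.0.0.1
$> podman logs web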
@Luap99 as far as I remember this was complicated by the fact that pasta doesn't support multiple containers as source/destination for port forwarding entries. That's still work in progress. However, automatic UDP port forwarding with periodic port scanning, which according to my understanding should make this doable (albeit not elegant), is now implemented and available in passt version 2023_11_19.4f1709d, as well as Fedora's passt-0^20231119.g4f1709d-1.fc39 and passt-0^20231119.g4f1709d-1.fc40. I still have to prepare a Debian package update.
Has there been any progress on this issue recently? In pasta mode it can indeed get the expected remote_ip. However, it seems that the issue still exists, because the IPAddress field of the container is empty:
podman inspect traefik | grep IPAddress
"IPAddress": "",
This is disastrous for traefik because its service auto-discovery relies on this field.
podman version
Client: Podman Engine
Version: 4.9.3
API Version: 4.9.3
Go Version: go1.21.6
Built: Thu Jan 1 08:00:00 1970
OS/Arch: linux/amd64 |
This is not related to this issue; it makes zero sense to ever add the IP address to podman inspect for the slirp4netns or pasta mode, because that IP will not be reachable from the host or other containers. You would need …
I just ran into this issue while migrating my rootful Nextcloud and Nginx Proxy Manager containers over to rootless. This bug basically breaks reverse proxies: Nginx Proxy Manager can't tell Nextcloud where requests are actually coming from, which means Nextcloud can't do things like brute-force protection based on remote IPs. I've been wracking my brain and googling all over on how to use the slirp4netns port handler with … If I define a network within my docker-compose to contain Nextcloud and NPM, does this mean that I cannot use the slirp4netns port handler at all? Is there really no way around this? If that's the case, I think I might have to revert back to rootful, which is a shame. Whatever solution ultimately comes up, I hope it's compatible with docker-compose.
Well, you can set … Fixing this is not trivial at all; sure, I'd love to implement this, but given other priorities I can't promise any timeline for it.
Side note: There is a workaround for this issue. Use socket activation. For details, see … (To be able to use this workaround, the software in the container needs to support socket activation, which is unfortunately often not the case.)
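The linked write-up is not quoted above, so purely as an illustration: a minimal sketch of the socket-activation idea, assuming a podman version recent enough to pass a systemd-provided socket into the container; the unit names, port, and echo image are made up for the example. systemd owns the listening socket on the host, so connections accepted on it inside the container carry the real client address.
$> cat ~/.config/systemd/user/echo.socket
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target
$> cat ~/.config/systemd/user/echo.service
[Service]
# the listening fd from echo.socket is inherited by the container process
ExecStart=/usr/bin/podman run --rm --name echo --network none ghcr.io/eriksjolund/socket-activate-echo
$> systemctl --user daemon-reload
$> systemctl --user start echo.socket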
That has an asterisk though:
In my case, I tried setting up Pi-hole in a rootless network, but without source IPs a lot of features, including the dashboard, become useless. Pi-hole (FTL/dnsmasq) does not support socket activation (AFAIK), so in this case the workaround is not really available. I'm also not sure whether this is hindered by their use of s6-overlay. I guess it would be good to open issues in the respective projects to change this long-term, but I think there will almost always be software that cannot simply use socket activation for one reason or another.
Good point. I forgot to say, there is also a … Here is a minimal demo: …
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Alternate "port_handler=slirt4netns" provided in PR 6965 (#6965) for "only 127.0.0.1 within containers" not implemented for user-defined rootless cni networks
Steps to reproduce the issue:
Describe the results you received:
"myNginx1" container has only 127.0.0.1 as remote address (confusing), though explained in PR 6965
"myNginx2" container launches without error however port not opened and no ip address assigned
"myNginx3" container - no problem - everything makes sense
Describe the results you expected:
"myNginx1:" expected to see correct remote address wihin container (though explainded in PR 6965)
"myNginx2:" expected successful ip assignment and port to open using "port_handler=slirp4netns", or contaier launch to fail
"myNginx3" container - no problem - everything makes sense
Additional information you deem important (e.g. issue happens only occasionally):
happens consistently and reproducibly
Output of podman version:
Output of podman info --debug:
Package info (e.g. output of rpm -q podman or apt list podman):
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):
Container host is a VirtualBox VM running on Fedora 32
podman packages installed from OpenSuse repo
$> cat "/etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
$> cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"