Connect to [::1]:nodeport does not work #90236
Comments
/sig network
I made a "combo" issue with 2 problems at first. Please forget that ever happened. New issue: #90259
/triage unresolved
🤖 I am a bot run by vllry. 👩🔬
Note: I have moved (copied) the comment below to #90259 (comment) where it belongs. @andrewsykim Are there any reasons to set net.ipv4.conf.all.route_localnet=1 in the ipvs proxier? If not, I will make a PR to remove it. Considering the problems described in #67730 I don't think it makes any difference.
/cc
Cool. This will be fun. While localhost nodeports might seem like a weird thing, I know that there are people who use them. The example that immediately comes to mind is an in-cluster docker registry that uses a NodePort to be available to kubelet as "localhost:32100". This works because, while docker demands TLS for registries, it makes a special exception for localhost. So a pod can reference its images as localhost:32100/<image>. Disabling localhost NodePorts would break that. It seems that this doesn't work at all on v6, which we know is not widely adopted (by comparison). Do we: a) normalize down: make v4 optional (default on)?
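A hedged sketch of that registry pattern (the port 32100 and image name are illustrative, not taken from a real manifest): since the runtime waives TLS for localhost, any node's kubelet can pull through the NodePort.

```sh
# Hypothetical in-cluster registry published as a NodePort (32100 here).
# Docker requires TLS for registries but exempts localhost, so kubelet
# on every node can pull from it without certificates:
kubectl run demo --image=localhost:32100/myimage:latest
```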
a) If the only reason for net.ipv4.conf.all.route_localnet=1 is access to 127.0.0.1:nodeport, this option has my vote. b) I have encountered this problem before when trying to get ipv6 working on K3s, k3s-io/k3s#284 (comment). And this excellent analysis, #67730 (comment), makes me believe that this cannot work for proxy-mode=ipvs without a kernel change.
I think that we should correct the title; it is not totally accurate, because [::1]:nodeport works, it's just that it does not work if the source address is localhost. I've tested it by creating a NodePort service, in this example listening on port 30815. I'm running all commands on the same node that runs the pod backing the service. I can access the NodePort service on the node IP address, as expected:
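Reconstructed as a sketch (the node address 2001:db8::10 is a placeholder; 30815 is the NodePort from the comment):

```sh
# From the node itself, targeting the node's own global IPv6 address:
curl -g "http://[2001:db8::10]:30815/"
# -> the backing pod answers normally
```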
but, as correctly pointed out here, I cannot access it if I try to reach it at the localhost address; the connection hangs, as expected:
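Something like this (same placeholders as above):

```sh
# Same NodePort, but targeting the IPv6 loopback address:
curl -g "http://[::1]:30815/"
# -> hangs; no reply ever comes back
```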
however, if I use the node IP address as the source address to reach the nodePort on localhost, it works:
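A sketch of that variant; curl's --interface option binds the local (source) address, and 2001:db8::10 again stands in for the node's address:

```sh
# Destination is still ::1, but the source address is the node's own:
curl -g --interface 2001:db8::10 "http://[::1]:30815/"
# -> works; the source address is now "legit"
```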
the connection goes through because the source address is "legit". In my view, for IPv6, sooner or later we should stop mapping IPv4 to IPv6 and evolve the networking model independently, leveraging the advantages of IPv6. From my POV the current kubernetes network model is heavily influenced by IPv4's limited number of addresses: public ones are scarce and worked around with load balancers, nodeports, NAT, ..., while private ones need overlay networks to solve overlapping-subnet problems, ... I dream of using global addresses as ClusterIPs and moving all the service plumbing to Gateway; is that too unrealistic?
FYI, there is no v6 equivalent to route_localnet, because it's not permitted to route packets to a larger scope.
@thockin If you aren't able to handle this issue, consider unassigning yourself and/or adding the help wanted label.
🤖 I am a bot run by vllry. 👩🔬
/close
@aojea: Closing this issue. In response to this: /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
What happened:
Access via [::1]:nodeport does not work in ipv6-only or dual-stack clusters.

What you expected to happen:
Access should work or the limitation should be documented.
How to reproduce it (as minimally and precisely as possible):
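The steps were not included in the body, but from the thread a minimal reproduction looks roughly like this (image, names, and the NodePort value are illustrative):

```sh
# In an ipv6-only or dual-stack cluster:
kubectl create deployment echo --image=k8s.gcr.io/echoserver:1.10
kubectl expose deployment echo --type=NodePort --port=8080
kubectl get svc echo              # note the allocated nodePort, e.g. 30815
# Then, on any node:
curl -g "http://[::1]:30815/"     # hangs instead of answering
```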
Anything else we need to know?:
That this does not work even for ipv4 with proxy-mode=ipvs is reported in #67730. But this issue occurs with both proxy-mode=iptables and proxy-mode=ipvs.
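For reference (a sketch, not from the report): the dataplane is selected with kube-proxy's --proxy-mode flag (or the mode field of its configuration), and both settings exhibit the problem:

```sh
kube-proxy --proxy-mode=iptables   # the default mode
kube-proxy --proxy-mode=ipvs       # also affected, see #67730
```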
The problem seems to be that there is no equivalent setting for ipv6 for:

net.ipv4.conf.all.route_localnet=1

This is set by kube-proxy (in both modes). If you set net.ipv4.conf.all.route_localnet=0 for proxy-mode=iptables, access to 127.0.0.1:nodeport does not work either.

I have been told by security people that this setting opens a security issue; a host on the local segment can actually route 127.0.0.1 to a k8s-node and access ports that really should be local. Reported in #90259.
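To illustrate the asymmetry, here is what inspecting the knobs on a node would look like (a sketch; the exact error text depends on the sysctl tool):

```sh
# The IPv4 knob that kube-proxy enables:
sysctl net.ipv4.conf.all.route_localnet
# net.ipv4.conf.all.route_localnet = 1

# There is no IPv6 counterpart to toggle:
sysctl net.ipv6.conf.all.route_localnet
# sysctl: cannot stat /proc/sys/net/ipv6/conf/all/route_localnet: No such file or directory
```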
Environment:
- Kubernetes version (use kubectl version): v1.19.0-alpha.1.547+250884c9c1cd41
- OS (e.g: cat /etc/os-release): own setup, BusyBox
- Kernel (e.g. uname -a): Linux 5.4.2 #2 SMP Thu Apr 16 17:58:35 CEST 2020