Support for ipv6 Kubernetes cluster #284
Comments
I'm definitely in favor of this. If you can provide PRs we can figure out what we need to do to support it. Ideally I would like dual stack on by default for k3s. If ipv6-only is a good stepping stone for that then let's do it.
First ipv6 try without any modifications to k3s.
Server:
When specifying an ipv6
The flag … Is there a way to specify extra flags to …? While I am at it, I propose that if …
Agent:
As you can see the server url is still ipv4. I think it doesn't matter since the server opens 6443 for ipv6 (…).
I think the first fault is caused by the crashing server and the second problem is a consequence of the first.
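For context, a minimal sketch of what an ipv6-only server invocation might have looked like here (the flags and addresses are my assumptions, not the original command, which was lost):

```sh
# Hypothetical ipv6-only k3s server start; CIDRs use the 2001:db8:: documentation prefix.
k3s server \
  --node-ip 2001:db8::10 \
  --cluster-cidr 2001:db8:42::/56 \
  --service-cidr 2001:db8:43::/112 \
  --no-flannel        # flannel had no ipv6 support, so bring your own CNI plugin
```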
Just saw #290
Applied PR #309 and started with:
And:
😄 There is still a problem with image loading; containerd insists on using ipv6 addresses:
I will try to pre-load for testing and also configure containerd. Or, if it comes to that, resort to …
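For reference, a hedged sketch of the pre-loading approach mentioned; k3s imports image tarballs placed in its agent images directory at startup (the image name and file name here are just examples):

```sh
# Save the image on a machine with working ipv4 connectivity, then copy it to the node.
docker save docker.io/library/nginx:alpine -o nginx.tar
# k3s auto-imports tarballs from this directory when the agent starts.
sudo mkdir -p /var/lib/rancher/k3s/agent/images/
sudo cp nginx.tar /var/lib/rancher/k3s/agent/images/
```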
My mistake. My routing from within the cluster was bad. Now it works:
Note the ipv6 addresses. So it seems that … I will now see what needs to be done for …
External access to services with … works. But access to the kubernetes service from within a pod does not work. The endpoint for the kubernetes service is:
This is OK, but on the agent nodes port 6445 is only open for ipv4:
Among other things this prevents coredns from working for cluster addresses since it can't connect to the api-server. I am unsure where this address is set. Any hint is appreciated. It's so close...
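One way to see the symptom (a sketch; 6445 is the agent-side port mentioned above):

```sh
# Only an ipv4 listener shows up on the agent:
ss -ltn | grep 6445
# e.g. LISTEN 0 128 127.0.0.1:6445 ...   <- no corresponding [::1]:6445 entry
```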
<sigh...>
I made a PR #319 that fixes the problems above, but access to the kubernetes api service from within pods still does not work.
It seems like when the DNAT rule … I suspect that some sysctl for ipv6 needs to be set (like rp_filter for ipv4), but I have not found anything except forwarding (which is on). A corresponding trace on ipv4 with flannel shows packets with translated addresses on the …
The problem seems to be that the localhost is set as destination, which is detected as a "martian destination". However, this also applies for ipv4, but the DNAT to 127.0.0.1:6443 works in …
@ibuildthecloud Can you please explain how you manage to get rid of the "martian destination" problem for ipv4 in …? Then I hope to be able to do the same thing for ipv6.
Looking at https://en.wikipedia.org/wiki/Martian_packet, I am curious if it might have something to do with iptables or the flannel/cni setup.
Hey, I'm interested in IPv6 for K3S too because I plan to switch from docker swarm because they still haven't implemented IPv6 on swarm. Do you have any news about ipv6 on k3s? I saw that project: https://docs.projectcalico.org/v3.7/usage/ipv6 but I don't know if it would work on K3S.
@unixfox I got other things to do so I couldn't work with ipv6 on k3s. I noticed that my PR for fixing the certificate problem does not work on 0.5.x. I think the problem is small. AFAIK the "martian packet" problem is still a stopper, but I think it's the only one. Almost all ipv6 works with just re-configuration as described above, but access to the API from agents needs a corrected certificate fix and access to the API from pods is stopped by the martian-packet problem. "Normal" ipv6 traffic via services works though.
About the CNI plugin: you must select one that supports ipv6, and Calico is one, as you have seen. The CNI plugin is started and configured separately from k3s, so you must use the instructions for the CNI plugin. You can configure Calico to hand out both ipv4 and ipv6 addresses so you get "dual-stack" on pods even though k3s can't handle it. Actually I recommend that as a first step if you are using Calico.
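For illustration, a hedged sketch of the Calico side (the ConfigMap name and keys assume a standard manifest-based Calico install; adjust to your setup):

```sh
# In manifest-based installs the CNI config lives in the calico-config ConfigMap;
# the calico-ipam section can be told to assign both address families:
kubectl -n kube-system edit configmap calico-config
#   "ipam": {
#     "type": "calico-ipam",
#     "assign_ipv4": "true",
#     "assign_ipv6": "true"
#   }
```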
This issue should migrate to "Support for dual-stack Kubernetes cluster" or be closed. The ipv6 "martian" problem will probably not be present in a k8s dual-stack cluster since the API-server communication will still be ipv4. So adapting to dual-stack is likely simpler than ipv6-only.
@uablrek K8s release with dual-stack is out and k3s can install it...is this now confirmed working and supported? |
I don't know. If |
Removed my previous comment... Looks like in order to enable this you need to do two things (sketched below),
I've got an extra IPv6 /64 from Comcast that I'll use. |
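For reference, a sketch of those two settings using the flags current k3s documents for dual-stack (the ipv6 prefixes are placeholders, and whether this exact syntax applied at the time is an assumption):

```sh
# Dual-stack cluster and service CIDRs must be set when the cluster is first created.
k3s server \
  --cluster-cidr 10.42.0.0/16,2001:db8:42::/56 \
  --service-cidr 10.43.0.0/16,2001:db8:43::/112
```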
Opened #1405 to track dual stack |
Isn't an IPv6-only cluster now in beta for 1.18? kubernetes/enhancements#508
Found the [::1]:6443 martian problem; net.ipv4.conf.all.route_localnet=1 is set by K8s, but there is no corresponding setting for ipv6. Please see: kubernetes/kubernetes#90259
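A quick way to see the asymmetry (a sketch):

```sh
# kube-proxy sets this so packets DNAT'ed to 127.0.0.1 can still be routed:
sysctl net.ipv4.conf.all.route_localnet
# There is no ipv6 counterpart, so packets DNAT'ed to [::1] are dropped as martians:
sysctl -a 2>/dev/null | grep route_localnet
```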
Trying to build a k3os IPv6-only cluster with no more success, and as I can't spend more time at the moment to try having a full IPv6-only cluster, I published some personal notes about "Unsuccessful attempt to deploy CIRRUS cluster IPv6 only with k3os" on an alternative git site. See https://framagit.org/snippets/5803. Hope this can be useful to go further in that quest...
Not sure if related, but you have two servers and you are defining two different configurations for each (check the output of …). Can you deploy with just one server and show me the output again, please? And I'd also like to see the journalctl logs of k3s.
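Presumably something along these lines (my guess at the commands being asked for):

```sh
# Show the nodes and their addresses, then the recent k3s service logs.
kubectl get nodes -o wide
journalctl -u k3s --no-pager | tail -n 200
```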
Oh yeah, the second node there is from a previous iteration of the commands. I wasn't sure how to clear the database to forget that node.
Okay, after …
Here are the k3s logs: https://gist.github.com/xaque208/f28674902374ad523c9741ac6c30a1f8
And here is the output from the requested commands after deleting the old k2 node.
Could you run …
You can't add IPv6 to existing nodes - IPv6 needs to be enabled on the cluster at the time the PodCIDRs are assigned to the node; the IPAM controller won't add a missing IPv6 PodCIDR to the node after the fact. In short, make sure that you're starting with a clean cluster datastore. I can tell you're not doing this because the node UID didn't change when you reportedly uninstalled and reinstalled. |
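A sketch of how to check both points (the node name is a placeholder):

```sh
# The uid changes only if the node object was actually recreated:
kubectl get node <node-name> -o jsonpath='{.metadata.uid}{"\n"}'
# A dual-stack node should list both an ipv4 and an ipv6 PodCIDR:
kubectl get node <node-name> -o jsonpath='{.spec.podCIDRs}{"\n"}'
```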
Oh, I did run an uninstall before the last output. What mechanism do you suggest for clearing the datastore?
Running k3s-uninstall.sh should delete /var/lib/rancher/k3s which would include the cluster datastore. Can you ensure that this directory is gone after uninstalling on your node? |
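For example (paths are the install-script defaults):

```sh
# Uninstall, then confirm the embedded datastore directory is really gone.
/usr/local/bin/k3s-uninstall.sh
ls /var/lib/rancher/k3s   # should fail with "No such file or directory"
```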
The database bit was good information. I've dropped the database and started again. In the install commands above I was using a postgres database. I figured the uninstall script would have done something there too if it was needed. After running the install again I can start a pod and it has a v6 address! Amazing.
Can you also tell me if the traffic leaving the node will be NATed, or if I'll be able to route directly to those CIDRs?
Oh right, sorry - I missed that you were using an external DB. If using an external DB, uninstalling/reinstalling would definitely include (manually) dropping the table or db. Flannel only supports ipv6 in vxlan mode so I believe all the traffic will be NATed; @manuelbuil would probably know better than I.
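A sketch of the manual step for an external Postgres datastore (host, user, and database name are assumptions; k3s defaults the database name to kubernetes when none is given in --datastore-endpoint):

```sh
# Drop the k3s datastore so the next install starts from a clean state.
psql -h db.example.com -U k3s -c 'DROP DATABASE kubernetes;'
```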
Okay, nice. I appreciate the support here, thank you all. My goal is to be able to use some external (to the cluster) ipv6-only services. I'll be giving that a try here shortly.
Traffic leaving the node will be NATed. There is an option in flannel that removes the natting. However, I am not sure how well it will work with backend=vxlan. I have never tried it.
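Which flannel option is meant here is a guess on my part; when running flanneld standalone, masquerading of traffic leaving the overlay is controlled by a flag:

```sh
# Masquerading is off when --ip-masq is false or omitted; k8s manifests usually pass --ip-masq=true.
flanneld --ip-masq=false
```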
Could you add this note to the "known issues"? I had the exact same panic when I tried to add IPv6 to the existing home cluster, following the options from the k3s docs.
Validated in the scope of testing #2123 |
Pod to pod communication over ipv6:
Ping to the pod on another node using ipv6:
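A sketch of the kind of check that was run (pod name and address are placeholders, and the image is assumed to ship ping6):

```sh
# From a pod on node 1, ping the ipv6 address of a pod scheduled on node 2.
kubectl exec -it <pod-on-node1> -- ping6 -c 3 <ipv6-of-pod-on-node2>
```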
Is your feature request related to a problem? Please describe.
Since 1.9 K8s supports ipv6-only, but it is still in alpha after 5 minor releases and >1.5 years. In that sense it does not fit in the k3s concept with "no alpha features". However, the main reason for the lingering alpha state is lack of e2e testing. This is aggressively addressed now for the upcoming dual-stack support in k8s.
To bring up an ipv6-only k8s cluster is currently not for the faint-hearted, and I think that if the simplicity of k3s can also include ipv6 it would be greatly appreciated. Also, with dual-stack on the way, IMHO support for ipv6-only is an important pro-active step.

Describe the solution you'd like
A --ipv6 option 😄
This would set up node addresses, service and pod CIDRs etc. with ipv6 addresses but keep the image loading (containerd) configured for ipv4. The image loading should still be ipv4 because the internet and ISPs are still mostly ipv4-only, and for ipv6 users the way images get loaded is of no concern.
A requirement will then be that the nodes running k3s must have a working dual-stack setup (but the k8s cluster would be ipv6-only).

Describe alternatives you've considered
The "not for the faint-hearted" does not mean that setting up an ipv6-only k8s cluster is particularly complex, more that most users have a fear of the unknown and that support in the popular installation tools is lacking or not working. To set up k8s for ipv6-only is basically just to provide ipv6 addresses in all configuration and command line options. That may even be possible without modifications to k3s (I have not yet tried). It may be more complex to support the "extras" such as the internal lb and traefik, so I would initially say that those are not supported for ipv6. Coredns with ipv6 in k3s should be supported though (coredns supports ipv6 already).
The flannel CNI-plugin afaik does not support ipv6 (issue). So the --no-flannel flag must be specified and a CNI-plugin with ipv6 support must be used.

Additional context
I will start experimenting with this and possibly come up with some PRs. The amount of time I can spend may be limited.
I am currently adding k3s in my xcluster environment where I already have ipv6-only support in my own k8s setup.