-
@JanKrb Which version of the module are you using? Please make sure you upgrade to the latest version with `terraform init -upgrade`. We fixed a lot of IPv6 errors a few months back.
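For reference, the upgrade steps are roughly (a sketch; review the plan before applying, and adjust to your own workflow):

```sh
# Pull the newest module version allowed by your version constraints
terraform init -upgrade

# Review the pending changes, then apply them
terraform plan
terraform apply
```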
-
Also, if your snapshots are old, you need to recreate them. See the pinned discussion on upgrading.
-
I created an all-new cluster without any existing resources, so snapshots shouldn't be the issue here. I'm using the latest module (v2.9.2) directly from GitHub.
-
@M4t7e Any idea on this, please?
-
@JanKrb You are correct. Just confirmed it.
-
@kube-hetzner/core Any ideas, folks, why that would be happening? It's interesting that it gets the DNS resolution right but can't ping from within a pod.
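For anyone wanting to reproduce that split, roughly (using a throwaway busybox pod, as in the report below; `nslookup` and `ping6` are both busybox applets):

```sh
# Start a disposable pod with an interactive shell
kubectl run test --image=busybox --rm -it -- sh

# Inside the pod: the AAAA lookup succeeds, because DNS queries for
# IPv6 records can travel over plain IPv4 to the cluster DNS
nslookup google.com

# ...while actually reaching the returned IPv6 address fails,
# since the pod has no IPv6 egress path
ping6 -c 3 google.com
```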
-
I think we just do not support it yet, as Hetzner private networks do not support IPv6 (unless I am mistaken). More on the dual-stack networking that we do not implement here: https://docs.k3s.io/installation/network-options#dual-stack-ipv4--ipv6-networking. @aleksasiriski @M4t7e @ifeulner What do you think?
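For context, a dual-stack K3s setup per those docs looks roughly like the sketch below (the CIDRs are example values akin to the ones in the K3s documentation, not something this module sets; it also presupposes routed IPv6 on the nodes, which Hetzner's private networks don't provide):

```sh
# Hypothetical dual-stack K3s server flags: an IPv4 and an IPv6 CIDR
# for both the cluster (pod) and service ranges
k3s server \
  --cluster-cidr=10.42.0.0/16,2001:cafe:42::/56 \
  --service-cidr=10.43.0.0/16,2001:cafe:43::/112
```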
-
That's right. Currently, you can only create IPv4 networks via the Hetzner interface. I have asked Hetzner whether it is also possible to create IPv6 networks, or whether they have any initiatives to implement them.
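You can see the limitation from the CLI as well; a minimal check, assuming the hcloud CLI is installed (the IPv6 attempt is expected to be rejected, per the above):

```sh
# Creating a network with an IPv4 range works
hcloud network create --name demo --ip-range 10.0.0.0/16

# Attempting the same with an IPv6 range should be rejected,
# since the API currently only accepts IPv4 CIDRs
hcloud network create --name demo6 --ip-range fd00::/48
```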
-
@mysticaltech Yes, you are totally right! IPv6 support is currently only available for ingress via the LB; egress from Pods is (almost) impossible to achieve with Hetzner, as they do not support native routed IPv6.

Regarding DNS: it is not mandatory to use IPv6 to query for IPv6 DNS records. Queries over IPv4 can retrieve both A (IPv4 addresses) and AAAA (IPv6 addresses) records. It is up to the client to decide whether to request A or AAAA records, typically based on which IP protocols are supported; if both are available, IPv6 is usually preferred.

The only way to make public IPv6 addresses work internally is via an overlay network. However, configuring K3s IPAM with IPv6 networks for individual nodes is challenging, because you can only specify a single IPv6 CIDR. K3s IPAM will then autonomously create subnetworks from this CIDR and distribute them to the nodes, which will not align with the IPv6 networks assigned to the nodes by Hetzner.

An alternative is to set up a network with private IPv6 addresses. This still requires an overlay, as Hetzner does not support this configuration either. In this setup, the overlay network would manage internal routing, and at least Cilium does offer experimental support for IPv6 masquerading for Pods. This could work, but I haven't tested it and wouldn't recommend it for various reasons. Additionally, many will argue that NAT is not permissible with IPv6.

Other solutions are only theoretically possible and would require custom implementations in either K3s IPAM or HCCM IPAM to correctly assign IPv6 networks to nodes. With such an implementation, you could achieve a native IPv6 connection to the internet while using an overlay network internally. Native IPv6 support for Hetzner's networks would still be a significant improvement.
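For reference, the experimental Cilium path mentioned above would look roughly like this via Helm (`ipv6.enabled` and `enableIPv6Masquerade` are Cilium chart values; this is an untested sketch, not a recommendation):

```sh
# Enable IPv6 and IPv6 masquerading for Pods in an existing
# Cilium installation; experimental, and untested in this module
helm upgrade cilium cilium/cilium --namespace kube-system \
  --reuse-values \
  --set ipv6.enabled=true \
  --set enableIPv6Masquerade=true
```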
-
@M4t7e, beautiful in-depth explanation, thank you! 🙏 @JanKrb Up to you, you could try with Cilium as suggested above; you need to enable it via the …
-
Got this from Hetzner, @JanKrb:
They were very certain that they offered it at first, but relented when I pointed out that their docs only use IPv4 examples; typical salespeople... Anyway, it seems Cilium is the only option. Also, @M4t7e, why wouldn't you recommend it?
@mysticaltech Are you planning on adding support for dual-stack?
-
@mysticaltech I'm seeing IPv6-related errors in my cluster. The connection times out inside a pod whenever a domain resolves to an IPv6 address; in other words, IPv6 egress from inside pods isn't working. Is this expected, and do you know how I can resolve it?
-
Description
As soon as I try to reach any IPv6 network, I get a network error. This is the case both when using the IP address directly and when using AAAA-only DNS records.
While browsing through the IPv6-related issues, I found issue #772, which looks like the same error, but I'm not quite sure if it's the same case; it was closed a while ago.
To reproduce, I created a new cluster with the default config. The only thing I changed is the autoscaling, but it also happens without it. I started a new pod (busybox in my case) with `kubectl run test --image=busybox --rm -it -- sh` and tried `ping6 google.com`. My nodes do seem to have access to IPv6.
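For completeness, this is roughly how node-level IPv6 can be verified (assuming SSH access to a node):

```sh
# On the node: confirm a global IPv6 address is assigned
ip -6 addr show scope global

# ...and that IPv6 egress works from the node itself
ping6 -c 3 google.com
```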
Kube.tf file
(trimmed the comments)
Screenshots
This is my pod:
This is my node (k3s-control-plane):
Platform
MicroOS (openSUSE-Tumbleweed)