*: add IPv6 support #248
Comments
@Nurza flannel currently does not support IPv6. However, let's leave this issue open, as we should add IPv6 support in the future.
+1
@patrickhoefler Can I ask about your use case for IPv6? Are you running out of RFC 1918 IPv4 addresses? Or do you want to migrate to IPv6 throughout your org? Something else? I'm not against IPv6 support, but it is work, and considering that flannel is usually run in RFC 1918 address space, I don't see it as a high-priority item. But I would like to know more.
@eyakubovich You can definitely ask me, but it would probably make more sense to ask @Nurza instead ;)
Because I own a few servers with free IPv6 blocks (IPv4 is SO expensive) and I would like to use flannel with them.
I think there are enough use cases that require IPv6 addresses. I'm working on one, and the lack of IPv6 support in Kubernetes and flannel forces me to use a single Docker host with pipework. Docker supports IPv6 via the command-line options --ipv6 --fixed-cidr-v6="xxxx:xxxx:xxxx:xxxx::/64". I'm working with Unique Local Unicast Addresses. The use case is clear: IPv6-only, with DNATs and SNATs to the pods. Personally, I think you should give IPv6 more priority. For now I'm writing something like kiwi to track changes to a pod and then issue pipework commands, so that I can go back to Kubernetes.
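For reference, the Docker flags mentioned above can also be set persistently in the daemon configuration. A minimal sketch of `/etc/docker/daemon.json` (the `2001:db8:` prefix is a documentation placeholder; substitute your own ULA or delegated prefix):

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
```

This is equivalent to passing `--ipv6 --fixed-cidr-v6` on the dockerd command line, but survives daemon restarts.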
+1 EDIT: also pitching in a use case: microservices. The bare-metal or VM way would generally have you putting a few microservices on one node (with one IP, then segregating services by port), just to conserve strictly allocated resources and, probably, IPv4 addresses (not even public per se; we actually have pretty high demand internally at our company for private 10.x.y.z subnets, and it's getting hard to reserve even a /20). With Docker you're basically running a 'machine' per service (or can, since there's not much reason to bunch things together), so you're naturally going to need more IPs than normal. IP addresses are also really the only strictly allocated resource when using Docker (at least in the flannel/fabric way). IPv6 definitely fixes this...
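To make the scale difference concrete, here is a quick comparison using Python's standard `ipaddress` module; the `/20` mirrors the hard-to-reserve internal block mentioned above, and `fd00::/64` is an arbitrary example ULA subnet:

```python
import ipaddress

# A contested private IPv4 allocation vs. a single IPv6 subnet.
v4_block = ipaddress.ip_network("10.0.0.0/20")  # example internal block
v6_block = ipaddress.ip_network("fd00::/64")    # one ULA /64 subnet

print(v4_block.num_addresses)  # 4096
print(v6_block.num_addresses)  # 18446744073709551616 (2**64)
```

One /64, the smallest subnet normally used in IPv6, already dwarfs the entire IPv4 address space.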
+1
+1
+1
Currently, if you start doing sizing, you can get 300-400 hosts in a rack (Supermicro MicroBlades); two racks will go over the flannel/IPv4 limits. If you look at maximums, the flannel/IPv4 limit sounds OK at around 261K containers max, but in reality that's only 1-2 racks today. If you actually start applying devops/microservices to real-world apps, the container counts explode. I did a sizing for a day-trading app I'm designing, and it's about 1.8 million containers. There are only a small number of container types (20+), yet they're reused in a lot of ways across the app. If you consider a real Wall Street system, or look at credit risk or a lot of typical commercial apps, then for hardware placement you would actually want to spread this across multiple data centers, multiple floors in a data center, and multiple racks. When you start looking at this, the number of nodes needs to be bigger.
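As a back-of-envelope check of where IPv4 limits bite, here is a sketch assuming a hypothetical /16 pod network carved into /24 per-host subnets (a common flannel-style layout; the specific prefix lengths are assumptions, not flannel's fixed behavior):

```python
# Hypothetical sizing: a /16 pod network split into /24 per-host subnets.
network_prefix = 16
host_prefix = 24

hosts = 2 ** (host_prefix - network_prefix)      # 256 host subnets
pods_per_host = 2 ** (32 - host_prefix) - 2      # 254 usable addresses each

print(hosts)                  # 256
print(hosts * pods_per_host)  # 65024 pods total
```

Even a generous IPv4 layout tops out far below the 1.8 million containers mentioned above, whereas any single IPv6 /64 would not.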
+1 Use case: no IPv4 in the network infrastructure. None of my internal servers (next-gen internet provider infrastructure) have IPv4 addresses.
+1
+1 - this is probably the most critical issue for my company.
Instead of +1's, please use the GitHub +1 feature via the smiley face in the upper right of the original post. This will help organize things and allow the actual discussion of how to solve this to occur, rather than polluting the issue with +1's. Thank you for your understanding and cooperation.
It's sufficient for us to have IPv6 support in the alloc backend; we don't need an overlay network.
Mobile applications for Apple platforms also require IPv6 (https://developer.apple.com/news/?id=05042016a), which may increase the desire to transition for some use cases.
Just curious, since this ticket will be two years old soon: is anybody working on it?
It's surprising that this isn't implemented yet; IPv6 is very attractive since most hosts have a full /64 to play with.
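The "full /64 per host" point generalizes nicely: a provider-delegated prefix splits cleanly into per-host /64s. A small sketch with Python's `ipaddress` module, using the RFC 3849 documentation prefix as an assumed delegation:

```python
import ipaddress

# A provider-delegated /56 yields 256 per-host /64 subnets.
delegated = ipaddress.ip_network("2001:db8:abcd:100::/56")  # example prefix
host_subnets = list(delegated.subnets(new_prefix=64))

print(len(host_subnets))  # 256
print(host_subnets[0])    # 2001:db8:abcd:100::/64
```

Each host then gets a whole /64 for its containers, with no per-host address scarcity to manage.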
I'm trying to work out what to prioritize here; as I see it, there are a few different things that "IPv6 support" could mean.
For both 3) and 4) there would be the issue of getting traffic between containers and hosts outside the flannel network (NAT64 could be used). There's also the possibility of providing containers with both IPv4 and IPv6 addresses. This could just be an extension of 2), or it could involve elements of 3) and/or 4), if it would be useful to do this even when the hosts don't have both IPv4 and IPv6 connectivity. Is doing just 1) and 2) (and maybe adding dual-stack support) going to be enough for people, or is there any desire to get 3) and 4) done too? I'd love to hear people's thoughts on this.
@tomdee We have no IPv4 addresses; we don't need them or want the complexity of running a dual stack. So I guess this means: 1 and 2.
For us the use case is similar. We can do pure IPv6 (and have a load balancer in front of the cluster for IPv4 ingress), but IPv4 itself is hard. For a project at work we're out of IPv4 addresses. Even allocating a not-to-be-routed /19 in 10/8 is difficult.
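For IPv6-only clusters like the ones described above, the ULA mechanism (RFC 4193) sidesteps the allocation fights entirely: a random 40-bit Global ID under fd00::/8 gives a private /48 with negligible collision risk. A sketch in Python (the generation scheme follows the RFC; this is an illustration, not a flannel feature):

```python
import secrets
import ipaddress

# RFC 4193: fd00::/8 (fc00::/7 with the L bit set) plus a random
# 40-bit Global ID yields a locally assigned /48 prefix.
global_id = secrets.randbits(40)
prefix_int = (0xFD << 120) | (global_id << 80)
ula = ipaddress.ip_network((prefix_int, 48))

print(ula)  # e.g. fd3a:91c2:7e5b::/48 (random on each run)
```

Unlike carving /19s out of 10/8, every team can mint its own /48 without central coordination.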
I believe 1 and 2 should be good enough.
In our context 1 and 2 would be great!
Are you aware of https://github.com/leblancd/kube-v6 and kubernetes/enhancements#508? I'm actually surprised that IPv6 wasn't chosen as the default networking layer for Kubernetes from the beginning. It makes a lot of sense due to the sheer size and structure of the address space, and it would virtually eliminate the need for NAT inside a cluster (NAT64/DNS64 access to IPv4 networks notwithstanding).
@onitake My guess is that some workloads (running in the containers) might not support IPv6, so they may have wanted it to be backwards compatible for some stuff. Annoying, I know!
@CalvinHartwell That's a valid concern; however, it's 2018 and there's not much of an excuse left for not supporting IPv6. In fact, I'm quite certain that most software these days is implicitly IPv6 capable.
Since v1.16.0 K8s supports dual-stack. Any plans to support dual-stack in Flannel? |
If you don't mind I'll just start working on it at least for the vxlan backend! |
any update on the backend support @xvzf ? |
+1, any update? |
Hi, I'm using host-gw backend. |
Wow, is this still open? |
You might want to check the docs; I think there's at least partial support for IPv6 now. It appears that some backends might still be missing v6 support, based on comments on this issue, but it also seems like progress has been made.
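For anyone landing here now: recent flannel releases document dual-stack settings in the net-conf.json. A sketch based on my reading of the docs (the prefixes are placeholders, and the key names should be verified against the flannel version you actually run):

```json
{
  "Network": "10.244.0.0/16",
  "EnableIPv6": true,
  "IPv6Network": "fd00:10:244::/56",
  "Backend": {
    "Type": "vxlan"
  }
}
```

This goes in the kube-flannel ConfigMap; the vxlan backend is the one the dual-stack work above targeted first.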
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Hello, I am using flannel on CoreOS 717.1.0 with IPv6, but it fails to launch. When I look at the logs, I see the message:
Is flannel compatible with IPv6?
Thank you.