
*: add IPv6 support #248

Closed
GabMgt opened this issue Jul 10, 2015 · 35 comments

Comments

@GabMgt

GabMgt commented Jul 10, 2015

Hello, I am using Flannel in CoreOS 717.1.0 with IPv6 but it failed to launch. When I look at the logs, I see the message:

Failed to find IPv4 address for interface eth1.

Is Flannel compatible with IPv6?

Thank you.

@eyakubovich
Contributor

@Nurza flannel currently does not support IPv6. However, let's leave this issue open, as we should add IPv6 support in the future.

@pierrebeaucamp

+1

@eyakubovich
Contributor

@patrickhoefler Can I ask about your use case for IPv6? Are you running out of RFC1918 IPv4 addresses? Or want to migrate to IPv6 throughout your org? Something else?

I'm not against IPv6 support, but it is work, and considering that flannel is usually run in RFC 1918 address space, I don't see it as a high-priority item. But I would like to know more.

@patrickhoefler
Contributor

@eyakubovich You can definitely ask me, but it would probably make more sense to ask @Nurza instead ;)

@GabMgt
Author

GabMgt commented Oct 16, 2015

Because I own a few servers with free IPv6 blocks (IPv4 is SO expensive) and I would like to use flannel with them.

@fskale

fskale commented Oct 19, 2015

I think there are enough use cases which require IPv6 addresses. I'm working on one, and the lack of IPv6 support in Kubernetes and flannel forces me to use a single Docker host with pipework. Docker supports IPv6 using the command-line options --ipv6 --fixed-cidr-v6="xxxx:xxxx:xxxx:xxxx::/64". I'm working with Unique Local Unicast Addresses. The use case is clear: IPv6-only with DNATs and SNATs to the pods. Personally, I think you should give IPv6 more priority. So I'm writing something like kiwi to track changes to a pod and then issue pipework commands, in order to go back to Kubernetes.
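For reference, a minimal sketch of the same thing via Docker's daemon.json rather than command-line flags (the fd00:: prefix below is just a placeholder ULA, not an address from any real setup):

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:dead:beef::/64"
}
```

After editing /etc/docker/daemon.json, restart the Docker daemon so containers on the default bridge pick up addresses from the fixed-cidr-v6 range.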

@jonboulle jonboulle changed the title IPv6 issue *: add IPv6 support Oct 19, 2015
@colinrgodsey

+1

EDIT: also pitching in a use case: microservices. The bare-metal or VM approach would generally have you putting a few microservices on one node (with one IP, then segregating services by port), just to conserve strictly allocated resources and... probably IPv4 addresses (not even public ones per se; we actually have pretty high demand internally at our company for private 10.x.y.z subnets and it's getting hard to reserve even a /20), so you'll have a few services on a box.

With Docker you're basically running a 'machine' per service (or can, since there's not much reason to bunch things together), so you're naturally going to need more IPs than normal. IP addresses are also really the only strictly allocated resource when using Docker (at least in the flannel/fabric way). IPv6 definitely fixes this...

@goacid

goacid commented Dec 22, 2015

+1

2 similar comments
@hwinkel

hwinkel commented Jan 31, 2016

+1

@atta

atta commented Jan 31, 2016

+1

@jonboulle jonboulle added this to the v1.0.0 milestone Jan 31, 2016
@glennswest

Currently, if you start doing sizing, you can get 300-400 hosts in a rack (Supermicro MicroBlades); two racks will go over the flannel/IPv4 limits. If you start looking at maximums, the flannel/IPv4 limit sounds OK at around 261K containers max, but in reality that's only 1-2 racks today. If you actually start applying devops/microservices to real-world apps, the container counts explode. I did a sizing for a day-trading app I'm designing and it comes to about 1.8 million containers.

There are only a small number of container types (20+), yet they're reused in a lot of ways across the app.
And there are 4,000 equities, with an average of 450-plus containers per equity.
Hardware Config:
https://www.linkedin.com/pulse/building-data-center-deep-neural-networks-thinking-wall-glenn-west
App Overview
https://www.linkedin.com/pulse/using-containers-docker-change-world-glenn-west

If you consider a real Wall Street system, or look at credit risk or a lot of typical commercial apps,
this is not a lot of hardware, or even a really big app. The trading app would be a lot bigger if you did equities on a global scale with multiple exchanges.

When looking at hardware placement, you would actually want to spread this across multiple data centers, multiple floors in a data center, and multiple racks. When you start looking at that, the number of nodes needs to be bigger.
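For a sense of where the flannel/IPv4 ceiling comes from, here is a sketch assuming the commonly documented net-conf.json defaults (not this specific deployment):

```json
{
  "Network": "10.244.0.0/16",
  "SubnetLen": 24,
  "Backend": { "Type": "vxlan" }
}
```

Carving a /16 Network into /24 per-host subnets allows at most 256 hosts with roughly 254 pod addresses each; reserving a larger Network (say a /14) pushes that into the hundreds of thousands of containers, but you are still boxed into RFC 1918 space, which is exactly the pressure an IPv6 prefix would remove.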

@choppsv1

+1 Use case: no IPv4 in the network infrastructure. None of my internal servers (next-gen internet provider infrastructure) have IPv4 addresses.

@phillipCouto

+1

@stevemcquaid

+1 - this is probably the most critical issue for my company.

@kkirsche

kkirsche commented Jun 9, 2016

Instead of +1's, please use the GitHub +1 feature via the smiley face in the upper right of the original post. This will help organize things and allow the actual discussion of how to solve this to happen, rather than polluting the issue with +1's. Thank you for your understanding and cooperation.

@kkirsche

Mobile applications for Apple also require IPv6 (https://developer.apple.com/news/?id=05042016a), which may increase the desire to transition for some use cases.

@tomdee tomdee self-assigned this Mar 17, 2017
@tomdee tomdee removed their assignment May 19, 2017
@oneiros-de

Just curious since this ticket will be two years old soon: Is anybody working on it?

@burton-scalefastr

It's surprising that this isn't implemented yet; IPv6 is very attractive since most hosts have a full /64 to play with.

@tomdee
Contributor

tomdee commented Jan 9, 2018

I'm trying to work out what to prioritize here. I see a few different things that "IPv6 support" could mean:

  1. Adding IPv6 support for the control plane. This means using IPv6 for contacting the etcd server or the kubernetes API server (I presume both of these support IPv6?)
  2. Using IPv6 addresses for containers with an IPv6 host network. This should work for almost all backends (though IPIP doesn't support IPv6, and some of the cloud providers might not either).
  3. Using IPv6 addresses for containers with an IPv4 host network. This might be useful for running a large number of containers on a host when there is a limited IPv4 private address range available. This would only work with backends that encapsulate data (e.g. vxlan)
  4. Using IPv4 addresses for containers with an IPv6 host network. This would be useful for running in environments that only support IPv6 on the hosts. Again, this would only work on backends that support encapsulation.

For both 3) and 4) there would be the issue of getting traffic between containers and hosts outside the flannel network (NAT64 could be used).

There's also the possibility of providing containers with both IPv4 and IPv6 addresses. This could just be an extension of 2), or it could involve elements of 3) and/or 4) if it would be useful to do this even when the hosts don't have both IPv4 and IPv6 connectivity.

Is doing just 1) and 2) (and maybe adding dual-stack support) going to be enough for people, or is there any desire to get 3) and 4) done too? I'd love to hear people's thoughts on this.

@choppsv1

@tomdee We have no IPv4 addresses; we don't need them, nor do we want the complexity of running dual stack. So I guess this means 1 and 2.

@abh

abh commented Jan 11, 2018

For us the use case is similar. We can do pure IPv6 (and have a load balancer in front of the cluster for IPv4 ingress), but IPv4 itself is hard.

For a project at work we're out of IPv4 addresses. Even allocating a not-to-be-routed /19 in 10/8 is difficult.

@rahulwa

rahulwa commented Jan 11, 2018

I believe 1 and 2 should be good enough.

@petrus-v

In our context 1 and 2 would be great!

@onitake

onitake commented Sep 25, 2018

Are you aware of https://github.com/leblancd/kube-v6 and kubernetes/enhancements#508?

I'm actually surprised that IPv6 wasn't chosen as the default networking layer for Kubernetes from the beginning. It makes a lot of sense due to the sheer size and structure of the address space and would virtually eliminate the need for NAT inside a cluster (NAT64/DNS64 access to IPv4 networks notwithstanding).

@CalvinHartwell

@onitake My guess is that some workloads (running in the containers) might not support IPv6, so they may have wanted it to be backwards compatible for some things. Annoying, I know!

@onitake

onitake commented Dec 14, 2018

@CalvinHartwell That's a valid concern; however, it's 2018 and there's not much of an excuse left for not supporting IPv6. In fact, I'm quite certain that most software these days is implicitly IPv6 capable.

@uablrek

uablrek commented Nov 9, 2019

Since v1.16.0 K8s supports dual-stack. Any plans to support dual-stack in Flannel?

@xvzf

xvzf commented Feb 2, 2020

If you don't mind, I'll just start working on it, at least for the vxlan backend!

@daniel-garcia

Any update on the backend support, @xvzf?

@yaoice
Contributor

yaoice commented Dec 4, 2020

+1, any update?

This was referenced May 28, 2021
@yuanying
Contributor

yuanying commented Aug 7, 2021

Hi, I'm using the host-gw backend, so I'm also interested in IPv6 support beyond just VXLAN.
Are there any plans to support IPv6 for other backends?

@agowa
Contributor

agowa commented Jan 22, 2023

Wow, is this still open?
Is there any status update on when IPv6 will be supported?

@jamesharr

You might want to check the docs; I think there's at least partial support for IPv6.

It appears that some backends might still be missing v6 support, based on comments on this issue, but it also seems like progress has been made.
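If it helps anyone landing here now: judging by the current flannel configuration docs, dual-stack with the vxlan backend is driven by net-conf.json keys along these lines (the IPv6 prefix is a placeholder, and the exact keys supported depend on your flannel version, so treat this as a sketch and check the docs):

```json
{
  "EnableIPv4": true,
  "EnableIPv6": true,
  "Network": "10.244.0.0/16",
  "IPv6Network": "fd00:10:244::/56",
  "Backend": { "Type": "vxlan" }
}
```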

@stale

stale bot commented Jul 21, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Jul 21, 2023
@stale stale bot closed this as completed Aug 12, 2023