
Support for ipv6 Kubernetes cluster #284

Closed
uablrek opened this issue Mar 29, 2019 · 77 comments

@uablrek

uablrek commented Mar 29, 2019

Is your feature request related to a problem? Please describe.

Since 1.9, K8s has supported ipv6-only clusters, but the feature is still in alpha after 5 minor releases and more than 1.5 years. In that sense it does not fit the k3s concept of "no alpha features". However, the main reason for the lingering alpha state is a lack of e2e testing, and this is now being aggressively addressed for the upcoming dual-stack support in k8s.

Bringing up an ipv6-only k8s cluster is currently not for the faint-hearted, and I think it would be greatly appreciated if the simplicity of k3s could also extend to ipv6. Also, with dual-stack on the way, support for ipv6-only is IMHO an important proactive step.

Describe the solution you'd like

A --ipv6 option 😄

This would set up node addresses, service and pod CIDRs, etc. with ipv6 addresses but keep image loading (containerd) configured for ipv4. Image loading should stay on ipv4 because the internet and ISPs are still mostly ipv4-only, and for ipv6 users the way images get loaded is of no concern.

A requirement would then be that the nodes running k3s have a working dual-stack setup (while the k8s cluster itself would be ipv6-only).
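
For illustration only, a rough sketch of how such an option might be invoked, reusing the addresses from my experiments below (the --ipv6 flag itself is hypothetical; it does not exist in k3s today):

# hypothetical: ipv6-only cluster, image pulls still over ipv4
k3s server --ipv6 \
  --cluster-cidr 1000::2:11.0.0.0/112 \
  --service-cidr fd00:4000::/112 \
  --node-ip 1000::1:192.168.1.1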

Describe alternatives you've considered

The "not for the faint hearted" does not mean that setting up an ipv6-only k8s cluster is particularly complex, more that most users have a fear of the unknown and that support in the popular installation tools is lacking or not working. To setup k8s for ipv6-only is basically just to provide ipv6 addresses in all configuration and command line options. That may even be possible without modifications to k3s (I have not yet tried). It may be more complex to support the "extras" such as the internal lb and traefik, so I would initially say that those are not supported for ipv6. Coredns with ipv6 in k3s should be supported though (coredns supports ipv6 already).

AFAIK the flannel CNI plugin does not support ipv6 (issue). So the --no-flannel flag must be specified and a CNI plugin with ipv6 support must be used.

Additional context

I will start experimenting with this and possibly come up with some PRs. The amount of time I can spend may be limited.

I am currently adding k3s in my xcluster environment where I already have ipv6-only support in my own k8s setup.

@erikwilson erikwilson added the kind/enhancement An improvement to existing functionality label Mar 29, 2019
@ibuildthecloud
Contributor

I'm definitely in favor of this. If you can provide PRs we can figure out what we need to do to support it. Ideally I would like dual-stack on by default for k3s. If ipv6-only is a good stepping stone for that, then let's do it.

@uablrek
Author

uablrek commented Apr 3, 2019

First ipv6 try, without any modifications to k3s:

Server

k3s server --no-flannel --no-deploy coredns \
  --no-deploy servicelb --no-deploy traefik --disable-agent \
  --write-kubeconfig /etc/kubernetes/kubeconfig --tls-san 192.168.0.1 \
  --cluster-cidr 1000::2:11.0.0.0/112 --service-cidr fd00:4000::/112 \
  --node-ip 1000::1:192.168.1.1

When specifying an ipv6 --cluster-cidr 1000::2:11.0.0.0/112, this happens on the server:

time="2019-04-03T12:22:46.467223177Z" level=info msg="k3s is up and running"
time="2019-04-03T12:22:48.468064815Z" level=info msg="Handling backend connection request [vm-004]"
time="2019-04-03T12:22:48.474068645Z" level=info msg="Handling backend connection request [vm-002]"
F0403 12:22:48.476902     207 node_ipam_controller.go:98] Controller: Invalid --cluster-cidr, mask size of cluster CIDR must be less than --node-cidr-mask-size
...  (program crash follows...)

The flag --node-cidr-mask-size=120 must be given to kube-controller-manager.

Is there a way to specify extra flags to kube-controller-manager?

While I am at it: I propose that when --no-flannel is specified, the default --allocate-node-cidrs=true should not be passed to kube-controller-manager, since the CNI plugin may not support node CIDRs and will just ignore the value in the k8s node object. It works anyway, but the node object ends up with an invalid value (as in my case).

Agent

k3s agent --server https://192.168.1.1:6443 \
  --no-flannel --containerd-config-template /etc/containerd.conf \
  --node-ip 1000::1:192.168.1.4

As you can see, the server URL is still ipv4. I think it doesn't matter, since the server opens 6443 for ipv6 (:::6443) even in an ipv4-only cluster.
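
One way to verify this (a check of my own, in the same style as the netstat output further down); expect a tcp6 wildcard listener (:::6443):

# netstat -putln | grep 6443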

time="2019-04-03T12:22:48.547178739Z" level=error msg="Failed to connect to proxy" error="websocket: close 1006 (abnormal closure): unexpected EOF"
I0403 12:22:48.547264     207 log.go:172] http: proxy error: EOF
W0403 12:22:48.547412     207 node.go:103] Failed to retrieve node info: an error on the server ("") has prevented the request from succeeding (get nodes vm-004)
I0403 12:22:48.547425     207 server_others.go:148] Using iptables Proxier.
W0403 12:22:48.547488     207 proxier.go:314] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
F0403 12:22:48.547501     207 server.go:396] unable to create proxier: clusterCIDR 1000::2:b00:0/112 has incorrect IP version: expect isIPv6=false

I think the first fault is caused by the crashing server, and the second problem is a consequence of the first.

@uablrek
Author

uablrek commented Apr 3, 2019

Is there a way to specify extra flags to kube-controller-manager?

Just saw #290

@uablrek
Author

uablrek commented Apr 8, 2019

Applied PR #309 and started with;

# Server;
k3s server --no-flannel --no-deploy coredns \
 --no-deploy servicelb --no-deploy traefik --disable-agent \
 --write-kubeconfig /etc/kubernetes/kubeconfig --tls-san 1000::1:192.168.1.1 \
 --cluster-cidr 1000::2:11.0.0.0/112 --service-cidr fd00:4000::/112 \
 --node-ip 1000::1:192.168.1.1 \
 --kube-controller-args node-cidr-mask-size=120 
# Agent;
k3s agent --server https://[1000::1:192.168.1.1]:6443 --no-flannel \
 --containerd-config-template /etc/containerd.conf --node-ip 1000::1:192.168.1.2 \
 --kubelet-args address=:: --kubelet-args healthz-bind-address=::1 \
 --kube-proxy-args bind-address=::1

And;

#  k3s kubectl get nodes -o wide
NAME     STATUS     ROLES    AGE     VERSION         INTERNAL-IP        EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION   CONTAINER-RUNTIME
vm-002   NotReady   <none>   7m45s   v1.13.5-k3s.1   1000::1:c0a8:102   <none>        Xcluster   5.0.0            containerd://1.2.4+unknown

😄

There is still a problem with image loading; containerd insists on using ipv6 addresses:

E0408 09:01:40.897457     474 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.1": failed to pull image "k8s.gcr.io/pause:3.1": failed to resolve image "k8s.gcr.io/pause:3.1": no available registry endpoint: failed to do request: Head https://k8s.gcr.io/v2/pause/manifests/3.1: dial tcp [2a00:1450:4010:c05::52]:443: connect: network is unreachable

I will try to pre-load for testing and also configure containerd. Or, if it comes to that, resort to cri-o which I know can download images over ipv4 in an ipv6-only cluster.

@uablrek
Author

uablrek commented Apr 8, 2019

My mistake, my routing from within the cluster was bad. Now it works:

# kubectl get pods -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP                NODE     NOMINATED NODE   READINESS GATES
alpine-deployment-797f999977-5spzw   1/1     Running   0          25s   1000::2:b00:302   vm-003   <none>           <none>
alpine-deployment-797f999977-npknn   1/1     Running   0          25s   1000::2:b00:402   vm-004   <none>           <none>
alpine-deployment-797f999977-pckfb   1/1     Running   0          25s   1000::2:b00:202   vm-002   <none>           <none>
alpine-deployment-797f999977-wqfxs   1/1     Running   0          25s   1000::2:b00:303   vm-003   <none>           <none>

Note the ipv6 addresses.

So it seems that k3s with PR #309 has support for ipv6 😁

I will now see what needs to be done for coredns and external access (nothing I hope).

@uablrek
Author

uablrek commented Apr 8, 2019

External access to services with externalIPs: works fine.

But access to the kubernetes service from within a pod does not work. The endpoint for the kubernetes service is:

# kubectl get endpoints kubernetes
NAME         ENDPOINTS    AGE
kubernetes   [::1]:6445   90m

This is OK, but on the agent nodes port 6445 is only open for ipv4:

# netstat -putln | grep 6445
tcp        0      0 127.0.0.1:6445          0.0.0.0:*               LISTEN      217/k3s

Among other things this prevents coredns from working for cluster addresses, since it can't connect to the api-server.

I am unsure where this address is set. Any hint is appreciated.

It's so close...

@uablrek
Author

uablrek commented Apr 8, 2019

<sigh...>

# kubectl get nodes
Unable to connect to the server: x509: certificate is valid for 127.0.0.1, fd00:4000::1, not ::1

@uablrek
Author

uablrek commented Apr 10, 2019

I made PR #319 that fixes the problems above, but access to the kubernetes API service [fd00:4000::1]:443 still does not work from within a pod. From the main netns you can connect:

# wget https://[fd00:4000::1]:443/
Connecting to [fd00:4000::1]:443 ([fd00:4000::1]:443)
wget: server returned error: HTTP/1.1 401 Unauthorized

@uablrek
Author

uablrek commented Apr 23, 2019

It seems that when the DNAT rule kubernetes-service:443 -> [::1]:6443 is applied, the packets to the CNI bridge interface (cbr0) are dropped. Since flannel does not support ipv6, the "bridge" CNI plugin is used for ipv6. With tcpdump I can see packets arriving for kubernetes-service:443 on the "veth" device from the pod, but they do not appear on the bridge device (cbr0).

I suspect that some sysctl for ipv6 needs to be set (like rp_filter for ipv4), but I have not found anything except forwarding (which is on).

A corresponding trace on ipv4 with flannel shows packets with translated addresses on the cni0 interface, e.g. 127.0.0.1.6445, and untranslated addresses on the veth device, e.g. 10.43.0.1.443.
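
For anyone who wants to reproduce the traces, something along these lines should work (the veth name is specific to each pod and must be looked up, e.g. with ip link):

# untranslated service address on the pod-side veth
tcpdump -ni <veth-device> 'ip6 and port 443'
# DNATed destination on the bridge; these packets never show up
tcpdump -ni cbr0 'ip6 and port 6443'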

@uablrek
Author

uablrek commented May 2, 2019

The problem seems to be that localhost is set as the destination, which the kernel detects as a martian destination.

However, this also applies to ipv4, yet the DNAT to 127.0.0.1:6443 works in k3s. I tried to set this up in a "bare VM" environment but could not get rid of the "martian destination" for ipv4, and I could not figure out how k3s makes it work.

@ibuildthecloud Can you please explain how you manage to get rid of the "martian destination" problem for ipv4 in k3s?

Then I hope to be able to do the same thing for ipv6.
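
For ipv4 the kernel can at least be told to log martians, which is how I would pinpoint where the drop happens (a debugging sketch, nothing k3s-specific; there is no log_martians knob for ipv6):

# sysctl -w net.ipv4.conf.all.log_martians=1
# dmesg | grep -i martian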

@erikwilson
Contributor

Looking at https://en.wikipedia.org/wiki/Martian_packet, I am curious if it might have something to do with iptables or the flannel/cni setup.

@unixfox

unixfox commented Jun 1, 2019

Hey, I'm interested in IPv6 for k3s too, because I plan to switch from Docker Swarm, where IPv6 still hasn't been implemented.

Do you have any news about ipv6 on k3s? I saw this project: https://docs.projectcalico.org/v3.7/usage/ipv6 but I don't know if it would work on k3s.

@uablrek
Author

uablrek commented Jun 2, 2019

@unixfox I had other things to do, so I couldn't work on ipv6 for k3s. I noticed that my PR for fixing the certificate problem does not work on 0.5.x; I think the problem is small. AFAIK the "martian packet" problem is still a stopper, but I think it's the only one. Almost all of ipv6 works with just re-configuration as described above, but access to the API from agents needs a corrected certificate fix, and access to the API from pods is stopped by the martian-packet problem. "Normal" ipv6 traffic via services works, though.

As for the CNI plugin, you must select one that supports ipv6; Calico is one, as you have seen. The CNI plugin is started and configured separately from k3s, so you must follow the instructions for the plugin itself. You can configure Calico to hand out both ipv4 and ipv6 addresses, so you get "dual-stack" pods even though k3s can't handle it. I actually recommend that as a first step if you are using Calico.
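
As a pointer, dual-stack assignment is controlled in the ipam section of the CNI config that Calico installs; a sketch based on Calico's documented IPAM options (the file path varies by install method):

# e.g. /etc/cni/net.d/10-calico.conflist
"ipam": {
    "type": "calico-ipam",
    "assign_ipv4": "true",
    "assign_ipv6": "true"
}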

@uablrek
Author

uablrek commented Nov 21, 2019

This issue should migrate to "Support for dual-stack Kubernetes cluster" or be closed.

The ipv6 "martian" problem will probably not be present in a k8s dual-stack cluster since the API-server communication will still be ipv4. So adapt to dual-stack is likely simpler than ipv6-only.

@zoza1982

zoza1982 commented Dec 5, 2019

@uablrek The K8s release with dual-stack is out and k3s can install it... is this now confirmed working and supported?

@uablrek
Author

uablrek commented Dec 5, 2019

I don't know. If flannel is used, then no, since flannel only supports ipv4. I am using k8s now since I need dual-stack.

@carpenike

Removed my previous comment...

Looks like in order to enable this I need to do a few things:

  1. Configure Calico & BGP with the router.
  2. Configure two Pod/Service CIDRs within the k3s launch per Support for ipv6 Kubernetes cluster #284 (comment) and https://kubernetes.io/docs/concepts/services-networking/dual-stack/.
  3. It's my understanding that k3s doesn't support gated features in order to remain thin; is that accurate? I will tear down / rebuild my cluster in the next couple of days to see if it actually works.

I've got an extra IPv6 /64 from Comcast that I'll use.

@carpenike

Opened #1405 to track dual stack

@unixfox

unixfox commented Mar 26, 2020

Isn't an IPv6-only cluster now in beta for 1.18? See kubernetes/enhancements#508.

@uablrek
Author

uablrek commented Apr 29, 2020

Found the [::1]:6443 martian problem: net.ipv4.conf.all.route_localnet=1 is set by k8s, but there is no corresponding setting for ipv6. Please see kubernetes/kubernetes#90259.
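
The asymmetry is easy to see on a node where kube-proxy has run: the ipv4 knob exists (and is set), while ipv6 simply has no such sysctl (my illustration; output roughly what you should see):

# sysctl net.ipv4.conf.all.route_localnet
net.ipv4.conf.all.route_localnet = 1
# sysctl net.ipv6.conf.all.route_localnet
sysctl: cannot stat /proc/sys/net/ipv6/conf/all/route_localnet: No such file or directory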

@j-landru

I tried to build a k3os IPv6-only cluster with no more success, and as I can't spend more time on a full IPv6-only cluster at the moment, I published some personal notes, "Unsuccessful attempt to deploy CIRRUS cluster IPv6 only with k3os", on an alternative git site.

see https://framagit.org/snippets/5803.

Hope this can be useful to go further in that quest...

@manuelbuil
Contributor

Here is the command I'm running currently.

curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=v1.22 K3S_TOKEN="xxx::server:xxx" sh -s \
- server \
--datastore-endpoint="postgres://k3s:xxx@db.example:5432/k3s" \
--node-ip="172.16.15.21,fc15::21" \
--cluster-cidr="10.42.0.0/16,fc15:1::/56" \
--service-cidr="10.43.0.0/16,fc15:2::/112" \
--disable-network-policy \
--flannel-backend=none

Here is the requested output.

[root@k1 ~]# kubectl get nodes -o yaml | grep podCIDRs -n4
32-    resourceVersion: "2264877"
33-    uid: b3276572-839d-4c21-bf36-7bfc438a1bf6
34-  spec:
35-    podCIDR: 10.42.2.0/24
36:    podCIDRs:
37-    - 10.42.2.0/24
38-    providerID: k3s://k2
39-    taints:
40-    - effect: NoSchedule
--
138-    resourceVersion: "2288810"
139-    uid: d3eec022-714d-4edb-8eb8-e1bf40ee3582
140-  spec:
141-    podCIDR: 10.42.0.0/24
142:    podCIDRs:
143-    - 10.42.0.0/24
144-    providerID: k3s://k1
145-  status:
146-    addresses:
[root@k1 ~]# kubectl get nodes -o yaml | grep node-args
      k3s.io/node-args: '["server","--datastore-endpoint","********","--node-ip","172.16.15.22,fc15::22","--cluster-cidr","10.42.0.0/16,fc15:1::/120","--service-cidr","10.43.0.0/16,fc15:2::/120","--disable-network-policy"]'
      k3s.io/node-args: '["server","--datastore-endpoint","********","--node-ip","172.16.15.21,fc15::21","--cluster-cidr","10.42.0.0/16,fc15:1::/56","--service-cidr","10.43.0.0/16,fc15:2::/112","--disable-network-policy","--flannel-backend","none"]'

Not sure if it's related, but you have two servers and you are defining a different configuration for each (check the output of kubectl get nodes -o yaml | grep node-args).

Can you deploy with just one server and show me the output again, please? I'd also like to see the journalctl logs of k3s.
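
(Something like this is what I mean, assuming k3s was installed as a systemd service by the install script:)

# journalctl -u k3s --no-pager > k3s.log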

@zachfi

zachfi commented Nov 4, 2021

Oh yeah, the second node there is from a previous iteration of the commands. I wasn't sure how to clear the database to forget that node.

@zachfi

zachfi commented Nov 4, 2021

Okay, after kubectl delete node k2, I uninstalled and then ran the install command above again.

Here are the k3s logs: https://gist.github.com/xaque208/f28674902374ad523c9741ac6c30a1f8

And here is the output from the requested commands after deleting the old k2 node.

[root@k1 ~]# kubectl get nodes -o yaml | grep podCIDRs -n4
34-    resourceVersion: "2292347"
35-    uid: d3eec022-714d-4edb-8eb8-e1bf40ee3582
36-  spec:
37-    podCIDR: 10.42.0.0/24
38:    podCIDRs:
39-    - 10.42.0.0/24
40-    providerID: k3s://k1
41-  status:
42-    addresses:
[root@k1 ~]# kubectl get nodes -o yaml | grep node-args
      k3s.io/node-args: '["server","--datastore-endpoint","********","--node-ip","172.16.15.21,fc15::21","--cluster-cidr","10.42.0.0/16,fc15:1::/56","--service-cidr","10.43.0.0/16,fc15:2::/112","--disable-network-policy","--flannel-backend","none"]'

@manuelbuil
Contributor

manuelbuil commented Nov 4, 2021

Could you run k3s-uninstall.sh and then deploy again, please? If you already had a server running that was deployed without dual-stack, it will not work.

@brandond
Member

brandond commented Nov 4, 2021

You can't add IPv6 to existing nodes - IPv6 needs to be enabled on the cluster at the time the PodCIDRs are assigned to the node; the IPAM controller won't add a missing IPv6 PodCIDR to the node after the fact.

In short, make sure that you're starting with a clean cluster datastore. I can tell you're not doing this because the node UID didn't change when you reportedly uninstalled and reinstalled.
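
A quick way to check whether a node got both families at allocation time (my suggestion; the exact ipv6 value depends on the node mask size):

# kubectl get node k1 -o jsonpath='{.spec.podCIDRs}{"\n"}'
# dual-stack should list both families, e.g. ["10.42.0.0/24","fc15:1::/64"]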

@zachfi

zachfi commented Nov 4, 2021

Oh, I did run an uninstall before the last output. What mechanism do you suggest for clearing the datastore?

@brandond
Member

brandond commented Nov 4, 2021

Running k3s-uninstall.sh should delete /var/lib/rancher/k3s, which includes the cluster datastore. Can you ensure that this directory is gone after uninstalling on your node?
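
(For example, after uninstalling, this should fail:)

# ls /var/lib/rancher/k3s
ls: cannot access '/var/lib/rancher/k3s': No such file or directory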

@zachfi

zachfi commented Nov 4, 2021

The database bit was good information. I've dropped the database and started again. In the install commands above I was using a postgres database. I figured the uninstall script would have done something there too if it was needed.
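
(For anyone else landing here: with the external postgres datastore from the install command above, dropping it is a manual step, for example:

psql -h db.example -U k3s -d postgres -c 'DROP DATABASE k3s;'

where the host and database name come from the --datastore-endpoint and will differ in your setup.)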

After running the install again I can start a pod and it has a v6 address! Amazing.

[root@k1 ~]# k3s kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
    link/ether 9a:b3:d4:96:4f:ee brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.10/24 brd 10.42.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fc15:1::a/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::98b3:d4ff:fe96:4fee/64 scope link 
       valid_lft forever preferred_lft forever

Can you also tell me if the traffic leaving the node will be NATed, or if I'll be able to route directly to those CIDRs?

@brandond
Member

brandond commented Nov 4, 2021

Oh right, sorry - I missed that you were using an external DB. If using an external DB, uninstalling/reinstalling would definitely include (manually) dropping the table or db.

Flannel only supports ipv6 in vxlan mode so I believe all the traffic will be NATed; @manuelbuil would probably know better than I.

@zachfi

zachfi commented Nov 4, 2021

Okay, nice. I appreciate the support here, thank you all.

My goal is to be able to use some external (to the cluster) ipv6-only services. I'll be giving that a try here shortly.

@manuelbuil
Contributor

Traffic leaving the node will be NATed. There is an option in flannel that removes the NATting, --ip-masq, but we currently don't support it in k3s when flannel is embedded. It should be easy to implement, though; could you open a separate issue with that request, please?

However, I am not sure how well it will work with backend=vxlan; I have never tried it.

@narqo
Contributor

narqo commented Nov 5, 2021

You can't add IPv6 to existing nodes - IPv6 needs to be enabled on the cluster at the time the PodCIDRs are assigned to the node; the IPAM controller won't add a missing IPv6 PodCIDR to the node after the fact.

Could you add this note to the "known issues"? I hit the exact same panic when I tried to add IPv6 to my existing home cluster, following the options from the k3s docs.

@manuelbuil
Contributor

You can't add IPv6 to existing nodes - IPv6 needs to be enabled on the cluster at the time the PodCIDRs are assigned to the node; the IPAM controller won't add a missing IPv6 PodCIDR to the node after the fact.

Could you add this note to the "known issues"? I hit the exact same panic when I tried to add IPv6 to my existing home cluster, following the options from the k3s docs.

rancher/docs#3655 (review) :)

@olljanat
Contributor

Describe the solution you'd like

A --ipv6 option 😄

FYI, I just created PR #4450, which adds an experimental --ipv6-only flag so it is easier to test IPv6-only configurations (#3212 basically prevented it entirely).

@ShylajaDevadiga
Contributor

Validated in the scope of testing #2123

@ShylajaDevadiga
Contributor

Pod-to-pod communication over ipv6

ubuntu@i-062680278d04b40bd:~$ kubectl get nodes 
NAME                  STATUS   ROLES                  AGE   VERSION
i-062680278d04b40bd   Ready    control-plane,master   48s   v1.23.5+k3s-483eadb5
i-0606f4d680e8588da   Ready    <none>                 13s   v1.23.5+k3s-483eadb5

Ping to the pod on another node using ipv6

ubuntu@i-062680278d04b40bd:~$ kubectl get pods -o wide 
NAME                               READY   STATUS    RESTARTS   AGE   IP                  NODE                  NOMINATED NODE   READINESS GATES
test-deployment-7f575ddf5d-bt2mk   1/1     Running   0          16s   2001:cafe:42::9     i-062680278d04b40bd   <none>           <none>
test-deployment-7f575ddf5d-v66sh   1/1     Running   0          16s   2001:cafe:42:1::4   i-0606f4d680e8588da   <none>           <none>
test-deployment-7f575ddf5d-hhzpw   1/1     Running   0          16s   2001:cafe:42:1::3   i-0606f4d680e8588da   <none>           <none>
ubuntu@i-062680278d04b40bd:~$ kubectl exec -it test-deployment-7f575ddf5d-bt2mk -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8951 qdisc noqueue state UP group default 
    link/ether 2a:fb:b1:62:7d:99 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2001:cafe:42::9/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::28fb:b1ff:fe62:7d99/64 scope link 
       valid_lft forever preferred_lft forever
ubuntu@i-062680278d04b40bd:~$ kubectl exec -it test-deployment-7f575ddf5d-v66sh -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8951 qdisc noqueue state UP group default 
    link/ether d6:f8:5f:90:e2:e2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2001:cafe:42:1::4/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::d4f8:5fff:fe90:e2e2/64 scope link 
       valid_lft forever preferred_lft forever
ubuntu@i-062680278d04b40bd:~$ kubectl exec -it test-deployment-7f575ddf5d-v66sh -- ping 2001:cafe:42::9
PING 2001:cafe:42::9(2001:cafe:42::9) 56 data bytes
64 bytes from 2001:cafe:42::9: icmp_seq=1 ttl=62 time=0.599 ms
64 bytes from 2001:cafe:42::9: icmp_seq=2 ttl=62 time=0.240 ms
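
One extra check that could round this out (my sketch, not part of the validation above; the selector assumes the test deployment was created with the default app label): request a Service and confirm it is allocated an ipv6 ClusterIP.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: test-v6
spec:
  selector:
    app: test-deployment
  ipFamilyPolicy: PreferDualStack
  ports:
  - port: 80
EOF
kubectl get svc test-v6 -o jsonpath='{.spec.clusterIPs}{"\n"}'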
