
Weave Net fails to start in minikube VM #3124

Closed
ceridwen opened this issue Sep 20, 2017 · 9 comments

@ceridwen

What did you expect to happen?

I have Kubernetes 1.6-style and 1.7-style network policies that I expect to deny access. AFAICT, Weave is allowing all access irrespective of the policies.

For 1.6, I have annotations set on namespaces:

metadata:
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
      "ingress": {
        "isolation": "DefaultDeny"
        }
      }
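
For reference, a complete namespace manifest carrying this annotation would look roughly like this (the namespace name is made up):

apiVersion: v1
kind: Namespace
metadata:
  name: example-ns
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }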

For 1.7, I have a default-deny network policy:

kind: NetworkPolicy
spec:
  podSelector:
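
Spelled out, a minimal 1.7 default-deny manifest would look roughly like this (the name is illustrative; an empty podSelector selects every pod in the policy's namespace):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}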

For both, I also have a policy that should allow traffic to certain pods from certain pods and namespaces.

kind: NetworkPolicy
spec:
  podSelector:
    matchLabels:
      foo: baz
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          foo: bar
    - podSelector:
        matchLabels:
          foo: bar

I elided some irrelevant parts of the YAML to keep it short.

I expected pods in the namespaces associated with these policies to be unreachable by anything if they were not selected by the last network policy, and, if they were selected by it, to be reachable only by pods with the right label and by pods in namespaces with the right label.

What happened?

All traffic got through to the pods in the namespaces with policies and/or annotations set.

See below for logs.

How to reproduce it?

  1. Install minikube and start it with minikube start --network-plugin=cni.
  2. Install weave with kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
  3. Create the network policies and/or annotations.
  4. Create some pods with labels and check TCP traffic between them using cluster networking. (I used netcat; a rough sketch of the check follows this list.)
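
A rough sketch of the netcat check, with illustrative pod names, image, and port:

# Listener pod carrying the label the allow policy selects (run it in a
# namespace covered by the deny policy; -n flags elided here):
kubectl run server -l foo=baz --image=alpine --restart=Never -- nc -l -p 8080
SERVER_IP=$(kubectl get pod server -o jsonpath='{.status.podIP}')
# From an unlabeled pod, the connection should be blocked if the policies work:
kubectl run client --image=alpine --restart=Never --attach -- \
  sh -c "nc -w 2 $SERVER_IP 8080 </dev/null && echo reachable || echo blocked"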

Anything else we need to know?

Minikube 0.22.0 (Kubernetes 1.7.5) running on Mac OS X 10.12.6.

Versions:

$ echo "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
https://cloud.weave.works/k8s/net?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiI3IiwgR2l0VmVyc2lvbjoidjEuNy4yIiwgR2l0Q29tbWl0OiI5MjJhODZjZmNkNjU5MTVhOWIyZjY5ZjNmMTkzYjg5MDdkNzQxZDljIiwgR2l0VHJlZVN0YXRlOiJjbGVhbiIsIEJ1aWxkRGF0ZToiMjAxNy0wNy0yMVQxOTowNjoxOVoiLCBHb1ZlcnNpb246ImdvMS44LjMiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToiZGFyd2luL2FtZDY0In0KU2VydmVyIFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiI3IiwgR2l0VmVyc2lvbjoidjEuNy41IiwgR2l0Q29tbWl0OiIxN2Q3MTgyYTdjY2JiMTY3MDc0YmU3YTg3ZjBhNjhiZDAwZDU4ZDk3IiwgR2l0VHJlZVN0YXRlOiJjbGVhbiIsIEJ1aWxkRGF0ZToiMjAxNy0wOS0wN1QxODoyMDowMloiLCBHb1ZlcnNpb246ImdvMS44LjMiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToibGludXgvYW1kNjQifQo=
$ docker version
Client:
 Version:      17.06.2-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   cec0b72
 Built:        Tue Sep  5 20:12:06 2017
 OS/Arch:      darwin/amd64

Server:
 Version:      17.06.2-ce
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   cec0b72
 Built:        Tue Sep  5 19:59:19 2017
 OS/Arch:      linux/amd64
 Experimental: true
$ uname -a
Darwin 16.7.0 Darwin Kernel Version 16.7.0: Thu Jun 15 17:36:27 PDT 2017; root:xnu-3789.70.16~2/RELEASE_X86_64 x86_64
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T19:06:19Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-09-07T18:20:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Logs:

Logs from the weave container:

INFO: 2017/09/20 17:14:18.814283 Command line options: map[port:6783 conn-limit:30 docker-api: expect-npc:true host-root:/host http-addr:127.0.0.1:6784 ipalloc-init:consensus=1 ipalloc-range:10.32.0.0/12 nickname:minikube datapath:datapath db-prefix:/weavedb/weave-net no-dns:true status-addr:0.0.0.0:6782]
INFO: 2017/09/20 17:14:18.814380 weave  2.0.4
INFO: 2017/09/20 17:14:18.814704 Bridge type is bridge
INFO: 2017/09/20 17:14:18.814741 Communication between peers is unencrypted.

I stuck the weave-npc logs in a gist, https://gist.github.com/ceridwen/17455d98de7e93acfd42edefe61be97a , because they're long.

Network:

I ran these commands inside the minikube VM.

$ ip route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 1024
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 1024
10.1.0.0/16 dev mybridge proto kernel scope link src 10.1.0.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.99.0/24 dev eth1 proto kernel scope link src 192.168.99.100
$ ip -4 -o addr
1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: eth0    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0\       valid_lft 84262sec preferred_lft 84262sec
3: eth1    inet 192.168.99.100/24 brd 192.168.99.255 scope global dynamic eth1\       valid_lft 1158sec preferred_lft 1158sec
5: docker0    inet 172.17.0.1/16 scope global docker0\       valid_lft forever preferred_lft forever
6: mybridge    inet 10.1.0.1/16 scope global mybridge\       valid_lft forever preferred_lft forever
$ sudo iptables-save
# Generated by iptables-save v1.6.1 on Wed Sep 20 17:33:26 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [5:300]
:POSTROUTING ACCEPT [0:0]
:CNI-3cf022a67aca9a0101643713 - [0:0]
:CNI-41aa69c2bc9b74c96c1a8fc5 - [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-2AMR4GNIYH7ZRQLQ - [0:0]
:KUBE-SEP-D473HSCUOTTAL7JR - [0:0]
:KUBE-SEP-DGI5NESQEQNPZRTY - [0:0]
:KUBE-SEP-EN5A5LWWUOVCZLMI - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-XGLOHA7QRQ3V22RZ - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 10.1.0.0/16 -m comment --comment "name: \"rkt.kubernetes.io\" id: \"2fb697fd1d116c5d1870a6806abc54411ad89fff5f343003d68a99709e870675\"" -j CNI-3cf022a67aca9a0101643713
-A POSTROUTING -s 10.1.0.0/16 -m comment --comment "name: \"rkt.kubernetes.io\" id: \"04b66a1cc3cea160e8737d3831dfc25de6c41c3ae0a0cd3b994d2c64754ffd6f\"" -j CNI-41aa69c2bc9b74c96c1a8fc5
-A CNI-3cf022a67aca9a0101643713 -d 10.1.0.0/16 -m comment --comment "name: \"rkt.kubernetes.io\" id: \"2fb697fd1d116c5d1870a6806abc54411ad89fff5f343003d68a99709e870675\"" -j ACCEPT
-A CNI-3cf022a67aca9a0101643713 ! -d 224.0.0.0/4 -m comment --comment "name: \"rkt.kubernetes.io\" id: \"2fb697fd1d116c5d1870a6806abc54411ad89fff5f343003d68a99709e870675\"" -j MASQUERADE
-A CNI-41aa69c2bc9b74c96c1a8fc5 -d 10.1.0.0/16 -m comment --comment "name: \"rkt.kubernetes.io\" id: \"04b66a1cc3cea160e8737d3831dfc25de6c41c3ae0a0cd3b994d2c64754ffd6f\"" -j ACCEPT
-A CNI-41aa69c2bc9b74c96c1a8fc5 ! -d 224.0.0.0/4 -m comment --comment "name: \"rkt.kubernetes.io\" id: \"04b66a1cc3cea160e8737d3831dfc25de6c41c3ae0a0cd3b994d2c64754ffd6f\"" -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 30000 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 30000 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-2AMR4GNIYH7ZRQLQ -s 10.0.2.15/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-2AMR4GNIYH7ZRQLQ -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-2AMR4GNIYH7ZRQLQ --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.0.2.15:8443
-A KUBE-SEP-D473HSCUOTTAL7JR -s 10.1.0.2/32 -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-MARK-MASQ
-A KUBE-SEP-D473HSCUOTTAL7JR -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp -j DNAT --to-destination 10.1.0.2:9090
-A KUBE-SEP-DGI5NESQEQNPZRTY -s 10.1.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-DGI5NESQEQNPZRTY -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.1.0.3:53
-A KUBE-SEP-EN5A5LWWUOVCZLMI -s 10.1.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-EN5A5LWWUOVCZLMI -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.1.0.3:53
-A KUBE-SERVICES -d 10.0.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.0.0.144/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 80 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-SERVICES -d 10.0.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.0.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-EN5A5LWWUOVCZLMI
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-2AMR4GNIYH7ZRQLQ --mask 255.255.255.255 --rsource -j KUBE-SEP-2AMR4GNIYH7ZRQLQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-2AMR4GNIYH7ZRQLQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-DGI5NESQEQNPZRTY
-A KUBE-SVC-XGLOHA7QRQ3V22RZ -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-SEP-D473HSCUOTTAL7JR
COMMIT
# Completed on Wed Sep 20 17:33:26 2017
# Generated by iptables-save v1.6.1 on Wed Sep 20 17:33:26 2017
*mangle
:PREROUTING ACCEPT [185628:328605102]
:INPUT ACCEPT [185306:328582526]
:FORWARD ACCEPT [318:20272]
:OUTPUT ACCEPT [180380:47405912]
:POSTROUTING ACCEPT [180707:47426670]
COMMIT
# Completed on Wed Sep 20 17:33:26 2017
# Generated by iptables-save v1.6.1 on Wed Sep 20 17:33:26 2017
*filter
:INPUT ACCEPT [353:63421]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [345:61979]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC -m set ! --match-set weave-local-pods dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst -m comment --comment "DefaultAllow isolation for namespace: default" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-iuZcey(5DeXbzgRFs8Szo]+@p dst -m comment --comment "DefaultAllow isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-4vtqMI+kx/2]jD%_c0S%thO%V dst -m comment --comment "DefaultAllow isolation for namespace: kube-public" -j ACCEPT
COMMIT
# Completed on Wed Sep 20 17:33:26 2017
@brb
Contributor

brb commented Sep 25, 2017

Thanks for opening the issue.

It seems that the weave-net pod gets stuck and does not make progress with its initialization. Could you paste the output of kubectl describe ds weave-net -n=kube-system?

@ceridwen
Author

Name:		weave-net
Selector:	name=weave-net
Node-Selector:	<none>
Labels:		name=weave-net
Annotations:	cloud.weave.works/launcher-info={
  "server-version": "master-3e85166",
  "original-request": {
    "url": "/k8s/v1.7/net.yaml?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yO...
	kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"extensions/v1beta1","kind":"DaemonSet","metadata":{"annotations":{"cloud.weave.works/launcher-info":"{\n  \"server-version\": \"master-3...
Desired Number of Nodes Scheduled: 1
Current Number of Nodes Scheduled: 1
Number of Nodes Scheduled with Up-to-date Pods: 1
Number of Nodes Scheduled with Available Pods: 1
Number of Nodes Misscheduled: 0
Pods Status:	1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:		name=weave-net
  Service Account:	weave-net
  Containers:
   weave:
    Image:	weaveworks/weave-kube:2.0.4
    Port:	<none>
    Command:
      /home/weave/launch.sh
    Requests:
      cpu:	10m
    Liveness:	http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:	 (v1:spec.nodeName)
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /host/var/lib/dbus from dbus (rw)
      /lib/modules from lib-modules (rw)
      /weavedb from weavedb (rw)
   weave-npc:
    Image:	weaveworks/weave-npc:2.0.4
    Port:	<none>
    Requests:
      cpu:	10m
    Environment:
      HOSTNAME:	 (v1:spec.nodeName)
    Mounts:	<none>
  Volumes:
   weavedb:
    Type:	HostPath (bare host directory volume)
    Path:	/var/lib/weave
   cni-bin:
    Type:	HostPath (bare host directory volume)
    Path:	/opt
   cni-bin2:
    Type:	HostPath (bare host directory volume)
    Path:	/home
   cni-conf:
    Type:	HostPath (bare host directory volume)
    Path:	/etc
   dbus:
    Type:	HostPath (bare host directory volume)
    Path:	/var/lib/dbus
   lib-modules:
    Type:	HostPath (bare host directory volume)
    Path:	/lib/modules
Events:
  FirstSeen	LastSeen	Count	From		SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----		-------------	--------	------			-------
  15m		15m		1	daemon-set			Normal		SuccessfulCreate	Created pod: weave-net-77dkq

@brb
Contributor

brb commented Nov 8, 2017

Sorry for the delay. I was able to reproduce your issue. It seems that the weave-kube container exits with error code 147 during initialization, so the iptables rules required to filter weave traffic are never installed.

So, we should debug why weave-kube crashes on minikube (tested with k8s 1.8.0 and minikube 0.23.0).
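
For anyone debugging this, a rough way to inspect the crashing container (the pod name is the one from the DaemonSet events pasted above):

kubectl -n kube-system get pods -l name=weave-net
kubectl -n kube-system logs weave-net-77dkq -c weave --previous   # logs from the last crashed run
kubectl -n kube-system describe pod weave-net-77dkq               # Last State shows the exit code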

@ceeaspb

ceeaspb commented Jan 23, 2018

@brb was there any progress on this? thanks

@brb
Contributor

brb commented Jan 25, 2018

Sorry, but no progress. My bet is that the minikube kernel is missing some required configuration.

@brb changed the title from "Network Policy not blocking access on minikube" to "Weave fails to start in minikube VM" on Jan 29, 2018
@brb
Contributor

brb commented Jun 6, 2018

I've just checked and found that minikube (0.27) is missing the following kernel configuration options, which prevents Weave Net (and thus weave-kube) from starting on it:

CONFIG_DUMMY=m
CONFIG_OPENVSWITCH=m
CONFIG_OPENVSWITCH_VXLAN=m
CONFIG_VXLAN=m
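
A quick way to confirm which options are missing from inside the VM (a sketch; it assumes the kernel exposes /proc/config.gz):

minikube ssh
zcat /proc/config.gz | grep -E 'CONFIG_(DUMMY|OPENVSWITCH|VXLAN)'
# Or try loading the modules directly; an error means they are absent:
sudo modprobe dummy && sudo modprobe vxlan && sudo modprobe openvswitch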

It's possible to work around the missing openvswitch options by passing --no-fastdp via EXTRA_ARGS (see the sketch below). Unfortunately, we still try to create a dummy interface in any case, which fails because the dummy kernel module is missing.
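
One way to set that flag (a sketch; it assumes the weave container's launch script reads the EXTRA_ARGS environment variable, as implied above):

kubectl -n kube-system set env ds/weave-net -c weave EXTRA_ARGS=--no-fastdp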

@brb changed the title from "Weave fails to start in minikube VM" to "Weave Net fails to start in minikube VM" on Jun 6, 2018
@brb
Contributor

brb commented Jun 6, 2018

Submitted PR: kubernetes/minikube#2876

@brb
Contributor

brb commented Jun 16, 2018

FYI: the PR got merged and was included in the recent minikube v0.28 release.

To run Weave Net on minikube, after upgrading minikube you first need to override the default CNI config shipped with minikube:

mkdir -p ~/.minikube/files/etc/cni/net.d/ && touch ~/.minikube/files/etc/cni/net.d/k8s.conf

and then start minikube with CNI enabled:

minikube start --network-plugin=cni --extra-config=kubelet.network-plugin=cni

Afterwards, you can install Weave Net.

@hswong3i

hswong3i commented Apr 30, 2019

In my case, when running with LXD + Minikube + none driver + Weave (see https://github.com/alvistack/ansible-role-minikube/blob/master/molecule/ubuntu-18.04/playbook.yml), the key steps are:

# Install CNI plugin.
mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/v0.7.5/cni-plugins-amd64-v0.7.5.tgz" | tar -C /opt/cni/bin -xz

# `minikube start` with CNI support.
minikube start \
          --extra-config=kubeadm.ignore-preflight-errors=FileContent--proc-sys-net-bridge-bridge-nf-call-iptables,SystemVerification \
          --extra-config=kubelet.cgroup-driver=systemd \
          --extra-config=kubelet.network-plugin=cni \
          --kubernetes-version=v1.14.1 \
          --network-plugin=cni \
          --vm-driver=none

# Install Weave.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# Check result.
minikube status
kubectl get pod --all-namespaces

P.S. No ~/.minikube/files/etc/cni/net.d/k8s.conf is required.
