Weave Net fails to start in minikube VM #3124
Thanks for opening the issue. It seems that the weave-net pod gets stuck and does not progress its initialization. Could you paste the output of …?
Sorry for the delay. I was able to reproduce your issue. It seems that the weave-kube container exits with error code 147 during initialization, so the iptables rules required to filter Weave traffic are never installed. We should debug why weave-kube crashes on minikube (tested with k8s 1.8.0 and minikube 0.23.0).
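For anyone hitting this, one quick way to confirm that the rules are missing (a sketch; it assumes shell access to the VM via `minikube ssh`) is:

```sh
# Inside the minikube VM (minikube ssh): a working weave-kube run
# installs WEAVE-* iptables chains; if nothing matches, the container
# died before it could set them up.
sudo iptables-save | grep -i weave || echo "no weave iptables rules found"
```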
@brb, was there any progress on this? Thanks.
Sorry, but no progress. My bet is that the minikube kernel does not have the proper configuration.
I've just checked and found that minikube (0.27) is missing the following kernel configuration options, which prevents Weave Net (and thus weave-kube) from starting on it:
It's possible to work around the missing openvswitch options by setting …
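As an aside, one way to check which of these options a given minikube kernel was built with (a sketch; it assumes the kernel exposes its build config via /proc/config.gz or a /boot/config-* file) is:

```sh
# Inspect the running kernel's build configuration (inside the VM);
# the exact CONFIG_* names depend on the list mentioned above.
zcat /proc/config.gz 2>/dev/null | grep -i openvswitch \
  || grep -i openvswitch "/boot/config-$(uname -r)"
```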
Submitted PR: kubernetes/minikube#2876
FYI: the PR got merged and was included in the recent minikube v0.28. To run Weave Net on minikube after upgrading, you need to overwrite the default CNI config shipped with minikube:
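Roughly (a sketch, not the exact commands from the comment; it reuses the install command from the reproduction steps below and assumes the default CNI configs live under the conventional /etc/cni/net.d directory):

```sh
# Start minikube with the CNI plugin enabled, then install Weave Net
# (same install command as in the reproduction steps further below).
minikube start --network-plugin=cni
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# Inside the VM (minikube ssh), check which CNI configs are present;
# if minikube's bundled config wins over Weave's, it has to be
# removed/replaced so that Weave's config is the one kubelet uses:
ls /etc/cni/net.d/
```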
In my case, running with LXD + Minikube + the none driver + Weave (see https://github.com/alvistack/ansible-role-minikube/blob/master/molecule/ubuntu-18.04/playbook.yml), the key procedures are:
P.S. no …
What you expected to happen?
I have 1.6 and 1.7 network policies that I expect to deny access. AFAICT, Weave is always allowing access irrespective of policies.
For 1.6, I have annotations set on namespaces:
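For reference, a 1.6-style DefaultDeny isolation annotation looks roughly like this (the namespace name is a placeholder, not the reporter's actual YAML):

```yaml
# Kubernetes 1.6-style isolation via a namespace annotation.
# "my-namespace" is a placeholder.
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {"ingress": {"isolation": "DefaultDeny"}}
```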
For 1.7, I have a default-deny network policy:
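A typical 1.7 default-deny policy in the networking.k8s.io/v1 API looks roughly like this (names are placeholders):

```yaml
# Default-deny ingress for every pod in the namespace (placeholder names).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: my-namespace
spec:
  podSelector: {}   # selects all pods in the namespace
  # no ingress rules listed => no ingress traffic is allowed
```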
For both, I also have a policy that should allow traffic to certain pods from certain pods and namespaces.
I elided some irrelevant parts of the YAML to keep it short.
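For illustration, an allow policy of that shape usually combines a podSelector and a namespaceSelector in its `from` clause (all names and labels below are hypothetical, not the reporter's elided YAML):

```yaml
# Allow traffic to pods labelled app=backend from pods labelled
# app=frontend in the same namespace, and from any pod in namespaces
# labelled team=trusted. All names/labels are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-selected
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    - namespaceSelector:
        matchLabels:
          team: trusted
```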
For pods in these namespaces that are not covered by the last (allow) policy, I expected them to be unreachable by anything; for pods that are covered by it, I expected them to be reachable only by pods with the right label and by pods in namespaces with the right label.
What happened?
All traffic got through to the pods in the namespaces with policies and/or annotations set.
See below for logs.
How to reproduce it?
minikube start --network-plugin=cni
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
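After those two commands, a quick sanity check that Weave Net actually started (a sketch; `name=weave-net` is the label used by the standard Weave Net DaemonSet):

```sh
# The standard Weave Net DaemonSet labels its pods with name=weave-net.
kubectl get pods -n kube-system -l name=weave-net -o wide

# Inspect the weave container of one of those pods; substitute a pod
# name from the previous command's output.
kubectl logs -n kube-system <weave-net-pod-name> -c weave
```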
Anything else we need to know?
Minikube 0.22.0 (Kubernetes 1.7.5) running on Mac OS X 10.12.6.
Versions:
Logs:
Logs from the weave container:
I stuck the weave-npc logs in a gist, https://gist.github.com/ceridwen/17455d98de7e93acfd42edefe61be97a, because they're long.
Network:
I ran these commands inside the minikube VM.
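The commands in question are of this kind (an assumption based on the usual issue template, not necessarily the exact invocations used):

```sh
# Typical in-VM network diagnostics (output elided above):
ip route                              # routing table
ip -4 addr show                       # IPv4 addresses per interface
sudo iptables-save | grep -i weave    # Weave's iptables rules, if any
```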