
none: Failing to load a module due to module being built-in (br_netfilter) #5900

Closed
battlesnake opened this issue Nov 13, 2019 · 8 comments
Labels
co/none-driver kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. triage/needs-information Indicates an issue needs more information in order to work on it.

Comments

@battlesnake

It looks like minikube is attempting to load a module that's built-in (in my case), and is then failing when the module isn't found.
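
For reference, one quick way to check whether a module is built into the running kernel rather than loadable (standard paths; modprobe consults modules.builtin for exactly this, and lsmod only lists loadable modules):

$ grep br_netfilter /lib/modules/$(uname -r)/modules.builtin
$ lsmod | grep br_netfilter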

The exact command to reproduce the issue:

$ sudo minikube start --network-plugin=cni  --container-runtime=containerd --enable-default-cni --vm-driver=none

The full output of the command that failed:

$ sudo minikube start --network-plugin=cni  --container-runtime=containerd --enable-default-cni --vm-driver=none
😄  minikube v1.5.2 on Arch rolling (vbox/amd64)
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🏃  Using the running none "minikube" VM ...
⌛  Waiting for the host to be provisioned ...

💣  Failed to enable container runtime: br_netfilter: command failed: sudo modprobe br_netfilter
stdout:
stderr: modprobe: FATAL: Module br_netfilter not found in directory /lib/modules/5.3.8-arch1-1
: exit status 1

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new/choose
$ lsmod | grep br_netfilter
br_netfilter           28672  0
bridge                212992  1 br_netfilter

The output of the minikube logs command:

$ sudo minikube logs
E1113 13:57:05.724655 3932286 logs.go:175] Failed to list containers for "kube-apiserver": command failed: sudo crictl ps -a --name=kube-apiserver --state=Running --quiet
stdout:
stderr: sudo: crictl: command not found
: exit status 1
E1113 13:57:05.733915 3932286 logs.go:175] Failed to list containers for "coredns": command failed: sudo crictl ps -a --name=coredns --state=Running --quiet
stdout:
stderr: sudo: crictl: command not found
: exit status 1
E1113 13:57:05.743823 3932286 logs.go:175] Failed to list containers for "kube-scheduler": command failed: sudo crictl ps -a --name=kube-scheduler --state=Running --quiet
stdout:
stderr: sudo: crictl: command not found
: exit status 1
E1113 13:57:05.752668 3932286 logs.go:175] Failed to list containers for "kube-proxy": command failed: sudo crictl ps -a --name=kube-proxy --state=Running --quiet
stdout:
stderr: sudo: crictl: command not found
: exit status 1
E1113 13:57:05.762507 3932286 logs.go:175] Failed to list containers for "kube-addon-manager": command failed: sudo crictl ps -a --name=kube-addon-manager --state=Running --quiet
stdout:
stderr: sudo: crictl: command not found
: exit status 1
E1113 13:57:05.773576 3932286 logs.go:175] Failed to list containers for "kubernetes-dashboard": command failed: sudo crictl ps -a --name=kubernetes-dashboard --state=Running --quiet
stdout:
stderr: sudo: crictl: command not found
: exit status 1
E1113 13:57:05.784208 3932286 logs.go:175] Failed to list containers for "storage-provisioner": command failed: sudo crictl ps -a --name=storage-provisioner --state=Running --quiet
stdout:
stderr: sudo: crictl: command not found
: exit status 1
E1113 13:57:05.793814 3932286 logs.go:175] Failed to list containers for "kube-controller-manager": command failed: sudo crictl ps -a --name=kube-controller-manager --state=Running --quiet
stdout:
stderr: sudo: crictl: command not found
: exit status 1
==> container status <==
sudo: crictl: command not found
CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS                     PORTS                                  NAMES
d6802db5d40b        4689081edb10                    "/storage-provisioner"   4 minutes ago       Exited (1) 4 minutes ago                                          k8s_storage-provisioner_storage-provisioner_kube-system_17a1381e-2f55-4551-a1b6-d2d855c78065_11
890c0e5e633c        kindest/node:v1.15.3            "/usr/local/bin/entr…"   6 minutes ago       Up 6 minutes               37197/tcp, 127.0.0.1:37197->6443/tcp   kind-control-plane
89d8a48e2825        kindest/node:v1.15.3            "/usr/local/bin/entr…"   6 minutes ago       Up 6 minutes                                                      kind-worker2
05147909e7c5        kindest/node:v1.15.3            "/usr/local/bin/entr…"   6 minutes ago       Up 6 minutes                                                      kind-worker3
9a7e2905af0c        kindest/node:v1.15.3            "/usr/local/bin/entr…"   6 minutes ago       Up 6 minutes                                                      kind-worker4
bd40465422ca        kindest/node:v1.15.3            "/usr/local/bin/entr…"   6 minutes ago       Up 6 minutes                                                      kind-worker
143f499f1dad        bf261d157914                    "/coredns -conf /etc…"   41 minutes ago      Up 41 minutes                                                     k8s_coredns_coredns-5644d7b6d9-7hb8s_kube-system_b53df018-ceb6-42ca-9eda-f260632a78b5_0
93ffe45961dc        bf261d157914                    "/coredns -conf /etc…"   41 minutes ago      Up 41 minutes                                                     k8s_coredns_coredns-5644d7b6d9-ks6ph_kube-system_92620e64-7c64-4bc0-8e24-78ea113d0955_0
c2a64c39284b        k8s.gcr.io/pause:3.1            "/pause"                 41 minutes ago      Up 41 minutes                                                     k8s_POD_coredns-5644d7b6d9-ks6ph_kube-system_92620e64-7c64-4bc0-8e24-78ea113d0955_0
908b73f0700a        k8s.gcr.io/pause:3.1            "/pause"                 41 minutes ago      Up 41 minutes                                                     k8s_POD_coredns-5644d7b6d9-7hb8s_kube-system_b53df018-ceb6-42ca-9eda-f260632a78b5_0
e196121fc001        k8s.gcr.io/pause:3.1            "/pause"                 41 minutes ago      Up 41 minutes                                                     k8s_POD_storage-provisioner_kube-system_17a1381e-2f55-4551-a1b6-d2d855c78065_0
612adfe44040        8454cbe08dc9                    "/usr/local/bin/kube…"   41 minutes ago      Up 41 minutes                                                     k8s_kube-proxy_kube-proxy-wkrtb_kube-system_a79f85c0-036f-49be-b0be-879f4b102bec_0
4f642bbfa8f6        k8s.gcr.io/pause:3.1            "/pause"                 41 minutes ago      Up 41 minutes                                                     k8s_POD_kube-proxy-wkrtb_kube-system_a79f85c0-036f-49be-b0be-879f4b102bec_0
12a76ea8a702        k8s.gcr.io/kube-addon-manager   "/opt/kube-addons.sh"    41 minutes ago      Up 41 minutes                                                     k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_c3e29047da86ce6690916750ab69c40b_0
01b17d22d4d5        6e4bffa46d70                    "kube-controller-man…"   41 minutes ago      Up 41 minutes                                                     k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_cec95c8ee9f7d436d746b17938071307_0
1978453002ae        b2756210eeab                    "etcd --advertise-cl…"   41 minutes ago      Up 41 minutes                                                     k8s_etcd_etcd-minikube_kube-system_3d3b4f5c1d684e445b994fc826dddcab_0
cc4ba4bc3cdd        c2c9a0406787                    "kube-apiserver --ad…"   41 minutes ago      Up 41 minutes                                                     k8s_kube-apiserver_kube-apiserver-minikube_kube-system_31d06106f855bb3a799f808e1fec4749_0
dec403fec203        ebac1ae204a2                    "kube-scheduler --au…"   41 minutes ago      Up 41 minutes                                                     k8s_kube-scheduler_kube-scheduler-minikube_kube-system_74dea8da17aa6241e5e4f7b2ba4e1d8e_0
9b22d868eddf        k8s.gcr.io/pause:3.1            "/pause"                 41 minutes ago      Up 41 minutes                                                     k8s_POD_kube-controller-manager-minikube_kube-system_cec95c8ee9f7d436d746b17938071307_0
18cb5051b72a        k8s.gcr.io/pause:3.1            "/pause"                 41 minutes ago      Up 41 minutes                                                     k8s_POD_etcd-minikube_kube-system_3d3b4f5c1d684e445b994fc826dddcab_0
e237414cb44b        k8s.gcr.io/pause:3.1            "/pause"                 41 minutes ago      Up 41 minutes                                                     k8s_POD_kube-addon-manager-minikube_kube-system_c3e29047da86ce6690916750ab69c40b_0
3455d85b8b0a        k8s.gcr.io/pause:3.1            "/pause"                 41 minutes ago      Up 41 minutes                                                     k8s_POD_kube-scheduler-minikube_kube-system_74dea8da17aa6241e5e4f7b2ba4e1d8e_0
77bb82f4e0de        k8s.gcr.io/pause:3.1            "/pause"                 41 minutes ago      Up 41 minutes                                                     k8s_POD_kube-apiserver-minikube_kube-system_31d06106f855bb3a799f808e1fec4749_0

==> containerd <==
-- Logs begin at Tue 2019-09-17 13:49:45 EEST, end at Wed 2019-11-13 13:57:05 EET. --
-- No entries --

==> dmesg <==
[Nov13 13:31] kauditd_printk_skb: 110 callbacks suppressed
[Nov13 13:32] kauditd_printk_skb: 58 callbacks suppressed
[  +5.002784] kauditd_printk_skb: 61 callbacks suppressed
[  +5.098898] kauditd_printk_skb: 56 callbacks suppressed
[  +5.648524] kauditd_printk_skb: 57 callbacks suppressed
[  +5.002550] kauditd_printk_skb: 58 callbacks suppressed
[  +5.147141] kauditd_printk_skb: 55 callbacks suppressed
[  +5.599690] kauditd_printk_skb: 55 callbacks suppressed
[  +5.002480] kauditd_printk_skb: 60 callbacks suppressed
[  +5.074269] kauditd_printk_skb: 57 callbacks suppressed
[Nov13 13:33] kauditd_printk_skb: 58 callbacks suppressed
[Nov13 13:35] kauditd_printk_skb: 2 callbacks suppressed
[ +14.992885] kauditd_printk_skb: 105 callbacks suppressed
[  +5.141991] kauditd_printk_skb: 63 callbacks suppressed
[  +5.090905] kauditd_printk_skb: 60 callbacks suppressed
[  +5.020349] kauditd_printk_skb: 54 callbacks suppressed
[  +5.139090] kauditd_printk_skb: 56 callbacks suppressed
[  +5.005240] kauditd_printk_skb: 56 callbacks suppressed
[  +5.111186] kauditd_printk_skb: 49 callbacks suppressed
[Nov13 13:36] kauditd_printk_skb: 55 callbacks suppressed
[  +5.250151] kauditd_printk_skb: 58 callbacks suppressed
[  +5.002100] kauditd_printk_skb: 58 callbacks suppressed
[  +5.067000] kauditd_printk_skb: 56 callbacks suppressed
[ +24.301627] kauditd_printk_skb: 51 callbacks suppressed
[Nov13 13:38] kauditd_printk_skb: 10 callbacks suppressed
[ +15.150845] kauditd_printk_skb: 107 callbacks suppressed
[Nov13 13:39] kauditd_printk_skb: 65 callbacks suppressed
[  +5.002684] kauditd_printk_skb: 56 callbacks suppressed
[  +5.093082] kauditd_printk_skb: 55 callbacks suppressed
[  +5.153834] kauditd_printk_skb: 55 callbacks suppressed
[  +5.003331] kauditd_printk_skb: 58 callbacks suppressed
[  +5.097125] kauditd_printk_skb: 55 callbacks suppressed
[  +5.149524] kauditd_printk_skb: 55 callbacks suppressed
[  +5.002793] kauditd_printk_skb: 59 callbacks suppressed
[  +5.092161] kauditd_printk_skb: 55 callbacks suppressed
[  +5.015215] kauditd_printk_skb: 54 callbacks suppressed
[Nov13 13:40] kauditd_printk_skb: 59 callbacks suppressed
[Nov13 13:46] kauditd_printk_skb: 2 callbacks suppressed
[Nov13 13:50] kauditd_printk_skb: 101 callbacks suppressed
[  +5.750267] kauditd_printk_skb: 61 callbacks suppressed
[  +5.002785] kauditd_printk_skb: 62 callbacks suppressed
[Nov13 13:51] kauditd_printk_skb: 58 callbacks suppressed
[  +5.006687] kauditd_printk_skb: 54 callbacks suppressed
[  +5.005064] kauditd_printk_skb: 57 callbacks suppressed
[  +5.008353] kauditd_printk_skb: 53 callbacks suppressed
[  +6.154426] kauditd_printk_skb: 58 callbacks suppressed
[  +5.003357] kauditd_printk_skb: 61 callbacks suppressed
[  +5.002410] kauditd_printk_skb: 56 callbacks suppressed
[ +23.245641] kauditd_printk_skb: 44 callbacks suppressed

==> kernel <==
 13:57:05 up 6 days,  2:29,  5 users,  load average: 0.78, 2.43, 3.24
Linux markvm 5.3.8-arch1-1 #1 SMP PREEMPT @1572357769 x86_64 GNU/Linux
PRETTY_NAME="Arch Linux"

==> kubelet <==
-- Logs begin at Tue 2019-09-17 13:49:45 EEST, end at Wed 2019-11-13 13:57:05 EET. --
Nov 13 13:47:30 markvm kubelet[3781153]: Try `iptables -h' or 'iptables --help' for more information.
Nov 13 13:47:36 markvm kubelet[3781153]: E1113 13:47:36.014622 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:47:47 markvm kubelet[3781153]: E1113 13:47:47.014817 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:47:58 markvm kubelet[3781153]: E1113 13:47:58.016118 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:48:10 markvm kubelet[3781153]: E1113 13:48:10.022618 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:48:24 markvm kubelet[3781153]: E1113 13:48:24.015690 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:48:30 markvm kubelet[3781153]: E1113 13:48:30.401537 3781153 kubelet_network_linux.go:57] Failed to ensure marking rule for KUBE-MARK-DROP: error checking rule: exit status 2: iptables v1.8.3 (legacy): unknown option "--set-xmark"
Nov 13 13:48:30 markvm kubelet[3781153]: Try `iptables -h' or 'iptables --help' for more information.
Nov 13 13:48:38 markvm kubelet[3781153]: E1113 13:48:38.014352 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:48:51 markvm kubelet[3781153]: E1113 13:48:51.014284 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:49:04 markvm kubelet[3781153]: E1113 13:49:04.014519 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:49:18 markvm kubelet[3781153]: E1113 13:49:18.014094 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:49:30 markvm kubelet[3781153]: E1113 13:49:30.409826 3781153 kubelet_network_linux.go:57] Failed to ensure marking rule for KUBE-MARK-DROP: error checking rule: exit status 2: iptables v1.8.3 (legacy): unknown option "--set-xmark"
Nov 13 13:49:30 markvm kubelet[3781153]: Try `iptables -h' or 'iptables --help' for more information.
Nov 13 13:49:31 markvm kubelet[3781153]: E1113 13:49:31.014041 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:49:46 markvm kubelet[3781153]: E1113 13:49:46.014401 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:49:59 markvm kubelet[3781153]: E1113 13:49:59.014192 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:50:13 markvm kubelet[3781153]: E1113 13:50:13.014362 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:50:26 markvm kubelet[3781153]: E1113 13:50:26.014402 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:50:30 markvm kubelet[3781153]: E1113 13:50:30.420532 3781153 kubelet_network_linux.go:57] Failed to ensure marking rule for KUBE-MARK-DROP: error checking rule: exit status 2: iptables v1.8.3 (legacy): unknown option "--set-xmark"
Nov 13 13:50:30 markvm kubelet[3781153]: Try `iptables -h' or 'iptables --help' for more information.
Nov 13 13:50:39 markvm kubelet[3781153]: E1113 13:50:39.014241 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:50:51 markvm kubelet[3781153]: E1113 13:50:51.013883 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:51:03 markvm kubelet[3781153]: E1113 13:51:03.016205 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:51:16 markvm kubelet[3781153]: E1113 13:51:16.014008 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:51:30 markvm kubelet[3781153]: E1113 13:51:30.023240 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:51:30 markvm kubelet[3781153]: E1113 13:51:30.428415 3781153 kubelet_network_linux.go:57] Failed to ensure marking rule for KUBE-MARK-DROP: error checking rule: exit status 2: iptables v1.8.3 (legacy): unknown option "--set-xmark"
Nov 13 13:51:30 markvm kubelet[3781153]: Try `iptables -h' or 'iptables --help' for more information.
Nov 13 13:51:43 markvm kubelet[3781153]: E1113 13:51:43.014059 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:51:57 markvm kubelet[3781153]: E1113 13:51:57.014379 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:52:10 markvm kubelet[3781153]: E1113 13:52:10.017248 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:52:30 markvm kubelet[3781153]: E1113 13:52:30.438052 3781153 kubelet_network_linux.go:57] Failed to ensure marking rule for KUBE-MARK-DROP: error checking rule: exit status 2: iptables v1.8.3 (legacy): unknown option "--set-xmark"
Nov 13 13:52:30 markvm kubelet[3781153]: Try `iptables -h' or 'iptables --help' for more information.
Nov 13 13:52:56 markvm kubelet[3781153]: E1113 13:52:56.754156 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:53:09 markvm kubelet[3781153]: E1113 13:53:09.014007 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:53:24 markvm kubelet[3781153]: E1113 13:53:24.014368 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:53:30 markvm kubelet[3781153]: E1113 13:53:30.446859 3781153 kubelet_network_linux.go:57] Failed to ensure marking rule for KUBE-MARK-DROP: error checking rule: exit status 2: iptables v1.8.3 (legacy): unknown option "--set-xmark"
Nov 13 13:53:30 markvm kubelet[3781153]: Try `iptables -h' or 'iptables --help' for more information.
Nov 13 13:53:37 markvm kubelet[3781153]: E1113 13:53:37.014183 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:53:49 markvm kubelet[3781153]: E1113 13:53:49.014421 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:54:02 markvm kubelet[3781153]: E1113 13:54:02.014642 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:54:16 markvm kubelet[3781153]: E1113 13:54:16.014725 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:54:28 markvm kubelet[3781153]: E1113 13:54:28.014632 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:54:30 markvm kubelet[3781153]: E1113 13:54:30.457778 3781153 kubelet_network_linux.go:57] Failed to ensure marking rule for KUBE-MARK-DROP: error checking rule: exit status 2: iptables v1.8.3 (legacy): unknown option "--set-xmark"
Nov 13 13:54:30 markvm kubelet[3781153]: Try `iptables -h' or 'iptables --help' for more information.
Nov 13 13:54:40 markvm kubelet[3781153]: E1113 13:54:40.014723 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:54:55 markvm kubelet[3781153]: E1113 13:54:55.013962 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:55:08 markvm kubelet[3781153]: E1113 13:55:08.014526 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:55:23 markvm kubelet[3781153]: E1113 13:55:23.014036 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:55:30 markvm kubelet[3781153]: E1113 13:55:30.466960 3781153 kubelet_network_linux.go:57] Failed to ensure marking rule for KUBE-MARK-DROP: error checking rule: exit status 2: iptables v1.8.3 (legacy): unknown option "--set-xmark"
Nov 13 13:55:30 markvm kubelet[3781153]: Try `iptables -h' or 'iptables --help' for more information.
Nov 13 13:55:38 markvm kubelet[3781153]: E1113 13:55:38.014028 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:55:53 markvm kubelet[3781153]: E1113 13:55:53.013983 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:56:06 markvm kubelet[3781153]: E1113 13:56:06.014126 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:56:17 markvm kubelet[3781153]: E1113 13:56:17.014917 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:56:30 markvm kubelet[3781153]: E1113 13:56:30.476208 3781153 kubelet_network_linux.go:57] Failed to ensure marking rule for KUBE-MARK-DROP: error checking rule: exit status 2: iptables v1.8.3 (legacy): unknown option "--set-xmark"
Nov 13 13:56:30 markvm kubelet[3781153]: Try `iptables -h' or 'iptables --help' for more information.
Nov 13 13:56:31 markvm kubelet[3781153]: E1113 13:56:31.014104 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:56:42 markvm kubelet[3781153]: E1113 13:56:42.014448 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"
Nov 13 13:56:56 markvm kubelet[3781153]: E1113 13:56:56.014208 3781153 pod_workers.go:191] Error syncing pod 17a1381e-2f55-4551-a1b6-d2d855c78065 ("storage-provisioner_kube-system(17a1381e-2f55-4551-a1b6-d2d855c78065)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(17a1381e-2f55-4551-

The operating system version:

Arch Linux, kernel 5.3.8-arch1-1, running in Virtualbox.


@tstromberg tstromberg changed the title Failing to load a module due to module being built-in (br_netfilter) none: Failing to load a module due to module being built-in (br_netfilter) Nov 15, 2019
@tstromberg
Contributor

We haven't done much testing with the none driver and containerd. If you want to try it, I have a possible solution: edit this line here:

c := exec.Command("sudo", "modprobe", "br_netfilter")

And add a check like this:

	c := exec.Command("sudo", "sysctl", " net.netfilter.nf_conntrack_count")
	if _, err := cr.RunCmd(c); err != nil { 
            .. then modprobe!
	}
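
(The idea, if the approach from the libnetwork issue linked below carries over, is that the sysctl key is only readable once the netfilter machinery is active, so a successful read means the modprobe can be skipped whether the module is loadable or built in.)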

Please let me know if it works.

Related issue: moby/libnetwork#1210

@tstromberg tstromberg added co/none-driver kind/bug Categorizes issue or PR as related to a bug. triage/needs-information Indicates an issue needs more information in order to work on it. labels Nov 15, 2019
@battlesnake
Author

battlesnake commented Nov 24, 2019

Thanks, I'll test that next week. I think the issue was that br_netfilter is built into the kernel on my system: it is already loaded and can't be modprobe'd, so modprobe was failing and taking minikube down with it, even though the module was effectively present.

@medyagh
Member

medyagh commented Dec 16, 2019

@battlesnake have you had a chance to try this? Is this something we need to fix or work on?

@afbjorklund
Collaborator

We could use lsmod to check whether it is an external module or not...

Still weird that the modprobe call fails in the first place, though.

modules.builtin
--------------------------------------------------
This file lists all modules that are built into the kernel. This is used
by modprobe to not fail when trying to load something builtin.
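
A sketch of what that combined check could look like (an illustrative standalone helper, not minikube's actual code): a module counts as available if lsmod shows it loaded or modules.builtin lists it as built in.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// moduleAvailable reports whether a kernel module is loaded (per lsmod)
// or built into the running kernel (per modules.builtin).
func moduleAvailable(name string) bool {
	// lsmod prints one module per line, with the name in the first column.
	if out, err := exec.Command("lsmod").Output(); err == nil {
		for _, line := range strings.Split(string(out), "\n") {
			if f := strings.Fields(line); len(f) > 0 && f[0] == name {
				return true
			}
		}
	}
	// modules.builtin lists entries like kernel/net/bridge/br_netfilter.ko.
	release, err := exec.Command("uname", "-r").Output()
	if err != nil {
		return false
	}
	path := fmt.Sprintf("/lib/modules/%s/modules.builtin", strings.TrimSpace(string(release)))
	builtins, err := os.ReadFile(path)
	if err != nil {
		return false
	}
	return strings.Contains(string(builtins), "/"+name+".ko")
}

func main() {
	fmt.Println(moduleAvailable("br_netfilter"))
}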

@medyagh
Member

medyagh commented Jan 29, 2020

I can confirm this issue; we have the same problem with the kic driver (docker) on Cloud Shell.
I created an issue for this: there is no need to exit with a failure on modprobe, we could just log the error and try to continue.

Please refer to that issue for updates; I will make a PR to address this.
#6404
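
Roughly the change I have in mind (a sketch only; cr.RunCmd mirrors the snippet earlier in this thread and glog is minikube's logger, but treat the exact call sites as illustrative):

	if _, err := cr.RunCmd(exec.Command("sudo", "modprobe", "br_netfilter")); err != nil {
		// Don't abort start-up: the module may be built into the kernel,
		// as in this issue. Log the error and keep going.
		glog.Warningf("modprobe br_netfilter failed, continuing anyway: %v", err)
	}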

@medyagh medyagh closed this as completed Jan 29, 2020
@medyagh medyagh reopened this Jan 29, 2020
@medyagh medyagh added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Jan 29, 2020
@medyagh
Member

medyagh commented Jan 29, 2020

And thank you, @battlesnake, for creating this issue; it will actually help minikube run much better on different platforms.

@medyagh
Member

medyagh commented Jan 30, 2020

@battlesnake this should be fixed by the PR, and it will be in the next release. Please feel free to re-open if you still hit the issue.

@medyagh medyagh closed this as completed Jan 30, 2020
@battlesnake
Author

Thanks. I was using Minikube for a proof of concept, so I haven't done much with it since opening this issue.
