k3s v1.20.0+k3s2 with external containerd does not work anymore #2779

Closed

luckyluc74 opened this issue Jan 6, 2021 · 5 comments

luckyluc74 commented Jan 6, 2021

Environmental Info:
K3s Version:
v1.20.0+k3s2

Node(s) CPU architecture, OS, and Version:
Arm64 (raspberry pi 4 8GB), Ubuntu 20.10 with zfs

# uname -a
Linux mnode3 5.8.0-1011-raspi #14-Ubuntu SMP PREEMPT Tue Dec 15 08:53:29 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux
# zfs version
zfs-0.8.4-1ubuntu11
zfs-kmod-0.8.4-1ubuntu11

Cluster Configuration:
1 master

Describe the bug:
k3s v1.20.0+k3s2 with an external containerd using the zfs snapshotter does not work anymore.
With k3s v1.19.5+k3s2 the same setup works fine.

Steps To Reproduce:

install containerd

apt install containerd

create the containerd config directory

mkdir /etc/containerd

create /etc/containerd/config.toml with the following contents

version = 2
[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "zfs"
[plugins."io.containerd.internal.v1.opt"]
  path = "/var/lib/containerd/opt"

create containerd dataset

zfs create -o mountpoint=/var/lib/containerd/io.containerd.snapshotter.v1.zfs rpool/containerd

install k3s

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.20.0+k3s2 K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy --cluster-init  --container-runtime-endpoint unix:///run/containerd/containerd.sock" sh -
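
To confirm the zfs snapshotter is actually wired up in the external containerd (worth checking between the containerd and k3s steps above), a quick sanity-check sketch; it assumes the ctr CLI that ships with the containerd package:

# restart containerd so it picks up /etc/containerd/config.toml
systemctl restart containerd

# the zfs snapshotter plugin should report status "ok"
ctr plugins ls | grep zfs

# the dataset should be mounted at the snapshotter root
zfs list -o name,mountpoint rpool/containerd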

Expected behavior:
k3s works with external containerd using the zfs snapshotter.

The following command should also succeed without an error:

# crictl images
IMAGE                                      TAG                 IMAGE ID            SIZE
docker.io/calico/cni                       v3.17.1             5605570997ccd       39.9MB
docker.io/calico/kube-controllers          v3.17.1             e46d483883d95       20.5MB
docker.io/calico/node                      v3.17.1             723ee8f82f0ef       38.8MB
docker.io/calico/pod2daemon-flexvol        v3.17.1             85a7dc8c60a03       4.73MB
docker.io/rancher/coredns-coredns          1.6.9               af51a588dff59       12.2MB
docker.io/rancher/klipper-helm             v0.3.0              4001cb2c385ce       45.7MB
docker.io/rancher/klipper-lb               v0.1.2              9be4f056f04b7       2.54MB
docker.io/rancher/library-traefik          1.7.19              1cdb7e2bd5e25       22.2MB
docker.io/rancher/local-path-provisioner   v0.0.14             2b703ea309660       12.3MB
docker.io/rancher/metrics-server           v0.3.6              f9499facb1e8c       9.87MB
k8s.gcr.io/pause                           3.1                 6cf7c80fe4444       265kB

Actual behavior:
k3s does not work with external containerd using the zfs snapshotter.

The following command fails with this error:

# crictl images
FATA[2021-01-06T20:51:11.562033249+01:00] load config file: stat /var/lib/rancher/k3s/data/477a397d5ed8de53c8cee46d5f565f81e93de8dadbff870dc27452b47238f6f5/bin/crictl.yaml: no such file or directory
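
As a side note, crictl can be pointed at the external containerd socket directly instead of relying on the config bundled in the k3s data dir. An untested workaround sketch (the /etc/crictl.yaml path and keys are plain crictl configuration, not anything k3s manages):

# one-off: name the endpoint explicitly
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images

# or persist it in /etc/crictl.yaml
cat <<'EOF' > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF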

k3s error log line related to the issue:

Jan  6 20:50:56 mnode3 k3s[2966]: time="2021-01-06T20:50:56.840499832+01:00" level=error msg="Failed to retrieve agent config: \"overlayfs\" snapshotter cannot be enabled for \"/var/lib/rancher/k3s/agent/containerd\", try using \"fuse-overlayfs\" or \"native\": failed to mount overlay: invalid argument"
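
The agent seems to validate the overlayfs snapshotter for its embedded containerd path even though --container-runtime-endpoint points at the external containerd. As a possible way past that check (untested assumption; --snapshotter is an existing k3s agent flag), the install could force the built-in snapshotter to native:

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.20.0+k3s2 K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy --cluster-init --container-runtime-endpoint unix:///run/containerd/containerd.sock --snapshotter=native" sh -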

complete k3s log:

Jan  6 20:50:11 mnode3 k3s[2966]: time="2021-01-06T20:50:11+01:00" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
Jan  6 20:50:11 mnode3 k3s[2966]: time="2021-01-06T20:50:11+01:00" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/477a397d5ed8de53c8cee46d5f565f81e93de8dadbff870dc27452b47238f6f5"
Jan  6 20:50:20 mnode3 k3s[2966]: time="2021-01-06T20:50:20.685339694+01:00" level=info msg="Starting k3s v1.20.0+k3s2 (2ea6b163)"
Jan  6 20:50:20 mnode3 k3s[2966]: time="2021-01-06T20:50:20.688199795+01:00" level=info msg="Managed etcd cluster initializing"
Jan  6 20:50:20 mnode3 k3s[2966]: time="2021-01-06T20:50:20.729779002+01:00" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:20 +0000 UTC"
Jan  6 20:50:20 mnode3 k3s[2966]: time="2021-01-06T20:50:20.733589323+01:00" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:20 +0000 UTC"
Jan  6 20:50:20 mnode3 k3s[2966]: time="2021-01-06T20:50:20.736700904+01:00" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:20 +0000 UTC"
Jan  6 20:50:20 mnode3 k3s[2966]: time="2021-01-06T20:50:20.739961986+01:00" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:20 +0000 UTC"
Jan  6 20:50:20 mnode3 k3s[2966]: time="2021-01-06T20:50:20.743169586+01:00" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:20 +0000 UTC"
Jan  6 20:50:20 mnode3 k3s[2966]: time="2021-01-06T20:50:20.746070446+01:00" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:20 +0000 UTC"
Jan  6 20:50:20 mnode3 k3s[2966]: time="2021-01-06T20:50:20.749128972+01:00" level=info msg="certificate CN=cloud-controller-manager signed by CN=k3s-client-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:20 +0000 UTC"
Jan  6 20:50:20 mnode3 k3s[2966]: time="2021-01-06T20:50:20.754126807+01:00" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:20 +0000 UTC"
Jan  6 20:50:20 mnode3 k3s[2966]: time="2021-01-06T20:50:20.759018642+01:00" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:20 +0000 UTC"
Jan  6 20:50:20 mnode3 k3s[2966]: time="2021-01-06T20:50:20.763394867+01:00" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:20 +0000 UTC"
Jan  6 20:50:20 mnode3 k3s[2966]: time="2021-01-06T20:50:20.766031988+01:00" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:20 +0000 UTC"
Jan  6 20:50:20 mnode3 k3s[2966]: time="2021-01-06T20:50:20.770266380+01:00" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:20 +0000 UTC"
Jan  6 20:50:21 mnode3 k3s[2966]: time="2021-01-06T20:50:21.951781441+01:00" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:21 +0000 UTC"
Jan  6 20:50:21 mnode3 k3s[2966]: time="2021-01-06T20:50:21.954580691+01:00" level=info msg="Active TLS secret  (ver=) (count 7): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.111.35:192.168.111.35 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=5FEC7A00A9985952578C43840691997F07CBFBA7]"
Jan  6 20:50:21 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:21.955+0100","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.111.35:2380"]}
Jan  6 20:50:21 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:21.956+0100","caller":"embed/etcd.go:468","msg":"starting with peer TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Jan  6 20:50:21 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:21.958+0100","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.111.35:2379"]}
Jan  6 20:50:21 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:21.959+0100","caller":"embed/etcd.go:302","msg":"starting an etcd server","etcd-version":"3.4.13","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.15.5","go-os":"linux","go-arch":"arm64","max-cpu-set":4,"max-cpu-available":4,"member-initialized":false,"name":"mnode3-d8bca3db","data-dir":"/var/lib/rancher/k3s/server/db/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/rancher/k3s/server/db/etcd/member","force-new-cluster":false,"heartbeat-interval":"500ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.111.35:2380"],"listen-peer-urls":["https://192.168.111.35:2380"],"advertise-client-urls":["https://192.168.111.35:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.111.35:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"mnode3-d8bca3db=https://192.168.111.35:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
Jan  6 20:50:21 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:21.968+0100","caller":"etcdserver/backend.go:80","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","took":"6.777495ms"}
Jan  6 20:50:21 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:21.986+0100","caller":"etcdserver/raft.go:486","msg":"starting local member","local-member-id":"5dff5810cfcd2643","cluster-id":"1a102b39dd4f88cf"}
Jan  6 20:50:21 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:21.987+0100","caller":"raft/raft.go:1530","msg":"5dff5810cfcd2643 switched to configuration voters=()"}
Jan  6 20:50:21 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:21.988+0100","caller":"raft/raft.go:700","msg":"5dff5810cfcd2643 became follower at term 0"}
Jan  6 20:50:21 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:21.988+0100","caller":"raft/raft.go:383","msg":"newRaft 5dff5810cfcd2643 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
Jan  6 20:50:21 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:21.988+0100","caller":"raft/raft.go:700","msg":"5dff5810cfcd2643 became follower at term 1"}
Jan  6 20:50:21 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:21.989+0100","caller":"raft/raft.go:1530","msg":"5dff5810cfcd2643 switched to configuration voters=(6773229193817564739)"}
Jan  6 20:50:22 mnode3 k3s[2966]: {"level":"warn","ts":"2021-01-06T20:50:22.003+0100","caller":"auth/store.go:1366","msg":"simple token is not cryptographically signed"}
Jan  6 20:50:22 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:22.056+0100","caller":"etcdserver/quota.go:98","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
Jan  6 20:50:22 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:22.094+0100","caller":"etcdserver/server.go:803","msg":"starting etcd server","local-member-id":"5dff5810cfcd2643","local-server-version":"3.4.13","cluster-version":"to_be_decided"}
Jan  6 20:50:22 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:22.097+0100","caller":"etcdserver/server.go:669","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"5dff5810cfcd2643","forward-ticks":9,"forward-duration":"4.5s","election-ticks":10,"election-timeout":"5s"}
Jan  6 20:50:22 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:22.101+0100","caller":"raft/raft.go:1530","msg":"5dff5810cfcd2643 switched to configuration voters=(6773229193817564739)"}
Jan  6 20:50:22 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:22.102+0100","caller":"membership/cluster.go:392","msg":"added member","cluster-id":"1a102b39dd4f88cf","local-member-id":"5dff5810cfcd2643","added-peer-id":"5dff5810cfcd2643","added-peer-peer-urls":["https://192.168.111.35:2380"]}
Jan  6 20:50:22 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:22.110+0100","caller":"embed/etcd.go:711","msg":"starting with client TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/server-client.key, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Jan  6 20:50:22 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:22.110+0100","caller":"embed/etcd.go:579","msg":"serving peer traffic","address":"192.168.111.35:2380"}
Jan  6 20:50:22 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:22.111+0100","caller":"embed/etcd.go:244","msg":"now serving peer/client/metrics","local-member-id":"5dff5810cfcd2643","initial-advertise-peer-urls":["https://192.168.111.35:2380"],"listen-peer-urls":["https://192.168.111.35:2380"],"advertise-client-urls":["https://192.168.111.35:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.111.35:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
Jan  6 20:50:22 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:22.111+0100","caller":"embed/etcd.go:781","msg":"serving metrics","address":"http://127.0.0.1:2381"}
Jan  6 20:50:22 mnode3 k3s[2966]: time="2021-01-06T20:50:22.113047755+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --feature-gates=ServiceAccountIssuerDiscovery=false --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Jan  6 20:50:24 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:24.990+0100","caller":"raft/raft.go:923","msg":"5dff5810cfcd2643 is starting a new election at term 1"}
Jan  6 20:50:24 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:24.991+0100","caller":"raft/raft.go:713","msg":"5dff5810cfcd2643 became candidate at term 2"}
Jan  6 20:50:24 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:24.991+0100","caller":"raft/raft.go:824","msg":"5dff5810cfcd2643 received MsgVoteResp from 5dff5810cfcd2643 at term 2"}
Jan  6 20:50:24 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:24.991+0100","caller":"raft/raft.go:765","msg":"5dff5810cfcd2643 became leader at term 2"}
Jan  6 20:50:24 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:24.991+0100","caller":"raft/node.go:325","msg":"raft.node: 5dff5810cfcd2643 elected leader 5dff5810cfcd2643 at term 2"}
Jan  6 20:50:24 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:24.994+0100","caller":"etcdserver/server.go:2530","msg":"setting up initial cluster version","cluster-version":"3.4"}
Jan  6 20:50:24 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:24.996+0100","caller":"etcdserver/server.go:2039","msg":"published local member to cluster through raft","local-member-id":"5dff5810cfcd2643","local-member-attributes":"{Name:mnode3-d8bca3db ClientURLs:[https://192.168.111.35:2379]}","request-path":"/0/members/5dff5810cfcd2643/attributes","cluster-id":"1a102b39dd4f88cf","publish-timeout":"15s"}
Jan  6 20:50:25 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:24.999+0100","caller":"membership/cluster.go:558","msg":"set initial cluster version","cluster-id":"1a102b39dd4f88cf","local-member-id":"5dff5810cfcd2643","cluster-version":"3.4"}
Jan  6 20:50:25 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:25.000+0100","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}
Jan  6 20:50:25 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:25.001+0100","caller":"etcdserver/server.go:2562","msg":"cluster version is updated","cluster-version":"3.4"}
Jan  6 20:50:25 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:25.004+0100","caller":"embed/serve.go:191","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
Jan  6 20:50:25 mnode3 k3s[2966]: {"level":"info","ts":"2021-01-06T20:50:25.004+0100","caller":"embed/serve.go:191","msg":"serving client traffic securely","address":"192.168.111.35:2379"}
Jan  6 20:50:25 mnode3 k3s[2966]: time="2021-01-06T20:50:25.034490087+01:00" level=info msg="etcd data store connection OK"
Jan  6 20:50:25 mnode3 k3s[2966]: Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
Jan  6 20:50:25 mnode3 k3s[2966]: I0106 20:50:25.042792    2966 server.go:659] external host was not specified, using 192.168.111.35
Jan  6 20:50:25 mnode3 k3s[2966]: I0106 20:50:25.044557    2966 server.go:196] Version: v1.20.0+k3s2
Jan  6 20:50:26 mnode3 k3s[2966]: I0106 20:50:26.940991    2966 shared_informer.go:240] Waiting for caches to sync for node_authorizer
Jan  6 20:50:26 mnode3 k3s[2966]: I0106 20:50:26.947039    2966 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Jan  6 20:50:26 mnode3 k3s[2966]: I0106 20:50:26.947115    2966 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Jan  6 20:50:26 mnode3 k3s[2966]: I0106 20:50:26.951632    2966 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Jan  6 20:50:26 mnode3 k3s[2966]: I0106 20:50:26.951710    2966 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Jan  6 20:50:27 mnode3 k3s[2966]: I0106 20:50:27.065702    2966 instance.go:289] Using reconciler: lease
Jan  6 20:50:27 mnode3 k3s[2966]: I0106 20:50:27.329569    2966 rest.go:131] the default service ipfamily for this cluster is: IPv4
Jan  6 20:50:28 mnode3 k3s[2966]: W0106 20:50:28.720953    2966 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources.
Jan  6 20:50:28 mnode3 k3s[2966]: W0106 20:50:28.755250    2966 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
Jan  6 20:50:28 mnode3 k3s[2966]: W0106 20:50:28.794466    2966 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Jan  6 20:50:28 mnode3 k3s[2966]: W0106 20:50:28.823667    2966 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Jan  6 20:50:28 mnode3 k3s[2966]: W0106 20:50:28.836094    2966 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Jan  6 20:50:28 mnode3 k3s[2966]: W0106 20:50:28.857457    2966 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Jan  6 20:50:28 mnode3 k3s[2966]: W0106 20:50:28.866514    2966 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
Jan  6 20:50:28 mnode3 k3s[2966]: W0106 20:50:28.885399    2966 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources.
Jan  6 20:50:28 mnode3 k3s[2966]: W0106 20:50:28.885485    2966 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources.
Jan  6 20:50:28 mnode3 k3s[2966]: I0106 20:50:28.923336    2966 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Jan  6 20:50:28 mnode3 k3s[2966]: I0106 20:50:28.923414    2966 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Jan  6 20:50:28 mnode3 k3s[2966]: time="2021-01-06T20:50:28.981342862+01:00" level=info msg="Waiting for API server to become available"
Jan  6 20:50:28 mnode3 k3s[2966]: time="2021-01-06T20:50:28.981516287+01:00" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --profiling=false --secure-port=0"
Jan  6 20:50:28 mnode3 k3s[2966]: time="2021-01-06T20:50:28.983924705+01:00" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Jan  6 20:50:29 mnode3 k3s[2966]: time="2021-01-06T20:50:29.038875224+01:00" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
Jan  6 20:50:29 mnode3 k3s[2966]: time="2021-01-06T20:50:29.039310926+01:00" level=info msg="To join node to cluster: k3s agent -s https://192.168.111.35:6443 -t ${NODE_TOKEN}"
Jan  6 20:50:29 mnode3 k3s[2966]: time="2021-01-06T20:50:29.044013225+01:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Jan  6 20:50:29 mnode3 k3s[2966]: time="2021-01-06T20:50:29.044906926+01:00" level=info msg="Run: k3s kubectl"
Jan  6 20:50:29 mnode3 k3s[2966]: time="2021-01-06T20:50:29.046063348+01:00" level=info msg="Module overlay was already loaded"
Jan  6 20:50:29 mnode3 k3s[2966]: time="2021-01-06T20:50:29.087984689+01:00" level=info msg="Module br_netfilter was already loaded"
Jan  6 20:50:29 mnode3 k3s[2966]: time="2021-01-06T20:50:29.408609685+01:00" level=info msg="Cluster-Http-Server 2021/01/06 20:50:29 http: TLS handshake error from 127.0.0.1:42666: remote error: tls: bad certificate"
Jan  6 20:50:29 mnode3 k3s[2966]: time="2021-01-06T20:50:29.428587618+01:00" level=info msg="Cluster-Http-Server 2021/01/06 20:50:29 http: TLS handshake error from 127.0.0.1:42672: remote error: tls: bad certificate"
Jan  6 20:50:29 mnode3 k3s[2966]: time="2021-01-06T20:50:29.470933068+01:00" level=error msg="runtime core not ready"
Jan  6 20:50:29 mnode3 k3s[2966]: time="2021-01-06T20:50:29.472029157+01:00" level=error msg="Failed to retrieve agent config: https://127.0.0.1:6443/v1-k3s/serving-kubelet.crt: 500 Internal Server Error"
Jan  6 20:50:34 mnode3 k3s[2966]: time="2021-01-06T20:50:34.478368328+01:00" level=info msg="Cluster-Http-Server 2021/01/06 20:50:34 http: TLS handshake error from 127.0.0.1:42684: remote error: tls: bad certificate"
Jan  6 20:50:34 mnode3 k3s[2966]: time="2021-01-06T20:50:34.520805909+01:00" level=error msg="runtime core not ready"
Jan  6 20:50:34 mnode3 k3s[2966]: time="2021-01-06T20:50:34.521640018+01:00" level=error msg="Failed to retrieve agent config: https://127.0.0.1:6443/v1-k3s/serving-kubelet.crt: 500 Internal Server Error"
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.458404    2966 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.458408    2966 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.459219    2966 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.461214    2966 secure_serving.go:197] Serving securely on 127.0.0.1:6444
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.461274    2966 tlsconfig.go:240] Starting DynamicServingCertificateController
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.461610    2966 apf_controller.go:249] Starting API Priority and Fairness config controller
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.461930    2966 customresource_discovery_controller.go:209] Starting DiscoveryController
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.463414    2966 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.463475    2966 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.465801    2966 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.466069    2966 available_controller.go:475] Starting AvailableConditionController
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.466191    2966 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.466205    2966 autoregister_controller.go:141] Starting autoregister controller
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.466234    2966 cache.go:32] Waiting for caches to sync for autoregister controller
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.466309    2966 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.466345    2966 apiservice_controller.go:97] Starting APIServiceRegistrationController
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.466368    2966 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.466395    2966 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.466468    2966 controller.go:83] Starting OpenAPI AggregationController
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.479755    2966 controller.go:86] Starting OpenAPI controller
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.479897    2966 naming_controller.go:291] Starting NamingConditionController
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.479946    2966 establishing_controller.go:76] Starting EstablishingController
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.480071    2966 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.480119    2966 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.480207    2966 crd_finalizer.go:266] Starting CRDFinalizer
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.496150    2966 crdregistration_controller.go:111] Starting crd-autoregister controller
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.496224    2966 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.649346    2966 shared_informer.go:247] Caches are synced for node_authorizer
Jan  6 20:50:35 mnode3 k3s[2966]: time="2021-01-06T20:50:35.659897439+01:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.661440    2966 controller.go:606] quota admission added evaluator for: namespaces
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.662796    2966 apf_controller.go:253] Running API Priority and Fairness config worker
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.665745    2966 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.666550    2966 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.667780    2966 cache.go:39] Caches are synced for AvailableConditionController controller
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.667880    2966 cache.go:39] Caches are synced for autoregister controller
Jan  6 20:50:35 mnode3 k3s[2966]: E0106 20:50:35.671576    2966 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "cloud-controller-manager" cannot list resource "nodes" in API group "" at the cluster scope
Jan  6 20:50:35 mnode3 k3s[2966]: I0106 20:50:35.696379    2966 shared_informer.go:247] Caches are synced for crd-autoregister
Jan  6 20:50:35 mnode3 k3s[2966]: E0106 20:50:35.783702    2966 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocated ip:10.43.0.1 with error:cannot allocate resources of type serviceipallocations at this time
Jan  6 20:50:35 mnode3 k3s[2966]: E0106 20:50:35.796838    2966 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.111.35, ResourceVersion: 0, AdditionalErrorMsg:
Jan  6 20:50:36 mnode3 k3s[2966]: I0106 20:50:36.458222    2966 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Jan  6 20:50:36 mnode3 k3s[2966]: I0106 20:50:36.458365    2966 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Jan  6 20:50:36 mnode3 k3s[2966]: I0106 20:50:36.491424    2966 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
Jan  6 20:50:36 mnode3 k3s[2966]: I0106 20:50:36.505668    2966 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
Jan  6 20:50:36 mnode3 k3s[2966]: I0106 20:50:36.505737    2966 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
Jan  6 20:50:36 mnode3 k3s[2966]: time="2021-01-06T20:50:36.685170225+01:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Jan  6 20:50:37 mnode3 k3s[2966]: E0106 20:50:37.039336    2966 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "cloud-controller-manager" cannot list resource "nodes" in API group "" at the cluster scope
Jan  6 20:50:37 mnode3 k3s[2966]: time="2021-01-06T20:50:37.722309741+01:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Jan  6 20:50:37 mnode3 k3s[2966]: I0106 20:50:37.870274    2966 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
Jan  6 20:50:38 mnode3 k3s[2966]: I0106 20:50:38.062084    2966 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
Jan  6 20:50:38 mnode3 k3s[2966]: W0106 20:50:38.329256    2966 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.111.35]
Jan  6 20:50:38 mnode3 k3s[2966]: I0106 20:50:38.333107    2966 controller.go:606] quota admission added evaluator for: endpoints
Jan  6 20:50:38 mnode3 k3s[2966]: I0106 20:50:38.348438    2966 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
Jan  6 20:50:38 mnode3 k3s[2966]: time="2021-01-06T20:50:38.665741961+01:00" level=info msg="Kube API server is now running"
Jan  6 20:50:38 mnode3 k3s[2966]: time="2021-01-06T20:50:38.666587384+01:00" level=info msg="k3s is up and running"
Jan  6 20:50:38 mnode3 k3s[2966]: Flag --address has been deprecated, see --bind-address instead.
Jan  6 20:50:38 mnode3 k3s[2966]: time="2021-01-06T20:50:38.692645854+01:00" level=info msg="Creating CRD addons.k3s.cattle.io"
Jan  6 20:50:38 mnode3 k3s[2966]: I0106 20:50:38.701727    2966 controllermanager.go:176] Version: v1.20.0+k3s2
Jan  6 20:50:38 mnode3 k3s[2966]: I0106 20:50:38.702809    2966 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252
Jan  6 20:50:38 mnode3 k3s[2966]: I0106 20:50:38.703182    2966 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
Jan  6 20:50:38 mnode3 k3s[2966]: time="2021-01-06T20:50:38.711733199+01:00" level=info msg="Creating CRD helmcharts.helm.cattle.io"
Jan  6 20:50:38 mnode3 k3s[2966]: time="2021-01-06T20:50:38.734341588+01:00" level=info msg="Creating CRD helmchartconfigs.helm.cattle.io"
Jan  6 20:50:38 mnode3 k3s[2966]: W0106 20:50:38.772200    2966 authorization.go:47] Authorization is disabled
Jan  6 20:50:38 mnode3 k3s[2966]: W0106 20:50:38.772284    2966 authentication.go:40] Authentication is disabled
Jan  6 20:50:38 mnode3 k3s[2966]: I0106 20:50:38.772321    2966 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251
Jan  6 20:50:38 mnode3 k3s[2966]: time="2021-01-06T20:50:38.801673164+01:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Jan  6 20:50:38 mnode3 k3s[2966]: time="2021-01-06T20:50:38.804179896+01:00" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
Jan  6 20:50:38 mnode3 k3s[2966]: I0106 20:50:38.812588    2966 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
Jan  6 20:50:38 mnode3 k3s[2966]: I0106 20:50:38.818640    2966 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
Jan  6 20:50:38 mnode3 k3s[2966]: I0106 20:50:38.819289    2966 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="mnode3_54e02ab8-2d94-4c94-ad14-d0c0501df51a became leader"
Jan  6 20:50:38 mnode3 k3s[2966]: I0106 20:50:38.872645    2966 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-scheduler...
Jan  6 20:50:38 mnode3 k3s[2966]: I0106 20:50:38.903886    2966 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler
Jan  6 20:50:39 mnode3 k3s[2966]: time="2021-01-06T20:50:39.310906960+01:00" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
Jan  6 20:50:39 mnode3 k3s[2966]: time="2021-01-06T20:50:39.310984126+01:00" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.462615    2966 shared_informer.go:240] Waiting for caches to sync for tokens
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.481934    2966 controller.go:606] quota admission added evaluator for: serviceaccounts
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.523141    2966 controllermanager.go:554] Started "horizontalpodautoscaling"
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.523649    2966 horizontal.go:169] Starting HPA controller
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.523695    2966 shared_informer.go:240] Waiting for caches to sync for HPA
Jan  6 20:50:39 mnode3 k3s[2966]: time="2021-01-06T20:50:39.528377101+01:00" level=info msg="Cluster-Http-Server 2021/01/06 20:50:39 http: TLS handshake error from 127.0.0.1:42760: remote error: tls: bad certificate"
Jan  6 20:50:39 mnode3 k3s[2966]: E0106 20:50:39.548433    2966 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "cloud-controller-manager" cannot list resource "nodes" in API group "" at the cluster scope
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.563266    2966 controllermanager.go:554] Started "statefulset"
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.563861    2966 stateful_set.go:146] Starting stateful set controller
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.563913    2966 shared_informer.go:240] Waiting for caches to sync for stateful set
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.564293    2966 shared_informer.go:247] Caches are synced for tokens
Jan  6 20:50:39 mnode3 k3s[2966]: time="2021-01-06T20:50:39.577078997+01:00" level=error msg="runtime core not ready"
Jan  6 20:50:39 mnode3 k3s[2966]: time="2021-01-06T20:50:39.579831580+01:00" level=error msg="Failed to retrieve agent config: https://127.0.0.1:6443/v1-k3s/serving-kubelet.crt: 500 Internal Server Error"
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.599413    2966 controllermanager.go:554] Started "csrcleaner"
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.600195    2966 cleaner.go:82] Starting CSR cleaner controller
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.649635    2966 controllermanager.go:554] Started "endpointslicemirroring"
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.651526    2966 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.652052    2966 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.690446    2966 controllermanager.go:554] Started "daemonset"
Jan  6 20:50:39 mnode3 k3s[2966]: W0106 20:50:39.690532    2966 controllermanager.go:533] "tokencleaner" is disabled
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.691402    2966 daemon_controller.go:285] Starting daemon sets controller
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.691464    2966 shared_informer.go:240] Waiting for caches to sync for daemon sets
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.728092    2966 node_lifecycle_controller.go:380] Sending events to api server.
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.729798    2966 taint_manager.go:163] Sending events to api server.
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.730131    2966 node_lifecycle_controller.go:508] Controller will reconcile labels.
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.730270    2966 controllermanager.go:554] Started "nodelifecycle"
Jan  6 20:50:39 mnode3 k3s[2966]: W0106 20:50:39.730306    2966 controllermanager.go:533] "service" is disabled
Jan  6 20:50:39 mnode3 k3s[2966]: W0106 20:50:39.730420    2966 controllermanager.go:546] Skipping "ephemeral-volume"
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.730891    2966 node_lifecycle_controller.go:542] Starting node controller
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.730942    2966 shared_informer.go:240] Waiting for caches to sync for taint
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.772192    2966 controllermanager.go:554] Started "endpointslice"
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.772849    2966 endpointslice_controller.go:237] Starting endpoint slice controller
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.772900    2966 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
Jan  6 20:50:39 mnode3 k3s[2966]: time="2021-01-06T20:50:39.826646755+01:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Jan  6 20:50:39 mnode3 k3s[2966]: time="2021-01-06T20:50:39.833958935+01:00" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
Jan  6 20:50:39 mnode3 k3s[2966]: time="2021-01-06T20:50:39.834110434+01:00" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.901913    2966 controllermanager.go:554] Started "namespace"
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.902148    2966 namespace_controller.go:200] Starting namespace controller
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.902189    2966 shared_informer.go:240] Waiting for caches to sync for namespace
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.926952    2966 controllermanager.go:554] Started "job"
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.927132    2966 job_controller.go:148] Starting job controller
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.927169    2966 shared_informer.go:240] Waiting for caches to sync for job
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.958793    2966 controllermanager.go:554] Started "replicationcontroller"
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.959245    2966 replica_set.go:182] Starting replicationcontroller controller
Jan  6 20:50:39 mnode3 k3s[2966]: I0106 20:50:39.959308    2966 shared_informer.go:240] Waiting for caches to sync for ReplicationController
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.004029    2966 controllermanager.go:554] Started "serviceaccount"
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.004630    2966 serviceaccounts_controller.go:117] Starting service account controller
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.004695    2966 shared_informer.go:240] Waiting for caches to sync for service account
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.030782    2966 controllermanager.go:554] Started "ttl"
Jan  6 20:50:40 mnode3 k3s[2966]: W0106 20:50:40.030941    2966 controllermanager.go:533] "cloud-node-lifecycle" is disabled
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.031446    2966 ttl_controller.go:121] Starting TTL controller
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.031490    2966 shared_informer.go:240] Waiting for caches to sync for TTL
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.184477    2966 controllermanager.go:554] Started "persistentvolume-binder"
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.185614    2966 pv_controller_base.go:307] Starting persistent volume controller
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.186265    2966 shared_informer.go:240] Waiting for caches to sync for persistent volume
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.329317    2966 controllermanager.go:554] Started "deployment"
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.329483    2966 deployment_controller.go:153] Starting deployment controller
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.329514    2966 shared_informer.go:240] Waiting for caches to sync for deployment
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.353960941+01:00" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.409133779+01:00" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.410964255+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.411896548+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.412714434+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.413497024+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.414224189+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.415091908+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.415839683+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.417163142+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.418026121+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.418803025+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.419568078+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.420413020+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.478331    2966 controllermanager.go:554] Started "replicaset"
Jan  6 20:50:40 mnode3 k3s[2966]: W0106 20:50:40.478430    2966 controllermanager.go:533] "route" is disabled
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.478564    2966 replica_set.go:182] Starting replicaset controller
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.478595    2966 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.523138    2966 leaderelection.go:243] attempting to acquire leader lease kube-system/k3s...
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.523888701+01:00" level=info msg="Starting /v1, Kind=Node controller"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.523974145+01:00" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.524007886+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.544425801+01:00" level=info msg="Cluster dns configmap has been set successfully"
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.551337    2966 leaderelection.go:253] successfully acquired lease kube-system/k3s
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.667595    2966 controllermanager.go:554] Started "clusterrole-aggregation"
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.668234    2966 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.668287    2966 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.711505    2966 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.784284    2966 controllermanager.go:554] Started "endpoint"
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.784531    2966 endpoints_controller.go:184] Starting endpoint controller
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.784569    2966 shared_informer.go:240] Waiting for caches to sync for endpoint
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.855332509+01:00" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.855922618+01:00" level=info msg="Running cloud-controller-manager for provider k3s with args --profiling=false"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.856022044+01:00" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.856158636+01:00" level=info msg="Starting /v1, Kind=Pod controller"
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.857204    2966 controllermanager.go:127] Version: v1.20.0+k3s2
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.857996    2966 leaderelection.go:243] attempting to acquire leader lease kube-system/cloud-controller-manager...
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.856043080+01:00" level=info msg="Starting batch/v1, Kind=Job controller"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.856093821+01:00" level=info msg="Starting /v1, Kind=Service controller"
Jan  6 20:50:40 mnode3 k3s[2966]: time="2021-01-06T20:50:40.856109525+01:00" level=info msg="Starting /v1, Kind=Endpoints controller"
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.905620    2966 leaderelection.go:253] successfully acquired lease kube-system/cloud-controller-manager
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.910765    2966 event.go:291] "Event occurred" object="kube-system/cloud-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="mnode3_5f837b9f-3009-4862-b76f-a3c15c194368 became leader"
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.940391    2966 controllermanager.go:554] Started "podgc"
Jan  6 20:50:40 mnode3 k3s[2966]: W0106 20:50:40.940490    2966 controllermanager.go:533] "bootstrapsigner" is disabled
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.940678    2966 gc_controller.go:89] Starting GC controller
Jan  6 20:50:40 mnode3 k3s[2966]: I0106 20:50:40.940708    2966 shared_informer.go:240] Waiting for caches to sync for GC
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.024638    2966 controller.go:606] quota admission added evaluator for: deployments.apps
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.192924    2966 controllermanager.go:554] Started "pv-protection"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.200907    2966 pv_protection_controller.go:83] Starting PV protection controller
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.201013    2966 shared_informer.go:240] Waiting for caches to sync for PV protection
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.309886    2966 controllermanager.go:554] Started "root-ca-cert-publisher"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.310431    2966 publisher.go:98] Starting root CA certificate configmap publisher
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.310486    2966 shared_informer.go:240] Waiting for caches to sync for crt configmap
Jan  6 20:50:41 mnode3 k3s[2966]: time="2021-01-06T20:50:41.385195820+01:00" level=info msg="Starting /v1, Kind=Secret controller"
Jan  6 20:50:41 mnode3 k3s[2966]: time="2021-01-06T20:50:41.419284431+01:00" level=info msg="Active TLS secret k3s-serving (ver=293) (count 7): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.111.35:192.168.111.35 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=5FEC7A00A9985952578C43840691997F07CBFBA7]"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.430874    2966 controllermanager.go:554] Started "garbagecollector"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.432056    2966 garbagecollector.go:142] Starting garbage collector controller
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.432183    2966 shared_informer.go:240] Waiting for caches to sync for garbage collector
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.432407    2966 graph_builder.go:289] GraphBuilder running
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.478937    2966 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.479019    2966 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.479153    2966 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.479201    2966 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.479301    2966 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.480354    2966 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.481022    2966 controllermanager.go:554] Started "csrsigning"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.481372    2966 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.481425    2966 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.481500    2966 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.481540    2966 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.481506    2966 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.481546    2966 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.524996    2966 node_controller.go:115] Sending events to api server.
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.525411    2966 controllermanager.go:238] Started "cloud-node"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.527144    2966 node_controller.go:154] Waiting for informer caches to sync
Jan  6 20:50:41 mnode3 k3s[2966]: time="2021-01-06T20:50:41.529829553+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.537748    2966 node_lifecycle_controller.go:77] Sending events to api server
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.537964    2966 controllermanager.go:238] Started "cloud-node-lifecycle"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.754065    2966 controllermanager.go:554] Started "attachdetach"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.754968    2966 attach_detach_controller.go:328] Starting attach detach controller
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.755528    2966 shared_informer.go:240] Waiting for caches to sync for attach detach
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.913289    2966 controllermanager.go:554] Started "pvc-protection"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.913363    2966 pvc_protection_controller.go:110] Starting PVC protection controller
Jan  6 20:50:41 mnode3 k3s[2966]: W0106 20:50:41.913380    2966 controllermanager.go:546] Skipping "ttl-after-finished"
Jan  6 20:50:41 mnode3 k3s[2966]: I0106 20:50:41.913474    2966 shared_informer.go:240] Waiting for caches to sync for PVC protection
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.032408    2966 controller.go:606] quota admission added evaluator for: helmcharts.helm.cattle.io
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.113022    2966 controller.go:606] quota admission added evaluator for: jobs.batch
Jan  6 20:50:42 mnode3 k3s[2966]: time="2021-01-06T20:50:42.530518485+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:42 mnode3 k3s[2966]: E0106 20:50:42.706065    2966 resource_quota_controller.go:162] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server could not find the requested resource
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.706960    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.707532    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.707673    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.708075    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.708234    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.708303    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.708369    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.708478    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.708580    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.708638    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.708692    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.708913    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.709008    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.709199    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.709259    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.709355    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.709571    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.709822    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.709943    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.710006    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.710066    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.710227    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.710292    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.710341    2966 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.710376    2966 controllermanager.go:554] Started "resourcequota"
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.710969    2966 resource_quota_controller.go:273] Starting resource quota controller
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.711138    2966 shared_informer.go:240] Waiting for caches to sync for resource quota
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.711522    2966 resource_quota_monitor.go:304] QuotaMonitor running
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.742641    2966 controllermanager.go:554] Started "disruption"
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.743206    2966 disruption.go:331] Starting disruption controller
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.743263    2966 shared_informer.go:240] Waiting for caches to sync for disruption
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.782483    2966 controllermanager.go:554] Started "cronjob"
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.784858    2966 cronjob_controller.go:96] Starting CronJob Manager
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.827877    2966 controllermanager.go:554] Started "csrapproving"
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.828855    2966 certificate_controller.go:118] Starting certificate controller "csrapproving"
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.828957    2966 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
Jan  6 20:50:42 mnode3 k3s[2966]: I0106 20:50:42.874846    2966 node_ipam_controller.go:91] Sending events to api server.
Jan  6 20:50:43 mnode3 k3s[2966]: time="2021-01-06T20:50:43.531465108+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:43 mnode3 k3s[2966]: I0106 20:50:43.852692    2966 request.go:655] Throttling request took 1.043177914s, request: GET:https://127.0.0.1:6444/apis/extensions/v1beta1?timeout=32s
Jan  6 20:50:44 mnode3 k3s[2966]: time="2021-01-06T20:50:44.531686943+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:44 mnode3 k3s[2966]: time="2021-01-06T20:50:44.587375986+01:00" level=info msg="Cluster-Http-Server 2021/01/06 20:50:44 http: TLS handshake error from 127.0.0.1:42780: remote error: tls: bad certificate"
Jan  6 20:50:44 mnode3 k3s[2966]: E0106 20:50:44.806322    2966 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server could not find the requested resource
Jan  6 20:50:44 mnode3 k3s[2966]: W0106 20:50:44.861132    2966 handler_proxy.go:102] no RequestInfo found in the context
Jan  6 20:50:44 mnode3 k3s[2966]: E0106 20:50:44.861301    2966 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Jan  6 20:50:44 mnode3 k3s[2966]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Jan  6 20:50:44 mnode3 k3s[2966]: I0106 20:50:44.861337    2966 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Jan  6 20:50:44 mnode3 k3s[2966]: time="2021-01-06T20:50:44.977120669+01:00" level=info msg="certificate CN=mnode3 signed by CN=k3s-server-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:44 +0000 UTC"
Jan  6 20:50:45 mnode3 k3s[2966]: time="2021-01-06T20:50:45.326208950+01:00" level=info msg="certificate CN=system:node:mnode3,O=system:nodes signed by CN=k3s-client-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:45 +0000 UTC"
Jan  6 20:50:45 mnode3 k3s[2966]: time="2021-01-06T20:50:45.354853283+01:00" level=error msg="Failed to retrieve agent config: \"overlayfs\" snapshotter cannot be enabled for \"/var/lib/rancher/k3s/agent/containerd\", try using \"fuse-overlayfs\" or \"native\": failed to mount overlay: invalid argument"
Jan  6 20:50:45 mnode3 k3s[2966]: time="2021-01-06T20:50:45.531994360+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:46 mnode3 k3s[2966]: time="2021-01-06T20:50:46.532308208+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:47 mnode3 k3s[2966]: time="2021-01-06T20:50:47.532517656+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:48 mnode3 k3s[2966]: time="2021-01-06T20:50:48.533166978+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:49 mnode3 k3s[2966]: time="2021-01-06T20:50:49.533886288+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:50 mnode3 k3s[2966]: time="2021-01-06T20:50:50.365834884+01:00" level=info msg="Cluster-Http-Server 2021/01/06 20:50:50 http: TLS handshake error from 127.0.0.1:42798: remote error: tls: bad certificate"
Jan  6 20:50:50 mnode3 k3s[2966]: time="2021-01-06T20:50:50.540876972+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:50 mnode3 k3s[2966]: time="2021-01-06T20:50:50.758223999+01:00" level=info msg="certificate CN=mnode3 signed by CN=k3s-server-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:50 +0000 UTC"
Jan  6 20:50:51 mnode3 k3s[2966]: time="2021-01-06T20:50:51.069505707+01:00" level=info msg="certificate CN=system:node:mnode3,O=system:nodes signed by CN=k3s-client-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:51 +0000 UTC"
Jan  6 20:50:51 mnode3 k3s[2966]: time="2021-01-06T20:50:51.095104440+01:00" level=error msg="Failed to retrieve agent config: \"overlayfs\" snapshotter cannot be enabled for \"/var/lib/rancher/k3s/agent/containerd\", try using \"fuse-overlayfs\" or \"native\": failed to mount overlay: invalid argument"
Jan  6 20:50:51 mnode3 k3s[2966]: time="2021-01-06T20:50:51.543085121+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:52 mnode3 k3s[2966]: time="2021-01-06T20:50:52.543367652+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:52 mnode3 k3s[2966]: I0106 20:50:52.962274    2966 range_allocator.go:82] Sending events to api server.
Jan  6 20:50:52 mnode3 k3s[2966]: I0106 20:50:52.962768    2966 range_allocator.go:110] No Service CIDR provided. Skipping filtering out service addresses.
Jan  6 20:50:52 mnode3 k3s[2966]: I0106 20:50:52.962901    2966 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
Jan  6 20:50:52 mnode3 k3s[2966]: I0106 20:50:52.962981    2966 controllermanager.go:554] Started "nodeipam"
Jan  6 20:50:52 mnode3 k3s[2966]: I0106 20:50:52.963020    2966 node_ipam_controller.go:159] Starting ipam controller
Jan  6 20:50:52 mnode3 k3s[2966]: I0106 20:50:52.963066    2966 shared_informer.go:240] Waiting for caches to sync for node
Jan  6 20:50:52 mnode3 k3s[2966]: I0106 20:50:52.985515    2966 controllermanager.go:554] Started "persistentvolume-expander"
Jan  6 20:50:52 mnode3 k3s[2966]: I0106 20:50:52.986360    2966 expand_controller.go:310] Starting expand controller
Jan  6 20:50:52 mnode3 k3s[2966]: I0106 20:50:52.986418    2966 shared_informer.go:240] Waiting for caches to sync for expand
Jan  6 20:50:52 mnode3 k3s[2966]: I0106 20:50:52.996766    2966 shared_informer.go:240] Waiting for caches to sync for resource quota
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.052785    2966 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
Jan  6 20:50:53 mnode3 k3s[2966]: E0106 20:50:53.058205    2966 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.063852    2966 shared_informer.go:247] Caches are synced for node
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.063961    2966 range_allocator.go:172] Starting range CIDR allocator
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.063986    2966 shared_informer.go:240] Waiting for caches to sync for cidrallocator
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.064006    2966 shared_informer.go:247] Caches are synced for cidrallocator
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.069225    2966 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.079616    2966 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.081072    2966 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.081519    2966 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.081694    2966 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.086563    2966 shared_informer.go:247] Caches are synced for expand
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.102337    2966 shared_informer.go:247] Caches are synced for PV protection
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.103458    2966 shared_informer.go:247] Caches are synced for namespace
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.105651    2966 shared_informer.go:247] Caches are synced for service account
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.110570    2966 shared_informer.go:247] Caches are synced for crt configmap
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.129130    2966 shared_informer.go:247] Caches are synced for certificate-csrapproving
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.143099    2966 shared_informer.go:247] Caches are synced for TTL
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.246394    2966 shared_informer.go:247] Caches are synced for disruption
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.246468    2966 disruption.go:339] Sending events to api server.
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.256075    2966 shared_informer.go:247] Caches are synced for attach detach
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.259801    2966 shared_informer.go:247] Caches are synced for ReplicationController
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.264051    2966 shared_informer.go:247] Caches are synced for stateful set
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.274049    2966 shared_informer.go:247] Caches are synced for endpoint_slice
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.278781    2966 shared_informer.go:247] Caches are synced for ReplicaSet
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.285054    2966 shared_informer.go:247] Caches are synced for endpoint
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.289874    2966 shared_informer.go:247] Caches are synced for persistent volume
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.291594    2966 shared_informer.go:247] Caches are synced for daemon sets
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.297931    2966 shared_informer.go:247] Caches are synced for resource quota
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.311301    2966 shared_informer.go:247] Caches are synced for resource quota
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.313798    2966 shared_informer.go:247] Caches are synced for PVC protection
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.323803    2966 shared_informer.go:247] Caches are synced for HPA
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.327270    2966 shared_informer.go:247] Caches are synced for job
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.330059    2966 shared_informer.go:247] Caches are synced for deployment
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.332816    2966 shared_informer.go:247] Caches are synced for taint
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.333708    2966 taint_manager.go:187] Starting NoExecuteTaintManager
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.340847    2966 shared_informer.go:247] Caches are synced for GC
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.360219    2966 controller.go:606] quota admission added evaluator for: replicasets.apps
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.366226    2966 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-86cbb8457f to 1"
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.377671    2966 event.go:291] "Event occurred" object="kube-system/local-path-provisioner" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set local-path-provisioner-7c458769fb to 1"
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.378164    2966 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-854c77959c to 1"
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.424272    2966 controller.go:606] quota admission added evaluator for: events.events.k8s.io
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.426806    2966 event.go:291] "Event occurred" object="kube-system/helm-install-traefik" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: helm-install-traefik-6jvs6"
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.521258    2966 event.go:291] "Event occurred" object="kube-system/local-path-provisioner-7c458769fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: local-path-provisioner-7c458769fb-xjqsv"
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.542403    2966 event.go:291] "Event occurred" object="kube-system/metrics-server-86cbb8457f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-86cbb8457f-lspn5"
Jan  6 20:50:53 mnode3 k3s[2966]: time="2021-01-06T20:50:53.547205547+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.549043    2966 event.go:291] "Event occurred" object="kube-system/coredns-854c77959c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-854c77959c-8q7bk"
Jan  6 20:50:53 mnode3 k3s[2966]: E0106 20:50:53.641984    2966 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.653360    2966 shared_informer.go:240] Waiting for caches to sync for garbage collector
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.732440    2966 shared_informer.go:247] Caches are synced for garbage collector
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.732514    2966 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
Jan  6 20:50:53 mnode3 k3s[2966]: I0106 20:50:53.758069    2966 shared_informer.go:247] Caches are synced for garbage collector
Jan  6 20:50:54 mnode3 k3s[2966]: W0106 20:50:54.332402    2966 handler_proxy.go:102] no RequestInfo found in the context
Jan  6 20:50:54 mnode3 k3s[2966]: E0106 20:50:54.332566    2966 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
Jan  6 20:50:54 mnode3 k3s[2966]: , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
Jan  6 20:50:54 mnode3 k3s[2966]: I0106 20:50:54.332596    2966 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Jan  6 20:50:54 mnode3 k3s[2966]: time="2021-01-06T20:50:54.549089065+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:55 mnode3 k3s[2966]: time="2021-01-06T20:50:55.549384259+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:56 mnode3 k3s[2966]: time="2021-01-06T20:50:56.105896900+01:00" level=info msg="Cluster-Http-Server 2021/01/06 20:50:56 http: TLS handshake error from 127.0.0.1:42818: remote error: tls: bad certificate"
Jan  6 20:50:56 mnode3 k3s[2966]: time="2021-01-06T20:50:56.461794532+01:00" level=info msg="certificate CN=mnode3 signed by CN=k3s-server-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:56 +0000 UTC"
Jan  6 20:50:56 mnode3 k3s[2966]: time="2021-01-06T20:50:56.550092623+01:00" level=info msg="Waiting for control-plane node agent startup"
Jan  6 20:50:56 mnode3 k3s[2966]: time="2021-01-06T20:50:56.814888358+01:00" level=info msg="certificate CN=system:node:mnode3,O=system:nodes signed by CN=k3s-client-ca@1609962620: notBefore=2021-01-06 19:50:20 +0000 UTC notAfter=2022-01-06 19:50:56 +0000 UTC"
Jan  6 20:50:56 mnode3 k3s[2966]: time="2021-01-06T20:50:56.840499832+01:00" level=error msg="Failed to retrieve agent config: \"overlayfs\" snapshotter cannot be enabled for \"/var/lib/rancher/k3s/agent/containerd\", try using \"fuse-overlayfs\" or \"native\": failed to mount overlay: invalid argument"

luckyluc74 commented Jan 6, 2021

Extra info

To get networking working with external containerd on k3s v1.19.5+k3s2, I installed Calico via calico.yaml.

download calico manifest

curl https://docs.projectcalico.org/manifests/calico.yaml -O

add the following to calico.yaml to get traefik working with an external IP

          "container_settings": {
              "allow_ip_forwarding": true
          },
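
For reference, this block belongs inside the "calico" plugin entry of the cni_network_config section of the calico-config ConfigMap in calico.yaml. A trimmed sketch (surrounding fields abbreviated; only allow_ip_forwarding is the addition, the rest is illustrative):

{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      ...
      "container_settings": {
          "allow_ip_forwarding": true
      },
      ...
    }
  ]
}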

apply calico.yaml

kubectl apply -f calico.yaml
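
To check that the Calico pods come up afterwards (a generic sanity check, not part of the original steps):

kubectl get pods -n kube-system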

This is the log of a working setup of k3s (v1.19.5+k3s2) with external containerd with the zfs snapshotter:

Jan  6 20:59:04 mnode3 k3s[9667]: time="2021-01-06T20:59:04+01:00" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
Jan  6 20:59:04 mnode3 k3s[9667]: time="2021-01-06T20:59:04+01:00" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/a5648e818e3ef8be3322f2e0efc36df7a26c00729c9e43abf0f670ebc0deacac"
Jan  6 20:59:13 mnode3 k3s[9667]: time="2021-01-06T20:59:13.221600661+01:00" level=info msg="Starting k3s v1.19.5+k3s2 (746cf403)"
Jan  6 20:59:13 mnode3 k3s[9667]: time="2021-01-06T20:59:13.224289745+01:00" level=info msg="Managed etcd cluster initializing"
Jan  6 20:59:13 mnode3 k3s[9667]: time="2021-01-06T20:59:13.267430194+01:00" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:13 +0000 UTC"
Jan  6 20:59:13 mnode3 k3s[9667]: time="2021-01-06T20:59:13.271477162+01:00" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:13 +0000 UTC"
Jan  6 20:59:13 mnode3 k3s[9667]: time="2021-01-06T20:59:13.274777003+01:00" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:13 +0000 UTC"
Jan  6 20:59:13 mnode3 k3s[9667]: time="2021-01-06T20:59:13.278154455+01:00" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:13 +0000 UTC"
Jan  6 20:59:13 mnode3 k3s[9667]: time="2021-01-06T20:59:13.281263519+01:00" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:13 +0000 UTC"
Jan  6 20:59:13 mnode3 k3s[9667]: time="2021-01-06T20:59:13.283771455+01:00" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:13 +0000 UTC"
Jan  6 20:59:13 mnode3 k3s[9667]: time="2021-01-06T20:59:13.286761223+01:00" level=info msg="certificate CN=cloud-controller-manager signed by CN=k3s-client-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:13 +0000 UTC"
Jan  6 20:59:13 mnode3 k3s[9667]: time="2021-01-06T20:59:13.291853687+01:00" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:13 +0000 UTC"
Jan  6 20:59:13 mnode3 k3s[9667]: time="2021-01-06T20:59:13.297227281+01:00" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:13 +0000 UTC"
Jan  6 20:59:13 mnode3 k3s[9667]: time="2021-01-06T20:59:13.301670192+01:00" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:13 +0000 UTC"
Jan  6 20:59:13 mnode3 k3s[9667]: time="2021-01-06T20:59:13.304116758+01:00" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:13 +0000 UTC"
Jan  6 20:59:13 mnode3 k3s[9667]: time="2021-01-06T20:59:13.308355281+01:00" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:13 +0000 UTC"
Jan  6 20:59:14 mnode3 k3s[9667]: time="2021-01-06T20:59:14.207497228+01:00" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:14 +0000 UTC"
Jan  6 20:59:14 mnode3 k3s[9667]: time="2021-01-06T20:59:14.210002683+01:00" level=info msg="Active TLS secret  (ver=) (count 7): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.111.35:192.168.111.35 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=289B8513ED3EF56944317F9723B17859B972EEF6]"
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.211+0100","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.111.35:2380"]}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.212+0100","caller":"embed/etcd.go:468","msg":"starting with peer TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.214+0100","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.111.35:2379"]}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.215+0100","caller":"embed/etcd.go:302","msg":"starting an etcd server","etcd-version":"3.4.13","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.15.5","go-os":"linux","go-arch":"arm64","max-cpu-set":4,"max-cpu-available":4,"member-initialized":false,"name":"mnode3-28ec316e","data-dir":"/var/lib/rancher/k3s/server/db/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/rancher/k3s/server/db/etcd/member","force-new-cluster":false,"heartbeat-interval":"500ms","election-timeout":"5s","initial-election-tick-advance":true,"snapshot-count":100000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.111.35:2380"],"listen-peer-urls":["https://192.168.111.35:2380"],"advertise-client-urls":["https://192.168.111.35:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.111.35:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"mnode3-28ec316e=https://192.168.111.35:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.226+0100","caller":"etcdserver/backend.go:80","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","took":"9.69569ms"}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.239+0100","caller":"etcdserver/raft.go:486","msg":"starting local member","local-member-id":"5dff5810cfcd2643","cluster-id":"1a102b39dd4f88cf"}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.241+0100","caller":"raft/raft.go:1530","msg":"5dff5810cfcd2643 switched to configuration voters=()"}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.241+0100","caller":"raft/raft.go:700","msg":"5dff5810cfcd2643 became follower at term 0"}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.241+0100","caller":"raft/raft.go:383","msg":"newRaft 5dff5810cfcd2643 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.242+0100","caller":"raft/raft.go:700","msg":"5dff5810cfcd2643 became follower at term 1"}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.242+0100","caller":"raft/raft.go:1530","msg":"5dff5810cfcd2643 switched to configuration voters=(6773229193817564739)"}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"warn","ts":"2021-01-06T20:59:14.252+0100","caller":"auth/store.go:1366","msg":"simple token is not cryptographically signed"}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.285+0100","caller":"etcdserver/quota.go:98","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.301+0100","caller":"etcdserver/server.go:803","msg":"starting etcd server","local-member-id":"5dff5810cfcd2643","local-server-version":"3.4.13","cluster-version":"to_be_decided"}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.304+0100","caller":"etcdserver/server.go:669","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"5dff5810cfcd2643","forward-ticks":9,"forward-duration":"4.5s","election-ticks":10,"election-timeout":"5s"}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.306+0100","caller":"raft/raft.go:1530","msg":"5dff5810cfcd2643 switched to configuration voters=(6773229193817564739)"}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.306+0100","caller":"membership/cluster.go:392","msg":"added member","cluster-id":"1a102b39dd4f88cf","local-member-id":"5dff5810cfcd2643","added-peer-id":"5dff5810cfcd2643","added-peer-peer-urls":["https://192.168.111.35:2380"]}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.310+0100","caller":"embed/etcd.go:711","msg":"starting with client TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/server-client.crt, key = /var/lib/rancher/k3s/server/tls/etcd/server-client.key, trusted-ca = /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.310+0100","caller":"embed/etcd.go:579","msg":"serving peer traffic","address":"192.168.111.35:2380"}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.311+0100","caller":"embed/etcd.go:244","msg":"now serving peer/client/metrics","local-member-id":"5dff5810cfcd2643","initial-advertise-peer-urls":["https://192.168.111.35:2380"],"listen-peer-urls":["https://192.168.111.35:2380"],"advertise-client-urls":["https://192.168.111.35:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.111.35:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
Jan  6 20:59:14 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:14.311+0100","caller":"embed/etcd.go:781","msg":"serving metrics","address":"http://127.0.0.1:2381"}
Jan  6 20:59:14 mnode3 k3s[9667]: time="2021-01-06T20:59:14.312347029+01:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-cafile=/var/lib/rancher/k3s/server/tls/etcd/server-ca.crt --etcd-certfile=/var/lib/rancher/k3s/server/tls/etcd/client.crt --etcd-keyfile=/var/lib/rancher/k3s/server/tls/etcd/client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Jan  6 20:59:15 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:15.250+0100","caller":"raft/raft.go:923","msg":"5dff5810cfcd2643 is starting a new election at term 1"}
Jan  6 20:59:15 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:15.250+0100","caller":"raft/raft.go:713","msg":"5dff5810cfcd2643 became candidate at term 2"}
Jan  6 20:59:15 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:15.251+0100","caller":"raft/raft.go:824","msg":"5dff5810cfcd2643 received MsgVoteResp from 5dff5810cfcd2643 at term 2"}
Jan  6 20:59:15 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:15.251+0100","caller":"raft/raft.go:765","msg":"5dff5810cfcd2643 became leader at term 2"}
Jan  6 20:59:15 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:15.251+0100","caller":"raft/node.go:325","msg":"raft.node: 5dff5810cfcd2643 elected leader 5dff5810cfcd2643 at term 2"}
Jan  6 20:59:15 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:15.253+0100","caller":"etcdserver/server.go:2530","msg":"setting up initial cluster version","cluster-version":"3.4"}
Jan  6 20:59:15 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:15.253+0100","caller":"etcdserver/server.go:2039","msg":"published local member to cluster through raft","local-member-id":"5dff5810cfcd2643","local-member-attributes":"{Name:mnode3-28ec316e ClientURLs:[https://192.168.111.35:2379]}","request-path":"/0/members/5dff5810cfcd2643/attributes","cluster-id":"1a102b39dd4f88cf","publish-timeout":"15s"}
Jan  6 20:59:15 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:15.261+0100","caller":"membership/cluster.go:558","msg":"set initial cluster version","cluster-id":"1a102b39dd4f88cf","local-member-id":"5dff5810cfcd2643","cluster-version":"3.4"}
Jan  6 20:59:15 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:15.261+0100","caller":"embed/serve.go:191","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
Jan  6 20:59:15 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:15.261+0100","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}
Jan  6 20:59:15 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:15.261+0100","caller":"etcdserver/server.go:2562","msg":"cluster version is updated","cluster-version":"3.4"}
Jan  6 20:59:15 mnode3 k3s[9667]: {"level":"info","ts":"2021-01-06T20:59:15.261+0100","caller":"embed/serve.go:191","msg":"serving client traffic securely","address":"192.168.111.35:2379"}
Jan  6 20:59:15 mnode3 k3s[9667]: time="2021-01-06T20:59:15.283287072+01:00" level=info msg="etcd data store connection OK"
Jan  6 20:59:15 mnode3 k3s[9667]: I0106 20:59:15.290374    9667 server.go:652] external host was not specified, using 192.168.111.35
Jan  6 20:59:15 mnode3 k3s[9667]: I0106 20:59:15.291511    9667 server.go:177] Version: v1.19.5+k3s2
Jan  6 20:59:16 mnode3 k3s[9667]: I0106 20:59:16.398561    9667 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Jan  6 20:59:16 mnode3 k3s[9667]: I0106 20:59:16.398639    9667 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Jan  6 20:59:16 mnode3 k3s[9667]: I0106 20:59:16.403268    9667 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Jan  6 20:59:16 mnode3 k3s[9667]: I0106 20:59:16.403347    9667 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Jan  6 20:59:16 mnode3 k3s[9667]: I0106 20:59:16.516488    9667 master.go:271] Using reconciler: lease
Jan  6 20:59:18 mnode3 k3s[9667]: W0106 20:59:18.224487    9667 genericapiserver.go:412] Skipping API batch/v2alpha1 because it has no resources.
Jan  6 20:59:18 mnode3 k3s[9667]: W0106 20:59:18.271785    9667 genericapiserver.go:412] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
Jan  6 20:59:18 mnode3 k3s[9667]: W0106 20:59:18.327360    9667 genericapiserver.go:412] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Jan  6 20:59:18 mnode3 k3s[9667]: W0106 20:59:18.393806    9667 genericapiserver.go:412] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Jan  6 20:59:18 mnode3 k3s[9667]: W0106 20:59:18.409798    9667 genericapiserver.go:412] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Jan  6 20:59:18 mnode3 k3s[9667]: W0106 20:59:18.471175    9667 genericapiserver.go:412] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Jan  6 20:59:18 mnode3 k3s[9667]: W0106 20:59:18.554478    9667 genericapiserver.go:412] Skipping API apps/v1beta2 because it has no resources.
Jan  6 20:59:18 mnode3 k3s[9667]: W0106 20:59:18.554560    9667 genericapiserver.go:412] Skipping API apps/v1beta1 because it has no resources.
Jan  6 20:59:18 mnode3 k3s[9667]: I0106 20:59:18.599211    9667 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Jan  6 20:59:18 mnode3 k3s[9667]: I0106 20:59:18.599291    9667 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.666779036+01:00" level=info msg="Waiting for API server to become available"
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.666909850+01:00" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --profiling=false --secure-port=0"
Jan  6 20:59:18 mnode3 k3s[9667]: I0106 20:59:18.668998    9667 registry.go:173] Registering SelectorSpread plugin
Jan  6 20:59:18 mnode3 k3s[9667]: I0106 20:59:18.669085    9667 registry.go:173] Registering SelectorSpread plugin
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.670319468+01:00" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.677337241+01:00" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.677766351+01:00" level=info msg="To join node to cluster: k3s agent -s https://192.168.111.35:6443 -t ${NODE_TOKEN}"
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.682347169+01:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.682564317+01:00" level=info msg="Run: k3s kubectl"
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.683042408+01:00" level=info msg="Module overlay was already loaded"
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.683138500+01:00" level=info msg="Module nf_conntrack was already loaded"
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.683185111+01:00" level=info msg="Module br_netfilter was already loaded"
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.683226555+01:00" level=info msg="Module iptable_nat was already loaded"
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.864915046+01:00" level=info msg="Cluster-Http-Server 2021/01/06 20:59:18 http: TLS handshake error from 127.0.0.1:43936: remote error: tls: bad certificate"
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.886331938+01:00" level=info msg="Cluster-Http-Server 2021/01/06 20:59:18 http: TLS handshake error from 127.0.0.1:43944: remote error: tls: bad certificate"
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.933281950+01:00" level=info msg="certificate CN=mnode3 signed by CN=k3s-server-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:18 +0000 UTC"
Jan  6 20:59:18 mnode3 k3s[9667]: time="2021-01-06T20:59:18.945368150+01:00" level=info msg="certificate CN=system:node:mnode3,O=system:nodes signed by CN=k3s-client-ca@1609963153: notBefore=2021-01-06 19:59:13 +0000 UTC notAfter=2022-01-06 19:59:18 +0000 UTC"
Jan  6 20:59:19 mnode3 k3s[9667]: time="2021-01-06T20:59:19.000448264+01:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Jan  6 20:59:19 mnode3 k3s[9667]: time="2021-01-06T20:59:19.010593138+01:00" level=info msg="Handling backend connection request [mnode3]"
Jan  6 20:59:19 mnode3 k3s[9667]: time="2021-01-06T20:59:19.013735387+01:00" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime=remote --containerd=unix:///run/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=mnode3 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --network-plugin=cni --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/run/systemd/resolve/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Jan  6 20:59:19 mnode3 k3s[9667]: time="2021-01-06T20:59:19.016323841+01:00" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=mnode3 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Jan  6 20:59:19 mnode3 k3s[9667]: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Jan  6 20:59:19 mnode3 k3s[9667]: Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Jan  6 20:59:19 mnode3 k3s[9667]: W0106 20:59:19.017773    9667 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Jan  6 20:59:19 mnode3 k3s[9667]: time="2021-01-06T20:59:19.059003756+01:00" level=info msg="Waiting for kubelet to be ready on node mnode3: nodes \"mnode3\" not found"
Jan  6 20:59:19 mnode3 k3s[9667]: I0106 20:59:19.073786    9667 server.go:407] Version: v1.19.5+k3s2
Jan  6 20:59:19 mnode3 k3s[9667]: E0106 20:59:19.193229    9667 node.go:125] Failed to retrieve node info: nodes "mnode3" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Jan  6 20:59:19 mnode3 k3s[9667]: I0106 20:59:19.212210    9667 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
Jan  6 20:59:20 mnode3 k3s[9667]: E0106 20:59:20.351563    9667 node.go:125] Failed to retrieve node info: nodes "mnode3" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Jan  6 20:59:22 mnode3 k3s[9667]: E0106 20:59:22.672286    9667 node.go:125] Failed to retrieve node info: nodes "mnode3" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Jan  6 20:59:24 mnode3 k3s[9667]: time="2021-01-06T20:59:24.004776743+01:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Jan  6 20:59:24 mnode3 k3s[9667]: W0106 20:59:24.236793    9667 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Jan  6 20:59:24 mnode3 k3s[9667]: W0106 20:59:24.238016    9667 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan  6 20:59:24 mnode3 k3s[9667]: W0106 20:59:24.238139    9667 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu1/online: open /sys/devices/system/cpu/cpu1/online: no such file or directory
Jan  6 20:59:24 mnode3 k3s[9667]: W0106 20:59:24.238214    9667 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu2/online: open /sys/devices/system/cpu/cpu2/online: no such file or directory
Jan  6 20:59:24 mnode3 k3s[9667]: W0106 20:59:24.238287    9667 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu3/online: open /sys/devices/system/cpu/cpu3/online: no such file or directory
Jan  6 20:59:24 mnode3 k3s[9667]: W0106 20:59:24.239258    9667 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Jan  6 20:59:24 mnode3 k3s[9667]: W0106 20:59:24.239363    9667 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Jan  6 20:59:24 mnode3 k3s[9667]: W0106 20:59:24.239419    9667 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Jan  6 20:59:24 mnode3 k3s[9667]: W0106 20:59:24.239473    9667 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Jan  6 20:59:24 mnode3 k3s[9667]: E0106 20:59:24.239572    9667 machine.go:72] Cannot read number of physical cores correctly, number of cores set to 0
Jan  6 20:59:24 mnode3 k3s[9667]: W0106 20:59:24.239940    9667 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu0 online state, skipping
Jan  6 20:59:24 mnode3 k3s[9667]: W0106 20:59:24.240001    9667 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu1 online state, skipping
Jan  6 20:59:24 mnode3 k3s[9667]: W0106 20:59:24.240052    9667 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu2 online state, skipping
Jan  6 20:59:24 mnode3 k3s[9667]: W0106 20:59:24.240102    9667 machine.go:253] Cannot determine CPU /sys/bus/cpu/devices/cpu3 online state, skipping
Jan  6 20:59:24 mnode3 k3s[9667]: E0106 20:59:24.240120    9667 machine.go:86] Cannot read number of sockets correctly, number of sockets set to 0
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.242489    9667 server.go:640] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.244180    9667 container_manager_linux.go:289] container manager verified user specified cgroup-root exists: []
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.244285    9667 container_manager_linux.go:294] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.245460    9667 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.245498    9667 container_manager_linux.go:324] [topologymanager] Initializing Topology Manager with none policy
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.245527    9667 container_manager_linux.go:329] Creating device plugin manager: true
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.247498    9667 kubelet.go:261] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.248051    9667 kubelet.go:273] Watching apiserver
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.274434    9667 kuberuntime_manager.go:214] Container runtime containerd initialized, version: 1.3.7-0ubuntu3, apiVersion: v1alpha2
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.280970    9667 server.go:1148] Started kubelet
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.292439    9667 server.go:152] Starting to listen on 0.0.0.0:10250
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.298743    9667 server.go:424] Adding debug handlers to kubelet server.
Jan  6 20:59:24 mnode3 k3s[9667]: E0106 20:59:24.293768    9667 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/containerd/io.containerd.snapshotter.v1.zfs": unable to find data in memory cache.
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.299637    9667 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Jan  6 20:59:24 mnode3 k3s[9667]: E0106 20:59:24.312801    9667 kubelet.go:1218] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.313547    9667 volume_manager.go:265] Starting Kubelet Volume Manager
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.313656    9667 desired_state_of_world_populator.go:139] Desired state populator starts to run
Jan  6 20:59:24 mnode3 k3s[9667]: E0106 20:59:24.327811    9667 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jan  6 20:59:24 mnode3 k3s[9667]: E0106 20:59:24.384774    9667 controller.go:228] failed to get node "mnode3" when trying to set owner ref to the node lease: nodes "mnode3" not found
Jan  6 20:59:24 mnode3 k3s[9667]: E0106 20:59:24.415149    9667 kubelet.go:2183] node "mnode3" not found
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.469904    9667 kubelet_node_status.go:71] Attempting to register node mnode3
Jan  6 20:59:24 mnode3 k3s[9667]: E0106 20:59:24.515328    9667 kubelet.go:2183] node "mnode3" not found
Jan  6 20:59:24 mnode3 k3s[9667]: E0106 20:59:24.629498    9667 kubelet.go:2183] node "mnode3" not found
Jan  6 20:59:24 mnode3 k3s[9667]: E0106 20:59:24.734121    9667 kubelet.go:2183] node "mnode3" not found
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.811337    9667 status_manager.go:158] Starting to sync pod status with apiserver
Jan  6 20:59:24 mnode3 k3s[9667]: I0106 20:59:24.811542    9667 kubelet.go:1741] Starting kubelet main sync loop.
Jan  6 20:59:24 mnode3 k3s[9667]: E0106 20:59:24.811860    9667 kubelet.go:1765] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Jan  6 20:59:24 mnode3 k3s[9667]: E0106 20:59:24.838317    9667 kubelet.go:2183] node "mnode3" not found
Jan  6 20:59:24 mnode3 k3s[9667]: E0106 20:59:24.912082    9667 kubelet.go:1765] skipping pod synchronization - container runtime status check may not have completed yet
Jan  6 20:59:24 mnode3 k3s[9667]: E0106 20:59:24.938816    9667 kubelet.go:2183] node "mnode3" not found
Jan  6 20:59:25 mnode3 k3s[9667]: E0106 20:59:25.039941    9667 kubelet.go:2183] node "mnode3" not found
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.091628    9667 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.091768    9667 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.092585    9667 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.094383    9667 secure_serving.go:197] Serving securely on 127.0.0.1:6444
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.094482    9667 tlsconfig.go:240] Starting DynamicServingCertificateController
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.094827    9667 apiservice_controller.go:97] Starting APIServiceRegistrationController
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.094920    9667 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.096414    9667 controller.go:83] Starting OpenAPI AggregationController
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.097170    9667 customresource_discovery_controller.go:209] Starting DiscoveryController
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.099189    9667 autoregister_controller.go:141] Starting autoregister controller
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.114406    9667 cache.go:32] Waiting for caches to sync for autoregister controller
Jan  6 20:59:25 mnode3 k3s[9667]: E0106 20:59:25.121701    9667 kubelet.go:1765] skipping pod synchronization - container runtime status check may not have completed yet
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.122885    9667 controller.go:86] Starting OpenAPI controller
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.123719    9667 naming_controller.go:291] Starting NamingConditionController
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.124505    9667 establishing_controller.go:76] Starting EstablishingController
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.125262    9667 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.125529    9667 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.100155    9667 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.100951    9667 available_controller.go:457] Starting AvailableConditionController
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.127968    9667 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.108926    9667 crdregistration_controller.go:111] Starting crd-autoregister controller
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.131176    9667 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.125619    9667 crd_finalizer.go:266] Starting CRDFinalizer
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.125694    9667 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.125718    9667 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.099222    9667 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.131751    9667 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
Jan  6 20:59:25 mnode3 k3s[9667]: E0106 20:59:25.153695    9667 kubelet.go:2183] node "mnode3" not found
Jan  6 20:59:25 mnode3 k3s[9667]: time="2021-01-06T20:59:25.190730802+01:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.214714    9667 cache.go:39] Caches are synced for autoregister controller
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.224490    9667 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.228052    9667 cache.go:39] Caches are synced for AvailableConditionController controller
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.242051    9667 cpu_manager.go:184] [cpumanager] starting with none policy
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.244873    9667 cpu_manager.go:185] [cpumanager] reconciling every 10s
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.245568    9667 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.244143    9667 shared_informer.go:247] Caches are synced for crd-autoregister
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.253542    9667 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
Jan  6 20:59:25 mnode3 k3s[9667]: E0106 20:59:25.258509    9667 kubelet.go:2183] node "mnode3" not found
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.269174    9667 policy_none.go:43] [cpumanager] none policy: Start
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.321983    9667 trace.go:205] Trace[1299329831]: "Create" url:/api/v1/nodes,user-agent:k3s/v1.19.5+k3s2 (linux/arm64) kubernetes/746cf40,client:127.0.0.1 (06-Jan-2021 20:59:24.669) (total time: 652ms):
Jan  6 20:59:25 mnode3 k3s[9667]: Trace[1299329831]: ---"Object stored in database" 650ms (20:59:00.321)
Jan  6 20:59:25 mnode3 k3s[9667]: Trace[1299329831]: [652.437435ms] [652.437435ms] END
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.326164    9667 kubelet_node_status.go:74] Successfully registered node mnode3
Jan  6 20:59:25 mnode3 k3s[9667]: W0106 20:59:25.345873    9667 manager.go:596] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.348992    9667 plugin_manager.go:114] Starting Kubelet Plugin Manager
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.404349    9667 trace.go:205] Trace[1330845630]: "Create" url:/api/v1/namespaces/default/events,user-agent:k3s/v1.19.5+k3s2 (linux/arm64) kubernetes/746cf40,client:127.0.0.1 (06-Jan-2021 20:59:24.323) (total time: 1080ms):
Jan  6 20:59:25 mnode3 k3s[9667]: Trace[1330845630]: ---"Object stored in database" 1070ms (20:59:00.404)
Jan  6 20:59:25 mnode3 k3s[9667]: Trace[1330845630]: [1.080354353s] [1.080354353s] END
Jan  6 20:59:25 mnode3 k3s[9667]: E0106 20:59:25.406769    9667 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.43.0.1": cannot allocate resources of type serviceipallocations at this time
Jan  6 20:59:25 mnode3 k3s[9667]: E0106 20:59:25.414622    9667 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.111.35, ResourceVersion: 0, AdditionalErrorMsg:
Jan  6 20:59:25 mnode3 k3s[9667]: I0106 20:59:25.634935    9667 reconciler.go:157] Reconciler: start to sync state
Jan  6 20:59:26 mnode3 k3s[9667]: I0106 20:59:26.091155    9667 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Jan  6 20:59:26 mnode3 k3s[9667]: I0106 20:59:26.091609    9667 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Jan  6 20:59:26 mnode3 k3s[9667]: I0106 20:59:26.114994    9667 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
Jan  6 20:59:26 mnode3 k3s[9667]: I0106 20:59:26.153056    9667 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
Jan  6 20:59:26 mnode3 k3s[9667]: I0106 20:59:26.153808    9667 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
Jan  6 20:59:26 mnode3 k3s[9667]: time="2021-01-06T20:59:26.165596931+01:00" level=info msg="labels have been set successfully on node: mnode3"
Jan  6 20:59:26 mnode3 k3s[9667]: time="2021-01-06T20:59:26.212452090+01:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Jan  6 20:59:27 mnode3 k3s[9667]: time="2021-01-06T20:59:27.232040590+01:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.456514    9667 node.go:136] Successfully retrieved node IP: 192.168.111.35
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.456859    9667 server_others.go:143] kube-proxy node IP is an IPv4 address (192.168.111.35), assume IPv4 operation
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.461749    9667 server_others.go:186] Using iptables Proxier.
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.462871    9667 server.go:650] Version: v1.19.5+k3s2
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.463840    9667 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.465016    9667 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.465182    9667 conntrack.go:52] Setting nf_conntrack_max to 131072
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.465454    9667 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.465703    9667 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.466447    9667 config.go:315] Starting service config controller
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.466531    9667 shared_informer.go:240] Waiting for caches to sync for service config
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.466447    9667 config.go:224] Starting endpoint slice config controller
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.466620    9667 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.566688    9667 shared_informer.go:247] Caches are synced for endpoint slice config
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.567723    9667 shared_informer.go:247] Caches are synced for service config
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.629694    9667 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
Jan  6 20:59:27 mnode3 k3s[9667]: W0106 20:59:27.802962    9667 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.111.35]
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.806774    9667 controller.go:606] quota admission added evaluator for: endpoints
Jan  6 20:59:27 mnode3 k3s[9667]: I0106 20:59:27.822536    9667 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.201430386+01:00" level=info msg="Kube API server is now running"
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.201593885+01:00" level=info msg="k3s is up and running"
Jan  6 20:59:28 mnode3 k3s[9667]: Flag --address has been deprecated, see --bind-address instead.
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.227509170+01:00" level=info msg="Creating CRD addons.k3s.cattle.io"
Jan  6 20:59:28 mnode3 k3s[9667]: I0106 20:59:28.237692    9667 controllermanager.go:175] Version: v1.19.5+k3s2
Jan  6 20:59:28 mnode3 k3s[9667]: I0106 20:59:28.238977    9667 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252
Jan  6 20:59:28 mnode3 k3s[9667]: I0106 20:59:28.239346    9667 leaderelection.go:243] attempting to acquire leader lease  kube-system/kube-controller-manager...
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.254230730+01:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Jan  6 20:59:28 mnode3 k3s[9667]: I0106 20:59:28.255024    9667 registry.go:173] Registering SelectorSpread plugin
Jan  6 20:59:28 mnode3 k3s[9667]: I0106 20:59:28.255815    9667 registry.go:173] Registering SelectorSpread plugin
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.256652351+01:00" level=info msg="Creating CRD helmcharts.helm.cattle.io"
Jan  6 20:59:28 mnode3 k3s[9667]: W0106 20:59:28.266482    9667 authorization.go:47] Authorization is disabled
Jan  6 20:59:28 mnode3 k3s[9667]: W0106 20:59:28.266682    9667 authentication.go:40] Authentication is disabled
Jan  6 20:59:28 mnode3 k3s[9667]: I0106 20:59:28.266746    9667 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.307498979+01:00" level=info msg="Creating CRD helmchartconfigs.helm.cattle.io"
Jan  6 20:59:28 mnode3 k3s[9667]: I0106 20:59:28.311929    9667 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
Jan  6 20:59:28 mnode3 k3s[9667]: I0106 20:59:28.342241    9667 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
Jan  6 20:59:28 mnode3 k3s[9667]: I0106 20:59:28.344629    9667 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Endpoints" apiVersion="v1" type="Normal" reason="LeaderElection" message="mnode3_6d2682ff-f31d-466f-9e5e-fa7d15f23cee became leader"
Jan  6 20:59:28 mnode3 k3s[9667]: I0106 20:59:28.344844    9667 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="mnode3_6d2682ff-f31d-466f-9e5e-fa7d15f23cee became leader"
Jan  6 20:59:28 mnode3 k3s[9667]: I0106 20:59:28.373257    9667 leaderelection.go:243] attempting to acquire leader lease  kube-system/kube-scheduler...
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.396870795+01:00" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"
Jan  6 20:59:28 mnode3 k3s[9667]: I0106 20:59:28.482908    9667 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.909188657+01:00" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available"
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.965061250+01:00" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz"
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.966914948+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
Jan  6 20:59:28 mnode3 k3s[9667]: I0106 20:59:28.967028    9667 shared_informer.go:240] Waiting for caches to sync for tokens
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.967730001+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.968545887+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.969390403+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.970088345+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.971118490+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.972169745+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.973379649+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.974320572+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.975198735+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.976068603+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
Jan  6 20:59:28 mnode3 k3s[9667]: time="2021-01-06T20:59:28.977004174+01:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
Jan  6 20:59:28 mnode3 k3s[9667]: I0106 20:59:28.989492    9667 controller.go:606] quota admission added evaluator for: serviceaccounts
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.000099    9667 controllermanager.go:549] Started "job"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.000273    9667 job_controller.go:148] Starting job controller
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.000314    9667 shared_informer.go:240] Waiting for caches to sync for job
Jan  6 20:59:29 mnode3 k3s[9667]: time="2021-01-06T20:59:29.008135812+01:00" level=warning msg="Unable to watch for tunnel endpoints: unknown (get endpoints)"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.048009    9667 controllermanager.go:549] Started "horizontalpodautoscaling"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.048379    9667 horizontal.go:169] Starting HPA controller
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.048417    9667 shared_informer.go:240] Waiting for caches to sync for HPA
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.067924    9667 shared_informer.go:247] Caches are synced for tokens
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.071347    9667 controllermanager.go:549] Started "cronjob"
Jan  6 20:59:29 mnode3 k3s[9667]: W0106 20:59:29.071580    9667 core.go:244] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
Jan  6 20:59:29 mnode3 k3s[9667]: W0106 20:59:29.071613    9667 controllermanager.go:541] Skipping "route"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.072262    9667 cronjob_controller.go:96] Starting CronJob Manager
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.079657    9667 leaderelection.go:243] attempting to acquire leader lease  kube-system/k3s...
Jan  6 20:59:29 mnode3 k3s[9667]: time="2021-01-06T20:59:29.080268277+01:00" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
Jan  6 20:59:29 mnode3 k3s[9667]: time="2021-01-06T20:59:29.081189478+01:00" level=info msg="Starting /v1, Kind=Node controller"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.165096    9667 leaderelection.go:253] successfully acquired lease kube-system/k3s
Jan  6 20:59:29 mnode3 k3s[9667]: time="2021-01-06T20:59:29.199177347+01:00" level=warning msg="Unable to fetch coredns config map: configmap \"coredns\" not found"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.213814    9667 controllermanager.go:549] Started "clusterrole-aggregation"
Jan  6 20:59:29 mnode3 k3s[9667]: W0106 20:59:29.213954    9667 controllermanager.go:541] Skipping "ephemeral-volume"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.215329    9667 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.215442    9667 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
Jan  6 20:59:29 mnode3 k3s[9667]: time="2021-01-06T20:59:29.246513394+01:00" level=info msg="Cluster dns configmap has been set successfully"
Jan  6 20:59:29 mnode3 k3s[9667]: time="2021-01-06T20:59:29.316662588+01:00" level=warning msg="Unable to fetch coredns config map: configmap \"coredns\" not found"
Jan  6 20:59:29 mnode3 k3s[9667]: time="2021-01-06T20:59:29.324275100+01:00" level=info msg="Waiting for cloudcontroller rbac role to be created"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.381093    9667 controller.go:606] quota admission added evaluator for: addons.k3s.cattle.io
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.406073    9667 controllermanager.go:549] Started "endpoint"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.406834    9667 endpoints_controller.go:184] Starting endpoint controller
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.406905    9667 shared_informer.go:240] Waiting for caches to sync for endpoint
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.438798    9667 controllermanager.go:549] Started "daemonset"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.439385    9667 daemon_controller.go:285] Starting daemon sets controller
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.439439    9667 shared_informer.go:240] Waiting for caches to sync for daemon sets
Jan  6 20:59:29 mnode3 k3s[9667]: time="2021-01-06T20:59:29.485649271+01:00" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
Jan  6 20:59:29 mnode3 k3s[9667]: time="2021-01-06T20:59:29.485797307+01:00" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
Jan  6 20:59:29 mnode3 k3s[9667]: time="2021-01-06T20:59:29.485870363+01:00" level=info msg="Starting batch/v1, Kind=Job controller"
Jan  6 20:59:29 mnode3 k3s[9667]: E0106 20:59:29.496479    9667 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
Jan  6 20:59:29 mnode3 k3s[9667]: W0106 20:59:29.496604    9667 controllermanager.go:541] Skipping "service"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.545016    9667 node_lifecycle_controller.go:77] Sending events to api server
Jan  6 20:59:29 mnode3 k3s[9667]: E0106 20:59:29.545180    9667 core.go:230] failed to start cloud node lifecycle controller: no cloud provider provided
Jan  6 20:59:29 mnode3 k3s[9667]: W0106 20:59:29.545217    9667 controllermanager.go:541] Skipping "cloud-node-lifecycle"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.607322    9667 controllermanager.go:549] Started "attachdetach"
Jan  6 20:59:29 mnode3 k3s[9667]: W0106 20:59:29.607460    9667 controllermanager.go:541] Skipping "root-ca-cert-publisher"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.608263    9667 attach_detach_controller.go:322] Starting attach detach controller
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.608362    9667 shared_informer.go:240] Waiting for caches to sync for attach detach
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.657298    9667 controllermanager.go:549] Started "podgc"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.657959    9667 gc_controller.go:89] Starting GC controller
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.658015    9667 shared_informer.go:240] Waiting for caches to sync for GC
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.667855    9667 controller.go:606] quota admission added evaluator for: deployments.apps
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.712474    9667 controllermanager.go:549] Started "replicationcontroller"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.713602    9667 replica_set.go:182] Starting replicationcontroller controller
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.714144    9667 shared_informer.go:240] Waiting for caches to sync for ReplicationController
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.756960    9667 controllermanager.go:549] Started "deployment"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.757579    9667 deployment_controller.go:153] Starting deployment controller
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.757631    9667 shared_informer.go:240] Waiting for caches to sync for deployment
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.822418    9667 controllermanager.go:549] Started "ttl"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.822626    9667 ttl_controller.go:118] Starting TTL controller
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.823328    9667 shared_informer.go:240] Waiting for caches to sync for TTL
Jan  6 20:59:29 mnode3 k3s[9667]: time="2021-01-06T20:59:29.886827723+01:00" level=info msg="Starting /v1, Kind=Pod controller"
Jan  6 20:59:29 mnode3 k3s[9667]: time="2021-01-06T20:59:29.887012130+01:00" level=info msg="Starting /v1, Kind=Endpoints controller"
Jan  6 20:59:29 mnode3 k3s[9667]: time="2021-01-06T20:59:29.887268221+01:00" level=info msg="Starting /v1, Kind=Secret controller"
Jan  6 20:59:29 mnode3 k3s[9667]: time="2021-01-06T20:59:29.887506424+01:00" level=info msg="Starting /v1, Kind=Service controller"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.908509    9667 controllermanager.go:549] Started "endpointslice"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.909862    9667 endpointslice_controller.go:237] Starting endpoint slice controller
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.909929    9667 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.983107    9667 controllermanager.go:549] Started "replicaset"
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.983282    9667 replica_set.go:182] Starting replicaset controller
Jan  6 20:59:29 mnode3 k3s[9667]: I0106 20:59:29.983314    9667 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.190539    9667 controllermanager.go:549] Started "statefulset"
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.192608    9667 stateful_set.go:146] Starting stateful set controller
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.193604    9667 shared_informer.go:240] Waiting for caches to sync for stateful set
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.227762    9667 node_lifecycle_controller.go:380] Sending events to api server.
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.232574    9667 taint_manager.go:163] Sending events to api server.
Jan  6 20:59:30 mnode3 k3s[9667]: time="2021-01-06T20:59:30.234397463+01:00" level=info msg="Active TLS secret k3s-serving (ver=260) (count 7): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.111.35:192.168.111.35 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=289B8513ED3EF56944317F9723B17859B972EEF6]"
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.235929    9667 node_lifecycle_controller.go:508] Controller will reconcile labels.
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.236986    9667 controllermanager.go:549] Started "nodelifecycle"
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.237779    9667 node_lifecycle_controller.go:542] Starting node controller
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.237854    9667 shared_informer.go:240] Waiting for caches to sync for taint
Jan  6 20:59:30 mnode3 k3s[9667]: time="2021-01-06T20:59:30.356793966+01:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --node-status-update-frequency=1m --profiling=false --secure-port=0"
Jan  6 20:59:30 mnode3 k3s[9667]: Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.359859    9667 controllermanager.go:549] Started "persistentvolume-binder"
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.360311    9667 pv_controller_base.go:303] Starting persistent volume controller
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.360352    9667 shared_informer.go:240] Waiting for caches to sync for persistent volume
Jan  6 20:59:30 mnode3 k3s[9667]: E0106 20:59:30.370727    9667 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.400475    9667 controllermanager.go:127] Version: v1.19.5+k3s2
Jan  6 20:59:30 mnode3 k3s[9667]: W0106 20:59:30.400638    9667 controllermanager.go:139] detected a cluster without a ClusterID.  A ClusterID will be required in the future.  Please tag your cluster to avoid any future issues
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.400843    9667 leaderelection.go:243] attempting to acquire leader lease  kube-system/cloud-controller-manager...
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.463752    9667 leaderelection.go:253] successfully acquired lease kube-system/cloud-controller-manager
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.464480    9667 event.go:291] "Event occurred" object="kube-system/cloud-controller-manager" kind="Endpoints" apiVersion="v1" type="Normal" reason="LeaderElection" message="mnode3_b5ee72ac-4a13-4ffe-88f6-e2532eafe8c7 became leader"
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.480471    9667 event.go:291] "Event occurred" object="kube-system/cloud-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="mnode3_b5ee72ac-4a13-4ffe-88f6-e2532eafe8c7 became leader"
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.505788    9667 node_controller.go:108] Sending events to api server.
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.506129    9667 controllermanager.go:254] Started "cloud-node"
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.517616    9667 node_lifecycle_controller.go:77] Sending events to api server
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.517758    9667 controllermanager.go:254] Started "cloud-node-lifecycle"
Jan  6 20:59:30 mnode3 k3s[9667]: E0106 20:59:30.529204    9667 core.go:97] Failed to start service controller: the cloud provider does not support external load balancers
Jan  6 20:59:30 mnode3 k3s[9667]: W0106 20:59:30.529279    9667 controllermanager.go:251] Skipping "service"
Jan  6 20:59:30 mnode3 k3s[9667]: W0106 20:59:30.529310    9667 core.go:115] configure-cloud-routes is set, but cloud provider does not support routes. Will not configure cloud provider routes.
Jan  6 20:59:30 mnode3 k3s[9667]: W0106 20:59:30.529330    9667 controllermanager.go:251] Skipping "route"
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.562572    9667 node_controller.go:329] Initializing node mnode3 with cloud provider
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.703807    9667 node_controller.go:397] Successfully initialized node mnode3 with cloud provider
Jan  6 20:59:30 mnode3 k3s[9667]: time="2021-01-06T20:59:30.728481738+01:00" level=info msg="Updated coredns node hosts entry [192.168.111.35 mnode3]"
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.879853    9667 controller.go:606] quota admission added evaluator for: helmcharts.helm.cattle.io
Jan  6 20:59:30 mnode3 k3s[9667]: I0106 20:59:30.983213    9667 controller.go:606] quota admission added evaluator for: jobs.batch

luckyluc74 commented Jan 8, 2021

I tracked down the cause to commit eb72d50 (25/11/2020 03:50:24) by @AkihiroSuda,
in the file pkg/agent/config/config.go.

If I revert only the change in the diff below, the external zfs snapshotter works again in k3s v1.20.0+k3s2.

 pkg/agent/config/config.go | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/pkg/agent/config/config.go b/pkg/agent/config/config.go
index c5495e8b37..7525198893 100644
--- a/pkg/agent/config/config.go
+++ b/pkg/agent/config/config.go
@@ -19,6 +19,8 @@ import (
 	"strings"
 	"time"
 
+	fuseoverlayfs "github.com/AkihiroSuda/containerd-fuse-overlayfs"
+	"github.com/containerd/containerd/snapshots/overlay"
 	"github.com/pkg/errors"
 	"github.com/rancher/k3s/pkg/agent/proxy"
 	"github.com/rancher/k3s/pkg/cli/cmds"
@@ -429,6 +431,18 @@ func get(envInfo *cmds.Agent, proxy proxy.Proxy) (*config.Node, error) {
 	nodeConfig.CACerts = info.CACerts
 	nodeConfig.Containerd.Config = filepath.Join(envInfo.DataDir, "agent", "etc", "containerd", "config.toml")
 	nodeConfig.Containerd.Root = filepath.Join(envInfo.DataDir, "agent", "containerd")
+	switch nodeConfig.AgentConfig.Snapshotter {
+	case "overlayfs":
+		if err := overlay.Supported(nodeConfig.Containerd.Root); err != nil {
+			return nil, errors.Wrapf(err, "\"overlayfs\" snapshotter cannot be enabled for %q, try using \"fuse-overlayfs\" or \"native\"",
+				nodeConfig.Containerd.Root)
+		}
+	case "fuse-overlayfs":
+		if err := fuseoverlayfs.Supported(nodeConfig.Containerd.Root); err != nil {
+			return nil, errors.Wrapf(err, "\"fuse-overlayfs\" snapshotter cannot be enabled for %q, try using \"native\"",
+				nodeConfig.Containerd.Root)
+		}
+	}
 	nodeConfig.Containerd.Opt = filepath.Join(envInfo.DataDir, "agent", "containerd")
 	if !envInfo.Debug {
 		nodeConfig.Containerd.Log = filepath.Join(envInfo.DataDir, "agent", "containerd", "containerd.log")

It seems the change above does not take into account the case where the --container-runtime-endpoint parameter is used when creating the k3s cluster: the check probes the embedded containerd root even though an external containerd is being used.
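
For illustration only, a minimal sketch of the kind of guard that would avoid this, assuming the flag value is exposed on the agent config as envInfo.ContainerRuntimeEndpoint (a hypothetical field name; the actual fix may look different):

// Sketch: only validate the built-in snapshotter when k3s will actually run
// its embedded containerd, i.e. when no external runtime endpoint is set.
// envInfo.ContainerRuntimeEndpoint is assumed here to carry the value of
// --container-runtime-endpoint; the real field name may differ.
if envInfo.ContainerRuntimeEndpoint == "" {
	switch nodeConfig.AgentConfig.Snapshotter {
	case "overlayfs":
		if err := overlay.Supported(nodeConfig.Containerd.Root); err != nil {
			return nil, errors.Wrapf(err, "\"overlayfs\" snapshotter cannot be enabled for %q, try using \"fuse-overlayfs\" or \"native\"",
				nodeConfig.Containerd.Root)
		}
	case "fuse-overlayfs":
		if err := fuseoverlayfs.Supported(nodeConfig.Containerd.Root); err != nil {
			return nil, errors.Wrapf(err, "\"fuse-overlayfs\" snapshotter cannot be enabled for %q, try using \"native\"",
				nodeConfig.Containerd.Root)
		}
	}
}

With a guard like this, the snapshotter probe would only run when k3s is about to start its embedded containerd, and an external runtime with its own snapshotter would be left alone.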

This is the whole command I used to install k3s:

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.20.0+k3s2 K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy --cluster-init  --container-runtime-endpoint unix:///run/containerd/containerd.sock" sh -


brandond commented Jan 8, 2021

Have you tried passing --snapshotter=zfs instead of manually altering the config template? As you noticed, the agent startup code path checks the snapshotter, but you're changing it in the template, which means that k3s and containerd are out of sync.

@luckyluc74

@brandond Thanks, that works :)

FYI, I checked the site documentation for the --snapshotter option, but it was not there.

But if I check k3s server --help on k3s v1.20.0+k3s2, it is there:

   --snapshotter value                        (agent/runtime) Override default containerd snapshotter (default: "overlayfs")

This is the working command to create the k3s server:

curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.20.0+k3s2 K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy --cluster-init  --snapshotter=zfs --container-runtime-endpoint unix:///run/containerd/containerd.sock" sh -


brandond commented Jan 8, 2021

Yes, the documentation probably needs to be re-synced to the current flags. --help will always give you the latest, although there are a couple of unsupported hidden options that you'll have to go dig through the source to find...
