This repository has been archived by the owner on Oct 6, 2022. It is now read-only.

kubelet isn't running. #7

Open · rohitsakala opened this issue Oct 14, 2019 · 6 comments

rohitsakala commented Oct 14, 2019

I installed Concourse and then used the example job from the README. It seems like the kubelet is not running.

```
jobs:
- name: kind
  plan:
  - in_parallel:
    - get: k8s-git
    - get: kind-on-c
    - get: kind-release
      params:
        globs:
        - kind-linux-amd64
  - task: run-kind
    privileged: true
    file: kind-on-c/kind.yaml
    params:
      KIND_TESTS: |
        # your actual tests go here!
        kubectl get nodes -o wide

resources:
- name: k8s-git
  type: git
  source:
    uri: https://github.com/kubernetes/kubernetes
- name: kind-release
  type: github-release
  source:
    owner: kubernetes-sigs
    repository: kind
    access_token: <some github token>
    pre_release: true
- name: kind-on-c
  type: git
  source:
    uri: https://github.com/pivotal-k8s/kind-on-c
```

The logs are:

```
[INF] Setting up Docker environment...
[INF] Starting Docker...
[INF] Waiting 60 seconds for Docker to be available...
[INF] Docker available after 2 seconds.
[INF] /tmp/build/dd1bc04d/bin/kind: v0.5.0
[INF] will use kind upstream's node image
[INF] /tmp/build/dd1bc04d/bin/kubectl: Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T17:01:15Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
[INF] kmsg-linker starting in the background
DEBU[18:19:45] Running: /bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}}] 
Creating cluster "kind" ...
DEBU[18:19:45] Running: /bin/docker [docker inspect --type=image kindest/node:v1.15.3] 
INFO[18:19:45] Pulling image: kindest/node:v1.15.3 ...      
DEBU[18:19:45] Running: /bin/docker [docker pull kindest/node:v1.15.3] 
 ✓ Ensuring node image (kindest/node:v1.15.3) 🖼 
DEBU[18:20:33] Running: /bin/docker [docker info --format '{{json .SecurityOptions}}'] 
DEBU[18:20:33] Running: /bin/docker [docker info --format '{{json .SecurityOptions}}'] 
DEBU[18:20:33] Running: /bin/docker [docker info --format '{{json .SecurityOptions}}'] 
DEBU[18:20:33] Running: /bin/docker [docker run --detach --tty --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --hostname kind-worker2 --name kind-worker2 --label io.k8s.sigs.kind.cluster=kind --label io.k8s.sigs.kind.role=worker kindest/node:v1.15.3@sha256:27e388752544890482a86b90d8ac50fcfa63a2e8656a96ec5337b902ec8e5157] 
DEBU[18:20:33] Running: /bin/docker [docker run --detach --tty --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --hostname kind-worker --name kind-worker --label io.k8s.sigs.kind.cluster=kind --label io.k8s.sigs.kind.role=worker kindest/node:v1.15.3@sha256:27e388752544890482a86b90d8ac50fcfa63a2e8656a96ec5337b902ec8e5157] 
DEBU[18:20:33] Running: /bin/docker [docker run --detach --tty --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro --hostname kind-control-plane --name kind-control-plane --label io.k8s.sigs.kind.cluster=kind --label io.k8s.sigs.kind.role=control-plane --expose 43587 --publish=127.0.0.1:43587:6443/TCP kindest/node:v1.15.3@sha256:27e388752544890482a86b90d8ac50fcfa63a2e8656a96ec5337b902ec8e5157] 
 ✓ Preparing nodes 📦📦📦 
DEBU[18:21:16] Running: /bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}} --filter label=io.k8s.sigs.kind.cluster=kind] 
DEBU[18:21:16] Running: /bin/docker [docker inspect -f {{index .Config.Labels "io.k8s.sigs.kind.role"}} kind-control-plane] 
DEBU[18:21:16] Running: /bin/docker [docker inspect -f {{index .Config.Labels "io.k8s.sigs.kind.role"}} kind-worker2] 
DEBU[18:21:16] Running: /bin/docker [docker inspect -f {{index .Config.Labels "io.k8s.sigs.kind.role"}} kind-worker] 
DEBU[18:21:16] Running: /bin/docker [docker exec --privileged kind-control-plane cat /kind/version] 
DEBU[18:21:17] Running: /bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane] 
DEBU[18:21:17] Running: /bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker] 
DEBU[18:21:17] Running: /bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-control-plane] 
DEBU[18:21:17] Running: /bin/docker [docker inspect -f {{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} kind-worker2] 
DEBU[18:21:18] Configuration Input data: {kind v1.15.3 172.17.0.3:6443 6443 127.0.0.1 false 172.17.0.4 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}} 
DEBU[18:21:18] Configuration generated:
 # config generated by kind
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
metadata:
  name: config
kubernetesVersion: v1.15.3
clusterName: "kind"
controlPlaneEndpoint: "172.17.0.3:6443"
# on docker for mac we have to expose the api server via port forward,
# so we need to ensure the cert is valid for localhost so we can talk
# to the cluster after rewriting the kubeconfig to point to localhost
apiServer:
  certSANs: [localhost, "127.0.0.1"]
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
    # configure ipv6 default addresses for IPv6 clusters
    
scheduler:
  extraArgs:
    # configure ipv6 default addresses for IPv6 clusters
    
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
metadata:
  name: config
# we use a well know token for TLS bootstrap
bootstrapTokens:
- token: "abcdef.0123456789abcdef"
# we use a well know port for making the API server discoverable inside docker network. 
# from the host machine such port will be accessible via a random local port instead.
localAPIEndpoint:
  advertiseAddress: "172.17.0.4"
  bindPort: 6443
nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.4"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
metadata:
  name: config

nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.4"
discovery:
  bootstrapToken:
    apiServerEndpoint: "172.17.0.3:6443"
    token: "abcdef.0123456789abcdef"
    unsafeSkipCAVerification: true
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
metadata:
  name: config
# configure ipv6 addresses in IPv6 mode

# disable disk resource management by default
# kubelet will see the host disk that the inner container runtime
# is ultimately backed by and attempt to recover disk space. we don't want that.
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metadata:
  name: config 
DEBU[18:21:18] Configuration Input data: {kind v1.15.3 172.17.0.3:6443 6443 127.0.0.1 false 172.17.0.2 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}} 
DEBU[18:21:18] Configuration generated:
 # config generated by kind
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
metadata:
  name: config
kubernetesVersion: v1.15.3
clusterName: "kind"
controlPlaneEndpoint: "172.17.0.3:6443"
# on docker for mac we have to expose the api server via port forward,
# so we need to ensure the cert is valid for localhost so we can talk
# to the cluster after rewriting the kubeconfig to point to localhost
apiServer:
  certSANs: [localhost, "127.0.0.1"]
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
    # configure ipv6 default addresses for IPv6 clusters
    
scheduler:
  extraArgs:
    # configure ipv6 default addresses for IPv6 clusters
    
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
metadata:
  name: config
# we use a well know token for TLS bootstrap
bootstrapTokens:
- token: "abcdef.0123456789abcdef"
# we use a well know port for making the API server discoverable inside docker network. 
# from the host machine such port will be accessible via a random local port instead.
localAPIEndpoint:
  advertiseAddress: "172.17.0.2"
  bindPort: 6443
nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.2"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
metadata:
  name: config

nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.2"
discovery:
  bootstrapToken:
    apiServerEndpoint: "172.17.0.3:6443"
    token: "abcdef.0123456789abcdef"
    unsafeSkipCAVerification: true
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
metadata:
  name: config
# configure ipv6 addresses in IPv6 mode

# disable disk resource management by default
# kubelet will see the host disk that the inner container runtime
# is ultimately backed by and attempt to recover disk space. we don't want that.
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metadata:
  name: config 
DEBU[18:21:18] Configuration Input data: {kind v1.15.3 172.17.0.3:6443 6443 127.0.0.1 true 172.17.0.3 abcdef.0123456789abcdef 10.244.0.0/16 10.96.0.0/12 false {}} 
DEBU[18:21:18] Configuration generated:
 # config generated by kind
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
metadata:
  name: config
kubernetesVersion: v1.15.3
clusterName: "kind"
controlPlaneEndpoint: "172.17.0.3:6443"
# on docker for mac we have to expose the api server via port forward,
# so we need to ensure the cert is valid for localhost so we can talk
# to the cluster after rewriting the kubeconfig to point to localhost
apiServer:
  certSANs: [localhost, "127.0.0.1"]
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
    # configure ipv6 default addresses for IPv6 clusters
    
scheduler:
  extraArgs:
    # configure ipv6 default addresses for IPv6 clusters
    
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
metadata:
  name: config
# we use a well know token for TLS bootstrap
bootstrapTokens:
- token: "abcdef.0123456789abcdef"
# we use a well know port for making the API server discoverable inside docker network. 
# from the host machine such port will be accessible via a random local port instead.
localAPIEndpoint:
  advertiseAddress: "172.17.0.3"
  bindPort: 6443
nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.3"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
metadata:
  name: config
controlPlane:
  localAPIEndpoint:
    advertiseAddress: "172.17.0.3"
    bindPort: 6443
nodeRegistration:
  criSocket: "/run/containerd/containerd.sock"
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: "172.17.0.3"
discovery:
  bootstrapToken:
    apiServerEndpoint: "172.17.0.3:6443"
    token: "abcdef.0123456789abcdef"
    unsafeSkipCAVerification: true
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
metadata:
  name: config
# configure ipv6 addresses in IPv6 mode

# disable disk resource management by default
# kubelet will see the host disk that the inner container runtime
# is ultimately backed by and attempt to recover disk space. we don't want that.
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
# no-op entry that exists solely so it can be patched
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metadata:
  name: config 
DEBU[18:21:19] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration 
DEBU[18:21:19] Running: /bin/docker [docker exec --privileged kind-worker2 mkdir -p /kind] 
DEBU[18:21:19] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.17.0.3
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration 
DEBU[18:21:19] Running: /bin/docker [docker exec --privileged kind-control-plane mkdir -p /kind] 
DEBU[18:21:19] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.3:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.3:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration 
DEBU[18:21:19] Running: /bin/docker [docker exec --privileged kind-worker mkdir -p /kind] 
DEBU[18:21:24] Running: /bin/docker [docker exec --privileged -i kind-worker2 cp /dev/stdin /kind/kubeadm.conf] 
DEBU[18:21:24] Running: /bin/docker [docker exec --privileged -i kind-worker cp /dev/stdin /kind/kubeadm.conf] 
DEBU[18:21:24] Running: /bin/docker [docker exec --privileged -i kind-control-plane cp /dev/stdin /kind/kubeadm.conf] 
⠈⠁ Creating kubeadm config 📜 [INF] kmsg-linker successful, shutting down
 ✓ Creating kubeadm config 📜 
DEBU[18:21:27] Running: /bin/docker [docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6] 
DEBU[18:23:36] I1014 18:21:28.578980      82 initconfiguration.go:189] loading configuration from "/kind/kubeadm.conf"
I1014 18:21:28.587691      82 feature_gate.go:216] feature gates: &{map[]}
	[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
I1014 18:21:28.588302      82 checks.go:581] validating Kubernetes and kubeadm version
I1014 18:21:28.588342      82 checks.go:172] validating if the firewall is enabled and active
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
I1014 18:21:28.981511      82 checks.go:209] validating availability of port 6443
I1014 18:21:28.982062      82 checks.go:209] validating availability of port 10251
I1014 18:21:28.982256      82 checks.go:209] validating availability of port 10252
I1014 18:21:28.982557      82 checks.go:292] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1014 18:21:28.982867      82 checks.go:292] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1014 18:21:28.983087      82 checks.go:292] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1014 18:21:28.983181      82 checks.go:292] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1014 18:21:28.983325      82 checks.go:439] validating if the connectivity type is via proxy or direct
I1014 18:21:28.983484      82 checks.go:475] validating http connectivity to first IP address in the CIDR
I1014 18:21:28.983921      82 checks.go:475] validating http connectivity to first IP address in the CIDR
I1014 18:21:28.984031      82 checks.go:105] validating the container runtime
I1014 18:21:30.543266      82 checks.go:382] validating the presence of executable crictl
I1014 18:21:30.543727      82 checks.go:341] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1014 18:21:30.656489      82 checks.go:341] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1014 18:21:30.784627      82 checks.go:653] validating whether swap is enabled or not
I1014 18:21:30.785055      82 checks.go:382] validating the presence of executable ip
I1014 18:21:30.825742      82 checks.go:382] validating the presence of executable iptables
I1014 18:21:30.830747      82 checks.go:382] validating the presence of executable mount
I1014 18:21:30.831014      82 checks.go:382] validating the presence of executable nsenter
I1014 18:21:30.959989      82 checks.go:382] validating the presence of executable ebtables
I1014 18:21:30.960602      82 checks.go:382] validating the presence of executable ethtool
I1014 18:21:30.960791      82 checks.go:382] validating the presence of executable socat
I1014 18:21:30.961037      82 checks.go:382] validating the presence of executable tc
I1014 18:21:30.961232      82 checks.go:382] validating the presence of executable touch
I1014 18:21:30.970741      82 checks.go:524] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1051-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.15.0-1051-aws/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1051-aws\n", err: exit status 1
I1014 18:21:31.147227      82 checks.go:412] checking whether the given node name is reachable using net.LookupHost
I1014 18:21:31.170004      82 checks.go:622] validating kubelet version
I1014 18:21:31.597558      82 checks.go:131] validating if the service is enabled and active
I1014 18:21:31.703504      82 checks.go:209] validating availability of port 10250
I1014 18:21:31.703815      82 checks.go:209] validating availability of port 2379
I1014 18:21:31.703992      82 checks.go:209] validating availability of port 2380
I1014 18:21:31.704178      82 checks.go:254] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1014 18:21:32.057312      82 checks.go:842] image exists: k8s.gcr.io/kube-apiserver:v1.15.3
I1014 18:21:32.071703      82 checks.go:842] image exists: k8s.gcr.io/kube-controller-manager:v1.15.3
I1014 18:21:32.084891      82 checks.go:842] image exists: k8s.gcr.io/kube-scheduler:v1.15.3
I1014 18:21:32.091814      82 checks.go:842] image exists: k8s.gcr.io/kube-proxy:v1.15.3
I1014 18:21:32.100361      82 checks.go:842] image exists: k8s.gcr.io/pause:3.1
I1014 18:21:32.106837      82 checks.go:842] image exists: k8s.gcr.io/etcd:3.3.10
I1014 18:21:32.114283      82 checks.go:842] image exists: k8s.gcr.io/coredns:1.3.1
I1014 18:21:32.114593      82 kubelet.go:61] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1014 18:21:32.151542      82 kubelet.go:79] Starting the kubelet
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1014 18:21:32.817114      82 certs.go:104] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.3 127.0.0.1 ::1]
I1014 18:21:36.402593      82 certs.go:104] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.3 172.17.0.3 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1014 18:21:37.468092      82 certs.go:104] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I1014 18:21:38.247122      82 certs.go:70] creating a new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1014 18:21:38.446304      82 kubeconfig.go:79] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1014 18:21:38.799561      82 kubeconfig.go:79] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1014 18:21:39.030137      82 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1014 18:21:39.102549      82 kubeconfig.go:79] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1014 18:21:39.263012      82 manifests.go:115] [control-plane] getting StaticPodSpecs
I1014 18:21:39.350543      82 manifests.go:131] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1014 18:21:39.356141      82 manifests.go:115] [control-plane] getting StaticPodSpecs
I1014 18:21:39.379764      82 manifests.go:131] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1014 18:21:39.389670      82 manifests.go:115] [control-plane] getting StaticPodSpecs
I1014 18:21:39.390652      82 manifests.go:131] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1014 18:21:39.405349      82 local.go:60] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I1014 18:21:39.405622      82 waitcontrolplane.go:80] [wait-control-plane] Waiting for the API server to be healthy
I1014 18:21:39.406957      82 loader.go:359] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1014 18:21:39.418315      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:39.924496      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:40.424106      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:40.924056      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:41.423989      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:41.924116      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:42.424104      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:42.924058      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:43.423971      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:43.924120      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:44.424030      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:44.924132      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:45.424130      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:45.937824      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:46.424131      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:46.924095      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:47.424053      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:47.924090      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:48.424072      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:48.924102      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:49.424124      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:49.924065      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:50.424075      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:50.923981      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:51.424053      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:51.924046      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:52.424027      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:52.924088      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:53.424100      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:53.925096      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:54.424117      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:54.924121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:55.430311      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:55.924114      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:56.424090      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:56.924058      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:57.424100      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:57.924114      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:58.424088      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:58.924090      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:59.424102      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:21:59.924043      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:00.426483      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:00.924121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:01.424103      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:01.924039      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:02.424061      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:02.924109      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:03.424084      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:03.924015      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:04.424112      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:04.924128      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:05.423935      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:05.924112      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:06.424093      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:06.924053      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:07.424095      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:07.924067      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:08.424097      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:08.924143      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:09.424113      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:09.924114      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:10.424055      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:10.924135      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:11.424110      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:11.924094      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:12.424070      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:12.930947      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 7 milliseconds
I1014 18:22:13.424082      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:13.949873      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 26 milliseconds
I1014 18:22:14.425145      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:14.924121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:15.424118      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:15.933121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:16.424085      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:16.924198      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:17.424274      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:17.925109      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:18.424108      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:18.924062      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
[kubelet-check] Initial timeout of 40s passed.
I1014 18:22:19.474587      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 24 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1014 18:22:19.928234      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:20.424095      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:20.926108      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:21.424087      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:21.924117      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:22.424099      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:22.924072      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:23.424089      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:23.928696      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:24.424099      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1014 18:22:24.924085      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:25.429034      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:25.924102      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:26.424066      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:26.929100      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:27.424066      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:27.924093      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:28.425166      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:28.924084      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:29.424101      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:29.924069      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:30.424078      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:30.924053      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:31.424133      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:31.924214      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:32.424087      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:32.924092      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:33.424578      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:33.924123      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:34.424114      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1014 18:22:34.924870      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:35.424009      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:35.924089      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:36.424899      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:36.924112      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:37.424071      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:37.924018      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:38.424094      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:38.924068      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:39.424052      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:39.924064      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:40.424096      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:40.924139      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:41.424074      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:41.924102      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:42.426563      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:42.924231      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:43.424119      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:43.925037      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:44.424099      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:44.924115      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:45.424039      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:45.924188      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:46.424103      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:46.924064      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:47.424120      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:47.924043      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:48.424061      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:48.924092      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:49.424124      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:49.924013      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:50.424048      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:50.924097      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:51.424014      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:51.924118      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:52.425079      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:52.924061      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:53.424251      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:53.924106      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:54.424121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1014 18:22:54.924041      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:55.424029      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:55.924109      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:56.424110      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:56.925229      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:57.425302      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:57.924109      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:58.426970      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:58.924046      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:59.424100      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:22:59.924009      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:00.424081      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:00.924127      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:01.424042      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:01.924124      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:02.424137      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:02.930307      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:03.424095      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:03.924145      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:04.424049      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:04.924101      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:05.424126      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:05.924108      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:06.424095      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:06.924051      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:07.424091      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:07.924068      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:08.424067      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:08.924083      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:09.424053      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:09.924093      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:10.424132      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:10.924038      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:11.424141      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:11.924061      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:12.445096      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:12.924152      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:13.424183      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:13.924121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:14.424082      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:14.924121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:15.426411      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 2 milliseconds
I1014 18:23:15.925376      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:16.424088      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:16.924121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:17.424093      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:17.924050      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:18.424049      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:18.932107      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:19.436940      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:19.924270      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:20.424604      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:20.925099      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:21.424134      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:21.924120      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:22.424047      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:22.924071      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:23.424144      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:23.924033      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:24.424124      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:24.924121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:25.428832      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:25.924102      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:26.424110      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:26.924041      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:27.424096      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:27.924133      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:28.424652      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:28.924040      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:29.424117      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:29.924041      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:30.424092      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:30.924104      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:31.424121      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:31.924155      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:32.424078      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:32.924138      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:33.424119      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:33.924118      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
I1014 18:23:34.424120      82 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s  in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster 
 ✗ Starting control-plane 🕹️ 
Error: failed to create cluster: failed to init node with kubeadm: exit status 1
[ERR] Build failed (1), not stopping docker.
```

Can you help me with this issue?
@hoegaarden
Contributor

Hello @rohitsakala.

To try to reproduce that issue, please let me know:

  • the version of concourse you are using
  • where/how your concourse is running (locally on docker via docker-compose, deployed via BOSH release, ...)
  • a minimal pipeline that triggers this problem

Thanks!

@rohitsakala
Author

Hi @hoegaarden,

  1. The version of Concourse is v5.3.0.
  2. Concourse is deployed via a BOSH release [0].
  3. I used the example pipeline from the Readme [2] of the repo; the failed build is at [1].

[0] https://github.com/cloudfoundry-incubator/cf-operator-ci/blob/master/docs/concourse-deployment-steps.md
[1] https://ci.flintstone.cf.cloud.ibm.com/teams/quarks/pipelines/kind-test/jobs/kind/builds/1
[2] https://github.com/pivotal-k8s/kind-on-c#build-and-run-your-own-kubernetes-

@hoegaarden
Contributor

hoegaarden commented Oct 28, 2019

Mh ... one thing I found was that running a cluster inside a task requires quite a lot of resources, so when the workers are not beefy enough and other pipelines are running on the same Concourse, I have seen flakes similar to this.

Could you maybe test with a single-node cluster? You could do that either by setting KIND_CONFIG on your task (see the params sketch after the command below) or with a one-off task like this:

cat <<'EOF' > /tmp/kind.yml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
EOF

KIND_CONFIG="$(</tmp/kind.yml)" KIND_TESTS='kubectl get nodes -o wide' \
  fly -t <flyTarget> execute \
    --config kind.yaml \
    --privileged \
    --inputs-from 'kind-test/kind'
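
For the pipeline route, the same single-node config could probably be passed inline through the task's params (a sketch only; KIND_CONFIG is the kind-on-c task param used above, the surrounding stanza is illustrative):

params:
  # KIND_CONFIG holds the kind cluster config verbatim; a single control-plane
  # node also schedules workloads, so no worker nodes are needed for a smoke test
  KIND_CONFIG: |
    kind: Cluster
    apiVersion: kind.sigs.k8s.io/v1alpha3
    nodes:
    - role: control-plane
  KIND_TESTS: |
    kubectl get nodes -o wide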

Currently, we run the kind-on-c tests only on k8s (here) and it seems pretty stable (all of the recent failed & aborted runs are either fixed in kind-on-c or were infrastructure flakes). We also ran the same pipeline on a BOSH-deployed Concourse for a while, but we didn't see any fundamental difference and decommissioned it recently.

So I am not sure where your issue comes from. In the logs you provided I don't see anything standing out, but it might be useful to inspect the kubelet logs of the failed cluster (something like: docker exec -ti kind-control-plane journalctl -f -u kubelet).
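
From inside the task (or a hijacked container), a non-interactive variant of that might look like this (a sketch; it assumes the kind node container still exists when you run it):

# dump the kubelet unit state and its whole journal instead of following it interactively
docker exec kind-control-plane systemctl status kubelet --no-pager || true
docker exec kind-control-plane journalctl -u kubelet --no-pager > kubelet.log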

@klxfeiyang

klxfeiyang commented Jun 5, 2020

Hi @hoegaarden, I have also encountered this same issue in my Concourse deployment. To jump-start this discussion again, here is some additional context and logging from my environment.

Issue

kind create cluster fails with the following errors. It fails at the control-plane creation step, indicating that the kubelet is not up and running:

✗ Starting control-plane 🕹️
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
...
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Troubleshooting

systemctl indicates that the kubelet process is running:

docker exec -ti kind-control-plane systemctl status
● kind-control-plane
    State: running
     Jobs: 0 queued
   Failed: 0 units
    Since: Thu 2020-06-04 23:25:09 UTC; 2min 45s ago
   CGroup: /docker/9cdfcd0517038a3885423d143b6e23b7038d4fab39654f3c5650a4d35efa0b1c
           ├─539 systemctl status
           ├─544 pager
           ├─init.scope
           │ └─1 /sbin/init
           └─system.slice
             ├─systemd-journald.service
             │ └─69 /lib/systemd/systemd-journald
             ├─kubelet.service
             │ └─524 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock --fail-swap-on=false --node-ip=172.17.0.2 --fail-swap-on=false
             └─containerd.service
               └─79 /usr/local/bin/containerd

This is also indicated by the kubelet's state directory, /var/lib/kubelet, which shows recent modification times:

# stat /var/lib/kubelet
  File: /var/lib/kubelet
  Size: 204       	Blocks: 0          IO Block: 4096   directory
Device: 2000beh/2097342d	Inode: 35912       Links: 1
Access: (0700/drwx------)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2020-06-05 00:08:51.298551513 +0000
Modify: 2020-06-05 00:09:02.118571364 +0000
Change: 2020-06-05 00:09:02.118571364 +0000
 Birth: -
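
To cross-check what kubeadm's kubelet-check was probing, the same health endpoint can be hit from inside the node (a sketch; it assumes curl is available in the node image):

# port and path are taken from the kubelet-check error above
docker exec kind-control-plane curl -sS http://localhost:10248/healthz; echo
# if this still refuses connections while systemd reports the unit as active,
# the kubelet is most likely crash-looping and being restarted
docker exec kind-control-plane systemctl is-active kubelet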

journalctl -f -u kubelet shows the following log snippet:

Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659121   15769 factory.go:170] Factory "raw" can handle container "/docker/fade85e441a38c12f4541f4ede401da21129e7864485d1060f8de7ca811f349b/system.slice/run-r5229eff0ab4f46ce8b0ee03fa4417af3.scope", but ignoring.
Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659130   15769 manager.go:908] ignoring container "/docker/fade85e441a38c12f4541f4ede401da21129e7864485d1060f8de7ca811f349b/system.slice/run-r5229eff0ab4f46ce8b0ee03fa4417af3.scope"
Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659142   15769 factory.go:177] Factory "containerd" was unable to handle container "/docker/fade85e441a38c12f4541f4ede401da21129e7864485d1060f8de7ca811f349b/system.slice/run-r744d16eb23134609a5f07d2fe3a37df7.scope"
Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659156   15769 factory.go:166] Error trying to work out if we can handle /docker/fade85e441a38c12f4541f4ede401da21129e7864485d1060f8de7ca811f349b/system.slice/run-r744d16eb23134609a5f07d2fe3a37df7.scope: /docker/fade85e441a38c12f4541f4ede401da21129e7864485d1060f8de7ca811f349b/system.slice/run-r744d16eb23134609a5f07d2fe3a37df7.scope not handled by systemd handler
Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659160   15769 factory.go:177] Factory "systemd" was unable to handle container "/docker/fade85e441a38c12f4541f4ede401da21129e7864485d1060f8de7ca811f349b/system.slice/run-r744d16eb23134609a5f07d2fe3a37df7.scope"
Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659167   15769 factory.go:170] Factory "raw" can handle container "/docker/fade85e441a38c12f4541f4ede401da21129e7864485d1060f8de7ca811f349b/system.slice/run-r744d16eb23134609a5f07d2fe3a37df7.scope", but ignoring.
Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659175   15769 manager.go:908] ignoring container "/docker/fade85e441a38c12f4541f4ede401da21129e7864485d1060f8de7ca811f349b/system.slice/run-r744d16eb23134609a5f07d2fe3a37df7.scope"
Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659181   15769 factory.go:177] Factory "containerd" was unable to handle container "/docker/fade85e441a38c12f4541f4ede401da21129e7864485d1060f8de7ca811f349b/system.slice/run-re8f9ae6db0a945e99650562669d98d37.scope"
Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659188   15769 factory.go:166] Error trying to work out if we can handle /docker/fade85e441a38c12f4541f4ede401da21129e7864485d1060f8de7ca811f349b/system.slice/run-re8f9ae6db0a945e99650562669d98d37.scope: /docker/fade85e441a38c12f4541f4ede401da21129e7864485d1060f8de7ca811f349b/system.slice/run-re8f9ae6db0a945e99650562669d98d37.scope not handled by systemd handler
Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659195   15769 factory.go:177] Factory "systemd" was unable to handle container "/docker/fade85e441a38c12f4541f4ede401da21129e7864485d1060f8de7ca811f349b/system.slice/run-re8f9ae6db0a945e99650562669d98d37.scope"
Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659203   15769 factory.go:170] Factory "raw" can handle container "/docker/fade85e441a38c12f4541f4ede401da21129e7864485d1060f8de7ca811f349b/system.slice/run-re8f9ae6db0a945e99650562669d98d37.scope", but ignoring.
Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659210   15769 manager.go:908] ignoring container "/docker/fade85e441a38c12f4541f4ede401da21129e7864485d1060f8de7ca811f349b/system.slice/run-re8f9ae6db0a945e99650562669d98d37.scope"
Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659215   15769 factory.go:177] Factory "containerd" was unable to handle container "/system.slice/run-r589264aff0c046e5879f457098db6c92.scope"
Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659221   15769 factory.go:166] Error trying to work out if we can handle /system.slice/run-r589264aff0c046e5879f457098db6c92.scope: /system.slice/run-r589264aff0c046e5879f457098db6c92.scope not handled by systemd handler
Jun 05 01:09:56 kind-control-plane kubelet[15769]: I0605 01:09:56.659224   15769 factory.go:177] Factory "systemd" was unable to handle container "/system.slice/run-r589264aff0c046e5879f457098db6c92.scope

Full log can be found here: https://publicly-exposed.s3-us-west-2.amazonaws.com/exit
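
The "Factory ... was unable to handle container" lines above are logged at info level and are mostly noise; to pull actual failures out of the journal, something along these lines might help (a sketch, not part of the original report):

# keep only error-ish kubelet lines and show the most recent ones
docker exec kind-control-plane journalctl -u kubelet --no-pager | grep -Ei 'error|fail|fatal' | tail -n 50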

Conclusion

It appears that none of the supported container runtimes (systemd, containerd, or raw) can handle creating the necessary containers.
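
One way to sanity-check the cgroup side of this (kubeadm's earlier hint about "required cgroups disabled") would be something like the following (a sketch):

# list the cgroup controllers the kernel exposes and what is mounted inside the node
docker exec kind-control-plane cat /proc/cgroups
docker exec kind-control-plane ls /sys/fs/cgroup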

Reproduce

kind version: kind v0.8.0 go1.14.2 linux/amd64
concourse version: 5.8.0
concourse deployment: https://runway-ci.svc-stage.eng.vmware.com/teams/tkg/pipelines/kindonc
docker info:

docker info
Client:
 Debug Mode: true

Server:
 Containers: 2
  Running: 1
  Paused: 0
  Stopped: 1
 Images: 3
 Server Version: 19.03.9
 Storage Driver: btrfs
  Build Version: Btrfs v4.4
  Library Version: 101
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.4.0-142-generic
 Operating System: Ubuntu 16.04.6 LTS (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.796GiB
 Name: 23b9a397-3424-4d30-7e46-ee78908c227a
 ID: 5ATL:2CW7:AABY:DXVD:7SN3:WTTH:YGE3:S4IH:ZS7E:AAVF:JMFH:QFOY
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 37
  Goroutines: 55
  System Time: 2020-06-05T18:26:08.892984508Z
  EventsListeners: 0
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

git branch: https://github.com/fyangd/kind-on-c/tree/etcd_failure_repro

cc. @mauilion @figo

@hoegaarden
Contributor

@fyangd -- sorry for the late reply.

One thing I found is that kind-on-c / kind had issues on btrfs. A workaround has been implemented in be0268d; with that, kind-on-c generally works on runway. However, especially compared to hush-house, kind-on-c is veeeerry flaky on runway. I didn't get around to digging deeper into why exactly that is.

If you are still interested, can you try to run your test with a recent version of kind-on-c?
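
For reference, whether a worker hits the btrfs code path can be checked up front from inside the task, before kind even runs (a sketch using standard docker info format fields):

# prints e.g. "storage driver: btrfs, cgroup driver: cgroupfs"
docker info --format 'storage driver: {{.Driver}}, cgroup driver: {{.CgroupDriver}}'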

@klxfeiyang

Hey @hoegaarden, thanks for getting back to me.

Yes, I have seen the btrfs fix and tried it on runway. It appears to be failing for some other reason. Good to know that it's very flaky on runway.
