
Can't start minikube after initial install (could not unmarshal the JSON output of 'docker info') #11174

Open
SteveBisnett opened this issue Apr 23, 2021 · 20 comments
Labels
  • co/docker-driver: Issues related to kubernetes in container
  • kind/support: Categorizes issue or PR as a support question.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • long-term-support: Long-term support issues that can't be fixed in code
  • os/linux

Comments

SteveBisnett commented Apr 23, 2021

Steps to reproduce the issue:

Minikube version: 1.18.1 (need to use this version as AWX has a bug related to 1.19)
Docker version: 19.03.15, build 99e3ed8919

  1. Followed the directions to install minikube and Docker.
  2. Executed minikube start --driver=docker.
  3. Received the following output; minikube won't start.

Full output of failed command:
[ansible@control-plane ~]$ minikube start

  • minikube v1.18.1 on Centos 8.3.2011
  • Using the none driver based on existing profile
  • Starting control plane node minikube in cluster minikube
  • Restarting existing none bare metal machine for "minikube" ...
  • OS release is CentOS Linux 8
  • Preparing Kubernetes v1.20.2 on Docker ...
    ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1
    stdout:
    [init] Using Kubernetes version: v1.20.2
    [preflight] Running pre-flight checks
    [preflight] The system verification failed. Printing the output from the verification:
    KERNEL_VERSION: 4.18.0-240.22.1.el8_3.x86_64
    CONFIG_NAMESPACES: enabled
    CONFIG_NET_NS: enabled
    CONFIG_PID_NS: enabled
    CONFIG_IPC_NS: enabled
    CONFIG_UTS_NS: enabled
    CONFIG_CGROUPS: enabled
    CONFIG_CGROUP_CPUACCT: enabled
    CONFIG_CGROUP_DEVICE: enabled
    CONFIG_CGROUP_FREEZER: enabled
    CONFIG_CGROUP_SCHED: enabled
    CONFIG_CPUSETS: enabled
    CONFIG_MEMCG: enabled
    CONFIG_INET: enabled
    CONFIG_EXT4_FS: enabled (as module)
    CONFIG_PROC_FS: enabled
    CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
    CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
    CONFIG_OVERLAY_FS: enabled (as module)
    CONFIG_AUFS_FS: not set - Required for aufs.
    CONFIG_BLK_DEV_DM: enabled (as module)
    OS: Linux
    CGROUPS_CPU: enabled
    CGROUPS_CPUACCT: enabled
    CGROUPS_CPUSET: enabled
    CGROUPS_DEVICES: enabled
    CGROUPS_FREEZER: enabled
    CGROUPS_MEMORY: enabled
    CGROUPS_PIDS: enabled
    CGROUPS_HUGETLB: enabled

stderr:
[WARNING IsDockerSystemdCheck]: detected "" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR SystemVerification]: could not unmarshal the JSON output of 'docker info':

: unexpected end of JSON input
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

I have verified that docker is running:

[ansible@control-plane ~]$ sudo docker version
Client: Docker Engine - Community
Version: 19.03.15
API version: 1.40
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:16:44 2021
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.15
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:15:19 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.4
GitCommit: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc:
Version: 1.0.0-rc93
GitCommit: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
docker-init:
Version: 0.18.0
GitCommit: fec3683
[ansible@control-plane ~]$ sudo docker info
Client:
Debug Mode: false

Server:
Containers: 8
Running: 0
Paused: 0
Stopped: 8
Images: 8
Server Version: 19.03.15
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.18.0-240.22.1.el8_3.x86_64
Operating System: CentOS Linux 8
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 15.46GiB
Name: control-plane.minikube.internal
ID: EW3X:QRSM:A5XC:2HFJ:CNQP:2H3K:2TE4:7CJL:XUZJ:E37A:3LMN:35TR
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

[ansible@control-plane ~]$ minikube version
minikube version: v1.18.1
commit: 09ee84d
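
For anyone hitting the same preflight failure: kubeadm's SystemVerification check parses the output of docker info as JSON, so an empty result yields exactly "unexpected end of JSON input". A quick way to see what the check sees (a sketch, assuming kubeadm invokes the --format '{{json .}}' form of the command):

$ docker info --format '{{json .}}' > /tmp/docker-info.json; echo "exit: $?"
$ python3 -m json.tool < /tmp/docker-info.json > /dev/null \
    && echo "valid JSON" || echo "empty or invalid JSON"

If the first command prints nothing but still exits 0, that reproduces the preflight error above.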

@SteveBisnett SteveBisnett changed the title Can't start minikube after initial install 18.0.1 Can't start minikube after initial install Apr 24, 2021
@SteveBisnett SteveBisnett changed the title Can't start minikube after initial install Can't start minikube after initial install (could not unmarshal the JSON output of 'docker info') Apr 24, 2021
medyagh (Member) commented Apr 25, 2021

@SteveBisnett do you mind sharing the output of:
minikube logs

Alternatively, I am curious whether this flag helps you:
minikube start --force-systemd

Is this running inside a VM, or inside another container?

If this is running inside a container, one option would be using the none driver.
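
The IsDockerSystemdCheck warning in the original report can also be addressed on the Docker side. A minimal sketch of forcing the systemd cgroup driver, assuming the standard /etc/docker/daemon.json location (this overwrites an existing file, so merge by hand if you already have one):

$ sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
$ sudo systemctl restart docker
$ docker info --format '{{.CgroupDriver}}'   # expect "systemd"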

medyagh (Member) commented Apr 25, 2021

The original error comes from kubeadm init:

[ERROR SystemVerification]: could not unmarshal the JSON output of 'docker info':

: unexpected end of JSON input

Another thing to try would be the containerd runtime. Would this help?

minikube delete --all
minikube start --container-runtime=containerd

SteveBisnett (Author) commented

> The original error comes from kubeadm init:
>
> [ERROR SystemVerification]: could not unmarshal the JSON output of 'docker info':
>
> : unexpected end of JSON input
>
> Another thing to try would be the containerd runtime. Would this help?
>
> minikube delete --all
> minikube start --container-runtime=containerd

So I have attempted to start with --driver=none, since this is a VM, and I get the same results. It is as though Docker is not running, despite my being able to check its status and run "Hello World".

Here is the output of the --container-runtime=containerd command

[root@control-plane ~]# minikube start --container-runtime=containerd

  • minikube v1.18.1 on Centos 8.3.2011
  • Using the docker driver based on user configuration

X Exiting due to PROVIDER_DOCKER_NOT_RUNNING: expected version string format is "-". but got

afbjorklund (Collaborator) commented Apr 26, 2021

> X Exiting due to PROVIDER_DOCKER_NOT_RUNNING: expected version string format is "-". but got

Can you post the output of docker version --format "{{.Server.Os}}-{{.Server.Version}}" ?

SteveBisnett (Author) commented

> X Exiting due to PROVIDER_DOCKER_NOT_RUNNING: expected version string format is "-". but got
>
> Can you post the output of docker version --format "{{.Server.Os}}-{{.Server.Version}}"?

[root@control-plane ~]# sudo docker version
Client: Docker Engine - Community
Version: 19.03.15
API version: 1.40
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:16:44 2021
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.15
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:15:19 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.4
GitCommit: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc:
Version: 1.0.0-rc93
GitCommit: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
docker-init:
Version: 0.18.0
GitCommit: fec3683

afbjorklund (Collaborator) commented Apr 26, 2021

Without the sudo.

Something like:

$ docker version --format "{{.Server.Os}}-{{.Server.Version}}"
linux-20.10.6

SteveBisnett (Author) commented

> Without the sudo.

I can't. Despite following the instructions in "Manage Docker as a non-root user" (https://docs.docker.com/engine/install/linux-postinstall/), it only responds when I use sudo.
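
A common pitfall with that guide: group membership is only evaluated at login, so adding yourself to the docker group has no effect on already-open sessions. The usual sequence, assuming the docker group was created by the package install:

$ sudo usermod -aG docker $USER
$ newgrp docker        # or log out and back in
$ id -nG               # should now include "docker"
$ docker version --format "{{.Server.Os}}-{{.Server.Version}}"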

@afbjorklund afbjorklund added co/docker-driver Issues related to kubernetes in container kind/support Categorizes issue or PR as a support question. os/linux labels Apr 26, 2021
afbjorklund (Collaborator) commented Apr 26, 2021

minikube is supposed to be able to detect the Docker error, so for some reason we get an "OK" exit code, but no output?

Possibly we need to look out for empty ("") results from docker version and docker info, but I don't think that has been seen before.

SteveBisnett (Author) commented

Here is the output of 'docker info'... Of course with SUDO:

[root@control-plane ~]# sudo docker info
Client:
Debug Mode: false

Server:
Containers: 9
Running: 0
Paused: 0
Stopped: 9
Images: 8
Server Version: 19.03.15
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.18.0-240.22.1.el8_3.x86_64
Operating System: CentOS Linux 8
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 15.46GiB
Name: control-plane.minikube.internal
ID: EW3X:QRSM:A5XC:2HFJ:CNQP:2H3K:2TE4:7CJL:XUZJ:E37A:3LMN:35TR
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

WARNING: API is accessible on http://127.0.0.1:2375 without encryption.
Access to the remote API is equivalent to root access on the host. Refer
to the 'Docker daemon attack surface' section in the documentation for
more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface

afbjorklund (Collaborator) commented

> Here is the output of 'docker info'... Of course with SUDO:

We don't use sudo for docker, only for running podman...

It is kind of arbitrary, and some people prefer using "sudo docker" over adding their user to a root-equivalent group.

But it is a common setup: https://docs.docker.com/engine/install/linux-postinstall/ (sudo usermod -aG docker $USER)

What are the output and exit code of running docker without it?
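
For example, something along these lines, which captures both the output and the exit code and also shows which group owns the socket:

$ docker info; echo "exit code: $?"
$ ls -l /var/run/docker.sock   # the group should be "docker"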

afbjorklund (Collaborator) commented Apr 27, 2021

Anyway, I can't reproduce this.

Here is what I get, after downgrading Docker from 20.10 to 19.03:

[admin@localhost ~]$ more /etc/redhat-release
CentOS Linux release 8.3.2011
[admin@localhost ~]$ docker version
Client: Docker Engine - Community
 Version:           19.03.15
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        99e3ed8919
 Built:             Sat Jan 30 03:16:44 2021
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.15
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       99e3ed8919
  Built:            Sat Jan 30 03:15:19 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.9
  GitCommit:        ea765aba0d05254012b0b9e595e995c09186427f
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

https://docs.docker.com/engine/install/centos/

yum install docker-ce-19.03.15 docker-ce-cli-19.03.15 containerd.io-1.3.9


Here is the expected output, from a non-admin (unprivileged) user:

[luser@localhost ~]$ docker version
Client: Docker Engine - Community
 Version:           19.03.15
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        99e3ed8919
 Built:             Sat Jan 30 03:16:44 2021
 OS/Arch:           linux/amd64
 Experimental:      false
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/version: dial unix /var/run/docker.sock: connect: permission denied
[luser@localhost ~]$ echo $?
1

Running docker requires* the user to have admin/docker/root privileges.

* Except for rootless mode, which isn't yet supported in minikube.

SteveBisnett (Author) commented

> > Here is the output of 'docker info'... Of course with SUDO:
>
> We don't use sudo for docker, only for running podman...
>
> It is kind of arbitrary, and some people prefer using "sudo docker" over adding their user to a root-equivalent group.
>
> But it is a common setup: https://docs.docker.com/engine/install/linux-postinstall/ (sudo usermod -aG docker $USER)
>
> What are the output and exit code of running docker without it?

So, I already executed that command, but running docker info without sudo shows this:

[root@control-plane ~]# sudo usermod -aG docker $USER
[root@control-plane ~]# docker info
[root@control-plane ~]#
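
Empty output with a zero exit code often means the client is talking to an unexpected endpoint rather than failing outright. Given the earlier warning about the API listening on 127.0.0.1:2375, some things to check (a sketch; /info is the standard Docker Engine API route):

$ env | grep -i docker     # is DOCKER_HOST set, e.g. to tcp://127.0.0.1:2375?
$ docker context ls        # contexts exist since Docker 19.03
$ curl -s http://127.0.0.1:2375/info | head -c 200; echo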

afbjorklund (Collaborator) commented Apr 28, 2021

So you get these issues inside, when you run the commands with minikube ssh on the node?

And not outside on the host, as part of the verification before running the minikube start command?

As you are running as root (and not as a "docker"-group $USER) here, it should not be about permissions.

Still trying to duplicate this. Why is it running as "root", and where did the "control-plane" hostname come from?

SteveBisnett (Author) commented

I get these when accessing the console directly and logging in as root.

Based upon your last posts, I reinstalled Docker and, after rebooting the system, used sudo -i and attempted to start minikube with the following command: minikube start --driver=none. This time I received a different response, but the cluster still did not start up...

[root@control-plane ~]# minikube start --driver=none

  • minikube v1.18.1 on Centos 8.3.2011
  • Using the none driver based on user configuration
  • Starting control plane node minikube in cluster minikube
  • Running on localhost (CPUs=2, Memory=15833MB, Disk=495606MB) ...
  • OS release is CentOS Linux 8
  • Preparing Kubernetes v1.20.2 on Docker 19.03.15 ...
    • Generating certificates and keys ...

    • Booting up control plane ...
      ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1
      stdout:
      [init] Using Kubernetes version: v1.20.2
      [preflight] Running pre-flight checks
      [preflight] Pulling images required for setting up a Kubernetes cluster
      [preflight] This might take a minute or two, depending on the speed of your internet connection
      [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
      [certs] Using certificateDir folder "/var/lib/minikube/certs"
      [certs] Using existing ca certificate authority
      [certs] Using existing apiserver certificate and key on disk
      [certs] Generating "apiserver-kubelet-client" certificate and key
      [certs] Generating "front-proxy-ca" certificate and key
      [certs] Generating "front-proxy-client" certificate and key
      [certs] Generating "etcd/ca" certificate and key
      [certs] Generating "etcd/server" certificate and key
      [certs] etcd/server serving cert is signed for DNS names [control-plane.minikube.internal localhost] and IPs [172.30.228.212 127.0.0.1 ::1]
      [certs] Generating "etcd/peer" certificate and key
      [certs] etcd/peer serving cert is signed for DNS names [control-plane.minikube.internal localhost] and IPs [172.30.228.212 127.0.0.1 ::1]
      [certs] Generating "etcd/healthcheck-client" certificate and key
      [certs] Generating "apiserver-etcd-client" certificate and key
      [certs] Generating "sa" key and public key
      [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
      [kubeconfig] Writing "admin.conf" kubeconfig file
      [kubeconfig] Writing "kubelet.conf" kubeconfig file
      [kubeconfig] Writing "controller-manager.conf" kubeconfig file
      [kubeconfig] Writing "scheduler.conf" kubeconfig file
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Starting the kubelet
      [control-plane] Using manifest folder "/etc/kubernetes/manifests"
      [control-plane] Creating static Pod manifest for "kube-apiserver"
      [control-plane] Creating static Pod manifest for "kube-controller-manager"
      [control-plane] Creating static Pod manifest for "kube-scheduler"
      [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
      [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
      [kubelet-check] Initial timeout of 40s passed.
      [kubelet-check] It seems like the kubelet isn't running or healthy.
      [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
      [kubelet-check] It seems like the kubelet isn't running or healthy.
      [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
      [kubelet-check] It seems like the kubelet isn't running or healthy.
      [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
      [kubelet-check] It seems like the kubelet isn't running or healthy.
      [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
      [kubelet-check] It seems like the kubelet isn't running or healthy.
      [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

      Unfortunately, an error has occurred:
              timed out waiting for the condition
      
      This error is likely caused by:
              - The kubelet is not running
              - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
      
      If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
              - 'systemctl status kubelet'
              - 'journalctl -xeu kubelet'
      
      Additionally, a control plane component may have crashed or exited when started by the container runtime.
      To troubleshoot, list all containers using your preferred container runtimes CLI.
      
      Here is one example how you may list all Kubernetes containers running in docker:
              - 'docker ps -a | grep kube | grep -v pause'
              Once you have found the failing container, you can inspect its logs with:
              - 'docker logs CONTAINERID'
      

stderr:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

  • Generating certificates and keys ...
  • Booting up control plane ...

X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1
stdout:

#########################################################

Minikube attempted 3 times to reach the kubelet but was never successful. It errored out with the following:

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
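
The kubeadm output itself suggests the next diagnostic step: inspect the kubelet directly, for example:

$ sudo systemctl status kubelet --no-pager
$ sudo journalctl -u kubelet --no-pager | tail -n 50
$ sudo docker ps -a | grep kube | grep -v pause   # any crashed control-plane containers?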

afbjorklund (Collaborator) commented Apr 28, 2021

The none driver is very different from the docker driver.

For instance, you need to remember to disable SELinux and Firewalld.

https://minikube.sigs.k8s.io/docs/drivers/none/

It also doesn't see much testing in CI on Fedora or CentOS; see #3552.
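
Concretely, that preparation usually looks something like this on CentOS (a sketch; re-enable or relax these to match your security requirements):

$ sudo setenforce 0                 # SELinux permissive for the current boot
$ sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
$ getenforce                        # should print "Permissive"
$ sudo systemctl disable --now firewalld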

SteveBisnett (Author) commented

FirewallD is offline and disabled.

This is running in a VM, and it was recommended to run it using the none driver. Starting with the Docker driver again, I am still getting the same errors as before.

afbjorklund (Collaborator) commented Apr 28, 2021

> This is running in a VM, and it was recommended to run it using the none driver.

Sure, either should work. It can just be a bit hard to follow when mixing drivers...
I still don't know what configuration would lead to docker outputting nothing.

But this part is a bit strange; it makes you wonder what else was modified:
WARNING: API is accessible on http://127.0.0.1:2375 without encryption.


If I enable SELinux again (setenforce 1), then I get the same kind of timeout.

This is why it is a suspect; enabling firewalld, by contrast, did produce a proper warning message.
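
To track down where that TCP listener was configured, one could check the daemon's unit file and drop-ins (a sketch):

$ systemctl cat docker | grep -i ExecStart
$ grep -rs 'tcp://' /etc/docker /etc/systemd/system/docker.service.d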

afbjorklund (Collaborator) commented

But at least I could reproduce the bug where the none driver sets the hostname...

@andriyDev andriyDev added the long-term-support Long-term support issues that can't be fixed in code label Jun 23, 2021
k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 21, 2021
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 21, 2021
@spowelljr spowelljr added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Nov 3, 2021