
Minikube didn't start #14477

Closed
arpitmishra-eaton opened this issue Jun 30, 2022 · 49 comments
Labels
kind/support Categorizes issue or PR as a support question. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@arpitmishra-eaton

What Happened?

When running the "minikube start" command on my local machine, I get the error below. I tried minikube delete and start again, but it didn't resolve the issue.

Docker Desktop is installed.

Error: minikube start

  • minikube v1.26.0 on Microsoft Windows 10 Enterprise 10.0.19042 Build 19042
  • Using the docker driver based on existing profile
  • Starting control plane node minikube in cluster minikube
  • Pulling base image ...
  • Updating the running docker "minikube" container ...
    ! This container is having trouble accessing https://k8s.gcr.io
  • To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
  • Preparing Kubernetes v1.24.1 on Docker 20.10.17 ...
    ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
    • Generating certificates and keys ...
    • Booting up control plane ...
      ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
      stdout:
      [init] Using Kubernetes version: v1.24.1
      [preflight] Running pre-flight checks
      [preflight] Pulling images required for setting up a Kubernetes cluster
      [preflight] This might take a minute or two, depending on the speed of your internet connection
      [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
      [certs] Using certificateDir folder "/var/lib/minikube/certs"
      [certs] Using existing ca certificate authority
      [certs] Using existing apiserver certificate and key on disk
      [certs] Using existing apiserver-kubelet-client certificate and key on disk
      [certs] Using existing front-proxy-ca certificate authority
      [certs] Using existing front-proxy-client certificate and key on disk
      [certs] Using existing etcd/ca certificate authority
      [certs] Using existing etcd/server certificate and key on disk
      [certs] Using existing etcd/peer certificate and key on disk
      [certs] Using existing etcd/healthcheck-client certificate and key on disk
      [certs] Using existing apiserver-etcd-client certificate and key on disk
      [certs] Using the existing "sa" key
      [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
      [kubeconfig] Writing "admin.conf" kubeconfig file
      [kubeconfig] Writing "kubelet.conf" kubeconfig file
      [kubeconfig] Writing "controller-manager.conf" kubeconfig file
      [kubeconfig] Writing "scheduler.conf" kubeconfig file
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Starting the kubelet
      [control-plane] Using manifest folder "/etc/kubernetes/manifests"
      [control-plane] Creating static Pod manifest for "kube-apiserver"
      [control-plane] Creating static Pod manifest for "kube-controller-manager"
      [control-plane] Creating static Pod manifest for "kube-scheduler"
      [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
      [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
      [kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0630 04:39:08.115388 20761 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

  • Generating certificates and keys ...
  • Booting up control plane ...

X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0630 04:43:10.333772 22525 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

* If the above advice does not help, please let us know:
  https://github.com/kubernetes/minikube/issues/new/choose

* Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.

X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0630 04:43:10.333772 22525 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Attach the log file

minikubelogs.txt

Operating System

Windows

Driver

Docker

@RA489

RA489 commented Jul 4, 2022

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Jul 4, 2022
@tony-sol
Contributor

tony-sol commented Jul 5, 2022

Looks like I have the same issue, but on macOS.

@laksh2206

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8
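
For reference, a minimal sketch of that workaround (the version string is taken from the comment above; deleting the existing profile first is an added assumption, since a stale profile may otherwise be reused):

# remove the existing profile, then start with a pinned Kubernetes version
minikube delete
minikube start --kubernetes-version=v1.23.8
# check that the cluster came up
minikube status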

@Apocaly-pse

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8

it helps me~ thx~~

@paymanfu

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8

This solved my problem.

@jeeinn

jeeinn commented Jul 15, 2022

my logs.
minikubelogs.log

@maslke

maslke commented Jul 22, 2022

minikube version: v1.26.0 on Ubuntu 20.04.4 LTS, same issue.
This saved me.

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8

@cityiron

good job.

@Mxiaoyu

Mxiaoyu commented Jul 26, 2022

macOS, same issue, fixed!

@OronDF343

The same issue still exists on minikube v1.26.1; the same fix of using the older Kubernetes v1.23.8 worked for me.

Tested on WSL2 (on both Debian 11.4 and Ubuntu 22.04 LTS, using a shared Docker socket running on Debian), and also the Hyper-V driver on Windows.
Host is Windows 10 21H2, connected to a corporate proxy (Windows authentication). Proxy environment variables configured correctly according to documentation.

Has any contributor been able to reproduce this issue? It seems to be widespread judging by the number of reactions here and the number of similar/duplicate issues.

@l6l6ng

l6l6ng commented Aug 5, 2022

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8

it helps me!

@Drongerman

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8

It works! Thanks!

@lunkaleung

it really helps!

@BnkTCh

BnkTCh commented Aug 11, 2022

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8

minikube can run, but now it cannot pull any image.
I tried to run kubectl run nignx --image=nginx and got Error: ImagePullBackOff.
Does somebody know what could be wrong?

@l6l6ng

l6l6ng commented Aug 11, 2022

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.
minikube start --kubernetes-version=v1.23.8

minikube can run but now it can not pull any image. I tried to run kubectl run nignx --image=nginx and I had Error: ImagePullBackOff Somebody knows what could be wrong?

minikube ssh
docker search nginx
then pull the nginx image?
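
A quick sketch of that check (commands as in the comment above; the first is run on the host, the rest inside the minikube node):

minikube ssh                # open a shell inside the minikube node
docker search nginx         # should return results if the node can reach Docker Hub
docker pull nginx           # pre-pull the image so the pod no longer has to
exit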

@BnkTCh

BnkTCh commented Aug 13, 2022

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.
minikube start --kubernetes-version=v1.23.8

minikube can run but now it can not pull any image. I tried to run kubectl run nignx --image=nginx and I had Error: ImagePullBackOff Somebody knows what could be wrong?

minikube ssh docker search nginx then pull the nginx image?

No, it can't pull any image; minikube is not reaching the internet and I don't know why :/

bianca@kubernetes:~$ minikube ssh
docker@minikube:~$ docker search nginx
Error response from daemon: Get "https://index.docker.io/v1/search?q=nginx&n=25": dial tcp 44.206.182.67:443: i/o timeout
docker@minikube:~$ docker pull nginx
Using default tag: latest
Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

@Apocaly-pse

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.
minikube start --kubernetes-version=v1.23.8

minikube can run but now it can not pull any image. I tried to run kubectl run nignx --image=nginx and I had Error: ImagePullBackOff Somebody knows what could be wrong?

minikube ssh docker search nginx then pull the nginx image?

No, it can't pull any image; minikube is not reaching the internet and I don't know why :/

bianca@kubernetes:~$ minikube ssh
docker@minikube:~$ docker search nginx
Error response from daemon: Get "https://index.docker.io/v1/search?q=nginx&n=25": dial tcp 44.206.182.67:443: i/o timeout
docker@minikube:~$ docker pull nginx
Using default tag: latest
Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Maybe you need a China mirror: add "registry-mirrors": ["https://registry.docker-cn.com"] to ~/.docker/daemon.json.
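
A sketch of that change (mirror URL and file path as given in the comment; on many Linux installs the daemon config lives at /etc/docker/daemon.json instead, and Docker Desktop edits the same JSON under Settings > Docker Engine):

# write the registry mirror into Docker's daemon config
# note: this overwrites any existing daemon.json, so merge by hand if you already have one
cat <<'EOF' > ~/.docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
# restart Docker so the setting takes effect (systemd example)
sudo systemctl restart docker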

@medyagh
Member

medyagh commented Aug 31, 2022

It is possible that you are upgrading the Kubernetes version on top of an older minikube ISO/base image. I wish we could detect this on the minikube side so the user would be aware, but do you all mind trying

minikube delete --all --purge

and then starting minikube to see if it works?
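
A minimal sketch of that reset (commands as suggested above; note that --purge also removes the cached images and ISOs under ~/.minikube, so the next start re-downloads them):

minikube delete --all --purge   # delete every profile plus the ~/.minikube cache
minikube start                  # recreate the cluster from a fresh base image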

@ZoeLeee

ZoeLeee commented Sep 3, 2022

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8

it helps me!

@liangjihua

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8

Works for me! Thanks!

@xakoy

xakoy commented Sep 20, 2022

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8

666

@organics2016

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8

minikube v1.27.0 on Ubuntu 18.04 (vbox/amd64)
It works! Thanks!

@gxcuit

gxcuit commented Sep 26, 2022

same issue

@BassMonkey

I would like to point out that using an older kubernetes version is a workaround and not a fix. @arpitmishra-eaton can you change this to a bug instead of a support question?

Or should we open a new bug for this?

@tweaknc

tweaknc commented Feb 3, 2023

This fixed my issue on Ubuntu 22.04 with minikube 1.29.

@wbcangus

wbcangus commented Feb 9, 2023

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8

It works for me

@h4rv9y

h4rv9y commented Feb 10, 2023

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8

It works for me. Arch Linux with minikube 1.28.0

@MustacheXb

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8

Hope it helps me!!!

@bobbyz007

I would like to point out that using an older kubernetes version is a workaround and not a fix. @arpitmishra-eaton can you change this to a bug instead of a support question?

Or should we open a new bug for this?

So has it been submitted as a bug to track?

@dusatvoj

dusatvoj commented Mar 1, 2023

minikube start --container-runtime=containerd

@velvetzhero

minikube start --container-runtime=containerd is working on Ubuntu 22.04 / Docker 23.0.1.
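
A sketch of that variant plus a quick check that the runtime actually switched (the verification step is an added suggestion, not from the comments above):

minikube start --container-runtime=containerd
kubectl get nodes -o wide       # the CONTAINER-RUNTIME column should report containerd://...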

@KaushikRathva

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.
minikube start --kubernetes-version=v1.23.8

Fixed my problem and it works.

Works like a charm.

@Palak-1202

minikube version: v1.26.0 on Ubuntu 20.04.4 LTS, same issue. This saved me.

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.
minikube start --kubernetes-version=v1.23.8

Same here, using the latest version v1.25.2 caused the issue but it worked fine with v1.23.8

@timoooo

timoooo commented Mar 28, 2023

Same issue here with 1.26.1. It worked with v1.23.8 on RHEL 9.1.

@KirylMi

KirylMi commented Mar 30, 2023

minikube version: v1.26.0 on Ubuntu 20.04.4 LTS, same issue. This saved me.

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.
minikube start --kubernetes-version=v1.23.8

Thanks a lot!

@ayeshasiddiqs7

Using a specific version of kubectl worked for me. Apparently, Kubernetes version 1.24 was causing some issues for me.

minikube start --kubernetes-version=v1.23.8

Thanks a lot! Faced the same issue, this helped!

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 8, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 19, 2024
@OronDF343

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jan 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 18, 2024
@mhellmeier

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 17, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 15, 2024
@laksh2206

/close

@k8s-ci-robot
Contributor

@laksh2206: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
