
Ingress-Controller rolling updates stuck on Pending state when a patch is added #12903

Closed
alexbaeza opened this issue Nov 9, 2021 · 4 comments · Fixed by #13439

Comments


alexbaeza commented Nov 9, 2021

It seems like minikube v1.24 broke rolling updates for the ingress controller.

Steps to reproduce the issue:

  1. Start a new cluster with the ingress addon enabled, for example:
minikube start \
     --memory=8192 \
     --cpus=2 \
     --driver=hyperkit \
     --addons=ingress
  2. Apply ingress patches:
    ./ingress/ingress-patch.yaml
spec:
  template:
    spec:
      containers:
        - name: controller
          ports:
            - containerPort: 5432
              hostPort: 5432

kubectl patch configmap tcp-services -n ingress-nginx --patch '{"data":{"5432":"some-namespace/shared-database:5432"}}'

kubectl patch deployment ingress-nginx-controller --patch "$(cat ./ingress/ingress-patch.yaml)" -n ingress-nginx
  3. When listing the pods you should see two ingress controller pods, with the newest one stuck on Pending due to:
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  31s   default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
14:14:51 ❯ kubectl get pods -n ingress-nginx 
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create--1-hzczf     0/1     Completed   0          113s
ingress-nginx-admission-patch--1-qpv42      0/1     Completed   1          113s
ingress-nginx-controller-5c58864bc8-5ckl6   0/1     Pending     0          11s
ingress-nginx-controller-5f66978484-qghkc   1/1     Running     0          112s
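The Pending pod is expected behavior given the patch above: hostPort 5432 is already held by the running controller pod on the single minikube node, so the default RollingUpdate strategy, which keeps the old pod alive while the replacement starts, can never schedule the new pod. A quick way to confirm the hostPort conflict (a sketch, assuming the ingress-nginx namespace used above):

```shell
# List each ingress-nginx pod together with any hostPorts its containers
# request; two pods requesting the same hostPort cannot be scheduled on
# the same (single) minikube node.
kubectl get pods -n ingress-nginx \
  -o custom-columns='NAME:.metadata.name,HOSTPORTS:.spec.containers[*].ports[*].hostPort'
```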

alexbaeza (Author) commented:

I was able to fix this by adding a rollout strategy to the deployment patch and re-applying:
./ingress/ingress-patch.yaml

spec:
  strategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    spec:
      containers:
        - name: controller
          ports:
            - containerPort: 5432
              hostPort: 5432

kubectl patch deployment ingress-nginx-controller --patch "$(cat ./ingress/ingress-patch.yaml)" -n ingress-nginx

I have raised a proposed PR to fix this:
#12904

medyagh (Member) commented Nov 9, 2021

@alexbaeza I am curious, is this caused by the new Kubernetes version?

Does this happen with the newest minikube and an old Kubernetes version? You can specify one with the "--kubernetes-version" flag to the "start" command.

And do you mind pasting a "working" output from a previous version too?

btalbot commented Nov 12, 2021

Another work-around is to scale deploy/ingress-nginx-controller to zero replicas, apply the patches, and scale back up.
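The scale-down work-around can be sketched as follows (assuming the ingress-nginx namespace and the patch file from earlier in the thread):

```shell
# Scale the controller to zero so the existing pod releases hostPort 5432.
kubectl scale deployment ingress-nginx-controller -n ingress-nginx --replicas=0

# Apply the patch while no pod is holding the port.
kubectl patch deployment ingress-nginx-controller -n ingress-nginx \
  --patch "$(cat ./ingress/ingress-patch.yaml)"

# Scale back up; the new pod can now bind the hostPort.
kubectl scale deployment ingress-nginx-controller -n ingress-nginx --replicas=1
```

Note this work-around incurs brief ingress downtime while the replica count is zero, unlike the maxUnavailable approach above.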

alexbaeza (Author) commented:

> @alexbaeza I am curious, is this caused by the new Kubernetes version?
>
> Does this happen with the newest minikube and an old Kubernetes version? You can specify one with the "--kubernetes-version" flag to the "start" command.
>
> And do you mind pasting a "working" output from a previous version too?

Hi, thanks for your comments. Unfortunately, I am not entirely sure if it is because of the new Kubernetes version; I will need to dig further to investigate, but I can confirm it was working on versions before 1.24.
I've mostly been using 1.18, which I can confirm also worked as expected.

> Another work-around is to scale deploy/ingress-nginx-controller to zero replicas, apply the patches, and scale back up.

Indeed, that'll work as well; a Recreate strategy will also work, as suggested here:
#12904 (comment)
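The Recreate alternative could look like the following sketch: it deletes the old pod (releasing hostPort 5432) before scheduling the new one, at the cost of brief downtime. Note that when switching away from RollingUpdate via a strategic merge patch, the existing rollingUpdate block must be explicitly cleared with null:

```shell
# Switch the deployment to the Recreate strategy so old pods are removed
# before new ones are created; clear rollingUpdate, which is only valid
# alongside the RollingUpdate type.
kubectl patch deployment ingress-nginx-controller -n ingress-nginx \
  --patch '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'
```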
