[BUG] Not all Pods get restarted after Secret Change #701
But sometimes a pod doesn't restart, so 17 or 18 have restarted and 1-2 have NOT, while the log says all 20 restarted. There is no error log indicating that something failed.
Is the restarting done in a fire-and-forget approach, so that requests are lost if the API server has issues, or is there some ACK/retry involved?
The pods which are not restarted — are they the same ones every time, or random?
Very random. We have watched this over the last 3-4 weeks; sometimes it is deployment 17, next time deployment 3. Tried it also with the latest version — it is still there.
Facing the same issue.
Any more information about what values are being used to install Reloader?
+1 Facing the same issue. |
We are using the Reloader Helm chart. Chart.yaml
values.yaml
I will try to replicate the load and the issue. Meanwhile, can you tell us how the apps are being deployed? Is there any CD tool in the picture?
Yes, we are using Argo CD to install the Reloader Helm chart.
I was asking about the applications which are reloaded 😃 Assuming they are deployed via Argo CD too, since you are using it anyway. Have you tried switching the reload strategy?
You mean to switch the reload strategy? What exactly is the difference? On the akhq pods we are already using the reloader.stakater.com/auto annotation, and this works, but sometimes, at random, not all pods get rolled. In my opinion this could be a Kubernetes client problem in how the restart is performed in Reloader, but I didn't find where this is done.
yes
That strategy, instead of updating an environment variable in the pod template, updates an annotation on it. I will try to replicate the workload on our clusters and test this soon.
Will also test setting that. If Reloader has API server issues, would this be logged, and if yes, at which log level?
Since you are getting proper update logs for all of the deployments, I'd like to believe that it's not an API server issue; if it were, Reloader should log errors when a deployment's state could not be changed for some reason.
I would like to add a line to the documentation: when using Argo CD, set this flag.
https://github.com/stakater/Reloader/blob/master/README.md#reload-strategies — OK, Argo CD is already covered in the docs.
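For reference, a minimal sketch of what switching the strategy could look like in the Helm chart's values file. The key name `reloader.reloadStrategy` is an assumption based on the chart's conventions; check the chart's own values.yaml before using it. It maps to the controller's `--reload-strategy` flag described in the README section linked above.

```yaml
# values.yaml (sketch; key name assumed, verify against the chart)
reloader:
  # "env-vars" (default): Reloader injects/updates an env var in the
  # pod template to trigger the rollout.
  # "annotations": Reloader updates a pod-template annotation instead,
  # which tends to play nicer with GitOps tools like Argo CD.
  reloadStrategy: annotations
```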
Describe the bug
We have about 20 Deployments in our cluster.
All of them have the annotation reloader.stakater.com/auto: "true" on the Deployment.
These are all AKHQ Deployments with Kafka Secrets.
Every 5 days the Secrets get changed (all at the same time).
Sometimes only 17 or 18 of them get restarted by Reloader. The log looks like this, without any error:
level=info msg="Changes detected in 'root-ca-cert-truststore' of type 'SECRET' in namespace 'test1', Updated 'akhq' of type 'Deployment' in namespace 'test1'"
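For context, the setup described above would look roughly like this on each Deployment (a minimal sketch; the name and namespace are taken from the log line above, everything under spec is placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: akhq
  namespace: test1
  annotations:
    # Reloader watches every ConfigMap and Secret referenced by this
    # Deployment and triggers a rolling restart when any of them changes.
    reloader.stakater.com/auto: "true"
spec:
  # ... regular Deployment spec referencing the Kafka Secrets ...
```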
To Reproduce
How can I debug this deeper?
Expected behavior
All pods get restarted.
Environment