Reflector not watching secrets after period of time (still happening) #467
Comments
Hello. Could someone look into this problem?
Bump.
Same for me.
Hello. Could we get a status update on this behaviour, please? Best regards.
We started to observe the same issue once our cluster reached 5k secrets across all namespaces. My guess is that the Kubernetes API paginates the responses to ListSecretForAllNamespacesAsync and the code is not handling the pagination (see the sketch below).
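To illustrate the hypothesis, here is a minimal sketch of what handling the list pagination could look like with the official C# Kubernetes client. This is an assumption about the client's generated API (the `CoreV1` grouping, the `limit`/`continueParameter` arguments, and the `Metadata.ContinueProperty` field), not Reflector's actual code:

```csharp
// Hypothetical sketch: list all secrets across namespaces while following the
// "continue" token, so items beyond the server's page size are not silently dropped.
// Not the actual Reflector implementation.
using System.Collections.Generic;
using System.Threading.Tasks;
using k8s;
using k8s.Models;

public static class PaginatedSecretLister
{
    public static async Task<List<V1Secret>> ListAllSecretsAsync(IKubernetes client)
    {
        var secrets = new List<V1Secret>();
        string? continueToken = null;

        do
        {
            // Request one page at a time; the server may cap the page size anyway.
            var page = await client.CoreV1.ListSecretForAllNamespacesAsync(
                limit: 500,
                continueParameter: continueToken);

            secrets.AddRange(page.Items);

            // A non-empty continue token means more pages remain.
            continueToken = page.Metadata?.ContinueProperty;
        }
        while (!string.IsNullOrEmpty(continueToken));

        return secrets;
    }
}
```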
In the Kubernetes cluster where we have this problem, we have a total of 62 secrets.
I have 4 secrets in my cluster and the issue still occurs.
Did you try setting the watcher timeout?
We had such a case today. In our case, we had set the watcher timeout to 900. The only logs we got were the following three lines; no log entries of the expected kind followed.
It seems the code somehow got stuck at kubernetes-reflector/src/ES.Kubernetes.Reflector/Core/Watchers/WatcherBackgroundService.cs, line 42 (commit a9571d3).
version: emberstack/kubernetes-reflector:7.1.288
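For illustration of why a server-side timeout alone may not be enough: if the watch connection dies silently, no event and no exception ever arrives, and a loop awaiting the stream can block indefinitely. Below is a rough, hypothetical sketch of a client-side inactivity watchdog around a secret watch; the method name, timeout value, and overall structure are assumptions for illustration, not Reflector's actual code:

```csharp
// Hypothetical sketch: cancel and restart the watch if no event (including
// bookmarks) has been observed for a while, instead of relying on the server
// to close the connection. Not the actual Reflector implementation.
using System;
using System.Threading;
using System.Threading.Tasks;
using k8s;
using k8s.Models;

public static class SecretWatchWatchdog
{
    public static async Task WatchWithWatchdogAsync(
        IKubernetes client,
        TimeSpan idleLimit,              // e.g. slightly above the server-side timeout
        CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Per-iteration token: cancelled either on shutdown or by the watchdog.
            using var watchCts = CancellationTokenSource.CreateLinkedTokenSource(stoppingToken);
            watchCts.CancelAfter(idleLimit);

            try
            {
                var response = client.CoreV1.ListSecretForAllNamespacesWithHttpMessagesAsync(
                    watch: true,
                    allowWatchBookmarks: true,
                    cancellationToken: watchCts.Token);

                await foreach (var (type, secret) in response.WatchAsync<V1Secret, V1SecretList>(
                    cancellationToken: watchCts.Token))
                {
                    // Any event (including a bookmark) proves the stream is alive,
                    // so push the watchdog deadline forward.
                    watchCts.CancelAfter(idleLimit);

                    // ... handle the event (mirror/replicate the secret) ...
                }
            }
            catch (OperationCanceledException) when (!stoppingToken.IsCancellationRequested)
            {
                // Watchdog fired: the stream went quiet; fall through and re-watch.
            }
        }
    }
}
```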
I have the same issue. We were previously using version 6.x and have now updated to the latest 7.x. As recommended, I adjusted the watcher timeout to 900, but this did not resolve the problem.

It does not matter how many namespaces or secrets I have; after a short period of time, the synchronization of new secrets stops completely. I tested with up to 10000 namespaces, 50000 secrets, and 50000 configmaps. The test used a simple secret containing a username and password and a configmap with four values. Once the race condition is hit, the synchronization of new secrets stops. However, the deletion of secrets still works, and the creation and deletion of configmaps also work. The number of Kubernetes Reflector pod replicas does not seem to affect the issue; only CPU usage increases with the large number of secrets/configmaps I used. The only working workaround is to restart the pod, after which the mirroring of secrets resumes.

AKS version: 1.28.x
Hello.
Since issue #341 is closed, I am opening a new one.
As described in #341, we encountered the same problem on 7 October 2024: Reflector stopped replicating secrets.
Reflector no longer logged anything (neither the namespace watcher, the configmap watcher, nor the secret watcher).
Here is the end of the log:
After that point, there were no logs at all. Regarding metrics, the Reflector pod's CPU usage was almost zero (which seems normal, since it was no longer doing anything), and there was nothing unusual about memory usage just before the incident.
Here is the version information:
emberstack/kubernetes-reflector:7.1.256
GKE - 1.29.8-gke.1096000
GCP
Is it possible to solve this problem, please? Unfortunately, it makes the Reflector solution unstable :(.