I would like a method to expose the current state of workers so that I can perform a livenessProbe health check in Kubernetes.
For context, we run django-q via `python manage.py qcluster` inside Docker containers hosted on GCP via Kubernetes Engine. We have a Redis backend with 1 master and 2 slaves.
In the scenario where the master node fails, one of the slaves is automatically promoted to be the new master. When this happens, the django-q workers stop processing any new requests and appear to be hung on the old tasks.
If we call `broker.ping()`, it returns `True`, as the Redis slave still accepts connections.
We need a way to determine the state of the workers in this scenario so that we can restart the container to get things working again.
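As a rough sketch of what we have in mind, a custom management command could inspect the cluster stats that django-q publishes through its broker (via `Stat` from `django_q.monitor`) and exit non-zero when no healthy cluster is visible, so Kubernetes can run it as an `exec` livenessProbe. The command name `qhealth` is purely illustrative, and the exact `Stat` attributes should be verified against the installed django-q version:

```python
# myapp/management/commands/qhealth.py  (hypothetical command name)
#
# Minimal liveness-check sketch: exits non-zero when the qcluster no longer
# publishes stats, so Kubernetes can restart the container. Assumes django-q's
# Stat monitor API (django_q.monitor.Stat) and get_broker(); verify the
# attribute names against the installed django-q version.
from django.core.management.base import BaseCommand, CommandError

from django_q.brokers import get_broker
from django_q.monitor import Stat


class Command(BaseCommand):
    help = "Exit non-zero if the qcluster looks unhealthy (for a livenessProbe)."

    def handle(self, *args, **options):
        # ping() alone is not enough: it only proves the broker connection,
        # which still succeeds against a demoted Redis slave.
        if not get_broker().ping():
            raise CommandError("broker unreachable")

        # A running cluster periodically saves a Stat record in the broker;
        # if none are visible, the cluster has most likely stopped reporting
        # and is hung, so the probe should fail.
        stats = Stat.get_all()
        if not stats:
            raise CommandError("no cluster stats published")

        for stat in stats:
            self.stdout.write(f"{stat.cluster_id}: {stat.status}")
```

Something like this could then be wired into the pod spec as an `exec` livenessProbe running `python manage.py qhealth` with a generous `timeoutSeconds`, so the qcluster container is restarted once the check fails repeatedly.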