Getting "MySQL server has gone away" on new tasks after idling #76
Comments
I just read through that ticket. I'll see if I can somehow check the db connection.
Thank you for your quick response. Yes, no results in the database as soon as that error pops up.
I can check for stale connections and reset them on every save, but this might affect performance a bit. If that doesn't pan out, we'll have to resort to testing the db connection on every save, or closing it like the folks over at Django seem to prefer.
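For reference, a minimal sketch of what "reset stale connections on every save" could look like, assuming a hypothetical `save_task` hook; `close_old_connections()` is the helper Django itself uses to drop connections that are unusable or past their `CONN_MAX_AGE`:

```python
from django.db import close_old_connections

def save_task(task):
    # Hypothetical saver hook: close any connection that is unusable or has
    # outlived CONN_MAX_AGE; Django opens a fresh one on the next query.
    close_old_connections()
    task.save()
```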
Hmm. I just realized this probably won't work when the scheduler and saver run in different threads. I need more coffee.
Yeah, no change whatsoever running with the dev branch, tried several times.
Adds a check for old connections in both the workers and the monitor, every DB_TIMEOUT seconds
I really shouldn't be coding on a Monday morning before coffee. Meanwhile I made a version that checks connections both in the worker and the saver every x seconds.
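Roughly this shape, as a hypothetical sketch (`DB_TIMEOUT` is the setting named in the commit title above; the function name and value are made up):

```python
import time

from django.db import close_old_connections

DB_TIMEOUT = 60  # seconds between stale-connection sweeps (illustrative)
_last_sweep = time.time()

def sweep_stale_connections():
    # Hypothetical sketch of the timed variant: close unusable or expired
    # connections at most once every DB_TIMEOUT seconds.
    global _last_sweep
    if time.time() - _last_sweep >= DB_TIMEOUT:
        close_old_connections()
        _last_sweep = time.time()
```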
It turns out that checking stale connections on a timer takes between 1-2 times as long as just checking them always. Always checking also has the benefit of catching timeouts that happen between timer loops.
The coffee helped. I did some performance testing, and it turns out that the timed checking of connections can actually take up to two times longer than just checking for stale connections before every transaction. Always checking of course also avoids missing any timeouts that happen between the loops. So I removed the timer.
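For comparison, a rough sketch of the "check before every transaction" approach, using Django's public `is_usable()` ping; the function name is hypothetical:

```python
from django.db import connections

def reset_stale_connections():
    # Hypothetical sketch: ping every open connection and close any dead
    # ones, so Django reconnects lazily on the next query.
    for conn in connections.all():
        if conn.connection is not None and not conn.is_usable():
            conn.close()
```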
Hey don't worry, it's Monday for everyone :D
Ok cool. I'll probably do a release at the end of the day.
Original issue description:

Hello,
I've been trying Django Q for a new project as an alternative to Celery, and so far I love it, but I keep getting a "MySQL server has gone away" error on each new task my cluster receives after it has idled for at least 8 hours, which corresponds to the wait_timeout of my MySQL server. To make sure it was related to wait_timeout, I tried lowering it to a minute, and the same error kept happening until I restarted the cluster.
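In case it helps, here is a rough way to reproduce this without waiting 8 hours (run in a Django shell against MySQL; the timings are illustrative):

```python
import time

from django.db import connection

# Lower wait_timeout for this session only, idle past it, then query again.
with connection.cursor() as cursor:
    cursor.execute("SET SESSION wait_timeout = 5")

time.sleep(10)  # idle longer than wait_timeout; the server drops the connection

with connection.cursor() as cursor:
    # Raises OperationalError: (2006, 'MySQL server has gone away')
    cursor.execute("SELECT 1")
```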
Is this behavior intended? Shouldn't Django or Django Q handle this and at least try to re-establish a connection?
I tried changing Django's CONN_MAX_AGE in my settings.py both to 0 and to a value higher than 0 but lower than MySQL's wait_timeout, but no luck.
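For context, this is the kind of configuration I mean (names and values are placeholders). As far as I understand, Django only recycles CONN_MAX_AGE connections at request boundaries, which a standalone worker cluster never hits, which might explain why it has no effect here:

```python
# settings.py (sketch; NAME and the max age are illustrative)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "mydb",
        "CONN_MAX_AGE": 300,  # seconds; kept below MySQL's wait_timeout
    }
}
```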
So after a bit of googling I found this: https://code.djangoproject.com/ticket/21597#comment:29, where they recommend calling connection.close() so that Django can connect again. It works for my task, meaning the task can change stuff on one of my models and save it, but the task itself isn't getting saved and doesn't appear under successful tasks.
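The workaround from that ticket looks roughly like this inside a task (the task function here is just an example):

```python
from django.db import connection

def my_task():
    # Close the possibly-dead connection up front; Django transparently
    # opens a new one on the next ORM query.
    connection.close()
    # ... ORM work here runs on a fresh connection
```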
Is there any workaround other than periodically restarting the cluster or increasing wait_timeout to insane values?
My Django Q settings are vanilla, using Redis as the broker.