Celery Beat silently fails after running for an unpredictable amount of time on Digital Ocean App Platform: scheduled tasks stop being sent, and there are no obvious indications in the logs.
My setup:
Django 5.1.2
django-celery-beat 2.7.0
Celery 5.4.0
Redis 7
Postgres 16
Exact steps to reproduce the issue:
Deploy Celery Beat to Digital Ocean App Platform as part of a Django app
Configure a scheduled task via the database scheduler (a sketch of this step follows the list)
Leave Celery Beat running
After hours or days, scheduled tasks stop being sent
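For context, a minimal sketch of how a task like the heartbeat below can be registered with the database scheduler, using the django_celery_beat ORM API; the every-minute interval is an assumption, the real project may use a crontab instead:

from django_celery_beat.models import IntervalSchedule, PeriodicTask

# Assumed interval; the actual schedule in the project may differ.
schedule, _ = IntervalSchedule.objects.get_or_create(
    every=1,
    period=IntervalSchedule.MINUTES,
)

PeriodicTask.objects.get_or_create(
    name="Celery uptime heartbeat",
    task="fetch.utils.tasks.celery_uptime_heartbeat",
    interval=schedule,
)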
Detailed information
I'm running Celery Beat on Digital Ocean App Platform (Docker-based deployments via buildpacks) with the command:
celery -A config.celery_app beat -l debug --scheduler django_celery_beat.schedulers:DatabaseScheduler
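For reference, config.celery_app is the usual Django/Celery app module; a minimal sketch of what it looks like in this kind of setup (the settings module path is assumed):

import os

from celery import Celery

# Assumed settings path; the real project may point elsewhere.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")

app = Celery("config")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()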
After an unpredictable amount of time (usually days), Celery Beat will stop running tasks. There are no console errors and the process doesn't crash; Celery simply stops sending scheduled tasks silently. If I redeploy the app, tasks resume.
With debug logging enabled, the final lines in the log are:
[celery-beat] [2024-11-13 00:49:27] [2024-11-13 00:49:27,236: DEBUG/MainProcess] beat: Waking up in 5.00 seconds.
[celery-beat] [2024-11-13 00:49:32] [2024-11-13 00:49:32,237: DEBUG/MainProcess] beat: Synchronizing schedule...
[celery-beat] [2024-11-13 00:49:32] [2024-11-13 00:49:32,238: DEBUG/MainProcess] Writing entries...
[celery-beat] [2024-11-13 00:49:32] [2024-11-13 00:49:32,278: DEBUG/MainProcess] beat: Waking up in 5.00 seconds.
[celery-beat] [2024-11-13 00:49:37] [2024-11-13 00:49:37,311: DEBUG/MainProcess] beat: Waking up in 5.00 seconds.
[celery-beat] [2024-11-13 00:49:42] [2024-11-13 00:49:42,348: DEBUG/MainProcess] beat: Waking up in 5.00 seconds.
[celery-beat] [2024-11-13 00:49:47] [2024-11-13 00:49:47,381: DEBUG/MainProcess] beat: Waking up in 5.00 seconds.
[celery-beat] [2024-11-13 00:49:52] [2024-11-13 00:49:52,415: DEBUG/MainProcess] beat: Waking up in 5.00 seconds.
[celery-beat] [2024-11-13 00:49:57] [2024-11-13 00:49:57,447: DEBUG/MainProcess] beat: Waking up in 2.54 seconds.
[celery-beat] [2024-11-13 00:50:00] [2024-11-13 00:50:00,068: INFO/MainProcess] Scheduler: Sending due task Celery uptime heartbeat (fetch.utils.tasks.celery_uptime_heartbeat)
[celery-beat] [2024-11-13 00:50:00] [2024-11-13 00:50:00,091: DEBUG/MainProcess] fetch.utils.tasks.celery_uptime_heartbeat sent. id->657d0fd4-222a-474f-9c8f-13142909c69b
The final line is a scheduled task I use to send heartbeats to an uptime monitor. I set it up to help diagnose the issue and track when it occurs; there is nothing wrong with this task specifically.
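For illustration, the heartbeat task is roughly the following; the monitor URL setting is a placeholder, not the real value:

import requests
from celery import shared_task
from django.conf import settings

@shared_task
def celery_uptime_heartbeat():
    # Ping the uptime monitor; if the pings stop arriving, the monitor
    # alerts that Beat (or the worker) has stalled.
    requests.get(settings.UPTIME_HEARTBEAT_URL, timeout=10)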
I've looked at the metrics and there is no issue with resources (there is enough RAM and CPU).
I'm running a similar setup in a separate project on Digital Ocean App Platform (same Celery and django-celery-beat versions) that doesn't have this issue.
I'm unsure how to investigate this issue further.