Describe the bug
Using more than one worker for the same queue somehow duplicates (or more) jobs when scheduling the next execution. EDIT: it still happens with one worker.
(from v2.10)
def is_scheduled(self) -> bool:
    """Check whether a next job for this task is queued/scheduled to be executed"""
    if self.job_id is None:  # no job_id => is not scheduled
        return False
    # check whether job_id is in scheduled/queued/active jobs
    scheduled_jobs = self.rqueue.scheduled_job_registry.get_job_ids()
    enqueued_jobs = self.rqueue.get_job_ids()
    active_jobs = self.rqueue.started_job_registry.get_job_ids()
    res = (self.job_id in scheduled_jobs) or (self.job_id in enqueued_jobs) or (self.job_id in active_jobs)
    # If the job_id is not scheduled/queued/started,
    # update the job_id to None. (The job_id belongs to a previous run which is completed)
    if not res:
        self.job_id = None
        super(BaseTask, self).save()
    return res

def _next_job_id(self):
    addition = uuid.uuid4().hex[-10:]
    name = self.name.replace("/", ".")
    return f"{self.queue}:{name}:{addition}"

def _enqueue_args(self) -> Dict:
    """Args for DjangoQueue.enqueue.

    Set all arguments for DjangoQueue.enqueue/enqueue_at.
    Particularly:
    - set job timeout and ttl
    - ensure a callback to reschedule the job next iteration.
    - Set job-id to proper format
    - set job meta
    """
    res = dict(
        meta=dict(
            task_type=self.TASK_TYPE,
            scheduled_task_id=self.id,
        ),
        on_success=success_callback,
        on_failure=failure_callback,
        job_id=self._next_job_id(),  # <------ makes the job id not identical
    )
    if self.at_front:
        res["at_front"] = self.at_front
    if self.timeout:
        res["job_timeout"] = self.timeout
    if self.result_ttl is not None:
        res["result_ttl"] = self.result_ttl
    return res
To Reproduce
Steps to reproduce the behavior:
Start a worker twice for the same queue (I used & to start two daemons).
Screenshots
Can provide screenshots if needed
Additional context
@cunla Do you see the bug? Do you want me to propose a PR with a fix? Also, could the fix land in the 2.x line and not only in v3? We're not ready for the switch. Moreover, would doing away with RQ also mean being backend-agnostic (redis, database)?
Thanks so much!
gabriels1234 changed the title from "[apparent bug] uuid appended to job_id makes multiple workers" to "[apparent bug] uuid appended to job_id makes duplicate next execution scheduling" on Dec 6, 2024.
I made a temporary fix: a job that runs every minute and dedupes the ScheduledJobRegistry of every queue, ignoring the final random value (a rough sketch is below).
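This is only a minimal sketch of the idea, not my exact code: the queue names, the "strip everything after the last colon" grouping, and the use of django_rq.get_queue(), Queue.fetch_job() and the registry's get_job_ids()/remove() are assumptions made for illustration.

import django_rq

QUEUE_NAMES = ["default"]  # placeholder: the queues to dedupe

def dedupe_scheduled_jobs():
    """Keep one scheduled job per '<queue>:<task name>' prefix and drop
    duplicates that differ only in the trailing random uuid suffix."""
    for queue_name in QUEUE_NAMES:
        queue = django_rq.get_queue(queue_name)
        registry = queue.scheduled_job_registry
        seen = set()
        for job_id in registry.get_job_ids():
            prefix = job_id.rsplit(":", 1)[0]  # ignore the random suffix
            if prefix in seen:
                # keep the first id returned for each prefix, drop the rest
                job = queue.fetch_job(job_id)
                if job is not None:
                    registry.remove(job, delete_job=True)
            else:
                seen.add(prefix)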
[EDIT: this seems partly to be an issue of running locally: since I reload the worker automatically every time a code change is saved, that may introduce duplicates. The issue is still present in non-local environments.]