EDIT: replaced "bucket" with "segment", since it's a group unified by a key rather than one of a fixed number of "buckets" as the term is normally used.
The "toy" case on the need for 1 running and 1 queued goes something like... assume we're updating a standard deviation field for a specific segment of values:
We add a value to the DB
We queue that segment for calculation
The task starts and loads the segment (and starts work)
We add a value to the DB
We try to queue the segment (but celery-once blocks it)
The task finishes (without incorporating the latest value)
If we don't queue a second copy when we add the new value, the stddev ends up out of date. The "1 running" limitation is due to the nature of the task -- it is not idempotent (and can't be made so).
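For concreteness, here's a minimal sketch of the setup described above, assuming the Redis backend from the celery-once README. The task name, `segment_id` key, and the two DB helpers are hypothetical; only `QueueOnce`, the `ONCE` config, and the `once={'keys': [...]}` option come from celery-once itself:

```python
from celery import Celery
from celery_once import QueueOnce
from statistics import pstdev

app = Celery('stats', broker='redis://localhost:6379/0')
app.conf.ONCE = {
    'backend': 'celery_once.backends.Redis',
    'settings': {'url': 'redis://localhost:6379/0', 'default_timeout': 60 * 60},
}

def load_segment_values(segment_id):
    """Hypothetical DB read: all values currently stored for the segment."""
    ...

def save_segment_stddev(segment_id, value):
    """Hypothetical DB write of the computed stddev."""
    ...

# The lock key is derived from segment_id only, so a second enqueue for the
# same segment is blocked while one is already pending (step 5 above).
@app.task(base=QueueOnce, once={'keys': ['segment_id']})
def recalc_stddev(segment_id):
    # Values are read at the moment the task *starts*; a value inserted
    # after this read is silently missed, which is why a second queued
    # copy is needed to pick it up.
    values = load_segment_values(segment_id)
    save_segment_stddev(segment_id, pstdev(values))
```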
It sounds like:

- the package will always ensure only one copy is queued;
- `unlock_before_run` would make it possible to have one running and one queued.
However, the documentation on `unlock_before_run` states that "any retry of the task won't re-enable the lock", and #26 calls `retry`. Together, these suggest this will put an arbitrary number of tasks into the retry queue. Does my analysis sound right so far?
Is there some reason we can't/shouldn't wrap the `retry` call to attempt to restore the lock (or raise/quit, preventing duplicates in the queue) when `unlock_before_run` is configured?
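A hedged sketch of what that wrapper might look like. `get_key()`, `once_backend`, `raise_or_lock()`, and `AlreadyQueued` are taken from celery-once's task/backend interface as described in its README and source, but verify the exact names against the installed version; `RelockOnRetryTask` and the timeout fallback are hypothetical:

```python
from celery_once import QueueOnce

class RelockOnRetryTask(QueueOnce):
    """Re-acquire the once-lock before retrying when unlock_before_run is set."""

    def retry(self, args=None, kwargs=None, **options):
        if self.once.get('unlock_before_run'):
            key = self.get_key(self.request.args, self.request.kwargs)
            # raise_or_lock() raises AlreadyQueued if another copy took the
            # lock in the meantime, so this retry fails fast ("raise/quit")
            # instead of piling a duplicate into the retry queue.
            self.once_backend.raise_or_lock(
                key,
                timeout=self.once.get('timeout', 60 * 60),  # assumed fallback
            )
        return super().retry(args=args, kwargs=kwargs, **options)
```

A task would then opt in with something like `@app.task(base=RelockOnRetryTask, once={'unlock_before_run': True, 'keys': ['segment_id']})`. Letting `AlreadyQueued` propagate marks the retry as failed rather than requeued, which matches the "preventing duplicates in the queue" behavior asked about above.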