Describe the bug
The second of two identical tasks, with the same UUID and payload, should not be sent to Redis before the TTL expires, because the later message could have come from a retry.
This is related to issue #275, with the difference that in #275 repeated tasks are allowed to be processed before the TTL because the previous task is deleted. In my case, asynq must respect the TTL even when the task has completed.
To Reproduce
Steps to reproduce the behavior (Code snippets if applicable):
An external API sends an HTTP request to my asynq microservice through an endpoint.
The external API retries up to 3 times, sleeping 10 seconds between attempts.
The first request can be delayed for more than 10 seconds, prompting a first retry (a second, identical request).
That second request reaches asynq before the original first request, starts processing, and completes in one second.
The original first request finally reaches asynq and starts processing, even though it is identical to the request that has already been processed and the TTL has not expired.
Expected behavior
The original first request (which arrives second chronologically) should be rejected, because it is identical to the first retry (the second request) and arrives within the 5-minute TTL.
Environment (please complete the following information):
OS: Linux
Version of asynq package: v0.20.0
Additional context
Another way to achieve a distributed lock is to do the following manually (a code sketch follows these steps):
include a UUID on every request
extract the UUID from the request
check whether the UUID key is present in Redis
if it is not present, create a new key/value entry: client.Set(r.Context(), requestID, true, 0).Err()
if it is present, reject the request
do not delete the UUID key from Redis even after the task has completed; this ensures retry requests fail
once the task has completed, update the key with a TTL
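A minimal sketch of those steps, assuming the go-redis client (consistent with the client.Set call above); the HTTP handler wiring, the X-Request-ID header name, and the 5-minute TTL are illustrative assumptions. SetNX is used so that "check if present" and "create if absent" happen in one atomic step, which avoids a race between two requests arriving at the same time:

```go
package main

import (
	"context"
	"net/http"
	"time"

	"github.com/go-redis/redis/v8"
)

// handleEnqueue is a hypothetical endpoint handler; the header name and
// TTL below are assumptions from this issue, not asynq API.
func handleEnqueue(rdb *redis.Client) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		requestID := r.Header.Get("X-Request-ID") // UUID included on every request

		// Atomically create the key only if it does not already exist,
		// with no expiration while the task is in flight.
		ok, err := rdb.SetNX(r.Context(), requestID, true, 0).Result()
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		if !ok {
			// Key already present: duplicate request (e.g. an HTTP retry).
			http.Error(w, "duplicate request", http.StatusConflict)
			return
		}

		// ... enqueue the asynq task here ...
	}
}

// markCompleted would run once the task has finished: instead of deleting
// the UUID key, give it a TTL so retries arriving within the next
// 5 minutes are still rejected.
func markCompleted(ctx context.Context, rdb *redis.Client, requestID string) error {
	return rdb.Expire(ctx, requestID, 5*time.Minute).Err()
}
```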
Additionally, I could delete the UUID key if a task is explicitly deleted (not completed).
The other way I can think of is to use retention. This way, the UUID is still kept in Redis for another 5 minutes after the task completes, which prevents the HTTP retry request from being accepted.
The intention behind the Unique option is to prevent a duplicate task from being enqueued to the same queue. The duration you pass to the Unique option is there to avoid a situation where a stale task in the queue blocks new tasks from being enqueued (or other similar situations).
Reading your use case, I think the alternative you suggested sounds perfect: the approach of using the TaskID and Retention options, so that completed tasks still remain in the queue to prevent other duplicate tasks from being enqueued.
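A minimal sketch of that suggested approach, assuming a version of asynq that provides the TaskID and Retention options; the task type, payload, and 5-minute retention window are illustrative, not from the issue:

```go
package main

import (
	"errors"
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer client.Close()

	// requestID is the UUID carried on every HTTP request (assumption).
	requestID := "d6f2b7a0-example-uuid"

	task := asynq.NewTask("email:deliver", []byte(`{"user_id": 42}`))

	// TaskID makes the enqueue idempotent: a second task with the same ID
	// is rejected with ErrTaskIDConflict. Retention keeps the completed
	// task (and its ID) around for 5 more minutes, so a late HTTP retry
	// is still rejected even after the first task has finished.
	_, err := client.Enqueue(task,
		asynq.TaskID(requestID),
		asynq.Retention(5*time.Minute),
	)
	if errors.Is(err, asynq.ErrTaskIDConflict) {
		log.Println("duplicate request rejected:", requestID)
		return
	}
	if err != nil {
		log.Fatal(err)
	}
}
```

Because the retention window outlives task completion, this gives the "respect the TTL even after the task has completed" behavior asked for above, without a hand-rolled Redis lock.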