
[BUG] Unique tasks before TTL should not be run #369

Open
gmhafiz opened this issue Dec 26, 2021 · 3 comments
Labels: enhancement New feature or request


gmhafiz commented Dec 26, 2021

Describe the bug
The second of two identical tasks (same UUID and payload) should not be sent to Redis before the TTL expires, because the second message may be a retry of the first.

This is related to issue #275, with the difference that in #275 repeated tasks are allowed to be processed before the TTL because the previous task is deleted. In my case, asynq must respect the TTL even when the task has completed.

To Reproduce
Steps to reproduce the behavior (Code snippets if applicable):

  1. An external API sends an HTTP request to my asynq microservice through an endpoint.
  2. This external API retries up to 3 times, sleeping 10 seconds in between.
  3. The first request could be delayed for more than 10 seconds, prompting the first retry (a second, identical request).
  4. That second request reaches asynq before the original first request, starts processing, and completes in one second.
  5. The original first request finally reaches asynq and starts processing - even though it is identical to the request that has already been processed and its TTL has not yet expired.

Using

info, err := u.redis.EnqueueContext(ctx, task, asynq.Unique(5*time.Minute))
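
For reference, this is roughly how the duplicate shows up on the enqueue side (a sketch with error handling added; errors/fmt/log imports omitted, and I am assuming the collision can be tested with errors.Is against asynq.ErrDuplicateTask):

info, err := u.redis.EnqueueContext(ctx, task, asynq.Unique(5*time.Minute))
if errors.Is(err, asynq.ErrDuplicateTask) {
	// An identical task (same type and payload) is still inside its
	// uniqueness window; reject the incoming request.
	return fmt.Errorf("duplicate request: %w", err)
}
if err != nil {
	return err
}
log.Printf("enqueued task id=%s queue=%s", info.ID, info.Queue)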

Expected behavior

  • Unique() respects TTL of completed tasks
  • the original first request (which arrives second chronologically) should be rejected, because it is identical to the first retry (the second request) and arrives within the 5-minute window

Environment (please complete the following information):

  • OS: Linux
  • Version of asynq package: v0.20.0

Additional context

Another way to achieve a distributed lock is to do the following manually (sketched in code below):

  • include a UUID in every request
  • extract the UUID from the request
  • check whether the UUID key is present in Redis
    • if not present, create a new key/value entry:
      client.Set(r.Context(), requestID, true, 0).Err()
    • if present, reject the request
  • do not delete the UUID key from Redis even after the task has completed; this ensures retry requests fail
  • once the task has completed, update the key with a TTL:
    redis.Set(ctx, requestID, true, 5*time.Minute).Err()
  • once the TTL has passed, Redis automatically deletes the key, allowing an identical request to be processed

Additionally, I could delete the UUID key if a task is explicitly deleted (not completed).
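
A minimal sketch of that manual flow using go-redis (the helper names acquire/complete/release are illustrative; SetNX makes the check-and-create step atomic):

package lock

import (
	"context"
	"time"

	"github.com/go-redis/redis/v8"
)

// acquire atomically creates the request's key if it does not already exist.
// It returns false if the key is present, i.e. the request is a duplicate.
// Zero expiration: the key lives until complete() attaches a TTL.
func acquire(ctx context.Context, rdb *redis.Client, requestID string) (bool, error) {
	return rdb.SetNX(ctx, requestID, true, 0).Result()
}

// complete re-sets the key with a 5-minute TTL once the task finishes,
// so retries keep being rejected until Redis expires the key.
func complete(ctx context.Context, rdb *redis.Client, requestID string) error {
	return rdb.Set(ctx, requestID, true, 5*time.Minute).Err()
}

// release deletes the key when a task is explicitly deleted (not completed).
func release(ctx context.Context, rdb *redis.Client, requestID string) error {
	return rdb.Del(ctx, requestID).Err()
}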

The other way I can think of is to use retention.

asynqClient.EnqueueContext(ctx, task, asynq.TaskID(request.UUID), asynq.Retention(5*time.Minute))

This way, the task ID (the UUID) is kept in Redis for another 5 minutes after the task completes, which prevents the HTTP retry request from being accepted.
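
If I understand the API correctly, the retry is then rejected at enqueue time with an error that can be tested against asynq.ErrTaskIDConflict. A sketch (errDuplicateRequest is an illustrative sentinel, not part of asynq):

info, err := asynqClient.EnqueueContext(ctx, task,
	asynq.TaskID(request.UUID),
	asynq.Retention(5*time.Minute),
)
if err != nil {
	if errors.Is(err, asynq.ErrTaskIDConflict) {
		// The ID is still taken: the task is pending/active, or it
		// completed less than 5 minutes ago and is being retained.
		return errDuplicateRequest
	}
	return err
}
log.Printf("accepted request %s as task %s", request.UUID, info.ID)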

@gmhafiz gmhafiz added the bug Something isn't working label Dec 26, 2021
hibiken (Owner) commented Dec 26, 2021

@gmhafiz Thank you for reporting an issue!

The intention behind the Unique option is to prevent a duplicate task from being enqueued to the same queue. The duration you pass to the Unique option is there to avoid a situation where a stale task in the queue blocks new tasks from being enqueued (or other similar situations).

Reading your use case, I think the alternative you suggested sounds perfect: the approach of using the TaskID and Retention options, so that completed tasks still remain in the queue and prevent other duplicate tasks from being enqueued.

asynqClient.EnqueueContext(ctx, task, asynq.TaskID(request.UUID), asynq.Retention(5*time.Minute))

Thank you again for the detailed issue report, and let me know if this approach doesn't work for you :)

@hibiken hibiken closed this as completed Dec 26, 2021
hibiken (Owner) commented Jan 6, 2022

This issue came up in discussion #376. I'm re-opening this issue to revisit the semantics of the Unique option.

@hibiken hibiken reopened this Jan 6, 2022
@hibiken hibiken added enhancement New feature or request and removed bug Something isn't working labels Jan 12, 2022
hibiken (Owner) commented Mar 1, 2022

Started a discussion at #409
