
Add concurrency extension for ActiveJob #281

Merged · 1 commit merged into main on Jul 7, 2021
Conversation

bensheldon
Owner

Creates a `GoodJob::ActiveJobExtensions::Concurrency` module that can be included into ActiveJob classes.

When enqueuing a job while the concurrency limit is reached or exceeded, the job is not enqueued and `MyJob.perform_later` returns `false`, using ActiveJob's native callback behavior of `throw :abort`.

When performing a job while the concurrency limit is reached or exceeded, the job is retried with an incremental backoff, using ActiveJob's native `retry_on ..., attempts: Float::INFINITY, wait: :exponentially_longer` functionality.

Closes #206.

```ruby
# TODO: Why is `unscoped` necessary? Nested scope is bleeding into subsequent query?
enqueue_concurrency = GoodJob::Job.unscoped.where(concurrency_key: job.good_job_concurrency_key).unfinished.count

# The job has not yet been enqueued, so check if adding it will go over the limit
throw :abort if enqueue_concurrency + 1 > limit
```
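The `throw :abort` mechanics of this check can be modeled in plain Ruby. This is an illustrative sketch, not GoodJob's actual code: `try_enqueue` and the in-memory `enqueued` array are inventions standing in for the database query above.

```ruby
# Illustrative model of the enqueue-limit check. The `enqueued` array
# stands in for GoodJob's unfinished-jobs COUNT query.
def try_enqueue(enqueued, limit, key)
  catch(:abort) do
    count = enqueued.count(key)           # how many unfinished jobs share this key
    # The job has not yet been enqueued, so adding it would make count + 1
    throw :abort, false if count + 1 > limit
    enqueued << key
    true
  end
end

enqueued = []
try_enqueue(enqueued, 2, "my-key")  # => true
try_enqueue(enqueued, 2, "my-key")  # => true
try_enqueue(enqueued, 2, "my-key")  # => false (limit of 2 reached)
```

`catch(:abort)` returns the second argument of `throw` when aborted, which mirrors how ActiveJob's `before_enqueue` callback chain turns `throw :abort` into a falsey return from `perform_later`.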
Contributor

Does this create a race condition that needs to be accounted for? Say the `enqueue_concurrency` limit is 1, and the actual enqueue concurrency goes from 1 to 0 between the above two lines of code.
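The check-then-act gap being asked about can be made concrete with a deterministic interleaving. This is a plain-Ruby illustration with invented variable names, not GoodJob code:

```ruby
# Two concurrent enqueues both read the count before either one inserts.
limit = 1
enqueued = 0

count_a = enqueued  # enqueue A reads the count: 0
count_b = enqueued  # enqueue B reads the count: 0, before A has inserted

enqueued += 1 if count_a + 1 <= limit  # A passes its check and inserts
enqueued += 1 if count_b + 1 <= limit  # B also passes its (now stale) check

enqueued  # => 2, exceeding the limit of 1
```

Without something serializing the count-and-insert sequence, both writers can pass the check against the same stale count.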

bensheldon (Owner, Author)


I think the race condition is addressed because this is wrapped in a `pg_advisory_lock` on the key, which should function as a database-wide mutex.

But it's also going to introduce latency, because the PG connection will wait until it can acquire the advisory lock before running the code. If that happens on the webserver under heavy load and there's a database statement timeout... that could be bad. I hope that jobs that are likely to need enqueue limits will be triggered from a backend process (rake task, upcoming cron feature, etc.).
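A process-local analogy for what the advisory lock provides (illustrative only): PostgreSQL's `pg_advisory_lock` takes an integer key, so a string concurrency key is typically hashed to an integer first; here a Ruby `Mutex` stands in for the database-wide lock, and `with_concurrency_lock` is an invented helper name.

```ruby
require "zlib"

# Process-local stand-in for pg_advisory_lock: one mutex per hashed key.
LOCKS = Hash.new { |hash, key| hash[key] = Mutex.new }

def with_concurrency_lock(concurrency_key)
  lock_id = Zlib.crc32(concurrency_key)  # stable integer key, as advisory locks require
  LOCKS[lock_id].synchronize { yield }   # serializes the count-then-enqueue sequence
end

count = 0
threads = 10.times.map do
  Thread.new { with_concurrency_lock("my-key") { count += 1 } }
end
threads.each(&:join)
count  # => 10; every read-modify-write ran under the lock
```

As with the real advisory lock, callers block until the lock is free, which is exactly the latency trade-off described above.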

@bensheldon bensheldon force-pushed the concurrency_controls branch 3 times, most recently from a5b1807 to 6e4ba74 Compare June 25, 2021 14:32
@bensheldon bensheldon added the enhancement New feature or request label Jun 29, 2021
@bensheldon bensheldon force-pushed the concurrency_controls branch 3 times, most recently from 69b42bb to d9b95dd Compare July 2, 2021 14:54
@bensheldon bensheldon merged commit 5431fef into main Jul 7, 2021
@bensheldon bensheldon deleted the concurrency_controls branch July 7, 2021 14:35
@reczy
Contributor

reczy commented Jul 13, 2021

@bensheldon thank you for this!

Successfully merging this pull request may close this issue:

Custom advisory locks to prevent certain jobs from being worked on concurrently?