
Quartz scheduler does not support ConcurrentExecution.SKIP correctly #15800

Closed
davidcurrie opened this issue Mar 17, 2021 · 6 comments · Fixed by #19369

davidcurrie commented Mar 17, 2021

Describe the bug

When using the Quartz scheduler with multiple nodes, ConcurrentExecution.SKIP as implemented under #10203 only skips an execution if a previous execution is still running on the node the new one happens to be scheduled to. Given that the driver for switching from the simple scheduler to Quartz is often to get scheduling across multiple nodes, this behavior is not what one would expect. The problem is discussed in #13048, although that issue is asking for something different.
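For context, this is the shape of declaration affected — a minimal sketch, assuming a method-level job (the method name and interval are made up; @Scheduled and ConcurrentExecution.SKIP are the Quarkus scheduler API):

```java
import io.quarkus.scheduler.Scheduled;
import io.quarkus.scheduler.Scheduled.ConcurrentExecution;

public class Jobs {

    // With the Quartz scheduler in clustered mode, SKIP currently only
    // prevents overlap with an execution running on the same node, not
    // with one running elsewhere in the cluster.
    @Scheduled(every = "10s", concurrentExecution = ConcurrentExecution.SKIP)
    void runEverywhere() {
        // long-running work that can still overlap across nodes
    }
}
```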

Expected behavior

Concurrent execution should be skipped if a job is still executing anywhere in the cluster. See https://stackoverflow.com/a/5801514 for an example of how to implement this with Quartz.

Actual behavior

Concurrent execution is only skipped if a previous execution of the job is still running in the same process to which the subsequent execution is scheduled.

davidcurrie added the kind/bug label Mar 17, 2021
quarkus-bot commented Mar 17, 2021

/cc @machi1990, @mkouba

mkouba (Contributor) commented Mar 17, 2021

Scheduled#concurrentExecution() is not intended to work across the cluster.

@machi1990 Do you happen to know if it would be possible to map the Scheduled#concurrentExecution() to a Quartz-specific feature?

mkouba added the kind/enhancement label and removed the kind/bug label Mar 17, 2021
davidcurrie (Author) commented

@mkouba - it seems odd to me that, by switching from the simple scheduler to Quartz, I go from running a job in every node to running that job in only one node, yet the semantics of the concurrent execution option don't mirror that. I'm not sure what the usage pattern is that would mean that you wouldn't want it to run concurrently in the same node but it would be fine to run it concurrently in a different one. (It certainly caught us by surprise.)

I'm no Quartz expert, and I don't think there is a direct equivalent (@DisallowConcurrentExecution queues jobs rather than skipping them). However, as per the SO post I linked to, if you have access to the JobExecutionContext it is easy enough to check manually whether the same trigger is still being executed and skip; see the sketch below.
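A minimal sketch of that check, assuming a plain Quartz job (class and method names are hypothetical; the caveat about getCurrentlyExecutingJobs() raised in the next comment applies):

```java
import java.util.List;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.SchedulerException;

public class SkipIfRunningJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            // Jobs currently executing in this scheduler instance.
            List<JobExecutionContext> running =
                    context.getScheduler().getCurrentlyExecutingJobs();
            for (JobExecutionContext other : running) {
                // Another fire of the same job is still in flight: skip this one.
                if (other.getJobDetail().getKey().equals(context.getJobDetail().getKey())
                        && !other.getFireInstanceId().equals(context.getFireInstanceId())) {
                    return;
                }
            }
        } catch (SchedulerException e) {
            throw new JobExecutionException(e);
        }
        // ... the actual work ...
    }
}
```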

mkouba (Contributor) commented Mar 17, 2021

I'm not a Quartz expert either but AFAIK the @DisallowConcurrentExecution annotation can only be used to prevent multiple job executions on a single node and the org.quartz.Scheduler.getCurrentlyExecutingJobs() method's javadoc (mentioned in the SO post) reads: "This method is not cluster aware. That is, it will only return Jobs currently executing in this Scheduler instance, not across the entire cluster".

davidcurrie (Author) commented

Apologies for the bum steer. The information is patently in the Quartz DB but it is not obviously exposed in any API. Seems I will have to look elsewhere for what seems like such an obvious capability.
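For anyone landing here: a hypothetical sketch of reading that state straight from the JDBC job store (the QRTZ_ table prefix and the 'EXECUTING' state value match the default schema, but this is internal detail rather than a public API, so treat it as fragile):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class FiredTriggerCheck {

    private final DataSource dataSource;

    public FiredTriggerCheck(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Counts cluster-wide executions of a job as recorded in the job store.
    // When called from inside the running job, the current fire is included,
    // so a skip check would look for a count greater than one.
    public int countExecuting(String jobName, String jobGroup) throws Exception {
        String sql = "SELECT COUNT(*) FROM QRTZ_FIRED_TRIGGERS "
                + "WHERE JOB_NAME = ? AND JOB_GROUP = ? AND STATE = 'EXECUTING'";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, jobName);
            ps.setString(2, jobGroup);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }
}
```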

machi1990 (Member) commented Mar 18, 2021

> Scheduled#concurrentExecution() is not intended to work across the cluster.
>
> @machi1990 Do you happen to know if it would be possible to map the Scheduled#concurrentExecution() to a Quartz-specific feature?

@mkouba I don't know of an equivalent, and I don't think there is one.
