Ensure no Watches are running after Watcher is stopped. #43888
Conversation
Watcher keeps track of which watches are currently running, keyed by watch name/id. If a watch is currently running, Watcher will not run the same watch again; instead it reports the message "Watch is already queued in thread pool" and the state "not_executed_already_queued".

When Watcher is stopped, it stops accepting new watches but allows the currently running watches to run to completion. Waiting for the currently running watches to complete is done asynchronously to the stopping of Watcher. This means that Watcher will report as fully stopped while a background thread is still waiting for all of the watches to finish before it removes them from its list of currently running watches.

The integration tests start and stop Watcher between each test, with the goal of ensuring a clean state between tests. However, since Watcher can report "yes - I am stopped" while there are still running watches, the tests may bleed into each other, especially on slow machines. This can result in errors related to "Watch is already queued in thread pool" and the state "not_executed_already_queued", and is VERY difficult to reproduce.

This commit changes the waiting for watches on stop/pause from an async wait back to a sync wait, as it worked prior to elastic#30118. This helps make the stop much more predictable in testing scenarios, such that after Watcher is fully stopped, no watches are running. It should have little impact, if any, on production code, since Watcher isn't stopped/paused often and the behavior on stop/pause is the same; the wait simply runs on the calling thread instead of a generic thread.
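For context, a minimal sketch of the tracking and stop behavior described above; all names here are illustrative stand-ins and do not match the real `ExecutionService`/`CurrentExecutions` code in Watcher:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;

/**
 * Illustrative sketch only: names and structure are simplified and are not the
 * actual Watcher implementation.
 */
class CurrentExecutionsSketch {
    private final ConcurrentHashMap<String, Boolean> running = new ConcurrentHashMap<>();
    private volatile boolean sealed = false;

    /** Returns false when the watch is rejected (already running, or Watcher is stopping). */
    boolean tryStart(String watchId) {
        if (sealed) {
            return false;                                           // Watcher is stopping, reject new work
        }
        return running.putIfAbsent(watchId, Boolean.TRUE) == null;  // false => "not_executed_already_queued"
    }

    void finish(String watchId) {
        running.remove(watchId);
    }

    /** Pre-#30118 style: seal and wait on the calling thread until no watches are running. */
    void sealAndWaitSync(long timeoutMillis) throws InterruptedException {
        sealed = true;
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (running.isEmpty() == false && System.currentTimeMillis() < deadline) {
            Thread.sleep(50); // crude polling; the real code uses proper signalling
        }
    }

    /** Post-#30118 style: fork the wait, so the caller returns while watches may still run. */
    void sealAndWaitAsync(ExecutorService genericExecutor, long timeoutMillis) {
        genericExecutor.execute(() -> {
            try {
                sealAndWaitSync(timeoutMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }
}
```

In the async variant the caller (and therefore the "Watcher is stopped" report) does not wait for `running` to drain, which is exactly the window the integration tests can fall into.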
Pinging @elastic/es-core-features
I left a question, but this looks good otherwise.
@@ -106,7 +105,7 @@
     private final WatchExecutor executor;
     private final ExecutorService genericExecutor;

-    private AtomicReference<CurrentExecutions> currentExecutions = new AtomicReference<>();
+    private CurrentExecutions currentExecutions;
Can `CurrentExecutions` remain inside an `AtomicReference`? It is read elsewhere without acquiring a lock, and as far as I understand this change is about making sure that `clearExecutions()` happens in a sync manner, which is possible while keeping the `AtomicReference`. (Also, the `currentExecutions` field can then be made final.)
Yes, I think this either needs to be `volatile` or go back to being an `AtomicReference`.
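A rough sketch of what the reviewers are suggesting, with hypothetical method names rather than the actual Watcher classes: keep the field a final `AtomicReference` so unsynchronized readers stay safe, and swap/seal the old value synchronously in `clearExecutions()`.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of the suggestion above; not the actual ExecutionService code.
class ExecutionServiceSketch {

    // Minimal stand-in for the real CurrentExecutions class.
    static class CurrentExecutions {
        void sealAndAwaitEmpty() { /* block until all in-flight watch executions finish */ }
    }

    // Final field: readers can call current() without a lock and always see a
    // fully constructed instance.
    private final AtomicReference<CurrentExecutions> currentExecutions =
            new AtomicReference<>(new CurrentExecutions());

    CurrentExecutions current() {
        return currentExecutions.get();
    }

    // Called on stop/pause: swap in a fresh instance, then drain the old one on
    // the calling thread so the stop is synchronous.
    void clearExecutions() {
        CurrentExecutions previous = currentExecutions.getAndSet(new CurrentExecutions());
        previous.sealAndAwaitEmpty();
    }
}
```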
Thanks, nice catch. I had also messed up the order of "sealing" the concurrent executions. The change here is now a single line that removes the fork.
@elasticmachine update branch
This reverts commit 9d18274.
This reverts commit 926a671.
…icsearch into watcher_stop_less_async
Test failures appear relevant... looking into it.
It appears that this can result in holding up the cluster state applier thread for too long. Closing this PR; I will open a new one that takes the concurrent executions into account, in addition to the closed state returned by watcher stats, to block until Watcher is fully stopped.
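As a sketch of what that follow-up wait could look like on the test side (none of these names are the real Watcher stats API, and the actual follow-up change may differ):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Hypothetical sketch only; the real watcher stats API and follow-up PR may look different.
class WatcherStopTestHelper {

    // Minimal stand-in for the subset of watcher stats the wait inspects.
    static class WatcherStatsSnapshot {
        final String watcherState;          // e.g. "STOPPED"
        final int currentWatchExecutions;   // watches still running

        WatcherStatsSnapshot(String watcherState, int currentWatchExecutions) {
            this.watcherState = watcherState;
            this.currentWatchExecutions = currentWatchExecutions;
        }
    }

    /** Poll until Watcher reports stopped AND no watch executions remain, or time out. */
    static void awaitFullyStopped(Supplier<WatcherStatsSnapshot> stats, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        while (System.nanoTime() < deadline) {
            WatcherStatsSnapshot snapshot = stats.get();
            if ("STOPPED".equals(snapshot.watcherState) && snapshot.currentWatchExecutions == 0) {
                return;
            }
            Thread.sleep(100);
        }
        throw new AssertionError("Watcher did not fully stop within " + timeoutMillis + "ms");
    }
}
```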
Watcher keeps track of which watches are currently running, keyed by watch name/id.
If a watch is currently running, Watcher will not run the same watch again; instead it
reports the message "Watch is already queued in thread pool" and the state
"not_executed_already_queued".

When Watcher is stopped, it stops accepting new watches but allows the currently running
watches to run to completion. Waiting for the currently running watches to complete is
done asynchronously to the stopping of Watcher. This means that Watcher will report as
fully stopped while a background thread is still waiting for all of the watches to finish
before it removes them from its list of currently running watches.

The integration tests start and stop Watcher between each test, with the goal of ensuring
a clean state between tests. However, since Watcher can report "yes - I am stopped" while
there are still running watches, the tests may bleed into each other, especially on slow
machines. This can result in errors related to "Watch is already queued in thread pool"
and the state "not_executed_already_queued", and is VERY difficult to reproduce. This
may also change the most recent Watcher history document in an unpredictable way.

This commit changes the waiting for watches on stop/pause from an async wait back to a
sync wait, as it worked prior to #30118. This helps make the stop much more predictable
in testing scenarios, such that after Watcher is fully stopped, no watches are running.
It should have little impact, if any, on production code, since Watcher isn't
stopped/paused often and the behavior on stop/pause is the same; the wait simply runs on
the calling thread instead of a generic thread.
Related: #42409