
[ML] Increase assertBusy timeout in ML node failure tests #52425

Merged

Conversation

droberts195 (Contributor) commented:

Following the change to store cluster state in Lucene indices
(#50907) it can take longer for all the cluster state updates
associated with node failure scenarios to be processed during
internal cluster tests where several nodes all run in the same
JVM.
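For context, a minimal sketch of the kind of change this PR makes: Elasticsearch internal cluster tests poll with ESTestCase.assertBusy, which retries an assertion until it passes or a time limit expires (10 seconds by default). Passing an explicit, longer limit gives the slower cluster state processing more headroom. The test class, assertion body, and the 30-second value below are illustrative assumptions, not the exact diff.

```java
import java.util.concurrent.TimeUnit;

import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.test.ESIntegTestCase;

public class MlNodeFailureTimeoutSketch extends ESIntegTestCase {

    public void testJobReassignedAfterNodeFailure() throws Exception {
        // Before: assertBusy(() -> ...) used the default 10-second limit,
        // which can be too tight now that cluster state is stored in
        // Lucene indices (#50907) and updates take longer to settle.
        //
        // After: pass an explicit, longer limit so polling keeps retrying
        // while the in-JVM test cluster catches up. The 30-second value
        // here is an assumption for illustration.
        assertBusy(() -> {
            ClusterState state = client().admin().cluster().prepareState().get().getState();
            // Hypothetical assertion standing in for the real check that
            // the ML task has been reassigned to a surviving node.
            assertNotNull(state.nodes().getMasterNode());
        }, 30, TimeUnit.SECONDS);
    }
}
```

Raising the limit only in the affected node-failure tests, rather than globally, keeps the rest of the suite fast while tolerating the slower cluster state updates.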

droberts195 added the >test (Issues or PRs that are addressing/adding tests), :ml (Machine learning), v8.0.0, v7.7.0, and v7.6.1 labels on Feb 17, 2020
elasticmachine (Collaborator) commented:

Pinging @elastic/ml-core (:ml)

przemekwitek (Contributor) left a comment:


LGTM

droberts195 merged commit d7d487b into elastic:master on Feb 17, 2020
droberts195 deleted the increase_ml_distrib_fail_timeout branch on February 17, 2020 at 16:50
droberts195 added a commit that referenced this pull request Feb 17, 2020
droberts195 added a commit that referenced this pull request Feb 17, 2020