
Don't remove all non-data/non-master nodes during upgrade/scaling #1753

Closed
david-kow opened this issue Sep 19, 2019 · 0 comments · Fixed by #5452
Assignees: pebrc
Labels: >bug (Something isn't working), discuss (We need to figure this out)

Comments

@david-kow (Contributor) commented:

Currently, we allow all ingest nodes to be unavailable during upgrade and scaling. Upgrades respect the maxUnavailable setting, and once #1292 is fixed, scaling will as well.

Nevertheless, I'd consider it unexpected behavior to remove all ingest nodes, leaving the cluster unable to index, even if that is in line with the maxUnavailable setting. We should at least leave a single ingest node available (as we do for masters), and perhaps enforce an even stricter bound (50%?), since dropping to a single node may cripple the cluster's capabilities.
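
To make the proposed safeguard concrete, here is a minimal sketch of the invariant in Go (the language ECK is written in). All names here (`node`, `canDelete`) are hypothetical and not ECK's actual types or API; the sketch only illustrates the check "never delete the last node carrying a non-data/non-master role":

```go
package main

import "fmt"

// node is a simplified view of an Elasticsearch node and its roles.
// These names are illustrative only, not ECK's actual types.
type node struct {
	Name  string
	Roles map[string]bool // e.g. "master", "data", "ingest"
}

// canDelete reports whether removing candidate would still leave at least
// one node for every non-data/non-master role the candidate carries.
// A stricter policy could require that at least 50% of the nodes holding
// each role remain, per the suggestion above.
func canDelete(candidate node, remaining []node) bool {
	for role := range candidate.Roles {
		if role == "master" || role == "data" {
			continue // masters and data nodes have their own safeguards
		}
		left := 0
		for _, n := range remaining {
			if n.Roles[role] {
				left++
			}
		}
		if left == 0 {
			return false // would remove the last node carrying this role
		}
	}
	return true
}

func main() {
	ingest := node{Name: "ingest-0", Roles: map[string]bool{"ingest": true}}
	// No other ingest nodes remain, so deletion must be refused.
	fmt.Println(canDelete(ingest, nil)) // false
}
```

Such a check would sit alongside the existing maxUnavailable budget rather than replace it: the budget caps how many nodes may be down at once, while this predicate protects each role from going to zero.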

@david-kow added the >bug and discuss labels on Sep 19, 2019
@barkbay self-assigned this on Nov 18, 2021
@barkbay removed their assignment on Dec 15, 2021
@pebrc changed the title from "Don't remove all ingest nodes during upgrade/scaling" to "Don't remove all non-data/non-master nodes during upgrade/scaling" on Jan 17, 2022
@pebrc self-assigned this on Mar 7, 2022