migrations fail with "Not enough active copies to meet shard count of [ALL] (have 1, needed 2)" #127136
Comments
Pinging @elastic/kibana-core (Team:Core)
After further investigation, we found out that this issue is caused by the following:

a) An orchestration issue in ESS/ECE where the shutdown metadata is not cleared (cf. the Get shutdown API). This usually happens after some failed configuration changes. If so, the workaround is to delete the shutdown metadata (cf. the Delete shutdown API). This will allow the shards to be allocated and the Kibana saved objects migration to complete.

b) Related to the above, the number of replicas (set by `index.auto_expand_replicas`) may be incorrect in a single-node cluster (cf. elastic/elasticsearch#84788).
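A minimal sketch of the checks and cleanup described above, assuming an Elasticsearch 8.x cluster and the official `@elastic/elasticsearch` Node.js client (the connection settings and index pattern are placeholders, not taken from the original report):

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({
  node: process.env.ES_URL ?? 'http://localhost:9200',
  auth: { username: 'elastic', password: process.env.ES_PASS ?? '' },
});

async function main() {
  // a) Get shutdown API: on a cluster that is not actually shutting nodes down,
  //    any entries returned here are the stale metadata described above.
  const shutdown = await client.shutdown.getNode();
  console.log('Shutdown metadata:', JSON.stringify(shutdown.nodes, null, 2));

  // Delete shutdown API: clear a stale record so shards can be allocated again.
  // Only do this for nodes that are not really being shut down.
  for (const node of shutdown.nodes) {
    await client.shutdown.deleteNode({ node_id: node.node_id });
  }

  // b) Inspect the replica settings of the saved objects indices; on a single-node
  //    cluster, index.auto_expand_replicas should be expanding replicas down to 0.
  const settings = await client.indices.getSettings({ index: '.kibana*' });
  console.log(JSON.stringify(settings, null, 2));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```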
From #129016:
We believe the root cause of this problem will be fixed in 8.3 by elastic/elasticsearch#86047.
elastic/elasticsearch#85277 also seems to address part of this problem; although it was backported to 7.17.2, we still saw occurrences of this in 7.17.2.
This error occurs under the following circumstances:
Apart from the referenced bugs, for single-node clusters this is almost always a temporary problem, so if Kibana gets restarted the index status usually becomes green eventually and the migration can complete without intervention. To fix this we should wait for the temporary index status to become "green". We initially chose a "yellow" index status because that's all we need for reading from the source index, but when writing to an index we always need a "green" status because we write with `wait_for_active_shards=all` (which is where the "shard count of [ALL]" in the error comes from).
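As a rough illustration of that fix (this is not the actual Kibana migration code; the index name and timeout are made up for the example), waiting for a "green" status before writing could look like:

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: process.env.ES_URL ?? 'http://localhost:9200' });

// Block until the given index reaches "green" (all primary and replica shards
// allocated), or fail once the timeout expires.
async function waitForGreen(index: string): Promise<void> {
  const health = await client.cluster.health({
    index,
    wait_for_status: 'green',
    timeout: '60s',
  });
  if (health.timed_out) {
    throw new Error(`Index ${index} did not reach a green status within 60s`);
  }
}

// Hypothetical temporary migration index name, for illustration only.
waitForGreen('.kibana_8.2.0_reindex_temp').catch(console.error);
```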
Has this been resolved? We have an ES cluster in k8s and it seems like we have a similar issue, but since Kibana is down we won't be able to perform these steps. How do we get this working for the cluster? What is the resolution to this?
Part of #129016
We've observed that some Kibana upgrades to 8.0+ on a single-node Elasticsearch cluster can fail with:

`Not enough active copies to meet shard count of [ALL] (have 1, needed 2)`
This issue has been observed in Elasticsearch Service (ESS) and Elastic Cloud Enterprise (ECE). There is an orchestration issue where the shutdown metadata is not cleared (cf. the Get shutdown API). This usually happens after some failed configuration changes.
Related to the above, the number of replicas (set by `index.auto_expand_replicas`) may be incorrect in a single-node cluster (cf. elastic/elasticsearch#84788).

Workaround
a) Delete the shutdown metadata (cf. the Delete shutdown API). This will allow the shards to be allocated and the Kibana saved objects migration to complete.
b) Wait for the next restart of Kibana (this happens automatically in ECE/ESS in this particular scenario).
c) Check that Kibana is healthy and accessible. Kibana logs should reveal that the saved objects migration completed successfully; see the example output below.
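A successful migration typically logs something along these lines (the exact wording and durations here are illustrative, not copied from an affected cluster):

```
[.kibana_task_manager] Migration completed after 891ms
[.kibana] Migration completed after 1523ms
```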