Reduce number of writes needed for metadata updates #48701
Labels
:Distributed Coordination/Cluster Coordination
Cluster formation and cluster state publication, including cluster membership and fault detection.
>enhancement
Meta
v7.6.0
Comments
DaveCTurner added the >enhancement, Meta, :Distributed Coordination/Cluster Coordination, and 7x labels on Oct 30, 2019
Pinging @elastic/es-distributed (:Distributed/Cluster Coordination)
DaveCTurner added a commit to DaveCTurner/elasticsearch that referenced this issue on Oct 31, 2019:
This commit introduces `LucenePersistedState` which master-eligible nodes can use to persist the cluster metadata in a Lucene index rather than in many separate files. Relates elastic#48701
DaveCTurner added a commit that referenced this issue on Nov 13, 2019:
This commit introduces `LucenePersistedState` which master-eligible nodes can use to persist the cluster metadata in a Lucene index rather than in many separate files. Relates #48701
DaveCTurner added a commit to DaveCTurner/elasticsearch that referenced this issue on Nov 18, 2019:
Today on master-eligible nodes we maintain per-index metadata files for every index. We also keep this metadata in the `LucenePersistedState`, and only use the per-index metadata files for importing dangling indices. Since there is no point in importing a dangling index without any shard data, we no longer need to maintain these extra files. This commit removes per-index metadata files from nodes which do not hold any shards of those indices. Relates elastic#48701
DaveCTurner added a commit that referenced this issue on Nov 18, 2019:
Today on master-eligible nodes we maintain per-index metadata files for every index. We also keep this metadata in the `LucenePersistedState`, and only use the per-index metadata files for importing dangling indices. Since there is no point in importing a dangling index without any shard data, we no longer need to maintain these extra files. This commit removes per-index metadata files from nodes which do not hold any shards of those indices. Relates #48701
ywelsch added a commit that referenced this issue on Dec 13, 2019:
This moves metadata persistence to Lucene for all node types. It also re-enables BWC and adds an interoperability layer for upgrades from prior versions. This commit disables a number of tests related to dangling indices and command-line tools; those will be addressed in follow-ups. Relates #48701
ywelsch added a commit that referenced this issue on Dec 17, 2019:
Loading shard state information during shard allocation sometimes runs into a situation where a data node does not know yet how to look up the shard on disk if custom data paths are used. The current implementation loads the index metadata from disk to determine what the custom data path looks like. This PR removes this dependency, simplifying the lookup. Relates #48701
ywelsch added a commit that referenced this issue on Dec 19, 2019:
Adds command-line tool support (unsafe-bootstrap, detach-cluster, repurpose, & shard commands) for the Lucene-based metadata storage. Relates #48701
ywelsch added a commit that referenced this issue on Jan 8, 2020:
Earlier PRs for #48701 introduced a separate directory for the cluster state. This is not needed, though, and only adds unnecessary cognitive burden for users. Co-authored-by: David Turner <david.turner@elastic.co>
ywelsch added a commit that referenced this issue on Jan 8, 2020:
Adds support for writing out dangling indices asynchronously. Also provides an option to avoid writing out dangling indices at all. Relates #48701
ywelsch added a commit that referenced this issue on Jan 8, 2020:
Moves node metadata to use the new storage mechanism (see #48701) as the authoritative source.
ywelsch added a commit that referenced this issue on Jan 9, 2020:
Writes cluster states out asynchronously on data-only nodes. The main reasons for writing out the cluster state at all are so that data-only nodes can quickly rejoin a cluster, so that they can do a bit of bootstrap validation, and so that the shard recovery tools work. Cluster states that are written asynchronously have their voting configuration adapted to a non-existent configuration, so that these nodes cannot mistakenly become master even if their node role is changed back and forth. Relates #48701
ywelsch added a commit that referenced this issue on Jan 14, 2020:
Makes the new cluster state storage layer emit warnings when metadata persistence is very slow. Relates #48701
ywelsch added a commit that referenced this issue on Jan 14, 2020:
Adds a command-line tool to remove broken custom metadata from the cluster state. Relates to #48701
SivagurunathanV pushed a commit to SivagurunathanV/elasticsearch that referenced this issue on Jan 23, 2020:
Today we split the on-disk cluster metadata across many files: one file for the metadata of each index, plus one file for the global metadata and another for the manifest. Most metadata updates only touch a few of these files, but some must write them all. If a node holds a large number of indices then it's possible its disks are not fast enough to process a complete metadata update before timing out. In severe cases affecting master-eligible nodes this can prevent an election from succeeding. This commit uses Lucene as a metadata store for the cluster state, and is a squashed version of the following PRs that were targeting a feature branch:
* Introduce Lucene-based metadata persistence (elastic#48733): introduces `LucenePersistedState`, which master-eligible nodes can use to persist the cluster metadata in a Lucene index rather than in many separate files.
* Remove per-index metadata without assigned shards (elastic#49234): we also keep this metadata in the `LucenePersistedState` and only use the per-index metadata files for importing dangling indices; since there is no point in importing a dangling index without any shard data, this removes per-index metadata files from nodes which do not hold any shards of those indices.
* Use Lucene exclusively for metadata storage (elastic#50144): moves metadata persistence to Lucene for all node types, re-enables BWC, and adds an interoperability layer for upgrades from prior versions; disables a number of tests related to dangling indices and command-line tools, to be addressed in follow-ups.
* Add command-line tool support for Lucene-based metadata storage (elastic#50179): adds command-line tool support (unsafe-bootstrap, detach-cluster, repurpose, and shard commands) for the Lucene-based metadata storage.
* Use single directory for metadata (elastic#50639): earlier PRs introduced a separate directory for the cluster state; this is not needed and only adds unnecessary cognitive burden for users. Co-authored-by: David Turner <david.turner@elastic.co>
* Add async dangling indices support (elastic#50642): adds support for writing out dangling indices asynchronously, and provides an option to avoid writing out dangling indices at all.
* Fold node metadata into new node storage (elastic#50741): moves node metadata to use the new storage mechanism as the authoritative source.
* Write CS asynchronously on data-only nodes (elastic#50782): writes cluster states out asynchronously on data-only nodes, mainly so that they can quickly rejoin a cluster, do a bit of bootstrap validation, and keep the shard recovery tools working; asynchronously written cluster states have their voting configuration adapted to a non-existent configuration so that these nodes cannot mistakenly become master even if their node role is changed back and forth.
* Remove persistent cluster settings tool (elastic#50694): adds the elasticsearch-node remove-settings tool to remove persistent settings from the on-disk cluster state in cases where incompatible settings prevent the cluster from forming.
* Make cluster state writer resilient to disk issues (elastic#50805): adds handling to make the cluster state writer resilient to disk issues.
* Omit writing global metadata if no change (elastic#50901): applies the same optimization as the old storage layer, writing global metadata only when one of the persistent fields changed; speeds up server:integTest by ~10%.
* DanglingIndicesIT should ensure node removed first (elastic#50896): these tests occasionally failed because the deletion was submitted before the restarting node was removed from the cluster, causing the deletion not to be fully acked; fixed by checking that the restarting node has been removed from the cluster. Co-authored-by: David Turner <david.turner@elastic.co>

Relates elastic#48701
Closing this issue as the work here was merged for 7.6.0.
henningandersen added a commit to henningandersen/elasticsearch that referenced this issue on Jan 15, 2021:
Re-enabled and fixed a test now that we persist metadata in Lucene. Relates elastic#48701
henningandersen added a commit that referenced this issue on Jan 18, 2021:
Re-enabled and fixed a test now that we persist metadata in Lucene. Relates #48701
Today we split the on-disk cluster metadata across many files: one file for the metadata of each index, plus one file for the global metadata and another for the manifest. Most metadata updates only touch a few of these files, but some must write them all. If a node holds a large number of indices then it's possible its disks are not fast enough to process a complete metadata update before timing out. In severe cases affecting master-eligible nodes this can prevent an election from succeeding.
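To make the write amplification concrete, here is a small illustrative sketch (plain Python, not Elasticsearch code; the function and its parameters are invented for this example) counting how many files a single update must touch under the one-file-per-index layout described above:

```python
# Hypothetical model of the pre-change layout: one metadata file per index,
# plus one global-metadata file and one manifest file.
def files_written(num_indices, num_touched_indices,
                  global_changed=False, full_rewrite=False):
    """Count the files a single metadata update must write."""
    if full_rewrite:
        # Some updates must rewrite every per-index file plus global + manifest.
        return num_indices + 2
    writes = num_touched_indices
    if global_changed:
        writes += 1  # Global metadata file is rewritten only when it changed.
    return writes + 1  # The manifest is rewritten to commit every update.

# A routine update touching one index writes just two files...
print(files_written(10_000, 1))                      # 2
# ...but a full rewrite on a node holding 10,000 indices writes 10,002.
print(files_written(10_000, 0, full_rewrite=True))   # 10002
```

On slow disks the second case is what can exceed the timeout and, on master-eligible nodes, prevent an election from succeeding.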
We plan to change the format of on-disk metadata to reduce the number of writes needed during metadata updates. One option is a monolithic file containing the complete metadata, but this is inefficient in the common case that the metadata is mostly unchanged. Another option is to keep an append-only log of changes, but such a log must be compacted and this introduces quite some complexity. However we already have access to a very good storage mechanism that has the right kinds of properties: Lucene! We will use a dedicated Lucene index on each master-eligible node and replace each individual file with a document in this index. Most metadata updates will need only a few writes, and Lucene's background merging will take care of compaction.
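The document-per-file idea can be illustrated with a toy in-memory model (not the actual `LucenePersistedState` implementation; all names here are invented): an update rewrites only the documents whose content changed, much as Lucene's `updateDocument` keyed on a stable term would, with background merging playing the role of compaction:

```python
# Toy model: each former metadata file becomes one document in a single index,
# keyed by a stable id ("global", or the index name).
class MetadataIndex:
    def __init__(self):
        self.docs = {}    # doc id -> serialized metadata blob
        self.writes = 0   # documents (re)written, a proxy for disk work

    def apply_update(self, new_state):
        for doc_id, blob in new_state.items():
            if self.docs.get(doc_id) != blob:
                self.docs[doc_id] = blob  # like updateDocument(term, doc)
                self.writes += 1
        for doc_id in list(self.docs):
            if doc_id not in new_state:
                del self.docs[doc_id]     # like deleteDocuments(term)

store = MetadataIndex()
store.apply_update({"global": "g1", "index-a": "a1", "index-b": "b1"})
store.apply_update({"global": "g1", "index-a": "a2", "index-b": "b1"})
print(store.writes)  # 4: three initial documents, then only the changed one
```

The point of the design is visible in the second update: even with thousands of unchanged indices, only the touched documents are rewritten.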
On master-ineligible nodes we can keep the existing format and still reduce the writes required, because we can make better use of the fact that master-ineligible nodes only write committed metadata and therefore the version numbers are trustworthy. It may also be possible to avoid writing index metadata during cluster state application entirely and defer it until later.
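A hedged sketch of that version-based optimization (illustrative only; the function is invented): because a master-ineligible node only ever persists committed metadata, its on-disk version numbers are trustworthy, so comparing versions is enough to decide which index metadata actually needs rewriting:

```python
# Sketch for master-ineligible nodes: skip writes for indices whose committed
# metadata version has not advanced.
def indices_to_write(on_disk_versions, incoming_versions):
    """Return the index ids whose committed metadata version changed."""
    return [
        idx for idx, version in incoming_versions.items()
        if on_disk_versions.get(idx) != version
    ]

# "a" is unchanged, "b" advanced, "c" is new: only two writes are needed.
print(indices_to_write({"a": 3, "b": 7}, {"a": 3, "b": 8, "c": 1}))  # ['b', 'c']
```

This check is only safe because such nodes never persist uncommitted state; on master-eligible nodes an uncommitted write could share a version number with different content.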
* CoordinatorTests#testCannotJoinClusterWithDifferentUUID test case (Add command-line tool support for Lucene-based metadata storage #50179)
* Implement rescue tool for when the node ID file is lost (ref. Introduce Lucene-based metadata persistence #48733 (comment)); replaced by folding node metadata into new storage and using that as the authoritative source: Fold node metadata into new node storage #50741
* Later: