feat: remove fixed shards #9219
Conversation
the next steps are:
surprisingly nayduck doesn't seem to have a dependency on fixed_shards
Discussed on zulip here and we're good to go.
Looks good!
Force-pushed from 74f7238 to f65557d
note to self: describe the change in changelog
LGTM!
core/primitives/src/shard_layout.rs (Outdated)
```
// Note: As we scale up the number of shards we can consider
// changing this method to do a binary search rather than linear
// scan. For the time being, with only 4 shards, this is perfectly fine.
let mut shard_id = 0 as ShardId;
```
super nit: this looks a bit weird, `let mut shard_id: ShardId = 0` seems more canonical to me
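For illustration, here is a minimal sketch of the kind of linear scan being discussed, using the annotation style suggested above. The types and function are simplified stand-ins for the real nearcore definitions, not the actual implementation.

```
type ShardId = u64;

/// Simplified stand-in for the shard layout once fixed_shards is gone:
/// only boundary accounts remain.
struct ShardLayoutV1 {
    /// Sorted account ids; boundary_accounts[i] is the first account of shard i + 1.
    boundary_accounts: Vec<String>,
}

/// Linear scan over the boundaries, as in the snippet above. With only a
/// handful of shards this is perfectly fine; a binary search would be the
/// natural next step if the shard count grows.
fn account_id_to_shard_id(layout: &ShardLayoutV1, account_id: &str) -> ShardId {
    let mut shard_id: ShardId = 0; // the annotation style suggested in the review
    for boundary_account in &layout.boundary_accounts {
        if account_id < boundary_account.as_str() {
            break;
        }
        shard_id += 1;
    }
    shard_id
}

fn main() {
    let layout = ShardLayoutV1 { boundary_accounts: vec!["middle".to_string()] };
    assert_eq!(account_id_to_shard_id(&layout, "aaa.near"), 0);
    assert_eq!(account_id_to_shard_id(&layout, "zzz.near"), 1);
}
```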
Force-pushed from 2c253d1 to 2d99f84
In this PR I'm removing the fixed shards field from the shard layout. I removed the fixed_shards field directly from ShardLayoutV1. This implementation is quite hacky as it does not follow the typical flow for making protocol changes. This approach needs to be discussed and agreed on first, but here is an implementation just to get a feel for it.

The motivation for removing fixed shards is that fixed_shards break the contiguity of account ids within a shard. A sub-account of a fixed shard is assigned to the same shard as the fixed account, but it may fall in the middle of the account range of another shard. For example, let's consider the following shard layout:

```
fixed_shards = ['fixed']
boundary_accounts = ['middle']
```

In this shard layout we have three shards:
- 0 - 'fixed' and all sub-accounts
- 1 - [..'middle'] with the exception of any sub-account of 'fixed'
- 2 - ['middle'..] with the exception of any sub-account of 'fixed'

Because of the fixed account, shards 1 and 2 are now full of holes for any sub-accounts of 'fixed'. This property of fixed_shards makes it hard to perform any range operations on a shard, such as resharding or deletion.

The proper way to do it would be to add ShardLayoutV2 - a copy of V1 with fixed_shards removed and the version bumped. Due to the way we store state data in storage - using the shard_uid as a prefix for the storage key - this approach would require a careful migration. One way to go about it would be to perform a null-resharding in the epoch preceding the shard layout version bump. The node would have to create a copy of the state and store it in storage with the new version in the shard_uid. Once the state is duplicated, the node can keep applying changes to both states, the same as in resharding. Such a migration would need to be prepared very carefully and would take a lot of time and effort.

The reason why I believe we can get away without a new shard layout version is that fixed_shards are not used in production and the shard version is not actually stored in the state. ShardLayoutV1 with empty fixed_shards is semantically equivalent to ShardLayoutV1 with the fixed_shards field removed.

It's worth mentioning that the migration pains would be removed if we decided to change the storage structure to one that doesn't have the shard_uid in the storage key.
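To make the example above concrete, here is a small self-contained sketch (not the actual nearcore code) of how such a layout assigns accounts, showing how a sub-account of 'fixed' punches a hole into the account ranges of shards 1 and 2. The account ids and helper function are purely illustrative.

```
// Illustrative only: fixed_shards = ["fixed"], boundary_accounts = ["middle"]
// gives three shards, numbered as in the description above.
fn shard_for(account: &str) -> u64 {
    let fixed_shards = ["fixed"];
    let boundary_accounts = ["middle"];

    // A sub-account such as "zzz.fixed" is pulled into the fixed shard (shard 0)
    // regardless of where it falls alphabetically.
    for (i, fixed) in fixed_shards.iter().enumerate() {
        if account == *fixed || account.ends_with(&format!(".{fixed}")) {
            return i as u64;
        }
    }

    // Everything else is assigned by the boundary accounts.
    let mut shard = fixed_shards.len() as u64;
    for boundary in boundary_accounts {
        if account < boundary {
            break;
        }
        shard += 1;
    }
    shard
}

fn main() {
    // "zzz.fixed" sits alphabetically among the accounts of shard 2, but jumps
    // to shard 0 - this is the "hole" that breaks range operations.
    for account in ["aaa.near", "middle.near", "fixed", "zzz.fixed"] {
        println!("{account} -> shard {}", shard_for(account));
    }
}
```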
* fix(state-sync): finalize using flat storage
* fix(state-sync): Applied values from state parts should be inlined when applicable (#9265)
* clippy
* fmt
* logging
* scan from a hash
* refactor: add FlatStateValue constructor for on-disk values (#9269) - This logic is used in multiple places, so it makes sense to abstract it behind a method in `FlatStateValue`. The `on_disk` name is chosen over `new` since this logic is only applicable to values persisted on disk. Later we will also add inlining for values as part of Flat Storage Delta, and those will have a lower inlining threshold, so we will have another constructor. (See the sketch after this list.)
* refactor: Move adjust-db tool to database tool (#9264) - Refactoring of the `adjust-db` tool, moving it to the `database` toolset where the rest of the database tools live.
* feat: remove fixed shards (#9219)
* remove near-cache dep from near-vm-runner (#9273) - Part of #8197
* fix(state-sync): Clear flat storage outside of ClientActor (#9268) - Deleting multiple GBs from RocksDB definitely takes more than 100ms. In case a node catches up, it can potentially make a chunk producer or a block producer miss its chunk/block. This PR moves deletion of Flat State to `SyncJobsActor` to avoid blocking `ClientActor`.
* remove near-stable-hasher dep from near-vm-runner (#9270) - Part of #8197
* fix(state-sync): Finalize state sync using flat storage

Co-authored-by: Anton Puhach <anton@near.org>
Co-authored-by: Jure Bajic <jure@near.org>
Co-authored-by: wacban <wacban@users.noreply.github.com>
Co-authored-by: Ekleog-NEAR <96595974+Ekleog-NEAR@users.noreply.github.com>
Co-authored-by: near-bulldozer[bot] <73298989+near-bulldozer[bot]@users.noreply.github.com>
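For context on the `FlatStateValue::on_disk` item above, here is a rough sketch of what such a constructor could look like. The variant shapes, threshold value, and hashing helper are assumptions made for illustration, not the actual nearcore definitions.

```
// Assumed inlining threshold for values persisted on disk (illustrative).
const INLINE_DISK_VALUE_THRESHOLD: usize = 4000;

enum FlatStateValue {
    // Small values are stored inline in flat storage.
    Inlined(Vec<u8>),
    // Larger values are stored as a reference (hash + length) into state.
    Ref { hash: [u8; 32], length: u32 },
}

impl FlatStateValue {
    // Constructor for values persisted on disk; named `on_disk` rather than `new`
    // because delta values may later get a lower inlining threshold and their
    // own constructor.
    fn on_disk(value: &[u8]) -> Self {
        if value.len() <= INLINE_DISK_VALUE_THRESHOLD {
            FlatStateValue::Inlined(value.to_vec())
        } else {
            FlatStateValue::Ref { hash: hash_bytes(value), length: value.len() as u32 }
        }
    }
}

// Stand-in for the real hashing function.
fn hash_bytes(_value: &[u8]) -> [u8; 32] {
    [0u8; 32]
}

fn main() {
    // Small values are inlined, large ones become references.
    assert!(matches!(FlatStateValue::on_disk(&[1, 2, 3]), FlatStateValue::Inlined(_)));
    assert!(matches!(FlatStateValue::on_disk(&vec![0u8; 10_000]), FlatStateValue::Ref { .. }));
}
```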