
unixfs autosharding with config #8527

Closed

@aschmahmann wants to merge 9 commits from the feat/unixfs-autosharding-with-config branch

Conversation

@aschmahmann (Contributor) commented on Oct 27, 2021

Related to #8114 and may end up being merged into it.

ipfs/go-ipfs-config#149 adds the Internal.UnixFSShardingSizeThreshold config flag.

Testing:

  1. Does the config flag work?
  2. Do the old tests still pass if the sharding threshold is set very low wherever we previously used Experimental.ShardingEnabled? (see the config fragment below)
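
For context, the new knob lives under the Internal section of the repo config. A minimal illustrative fragment; the "1KiB" value is only an example of the size-string format, chosen low so that existing sharding tests trip the threshold, and is not the shipped default:

{
  "Internal": {
    "UnixFSShardingSizeThreshold": "1KiB"
  }
}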

@lidel (Member) commented on Oct 28, 2021

@aschmahmann force-pushed the feat/unixfs-autosharding-with-config branch from 9a0c4a5 to fd55804 on November 15, 2021.
@@ -216,7 +216,7 @@ jobs:
command: |
npm init -y
npm install ipfs@^0.59.1
npm install ipfs-interop@^7.0.3
@aschmahmann (Contributor, author) commented on this diff:

TODO: use a version of interop that's on master

go.mod (three resolved review threads)
aschmahmann and others added 7 commits November 16, 2021 13:48
- applies a fix from go-ipfs-config which removes null when the default is used
- escapes the JSON string when it is passed in the sharness script
* test: add tests for automatic sharding and unsharding
* refactor: changed some naming in the sharding sharness tests to make more sense

Co-authored-by: Adin Schmahmann <adin.schmahmann@gmail.com>
This ensures existing users of the global sharding experiment get notified that the flag no longer works and that autosharding now happens automatically.

For people who NEED to keep the old behavior (e.g. they have no time to migrate today) there is a note about restoring it with
`UnixFSShardingSizeThreshold`.
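
For reference, restoring the old always-shard behavior amounts to setting the threshold to the smallest possible size. A sketch of how a user might do that with the standard ipfs config command (with --json the value must itself be valid JSON, hence the quoted string):

ipfs config --json Internal.UnixFSShardingSizeThreshold '"1B"'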
Comment on lines +329 to +334
// Migrate users of deprecated Experimental.ShardingEnabled flag
if cfg.Experimental.ShardingEnabled {
logger.Fatal("The `Experimental.ShardingEnabled` field is no longer used, please remove it from the config.\n" +
"go-ipfs now automatically shards when directory block is bigger than `" + shardSizeString + "`.\n" +
"If you need to restore the old behavior (sharding everything) set `Internal.UnixFSShardingSizeThreshold` to `1B`.\n")
}
@lidel (Member) commented on Nov 23, 2021:

ℹ️ This provides a migration path for existing users that have the experiment enabled.
I also updated docs/experimental-features.md and made the flag optional in JSON in ipfs/go-ipfs-config#158.

@aschmahmann (Contributor, author) replied:

Thanks. This seems like the best we can get.

There's no way to warn anybody who explicitly wanted sharding to stay off, since that flag had no way to denote "unset", which is what we're moving towards now.
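
To illustrate the point, a minimal Go sketch (purely illustrative; these are not the actual go-ipfs-config types) of why a plain bool cannot express "unset" while a nullable field can:

package main

import (
	"encoding/json"
	"fmt"
)

// Old style: a plain bool is false both when the user explicitly disabled
// the experiment and when the field was simply never set.
type oldExperiments struct {
	ShardingEnabled bool
}

// New style (hypothetical stand-in for an "optional string" config type):
// a pointer is nil when the field is absent from the JSON, so "unset"
// is representable and a default can be applied later.
type newInternal struct {
	UnixFSShardingSizeThreshold *string `json:",omitempty"`
}

func main() {
	var old oldExperiments
	json.Unmarshal([]byte(`{}`), &old)
	fmt.Println(old.ShardingEnabled) // false -- but was that the user's choice?

	var cfg newInternal
	json.Unmarshal([]byte(`{}`), &cfg)
	fmt.Println(cfg.UnixFSShardingSizeThreshold == nil) // true -- "unset" is distinguishable
}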

lidel added a commit to ipfs/go-ipfs-config that referenced this pull request Nov 23, 2021
@lidel (Member) commented on Nov 23, 2021:

Continued in #8563

@lidel closed this on Nov 23, 2021.
laurentsenta pushed commits to laurentsenta/kubo that referenced this pull request on Feb 25 and Mar 4, 2022.
@schomatis deleted the feat/unixfs-autosharding-with-config branch on April 27, 2022.
Labels: none yet · Projects: none yet · 3 participants