
unixfs autosharding with config #8527

Closed
wants to merge 9 commits
2 changes: 1 addition & 1 deletion .circleci/main.yml
@@ -216,7 +216,7 @@ jobs:
command: |
npm init -y
npm install ipfs@^0.59.1
npm install ipfs-interop@^7.0.3
Contributor Author:
TODO: use a version of interop that's on master

npm install ipfs/interop#fix/use-new-go-ipfs-sharding-option
npm install mocha-circleci-reporter@0.0.3
working_directory: ~/ipfs/go-ipfs/interop
- run:
11 changes: 5 additions & 6 deletions core/coreapi/test/path_test.go
@@ -14,7 +14,6 @@ import (
"github.com/ipld/go-ipld-prime"
)


func TestPathUnixFSHAMTPartial(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -27,16 +26,16 @@ func TestPathUnixFSHAMTPartial(t *testing.T) {
a := apis[0]

// Setting this after instantiating the swarm so that it's not clobbered by loading the go-ipfs config
prevVal := uio.UseHAMTSharding
uio.UseHAMTSharding = true
prevVal := uio.HAMTShardingSize
uio.HAMTShardingSize = 1
defer func() {
uio.UseHAMTSharding = prevVal
uio.HAMTShardingSize = prevVal
}()

// Create and add a sharded directory
dir := make(map[string]files.Node)
// Make sure we have at least two levels of sharding
for i := 0; i < uio.DefaultShardWidth + 1; i++ {
for i := 0; i < uio.DefaultShardWidth+1; i++ {
dir[strconv.Itoa(i)] = files.NewBytesFile([]byte(strconv.Itoa(i)))
}

@@ -67,7 +66,7 @@ func TestPathUnixFSHAMTPartial(t *testing.T) {
for k := range dir {
// The node will go out to the (non-existent) network looking for the missing block. Make sure we're erroring
// because we exceeded the timeout on our query
timeoutCtx, timeoutCancel := context.WithTimeout(ctx, time.Second * 1)
timeoutCtx, timeoutCancel := context.WithTimeout(ctx, time.Second*1)
_, err := a.ResolveNode(timeoutCtx, path.Join(r, k))
if err != nil {
if timeoutCtx.Err() == nil {
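The test above drives sharding through the new global threshold instead of the old `UseHAMTSharding` boolean. A minimal sketch of that pattern as a reusable test helper, assuming `go-unixfs/io` exports `HAMTShardingSize` as in this PR (the helper name is hypothetical):

```
package coreapi_test

import (
	"testing"

	uio "github.com/ipfs/go-unixfs/io"
)

// forceSharding drops the global size threshold so that effectively
// every directory gets HAMT-sharded, then restores the previous value
// when the test (and its subtests) finish.
func forceSharding(t *testing.T) {
	prev := uio.HAMTShardingSize
	uio.HAMTShardingSize = 1 // shard any directory whose links exceed 1 byte
	t.Cleanup(func() { uio.HAMTShardingSize = prev })
}
```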
18 changes: 16 additions & 2 deletions core/node/groups.go
@@ -19,6 +19,8 @@ import (
offline "github.com/ipfs/go-ipfs-exchange-offline"
offroute "github.com/ipfs/go-ipfs-routing/offline"
uio "github.com/ipfs/go-unixfs/io"

"github.com/dustin/go-humanize"
"go.uber.org/fx"
)

@@ -316,8 +318,20 @@ func IPFS(ctx context.Context, bcfg *BuildCfg) fx.Option {
return bcfgOpts // error
}

// TEMP: setting global sharding switch here
uio.UseHAMTSharding = cfg.Experimental.ShardingEnabled
// Auto-sharding settings
shardSizeString := cfg.Internal.UnixFSShardingSizeThreshold.WithDefault("256kiB")
shardSizeInt, err := humanize.ParseBytes(shardSizeString)
if err != nil {
return fx.Error(err)
}
uio.HAMTShardingSize = int(shardSizeInt)

// Migrate users of deprecated Experimental.ShardingEnabled flag
if cfg.Experimental.ShardingEnabled {
logger.Fatal("The `Experimental.ShardingEnabled` field is no longer used, please remove it from the config.\n" +
"go-ipfs now automatically shards when directory block is bigger than `" + shardSizeString + "`.\n" +
"If you need to restore the old behavior (sharding everything) set `Internal.UnixFSShardingSizeThreshold` to `1B`.\n")
}
Comment on lines +329 to +334, @lidel (Member), Nov 23, 2021:
ℹ️ This provides a migration path for existing users that have the experiment enabled.
I also updated docs/experimental-features.md and made the flag optional in JSON in ipfs/go-ipfs-config#158.

Contributor Author:
Thanks. This seems like the best we can get.

There's no way to warn anybody who explicitly wanted sharding to stay off, since that flag had no way to denote "unset", which is what we're moving towards now.
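
For reference, the threshold string is parsed with `go-humanize`, which understands both SI and IEC suffixes; that is why `"256kiB"` works as a default. A standalone sketch, not part of this diff:

```
package main

import (
	"fmt"

	humanize "github.com/dustin/go-humanize"
)

func main() {
	// "256kiB" is the default wired up in groups.go above; "1B" and "1G"
	// are the values the sharness tests use to force sharding on and off.
	for _, s := range []string{"256kiB", "1B", "1G"} {
		n, err := humanize.ParseBytes(s)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%-6s -> %d bytes\n", s, n)
	}
	// Prints: 256kiB -> 262144, 1B -> 1, 1G -> 1000000000
}
```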


return fx.Options(
bcfgOpts,
27 changes: 11 additions & 16 deletions docs/experimental-features.md
@@ -402,27 +402,22 @@ See [Plugin docs](./plugins.md)
## Directory Sharding / HAMT

### In Version
0.4.8

### State
Experimental
- 0.4.8:
- Introduced `Experimental.ShardingEnabled` which enabled sharding globally.
- All-or-nothing, unnecessary sharding of small directories.

Allows creating directories with an unlimited number of entries.
- 0.11.0:
- Removed support for `Experimental.ShardingEnabled`
- Replaced with automatic sharding based on the block size

**Caveats:**
1. right now it is a GLOBAL FLAG which will impact the final CID of all directories produced by `ipfs.add` (even the small ones)
2. currently size of unixfs directories is limited by the maximum block size

### Basic Usage:
### State

```
ipfs config --json Experimental.ShardingEnabled true
```
Replaced by autosharding.

### Road to being a real feature
The `Experimental.ShardingEnabled` config field is no longer used, please remove it from your configs.

- [ ] Make sure that objects that don't have to be sharded aren't
- [ ] Generalize sharding and define a new layer between IPLD and IPFS
go-ipfs now automatically shards when a directory block is bigger than 256KiB, ensuring every block is small enough to be exchanged with other peers

## IPNS pubsub

@@ -600,4 +595,4 @@ ipfs config --json Experimental.AcceleratedDHTClient true

- [ ] Needs more people to use and report on how well it works
- [ ] Should be usable for queries (even if slower/less efficient) shortly after startup
- [ ] Should be usable with non-WAN DHTs
- [ ] Should be usable with non-WAN DHTs
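
As the updated docs above note, the old all-or-nothing experiment maps directly onto the new knob: setting `Internal.UnixFSShardingSizeThreshold` to `1B` restores the old shard-everything behavior, while a large value such as `1G` effectively disables sharding. The sharness tests below exercise both settings.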
6 changes: 3 additions & 3 deletions go.mod
@@ -30,7 +30,7 @@ require (
github.com/ipfs/go-ipfs-blockstore v0.1.6
github.com/ipfs/go-ipfs-chunker v0.0.5
github.com/ipfs/go-ipfs-cmds v0.6.0
github.com/ipfs/go-ipfs-config v0.16.0
github.com/ipfs/go-ipfs-config v0.16.1-0.20211027214300-047a48592f2a
github.com/ipfs/go-ipfs-exchange-interface v0.0.1
github.com/ipfs/go-ipfs-exchange-offline v0.0.1
github.com/ipfs/go-ipfs-files v0.0.9
@@ -49,11 +49,11 @@ require (
github.com/ipfs/go-merkledag v0.4.0
github.com/ipfs/go-metrics-interface v0.0.1
github.com/ipfs/go-metrics-prometheus v0.0.2
github.com/ipfs/go-mfs v0.1.2
github.com/ipfs/go-mfs v0.1.3-0.20211112012225-23d6734eab23
github.com/ipfs/go-namesys v0.3.1
github.com/ipfs/go-path v0.1.2
github.com/ipfs/go-pinning-service-http-client v0.1.0
github.com/ipfs/go-unixfs v0.2.5
github.com/ipfs/go-unixfs v0.2.7-0.20211112011223-bd53b6a811b1
github.com/ipfs/go-unixfsnode v1.1.3
github.com/ipfs/go-verifcid v0.0.1
github.com/ipfs/interface-go-ipfs-core v0.5.1
Expand Down
23 changes: 15 additions & 8 deletions go.sum
@@ -65,6 +65,8 @@ github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuy
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/alecthomas/units v0.0.0-20210927113745-59d0afb8317a h1:E/8AP5dFtMhl5KPJz66Kt9G0n+7Sn41Fy1wv9/jHOrc=
github.com/alecthomas/units v0.0.0-20210927113745-59d0afb8317a/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
github.com/alexbrainman/goissue34681 v0.0.0-20191006012335-3fc7a47baff5 h1:iW0a5ljuFxkLGPNem5Ui+KBjFJzKg4Fv2fnxe4dvzpM=
github.com/alexbrainman/goissue34681 v0.0.0-20191006012335-3fc7a47baff5/go.mod h1:Y2QMoi1vgtOIfc+6DhrMOGkLoGzqSV2rKp4Sm+opsyA=
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
@@ -385,6 +387,8 @@ github.com/influxdata/influxdb1-client v0.0.0-20191209144304-8bf82d3c094d/go.mod
github.com/ipfs/bbloom v0.0.1/go.mod h1:oqo8CVWsJFMOZqTglBG4wydCE4IQA/G2/SEofB0rjUI=
github.com/ipfs/bbloom v0.0.4 h1:Gi+8EGJ2y5qiD5FbsbpX/TMNcJw8gSqr7eyjHa4Fhvs=
github.com/ipfs/bbloom v0.0.4/go.mod h1:cS9YprKXpoZ9lT0n/Mw/a6/aFV6DTjTLYHeA+gyqMG0=
github.com/ipfs/go-bitfield v1.0.0 h1:y/XHm2GEmD9wKngheWNNCNL0pzrWXZwCdQGv1ikXknQ=
github.com/ipfs/go-bitfield v1.0.0/go.mod h1:N/UiujQy+K+ceU1EF5EkVd1TNqevLrCQMIcAEPrdtus=
github.com/ipfs/go-bitswap v0.0.9/go.mod h1:kAPf5qgn2W2DrgAcscZ3HrM9qh4pH+X8Fkk3UPrwvis=
github.com/ipfs/go-bitswap v0.1.0/go.mod h1:FFJEf18E9izuCqUtHxbWEvq+reg7o4CW5wSAE1wsxj0=
github.com/ipfs/go-bitswap v0.1.2/go.mod h1:qxSWS4NXGs7jQ6zQvoPY3+NmOfHHG47mhkiLzBpJQIs=
@@ -465,8 +469,8 @@ github.com/ipfs/go-ipfs-chunker v0.0.5 h1:ojCf7HV/m+uS2vhUGWcogIIxiO5ubl5O57Q7Na
github.com/ipfs/go-ipfs-chunker v0.0.5/go.mod h1:jhgdF8vxRHycr00k13FM8Y0E+6BoalYeobXmUyTreP8=
github.com/ipfs/go-ipfs-cmds v0.6.0 h1:yAxdowQZzoFKjcLI08sXVNnqVj3jnABbf9smrPQmBsw=
github.com/ipfs/go-ipfs-cmds v0.6.0/go.mod h1:ZgYiWVnCk43ChwoH8hAmI1IRbuVtq3GSTHwtRB/Kqhk=
github.com/ipfs/go-ipfs-config v0.16.0 h1:CBtIYyp/iWIczCv83bmfge8EA2KqxOOfqmETs3tUnnU=
github.com/ipfs/go-ipfs-config v0.16.0/go.mod h1:wz2lKzOjgJeYJa6zx8W9VT7mz+iSd0laBMqS/9wmX6A=
github.com/ipfs/go-ipfs-config v0.16.1-0.20211027214300-047a48592f2a h1:CFy3kjOvnGnLC5XofG91h5mM3CYIzeVjiUcQCqWfpl0=
github.com/ipfs/go-ipfs-config v0.16.1-0.20211027214300-047a48592f2a/go.mod h1:wz2lKzOjgJeYJa6zx8W9VT7mz+iSd0laBMqS/9wmX6A=
github.com/ipfs/go-ipfs-delay v0.0.0-20181109222059-70721b86a9a8/go.mod h1:8SP1YXK1M1kXuc4KJZINY3TQQ03J2rwBG9QfXmbRPrw=
github.com/ipfs/go-ipfs-delay v0.0.1 h1:r/UXYyRcddO6thwOnhiznIAiSvxMECGgtv35Xs1IeRQ=
github.com/ipfs/go-ipfs-delay v0.0.1/go.mod h1:8SP1YXK1M1kXuc4KJZINY3TQQ03J2rwBG9QfXmbRPrw=
@@ -529,7 +533,6 @@ github.com/ipfs/go-log/v2 v2.1.3/go.mod h1:/8d0SH3Su5Ooc31QlL1WysJhvyOTDCjcCZ9Ax
github.com/ipfs/go-log/v2 v2.3.0 h1:31Re/cPqFHpsRHgyVwjWADPoF0otB1WrjTy8ZFYwEZU=
github.com/ipfs/go-log/v2 v2.3.0/go.mod h1:QqGoj30OTpnKaG/LKTGTxoP2mmQtjVMEnK72gynbe/g=
github.com/ipfs/go-merkledag v0.0.6/go.mod h1:QYPdnlvkOg7GnQRofu9XZimC5ZW5Wi3bKys/4GQQfto=
github.com/ipfs/go-merkledag v0.1.0/go.mod h1:SQiXrtSts3KGNmgOzMICy5c0POOpUNQLvB3ClKnBAlk=
github.com/ipfs/go-merkledag v0.2.3/go.mod h1:SQiXrtSts3KGNmgOzMICy5c0POOpUNQLvB3ClKnBAlk=
github.com/ipfs/go-merkledag v0.3.0/go.mod h1:4pymaZLhSLNVuiCITYrpViD6vmfZ/Ws4n/L9tfNv3S4=
github.com/ipfs/go-merkledag v0.3.1/go.mod h1:fvkZNNZixVW6cKSZ/JfLlON5OlgTXNdRLz0p6QG/I2M=
@@ -540,8 +543,10 @@ github.com/ipfs/go-metrics-interface v0.0.1 h1:j+cpbjYvu4R8zbleSs36gvB7jR+wsL2fG
github.com/ipfs/go-metrics-interface v0.0.1/go.mod h1:6s6euYU4zowdslK0GKHmqaIZ3j/b/tL7HTWtJ4VPgWY=
github.com/ipfs/go-metrics-prometheus v0.0.2 h1:9i2iljLg12S78OhC6UAiXi176xvQGiZaGVF1CUVdE+s=
github.com/ipfs/go-metrics-prometheus v0.0.2/go.mod h1:ELLU99AQQNi+zX6GCGm2lAgnzdSH3u5UVlCdqSXnEks=
github.com/ipfs/go-mfs v0.1.2 h1:DlelNSmH+yz/Riy0RjPKlooPg0KML4lXGdLw7uZkfAg=
github.com/ipfs/go-mfs v0.1.2/go.mod h1:T1QBiZPEpkPLzDqEJLNnbK55BVKVlNi2a+gVm4diFo0=
github.com/ipfs/go-mfs v0.1.3-0.20210507195338-96fbfa122164 h1:0ATu9s5KktHhm8aYRSe1ysOJPik3dRwU/uag1Bcz+tg=
github.com/ipfs/go-mfs v0.1.3-0.20210507195338-96fbfa122164/go.mod h1:A525zyeY2o078AoxhjJirOlDTXI1GnZxiYQnESGJ9WU=
github.com/ipfs/go-mfs v0.1.3-0.20211112012225-23d6734eab23 h1:RP+F0BONWIXRDb1x5QXk9PV7AnEosaKZCzU4RInsNvw=
github.com/ipfs/go-mfs v0.1.3-0.20211112012225-23d6734eab23/go.mod h1:+NT2mLpzr3LKkRZbjYSFjUTUzHAqJjPbBfnDsLJPJjw=
github.com/ipfs/go-namesys v0.3.1 h1:DqmeXlVODejOyECAqoqhSB5JGRv8aRFhtG0oPDmxsMc=
github.com/ipfs/go-namesys v0.3.1/go.mod h1:/BL4xk8LP5Lq82AmaRKyxZv/eYRlumNiU9SZUe1Hlps=
github.com/ipfs/go-path v0.0.7/go.mod h1:6KTKmeRnBXgqrTvzFrPV3CamxcgvXX/4z79tfAd2Sno=
@@ -557,10 +562,12 @@ github.com/ipfs/go-peertaskqueue v0.4.0 h1:x1hFgA4JOUJ3ntPfqLRu6v4k6kKL0p07r3RSg
github.com/ipfs/go-peertaskqueue v0.4.0/go.mod h1:KL9F49hXJMoXCad8e5anivjN+kWdr+CyGcyh4K6doLc=
github.com/ipfs/go-pinning-service-http-client v0.1.0 h1:Au0P4NglL5JfzhNSZHlZ1qra+IcJyO3RWMd9EYCwqSY=
github.com/ipfs/go-pinning-service-http-client v0.1.0/go.mod h1:tcCKmlkWWH9JUUkKs8CrOZBanacNc1dmKLfjlyXAMu4=
github.com/ipfs/go-unixfs v0.1.0/go.mod h1:lysk5ELhOso8+Fed9U1QTGey2ocsfaZ18h0NCO2Fj9s=
github.com/ipfs/go-unixfs v0.2.4/go.mod h1:SUdisfUjNoSDzzhGVxvCL9QO/nKdwXdr+gbMUdqcbYw=
github.com/ipfs/go-unixfs v0.2.5 h1:irj/WzIcgTBay48mSMUYDbKlIzIocXWcuUUsi5qOMOE=
github.com/ipfs/go-unixfs v0.2.5/go.mod h1:SUdisfUjNoSDzzhGVxvCL9QO/nKdwXdr+gbMUdqcbYw=
github.com/ipfs/go-unixfs v0.2.6/go.mod h1:GTTzQvaZsTZARdNkkdjDKFFnBhmO3e5mIM1PkH/x4p0=
github.com/ipfs/go-unixfs v0.2.7-0.20211027185217-29ffa004db20 h1:pgaPI+mAg6aqpY9CKj74XZ6Q+Jt8+uZnMv9ZaYylAJQ=
github.com/ipfs/go-unixfs v0.2.7-0.20211027185217-29ffa004db20/go.mod h1:1WeAha/x8lEiiYfIDg7guKC4qAFEmBlNKFn2ztr4MPQ=
github.com/ipfs/go-unixfs v0.2.7-0.20211112011223-bd53b6a811b1 h1:FFSVXA9ns5IqwQRZUgq4GFk7qN1C9LbAwY6CJwk0I4Q=
github.com/ipfs/go-unixfs v0.2.7-0.20211112011223-bd53b6a811b1/go.mod h1:t8BWCW4OvTjcxQsX4e+GFroSZ5fCUXB5ywIMbw9eH/Y=
github.com/ipfs/go-unixfsnode v1.1.2/go.mod h1:5dcE2x03pyjHk4JjamXmunTMzz+VUtqvPwZjIEkfV6s=
github.com/ipfs/go-unixfsnode v1.1.3 h1:IyqJBGIEvcHvll1wDDVIHOEVXnE+IH6tjzTWpZ6kGiI=
github.com/ipfs/go-unixfsnode v1.1.3/go.mod h1:ZZxUM5wXBC+G0Co9FjrYTOm+UlhZTjxLfRYdWY9veZ4=
68 changes: 67 additions & 1 deletion test/sharness/t0250-files-api.sh
@@ -845,7 +845,7 @@ tests_for_files_api "with-daemon"
test_kill_ipfs_daemon

test_expect_success "enable sharding in config" '
ipfs config --json Experimental.ShardingEnabled true
ipfs config --json Internal.UnixFSShardingSizeThreshold "\"1B\""
'

test_launch_ipfs_daemon_without_network
@@ -858,4 +858,70 @@ test_sharding "(cidv1 root)" "--cid-version=1"

test_kill_ipfs_daemon

# Test automatic sharding and unsharding

# We shard based on size with a threshold of 256 KiB (see config file docs)
# above which directories are sharded.
#
# The directory size is estimated as the sum of the sizes of its links. Links are roughly
# the entry name + the CID byte length (e.g. 34 bytes for a CIDv0). So for
# entries of length 10 we need 256 KiB / (34 + 10) ~ 6000 entries in the
# directory to trigger sharding.
test_expect_success "set up automatic sharding/unsharding data" '
mkdir big_dir
for i in `seq 5960` # Just above the number of entries that trigger sharding for 256KiB
do
echo $i > big_dir/`printf "file%06d" $i` # fixed length of 10 chars
done
'
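
# Worked check of the estimate above: 256 KiB = 262144 bytes, and
# 262144 / (34 + 10) ≈ 5958 entries, so seq 5960 lands just past the threshold.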

# TODO: This does not need to report an error https://github.com/ipfs/go-ipfs/issues/8088
test_expect_failure "reset automatic sharding" '
ipfs config --json Internal.UnixFSShardingSizeThreshold null
'

test_launch_ipfs_daemon_without_network

LARGE_SHARDED="QmWfjnRWRvdvYezQWnfbvrvY7JjrpevsE9cato1x76UqGr"
LARGE_MINUS_5_UNSHARDED="QmbVxi5zDdzytrjdufUejM92JsWj8wGVmukk6tiPce3p1m"

test_add_large_sharded_dir() {
exphash="$1"
test_expect_success "ipfs add on directory succeeds" '
ipfs add -r -Q big_dir > shardbigdir_out &&
echo "$exphash" > shardbigdir_exp &&
test_cmp shardbigdir_exp shardbigdir_out
'

test_expect_success "can access a path under the dir" '
ipfs cat "$exphash/file000030" > file30_out &&
test_cmp big_dir/file000030 file30_out
'
}

test_add_large_sharded_dir "$LARGE_SHARDED"

test_expect_success "remove a few entries from big_dir/ to trigger unsharding" '
ipfs files cp /ipfs/"$LARGE_SHARDED" /big_dir &&
for i in `seq 5`
do
ipfs files rm /big_dir/`printf "file%06d" $i`
done &&
ipfs files stat --hash /big_dir > unshard_dir_hash &&
echo "$LARGE_MINUS_5_UNSHARDED" > unshard_exp &&
test_cmp unshard_exp unshard_dir_hash
'

test_expect_success "add a few entries to big_dir/ to retrigger sharding" '
for i in `seq 5`
do
ipfs files cp /ipfs/"$LARGE_SHARDED"/`printf "file%06d" $i` /big_dir/`printf "file%06d" $i`
done &&
ipfs files stat --hash /big_dir > shard_dir_hash &&
echo "$LARGE_SHARDED" > shard_exp &&
test_cmp shard_exp shard_dir_hash
'
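
# Note the round-trip: removing five entries drops the directory below the
# threshold (collapsing it back to a basic directory), and re-adding them
# re-shards it to the exact original CID, so the transition is deterministic
# in both directions.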

test_kill_ipfs_daemon

test_done
33 changes: 19 additions & 14 deletions test/sharness/t0260-sharding.sh
@@ -16,14 +16,14 @@ test_expect_success "set up test data" '
done
'

test_add_large_dir() {
test_add_dir() {
exphash="$1"
test_expect_success "ipfs add on very large directory succeeds" '
test_expect_success "ipfs add on directory succeeds" '
ipfs add -r -Q testdata > sharddir_out &&
echo "$exphash" > sharddir_exp &&
test_cmp sharddir_exp sharddir_out
'
test_expect_success "ipfs get on very large directory succeeds" '
test_expect_success "ipfs get on directory succeeds" '
ipfs get -o testdata-out "$exphash" &&
test_cmp testdata testdata-out
'
@@ -32,24 +32,29 @@ test_add_large_dir() {
test_init_ipfs

UNSHARDED="QmavrTrQG4VhoJmantURAYuw3bowq3E2WcvP36NRQDAC1N"
test_add_large_dir "$UNSHARDED"

test_expect_success "force sharding off" '
ipfs config --json Internal.UnixFSShardingSizeThreshold "\"1G\""
'

test_add_dir "$UNSHARDED"

test_launch_ipfs_daemon

test_add_large_dir "$UNSHARDED"
test_add_dir "$UNSHARDED"

test_kill_ipfs_daemon

test_expect_success "enable sharding" '
ipfs config --json Experimental.ShardingEnabled true
test_expect_success "force sharding on" '
ipfs config --json Internal.UnixFSShardingSizeThreshold "\"1B\""
'

SHARDED="QmSCJD1KYLhVVHqBK3YyXuoEqHt7vggyJhzoFYbT8v1XYL"
test_add_large_dir "$SHARDED"
test_add_dir "$SHARDED"

test_launch_ipfs_daemon

test_add_large_dir "$SHARDED"
test_add_dir "$SHARDED"

test_kill_ipfs_daemon

@@ -93,9 +98,9 @@ test_expect_success "'ipfs resolve' can resolve sharded dirs" '

test_kill_ipfs_daemon

test_add_large_dir_v1() {
test_add_dir_v1() {
exphash="$1"
test_expect_success "ipfs add (CIDv1) on very large directory succeeds" '
test_expect_success "ipfs add (CIDv1) on directory succeeds" '
ipfs add -r -Q --cid-version=1 testdata > sharddir_out &&
echo "$exphash" > sharddir_exp &&
test_cmp sharddir_exp sharddir_out
@@ -109,11 +114,11 @@ test_add_large_dir_v1() {

# this hash implies the directory is CIDv1 and leaf entries are CIDv1 and raw
SHARDEDV1="bafybeibiemewfzzdyhq2l74wrd6qj2oz42usjlktgnlqv4yfawgouaqn4u"
test_add_large_dir_v1 "$SHARDEDV1"
test_add_dir_v1 "$SHARDEDV1"

test_launch_ipfs_daemon

test_add_large_dir_v1 "$SHARDEDV1"
test_add_dir_v1 "$SHARDEDV1"

test_kill_ipfs_daemon

@@ -129,7 +134,7 @@ test_list_incomplete_dir() {

test_expect_success "can list part of the directory" '
ipfs ls "$largeSHA3dir" 2> ls_err_out
echo "Error: merkledag: not found" > exp_err_out &&
echo "Error: failed to fetch all nodes" > exp_err_out &&
cat ls_err_out &&
test_cmp exp_err_out ls_err_out
'