Release v0.12 #8344
I add my edits here, rather than editing the top-level comment, but feel free to port them there. (These edits were moved to the top comment.)
Thank you @hsanjuan – looks great! – I've moved it to the top comment and added some sub-headers.
I have updated the gc interval and the revert workers mention based on fs-repo-migrations#144.
@guseggert: from the 2021-12-14 standup, when we update Discuss to announce RC1, let's ensure we make it clear there is both 0.11.0 and 0.12.0-rc1. Let's also please ensure we deploy the RC to some of our infra.
2021-12-16 conversation:
Currently 0.12-rc1 is on one bank. That is good.
2022-01-04 conversation:
We need to test a large migration on the nft1-cluster nodes before decommissioning it. This will be possible in a few days.
@hsanjuan: is this possible now? Who is owning this?
I'm owning. We can probably start that migration today.
2022-01-18 Remaining work:
After I saw what @aschmahmann described, my MFS was partly broken (I couldn't remove a file which …). This is obviously a blocker for 0.12 for me. The corresponding ticket: #8694
@aschmahmann I can confirm this issue for 0.11 as well. Basically zero IO on the disk, high CPU load (above two cores), and slow as a snake to add more files or to …
2022-02-15 notes:
The spike has been successful and ipfs/fs-repo-migrations#152 should make things a lot faster (I got 10x on a slower HDD, but have seen greater speedups on machines with faster disk access). The plan is to get the last changes reviewed and merged so we can ship. This will come with updated release notes explaining more about how the migration works as well as some tunable parameters. Also, for those following: issue #8694 is not related to v0.12.0, so it's not going to block the v0.12.0 release.
2022-02-28 update: 0.12 has been released: https://github.com/ipfs/go-ipfs/releases/tag/v0.12.0
We're working on wrap-up this week (announcements, ipfs-desktop).
@guseggert: I don't see the blog post under: https://blog.ipfs.io/
Ah thanks, it was premature to resolve this. I don't have a link to the AirTable req, it is just a form. I can follow up with Emily.
Brave release for go-ipfs 0.12 is here: https://github.com/brave/brave-browser/milestone/269
Confirmed blog entry is published.
go-ipfs 0.12.0 Release Notes
We're happy to announce go-ipfs 0.12.0. This release switches the storage of IPLD blocks to be keyed by multihash instead of CID.
As usual, this release includes important fixes, some of which may be critical for security. Unless the fix addresses a bug being exploited in the wild, the fix will not be called out in the release notes. Please make sure to update ASAP. See our release process for details.
🛠 BREAKING CHANGES
`ipfs refs local` will now list all blocks as if they were raw CIDv1 instead of with whatever CID version and IPLD codecs they were stored with. All other functionality should remain the same.
Note: This change also affects ipfs-update, so if you use that tool to manage your go-ipfs installation then grab ipfs-update v1.8.0 from dist.
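To make the listing change concrete, here is a minimal Go sketch (not go-ipfs code; the block bytes are invented) of how a block that was added under one CID is now reported: the stored multihash is simply re-wrapped as a raw CIDv1.

```go
package main

import (
	"fmt"

	cid "github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

func main() {
	// Hash some example bytes to get a multihash, as the blockstore would.
	digest, err := mh.Sum([]byte("example block"), mh.SHA2_256, -1)
	if err != nil {
		panic(err)
	}

	added := cid.NewCidV0(digest)           // a CIDv0 (implicitly dag-pb) the block may have been stored under: Qm...
	listed := cid.NewCidV1(cid.Raw, digest) // the raw CIDv1 the listing now prints for the same multihash: bafkrei...

	fmt.Println("added as: ", added)
	fmt.Println("listed as:", listed)
}
```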
Keep reading to learn more details.
🔦 Highlights
There is only one change since 0.11:
Blockstore migration from full CID to Multihash keys
We are switching the default low-level datastore to be keyed only by the Multihash part of the CID, deduplicating some blocks in the process. The blockstore will become codec-agnostic.
Rationale
The blockstore/datastore layers are not concerned with data interpretation, only with storage of binary blocks and verification that the Multihash they are addressed with (which comes from the CID) matches the block. In fact, different CIDs, with different codec prefixes, may be carrying the same multihash and referencing the same block. Carrying the CID abstraction so low in the stack means potentially fetching and storing the same blocks multiple times just because they are referenced by different CIDs. Prior to this change, a CIDv1 with a `dag-cbor` codec and a CIDv1 with a `raw` codec, both containing the same multihash, would result in two identical blocks being stored. A CIDv0 and a CIDv1 both referring to the same `dag-pb` block would also result in two copies.
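A small illustration of that rationale using the go-cid and go-multihash libraries (the data here is just an example, not anything from the migration itself): two CIDs that differ only in their codec prefix wrap the exact same multihash, so under multihash keying they resolve to a single stored block.

```go
package main

import (
	"fmt"

	cid "github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

func main() {
	// One multihash computed over the same bytes...
	digest, err := mh.Sum([]byte("the same block"), mh.SHA2_256, -1)
	if err != nil {
		panic(err)
	}

	// ...wrapped in two CIDs that differ only in their codec prefix.
	asCbor := cid.NewCidV1(cid.DagCBOR, digest)
	asRaw := cid.NewCidV1(cid.Raw, digest)

	fmt.Println(asCbor) // bafyrei... (dag-cbor)
	fmt.Println(asRaw)  // bafkrei... (raw)
	fmt.Println(asCbor.Hash().B58String() ==
		asRaw.Hash().B58String()) // true: same multihash, so now a single stored block
}
```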
How migration works
In order to perform the switch and start referencing all blocks by their multihash, a migration will occur on update. This migration will take the repository version from 11 (current) to 12.
One thing to note is that any content addressed by CIDv0 (all the hashes that start with `Qm...`, the current default in go-ipfs) does not need any migration, as CIDv0s are raw multihashes already. This means the migration will be very lightweight for the majority of users.
The migration process will take care of re-keying any CIDv1 block so that it is only addressed by its multihash. Large nodes with lots of CIDv1-addressed content will need to go through a heavier process as the migration happens. This is how the migration works:
The migration script will walk the blockstore and write every CIDv1 key it finds to a file named `11-to-12-cids.txt` in the go-ipfs configuration folder. Nothing is rewritten in this first phase; it only serves to identify the keys that will be migrated in phase 2, when each of the listed blocks is re-keyed by its bare multihash.
At every sync, the migration emits a log message showing how many blocks need to be rewritten and how far the process is.
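As a rough illustration of that first phase (a sketch only, not the actual fs-repo-11-to-12 code; the keys are fabricated), the idea is to record only CIDv1 keys, since CIDv0 keys are already bare multihashes:

```go
package main

import (
	"fmt"
	"os"

	cid "github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

func main() {
	// Two made-up keys over the same bytes: a CIDv0 and a CIDv1 (raw codec).
	digest, _ := mh.Sum([]byte("example block"), mh.SHA2_256, -1)
	keys := []cid.Cid{
		cid.NewCidV0(digest),          // CIDv0: already a bare multihash, skipped
		cid.NewCidV1(cid.Raw, digest), // CIDv1: recorded for re-keying in phase 2
	}

	// Append rather than truncate, so an interrupted run can be retried
	// without losing earlier entries (see the retries section below).
	f, err := os.OpenFile("11-to-12-cids.txt", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	for _, c := range keys {
		if c.Version() == 1 {
			fmt.Fprintln(f, c) // only CIDv1s need re-keying in phase 2
		}
	}
}
```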
FlatFS specific migration
For those using a single FlatFS datastore as their backing blockstore (i.e. the default behavior), the migration (but not reversion) will take advantage of the ability to easily move/rename the blocks to improve migration performance.
Unfortunately, other common datastores do not support renames, which is what makes this optimization FlatFS-specific. If you are running a large custom datastore that supports renames, you may want to consider running a fork of fs-repo-11-to-12 specific to your datastore.
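As a loose illustration (hypothetical paths, not the real flatfs layout or the actual migration code), the FlatFS fast path boils down to renaming a block's underlying file instead of reading and rewriting it:

```go
package main

import (
	"fmt"
	"os"
)

// moveBlock sketches the FlatFS fast path: re-keying a block is a cheap
// filesystem rename of its .data file rather than a read-and-rewrite.
func moveBlock(oldPath, newPath string) error {
	return os.Rename(oldPath, newPath)
}

func main() {
	// Hypothetical paths; real FlatFS stores blocks as sharded .data files.
	if err := moveBlock("blocks/AA/old-cid-key.data", "blocks/BB/new-multihash-key.data"); err != nil {
		fmt.Println("rename failed (expected here, the paths are made up):", err)
	}
}
```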
If you want to disable this behavior, set the environment variable `IPFS_FS_MIGRATION_11_TO_12_ENABLE_FLATFS_FASTPATH` to `false`.
Migration configuration
For those who want to tune the migration more precisely for their setups, there are two environment variables to configure:
- `IPFS_FS_MIGRATION_11_TO_12_NWORKERS`: an integer describing the number of migration workers - defaults to 1
- `IPFS_FS_MIGRATION_11_TO_12_SYNC_SIZE_BYTES`: an integer describing the number of bytes after which migration workers will sync - defaults to 104857600 (i.e. 100MiB)
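A hedged sketch of how knobs like these are typically consumed (not the migration's actual code, just an illustration of the documented defaults): read each variable and fall back to its default when unset.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// intFromEnv returns the integer value of an environment variable,
// or the given default when it is unset or unparsable.
func intFromEnv(name string, def int) int {
	if v := os.Getenv(name); v != "" {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return def
}

func main() {
	workers := intFromEnv("IPFS_FS_MIGRATION_11_TO_12_NWORKERS", 1)
	syncBytes := intFromEnv("IPFS_FS_MIGRATION_11_TO_12_SYNC_SIZE_BYTES", 104857600) // 100MiB

	fmt.Printf("using %d worker(s), syncing every %d bytes\n", workers, syncBytes)
}
```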
Migration caveats
Large repositories with very large numbers of CIDv1s should be mindful of the migration process:
Migration interruptions and retries
If a problem occurs during the migration, it is possible to simply re-start and retry it: the migration will not overwrite the existing `11-to-12-cids.txt` file, but only append to it (so that the list of things we were supposed to have migrated during our first attempt is not lost - this is important for reverts, see below).
Migration reverts
It is also possible to revert the migration after it has succeeded, for example to go to a previous go-ipfs version (<=0.11), even after starting and using go-ipfs in the new version (>=0.12). The revert process works as follows:
The `11-to-12-cids.txt` file is read; it has the list of all the CIDv1s that had to be rewritten for the migration. Each of the listed blocks is made addressable by its CIDv1 again.
The revert process does not delete any blocks; it only makes sure that blocks that were accessible with CIDv1s before the migration are again keyed with CIDv1s. This may result in a datastore becoming twice as large (i.e. if all the blocks were CIDv1-addressed before the migration). This is however done this way to cover corner cases: users can add CIDv1s after the migration, which may reference blocks that existed as CIDv0 before the migration. The revert aims to ensure that no data becomes unavailable on downgrade.
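A rough sketch of the revert idea (illustrative only; a map stands in for the real datastore and this is not the fs-repo-11-to-12 code): walk the recorded CIDv1s and copy each block back under its CID-derived key, leaving the multihash-keyed copy in place.

```go
package main

import (
	"bufio"
	"fmt"
	"os"

	cid "github.com/ipfs/go-cid"
)

func main() {
	// Stand-in for the real datastore: blocks keyed by bare multihash.
	store := map[string][]byte{}

	f, err := os.Open("11-to-12-cids.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	restored := 0
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		c, err := cid.Decode(scanner.Text())
		if err != nil {
			continue // skip malformed lines
		}
		// Copy, don't move: both the CIDv1 key and the multihash key stay
		// readable, which is why the datastore can grow after a revert.
		if blk, ok := store[string(c.Hash())]; ok {
			store[c.String()] = blk
			restored++
		}
	}
	fmt.Println("re-added CIDv1 keys for", restored, "blocks")
}
```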
While go-ipfs will auto-run the migration for you, it will not run the reversion. To do so you can download the latest migration binary or use ipfs-update.
Custom datastores
As with previous migrations, if you work with custom datastores and want to leverage the migration, you can run a fork of fs-repo-11-to-12 specific to your datastore. The repo includes instructions on building for different datastores.
For this migration, if your datastore has fast renames, you may want to consider writing some code to leverage the particular efficiencies of your datastore, similar to what was done for FlatFS.
🚢 Estimated shipping date
RC1: 2021-12-13
ECD: 2022-01-06
✅ Release Checklist
For each RC published in each stage:
- the version in `version.go` has been updated (in the `release-vX.Y.Z` branch)
- the commit has been tagged with `vX.Y.Z-rcN`

Checklist:
- Create the release branch (`release-vX.Y.Z`) from `master` and make any further release related changes to this branch. If any "non-trivial" changes (see the footnotes of docs/releases.md for a definition) get added to the release, uncheck all the checkboxes and return to this stage.
- Bump the version in `version.go` in the `master` branch to `vX.(Y+1).0-dev`.
- Ensure tests pass (`make test`).
- Ensure linting passes (`make test_go_lint`).
- Run `./bin/mkreleaselog` to generate a nice starter list.
- Verify that `version.go` has been updated.
- Merge `release-vX.Y.Z` into the `release` branch.
- Tag this merge commit (on the `release` branch) with `vX.Y.Z`.
- Merge the `release` branch back into `master`, ignoring the changes to `version.go` (keep the `-dev` version from master).

The best place to ask your questions about IPFS, how it works and what you can do with it is at discuss.ipfs.io. We are also available at the `#lobby:ipfs.io` Matrix channel, which is bridged with other chat platforms.

Release improvements for next time
< Add any release improvements that were observed this cycle here so they can get incorporated into future releases. >