
Encrypted Indirect BPs erroneously MAC byteorder and compression bits #6845

Closed
tcaputi opened this issue Nov 7, 2017 · 28 comments
tcaputi commented Nov 7, 2017

While looking into #6806 I discovered 2 small errors in the on-disk format for encrypted datasets that present problems with regard to raw sends. Indirect BPs include a checksum-of-MACs of a few fields in all of the BPs below. The way this is supposed to work is that the checksum-of-MACs only protects fields which can be preserved when doing a raw zfs send -w. However, the bug is that compression and byte order are included in these MACs, which is not portable to other systems.
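The checksum-of-MACs scheme described above can be sketched as a toy model. Everything here is an assumption for illustration: the field names, sizes, and the use of HMAC-SHA256 are stand-ins for the real on-disk MAC scheme, not the actual ZFS code.

```python
# Illustrative sketch only -- not the actual ZFS code or on-disk layout.
import hashlib
import hmac
import struct

def portable_bp_mac(key, child_macs, checksum_type, crypt_type):
    """MAC only the fields that survive a raw send (zfs send -w).

    Byteorder and compression are deliberately excluded: the receiving
    system may have a different native endianness or recompress blocks,
    so including them would make the MAC fail to verify after a raw recv.
    """
    msg = child_macs + struct.pack('<BB', checksum_type, crypt_type)
    return hmac.new(key, msg, hashlib.sha256).digest()

def buggy_bp_mac(key, child_macs, checksum_type, crypt_type,
                 byteorder, compression):
    """The bug: non-portable byteorder/compression leak into the MAC input."""
    msg = child_macs + struct.pack(
        '<BBBB', checksum_type, crypt_type, byteorder, compression)
    return hmac.new(key, msg, hashlib.sha256).digest()

key = b'\x00' * 32
child_macs = b'\x11' * 16

# Same logical block as seen by two hosts that disagree only on the
# non-portable fields (e.g. LZ4 vs. recompressed, LE vs. BE):
sender = buggy_bp_mac(key, child_macs, 7, 2, byteorder=1, compression=15)
receiver = buggy_bp_mac(key, child_macs, 7, 2, byteorder=0, compression=2)
assert sender != receiver  # authentication of the raw stream fails

# The fixed scheme never mixes those fields in, so both hosts agree:
assert portable_bp_mac(key, child_macs, 7, 2) == \
       portable_bp_mac(key, child_macs, 7, 2)
```

The point of the sketch is only that any field fed into the MAC becomes immutable across a raw send; fields the receiver may legitimately change must be left out.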

On its own, this wouldn't be a big problem. We could simply adjust the on-disk format so that it overrides the real values with LZ4 compression and little endian byte order in all cases, since these 2 values are by far the most commonly used in production. This would mean virtually nobody would notice the on-disk format "change". Unfortunately, there is another much less serious bug where indirect dnode blocks are not getting compressed. The way that these 2 bugs interact would require us to always disable compression for encrypted indirect dnode blocks, which could have a significant performance impact.

I am currently working on a patch to correct this issue, although it will almost definitely require breaking existing pools that are using encryption. I am creating this ticket to help people watch the progress on this issue and to try to address any concerns they may have.

@tcaputi tcaputi self-assigned this Nov 7, 2017
@sempervictus

Something about eggs and omelettes...? The likelihood of having to merc existing pools isn't great, but that's why it's not in 0.7.x.
Would we have to destroy the entire pool, or could dropping encrypted DS' suffice?


tcaputi commented Nov 8, 2017

Would we have to destroy the entire pool, or could dropping encrypted DS' suffice?

Dropping encrypted filesystems is good enough. I just worry about people doing something like sending their data to an unencrypted dataset and then sending it back after patching, since technically this writes out all of their data in plaintext and zfs doesn't have a secure delete functionality yet.

We did delay the tagging of encryption until 0.8.0 for this reason specifically, but I still feel bad for everyone who has been helping to test it.


sempervictus commented Nov 8, 2017 via email


tcaputi commented Nov 8, 2017

Re secure discard, whatever happened to forcing TRIM anyway? Thought for sure that'd land in 0.7.0.

There are still some people working on it, but I'm not sure what happened to it upstream.


cytrinox commented Nov 8, 2017

I have a production system running with the encryption patches from Sep '16 and plan to migrate to current git master in the next few weeks (I have a mirror of all zfs data on a LUKS ext4 volume). Good to see that there is a pending issue; I will wait until this issue is fixed.

Might it be possible to add a GitHub issue tag for issues which will break ODS? I don't look into the zfs issue tracker every day, and a tag would allow early adopters to check if there are ODS issues before switching to a new code revision. If 0.8.0 is expected in 3-5 months, we don't need a tag. But if it may take 1-2 years until 0.8.0 with crypto is released, it would really help.

@behlendorf

@cytrinox we could add a new tag for PRs which change the on-disk format, but I'm not sure how helpful it would be. To be clear, the only time we change the on-disk format is when introducing a new feature flag, and we do our best to ensure those changes have been finalized before the PR is merged. To date this is the first time we've changed the format after merging a PR which adds a feature flag, and it was only an option because the feature has not yet been included in any tagged release.


cytrinox commented Nov 9, 2017

@behlendorf then my request for a tag is nonsense.

tcaputi pushed a commit to datto/zfs that referenced this issue Nov 9, 2017
The current on-disk format for encrypted datasets protects
not only the encrypted and authenticated blocks, but also
the order and interpretation of these blocks. In order to
make this work while maintaining the ability to do raw sends
the indirect bps maintain a secure checksum of all the MACs
in the block below it, along with a few other fields that
determine how the data is interpreted.

Unfortunately, the current on-disk format erroneously
includes the byteorder and compression of the blocks below,
which is not portable and thus cannot support raw sends.
Unfortunately, it is also not possible to easily work around
this issue due to a separate and much smaller bug which
causes indirect blocks for dnodes to not be compressed.

This patch zeroes out the byteorder and compression when
computing the MAC (as they should have been) and registers
an errata for the on-disk format bug.

Signed-off-by: Tom Caputi <tcaputi@datto.com>

ronnyegner commented Nov 9, 2017

@tcaputi There's nothing to be worried about; some users (including me) now have some work to do. In the end we all know (or should know) that this can happen in master.

What is currently a bit unclear to me is the fix. I see the fix for #6845 has been merged into master. So obviously the first step is to upgrade to master.
And then? Would it be enough to create a new master encryption root and just "copy" (= use the cp command) the data? And for zfs send I understand that we need to send to an uncompressed file system, create a new encryption root and zfs send unencrypted data to the new encryption root (or below)?


tcaputi commented Nov 9, 2017

I see the fix for #6845 has been merged into master.

It has not. I have not made the PR yet. That branch is where I am staging my work for this, but it is not done yet. Most notably, it still needs some decisions about how to handle reporting the errata to the user and a new test to ensure this doesn't get broken once it's fixed.

And then? Would it be enough to create a new master encryption root and just "copy" (= use the cp command) the data?

Unfortunately it's a little more complicated than that. You can only read the data on the old software version, and the problem won't be corrected until the new version. So you will actually have to move it somewhere else, delete the encrypted datasets, upgrade your software, and copy it back. This isn't ideal because for complete security you wouldn't want to put your encrypted data in plaintext on the pool (since zfs doesn't currently support secure deletion), so you need separate storage for this elsewhere. I don't really have a better answer for this at the moment.
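The move-off, upgrade, copy-back procedure described here can be sketched roughly as follows. The pool and dataset names (tank/secure, scratch/plaintext) and the create options are hypothetical placeholders; adjust them for your own layout.

```shell
# Sketch only -- hypothetical names, run against your own pools.

# 1. On the OLD software: copy the data off the affected encrypted
#    dataset. A normal (non-raw) send decrypts, so this copy is
#    PLAINTEXT and should live on separate scratch storage.
zfs snapshot tank/secure@migrate
zfs send tank/secure@migrate | zfs recv scratch/plaintext

# 2. Destroy the affected encrypted dataset(s), then upgrade ZFS.
zfs destroy -r tank/secure

# 3. On the NEW (fixed) software: recreate the encryption root and
#    copy the data back; it is re-encrypted on write.
zfs create -o encryption=on -o keyformat=passphrase tank/secure
zfs snapshot scratch/plaintext@migrate
zfs send scratch/plaintext@migrate | zfs recv tank/secure/data

# 4. Remove the plaintext copy. Note this is an ordinary delete, not a
#    secure erase -- zfs has no secure-delete functionality yet.
zfs destroy -r scratch/plaintext
```

The key caveat, as noted above, is step 4: deleting the scratch copy does not scrub the plaintext from the underlying media.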

And for zfs send i understand that we need to send to an uncompressed file system, create a new encryption root and zfs send unencrypted data to the new encryption root (or below)?

I think you meant unencrypted, but yes.

As a part of this PR, I am making a related PR to the ZoL website to include information about this errata and what the recommended actions are. zpool status will display a link to this page if it detects the problem (as it does for the existing ones).


ronnyegner commented Nov 9, 2017

@tcaputi Thanks for the clarification. Yes, I meant unencrypted and not uncompressed.

Since it has not yet been merged into master: Could you please give us a heads-up here when we can start migrating (however painful it will be)?


tcaputi commented Nov 9, 2017

I will. @behlendorf and I might have a way to make this a lot less painful (you'd just have to do zfs send | zfs recv). I'll have more updates when I'm further along in the implementation.

tcaputi pushed a commit to datto/zfs that referenced this issue Nov 13, 2017
The current on-disk format for encrypted datasets protects
not only the encrypted and authenticated blocks, but also
the order and interpretation of these blocks. In order to
make this work while maintaining the ability to do raw sends
the indirect bps maintain a secure checksum of all the MACs
in the block below it, along with a few other fields that
determine how the data is interpreted.

Unfortunately, the current on-disk format erroneously
includes some fields which are not portable and thus cannot
support raw sends. It is also not possible to easily work
around this issue due to a separate and much smaller bug
which causes indirect blocks for encrypted dnodes to not
be compressed, which conflicts with the previous bug. In
addition, raw send streams do not currently include
dn_maxblkid which is needed in order to ensure that we are
correctly maintaining the portable objset MAC.

This patch zeroes out the offending fields when computing the
bp MAC (as they should have been) and registers an errata for
the on-disk format bug. We detect the errata by adding a
"version" field to newly created DSL Crypto Keys. We allow
datasets without a version (version 0) to only be mounted for
read so that they can easily be migrated. We also now include
dn_maxblkid in raw send streams to ensure the MAC can be
maintained correctly.

Signed-off-by: Tom Caputi <tcaputi@datto.com>
tcaputi pushed a commit to datto/zfs that referenced this issue Nov 13, 2017
tcaputi pushed a commit to datto/zfs that referenced this issue Nov 13, 2017

tcaputi commented Nov 14, 2017

I just pushed a PR (#6864) for this issue. Please note that it is not yet complete and should not be used (except for testing purposes) until it is merged.

tcaputi pushed a commit to datto/zfs that referenced this issue Dec 1, 2017

Redsandro commented Dec 5, 2017

@tcaputi said:

@behlendorf and I might have a way to make this a lot less painful (you'd just have to do zfs send | zfs recv).

This sounds interesting. Would this mean you could keep the same pool, keep the unencrypted volumes, and just transfer the encrypted volumes to new volumes within the same pool, on the same system, using a single ZoL version?

Effectively making said new ZoL version able to read both old broken and new fixed type encrypted storage?

I would love to see this trick make it into the eventual tagged release that officially supports encryption. I'm ashamed to admit that during testing, encryption worked so beautifully that I let my dataset grow beyond what I initially bargained for. I'm considering keeping this around until the final tagged version makes it to mainline.


tcaputi commented Dec 5, 2017

@Redsandro
This is implemented in #6864, which I am currently wrapping up. I wouldn't start using it until it's merged, though.

@Redsandro

@tcaputi I will leave it alone until it is tagged together with the final version of native encryption.
I was wondering about one thing. The 'broken' filesystem is considered version=0? Because my 'broken' filesystem seems to be at version 5:

$ sudo zfs list -o name,mountpoint,encryption,version | grep -v legacy
NAME          MOUNTPOINT     ENCRYPTION  VERSION
pool          /pool                 off        5
pool/store    /mnt/store    aes-256-ccm        5
pool/files    /mnt/files            off        5
pool/test     /mnt/test     aes-256-ccm        5
pool/media    /mnt/media            off        5

I also notice that pool version is empty:

$ sudo zpool get version
NAME  PROPERTY  VALUE    SOURCE
pool  version   -        default

If this is not a bug, and the parameter is not redundant either, the output could be improved with a message explaining why the version is empty and how compatibility with a certain zfs version on a certain server can be determined instead.


tcaputi commented Dec 5, 2017

The 'broken' filesystem is considered version=0? Because my 'broken' filesystem seems to be at version 5

The version that you are looking at is the ZPL version, which is different from the encryption version that we are adding here. The ZPL version determines how objects in a ZFS dataset relate to each other to present a filesystem. The encryption version refers to how these objects are protected.

I also notice that pool version is empty:

The pool version is essentially deprecated (although we can never really get rid of it). Since OpenZFS became an open source project, the latest version of a ZFS pool is 5000 (which shows up as blank, as you have seen). We now use feature flags instead, which are a bit more conducive to having many developers and companies work on the project at once. This is documented in the man pages.

For the moment we are not planning on exposing the encryption version. Since ZFS native encryption is still not in a tagged release, we are handling the old format by calling it an on-disk errata. Unlike older ZPL versions which were functional, the version 0 encryption implementation cannot work the way it was intended in all circumstances, so we don't want to support it (beyond allowing users to fix the problem) going forward.


Redsandro commented Dec 5, 2017

@tcaputi thank you for the elaborate response. Just curious if the "allowing users to fix the problem" PR will be only in a non-tagged release for fixing purposes (to which we should pay close attention), or if it is planned to be available with the finalized encryption in a tagged version.


tcaputi commented Dec 5, 2017

It will be in the tagged release and maintained going forward.

tcaputi pushed a commit to datto/zfs that referenced this issue Dec 21, 2017
The on-disk format for encrypted datasets protects not only
the encrypted and authenticated blocks themselves, but also
the order and interpretation of these blocks. In order to
make this work while maintaining the ability to do raw
sends, the indirect bps maintain a secure checksum of all
the MACs in the block below it along with a few other
fields that determine how the data is interpreted.

Unfortunately, the current on-disk format erroneously
includes some fields which are not portable and thus cannot
support raw sends. It is not possible to easily work around
this issue due to a separate and much smaller bug which
causes indirect blocks for encrypted dnodes to not be
compressed, which conflicts with the previous bug. In
addition, the current code generates incompatible on-disk
formats on big endian and little endian systems due to an
issue with how block pointers are authenticated. Finally,
raw send streams do not currently include dn_maxblkid when
sending both the metadnode and normal dnodes which are
needed in order to ensure that we are correctly maintaining
the portable objset MAC.

This patch zeroes out the offending fields when computing
the bp MAC and ensures that these MACs are always
calculated in little endian order (regardless of the host
system's byte order). This patch also registers an errata
for the old on-disk format, which we detect by adding a
"version" field to newly created DSL Crypto Keys. We allow
datasets without a version (version 0) to only be mounted
for read so that they can easily be migrated. We also now
include dn_maxblkid in raw send streams to ensure the MAC
can be maintained correctly.

Fixes openzfs#6845

Signed-off-by: Tom Caputi <tcaputi@datto.com>
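The byte-order canonicalization this commit message describes can be sketched as follows. The helper names and field layout are assumptions for illustration, not the actual ZFS implementation; the idea is simply that every MACed field is serialized in an explicit little-endian layout rather than hashing raw in-memory words.

```python
# Illustrative sketch -- hypothetical helpers, not the actual ZFS code.
import struct

def mac_input_canonical(fields):
    """Serialize 64-bit BP fields little-endian regardless of host order.

    '<Q' forces little-endian, so the bytes fed into the MAC are
    identical on big-endian and little-endian systems.
    """
    return b''.join(struct.pack('<Q', f) for f in fields)

def mac_input_native(fields):
    """The broken approach: '=' uses the host's native byte order, so a
    BE host and an LE host MAC different bytes for the same fields."""
    return b''.join(struct.pack('=Q', f) for f in fields)

fields = [0x1234, 0xDEADBEEF]
# The canonical serialization is byte-for-byte fixed on every host:
assert mac_input_canonical(fields) == struct.pack('<QQ', 0x1234, 0xDEADBEEF)
```

On a little-endian test machine the two helpers happen to agree; the divergence only shows up on big-endian hardware, which is exactly why the bug produced incompatible on-disk formats across architectures.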
prometheanfire pushed a commit to prometheanfire/zfs that referenced this issue Dec 30, 2017
prometheanfire pushed a commit to prometheanfire/zfs that referenced this issue Dec 30, 2017
prometheanfire pushed a commit to prometheanfire/zfs that referenced this issue Jan 10, 2018
prometheanfire pushed a commit to prometheanfire/zfs that referenced this issue Jan 11, 2018
prometheanfire pushed a commit to prometheanfire/zfs that referenced this issue Jan 11, 2018
prometheanfire pushed a commit to prometheanfire/zfs that referenced this issue Jan 12, 2018
tcaputi pushed a commit to datto/zfs that referenced this issue Jan 16, 2018
tcaputi pushed a commit to datto/zfs that referenced this issue Jan 16, 2018
prometheanfire pushed a commit to prometheanfire/zfs that referenced this issue Jan 17, 2018
prometheanfire pushed a commit to prometheanfire/zfs that referenced this issue Jan 17, 2018
tcaputi pushed a commit to datto/zfs that referenced this issue Jan 17, 2018
tcaputi pushed a commit to datto/zfs that referenced this issue Jan 19, 2018
tcaputi pushed a commit to datto/zfs that referenced this issue Jan 24, 2018
tcaputi pushed a commit to datto/zfs that referenced this issue Jan 24, 2018
behlendorf pushed a commit to behlendorf/zfs that referenced this issue Jan 25, 2018
tcaputi pushed a commit to datto/zfs that referenced this issue Jan 26, 2018
tcaputi pushed a commit to datto/zfs that referenced this issue Jan 31, 2018
@Redsandro

Redsandro commented Feb 3, 2018

I'm happy to learn this issue was closed. Looking forward to converting some encrypted datasets. I understand that this was a complex issue, and I'm not sure if there are other issues related to encryption that need to be fixed. What tag should I be watching for?

Moved to archzfs/archzfs#222

@sjau

sjau commented Feb 3, 2018

The #6864 stability patch did fix a lot of issues with encryption. For me things work well with the integrated patch. I haven't tested raw sending so far.

Also, I'd recommend that you have at least two live usb drives available: one that loads the "old" zfs without the stability patch and one that loads the new zfs with the stability patch. The reason is that when you boot the system with the new zfs, the old encrypted datasets become read-only, whereas with the old live usb the old encrypted datasets can still be accessed as normal but you can't open the new encrypted ones.

I had to do this because one dataset was too big. So I rsynced parts from the ro-mounted old dataset, booted into the old zfs usb, removed the rsynced part, then booted back into the new zfs and rsynced the rest.

@Redsandro

Hi @sjau

Also I'd recommend that you have at least two live usb drives available.

This is actually really clever. I was planning to use Timeshift to sort of go back and forth in order to do some testing and comparing file hashes, but I haven't ever gone back so I'm unsure how reliable it is.

Can you recommend a way to get a stick with ZFS as simply as possible? Did you write an automation for personal use that you can share?

I had to use this because one dataset was too big.

This sounds worrying. zfs send has a size limit? Oh you mean there wasn't enough free space in the pool! Gotcha.

@sjau

sjau commented Feb 3, 2018

give me your keyboard layout and I can generate isos for you... I use NixOS and it has some really great features (reproducible builds, atomic upgrades yadda yadda yadda) but it's a bitch trying to package new software for it....

Those will be "installer" isos but basically it boots up and drops you to root shell so that you could then actually setup nixos etc... but you can also do partitioning and stuff and play with zfs just nicely.

As for the pool: In nixos it was recommended to make a container for the encrypted datasets like tank/encryption/nixos. Since the top-level dataset "tank" isn't encrypted, you could just create a new encryption dataset, load the zfs key for the old one as well, and just do zfs send/recv.... this only works if you have enough storage left in your pool for that dataset :)

(I had a 300GB dataset and only 150GB left on my notebook)

@tcaputi
Contributor Author

tcaputi commented Feb 3, 2018

@Redsandro The patch covered all of the issues that I am currently aware of and have been able to reproduce. Hopefully, this should be the last of the on-disk changes for this feature (although we now have a mechanism to deal with them if they arise in the future). There is currently one other encryption-related patch #7115 that I expect should be merged in the next few days. It fixes a small issue that we have only ever hit in ztest, which basically races as much code as possible against each other.

My next biggest priority (with regards to encryption) is to implement support for zfs recv -o / -x with regards to encryption properties, which will make non-raw sends of encrypted zfs filesystems a lot easier to do. This actually has a few implications for the command line utilities, particularly entering keys via stdin. Currently, you can create a dataset and pass encryption keys in via:

<command that outputs key to stdout> | zfs create -o encryption=on ...

However, if you want to receive a new filesystem and encrypt it this won't work because zfs recv already uses stdin for the send file. We will need a way to work around this, but we have a few ideas. Other than this, I do not foresee any other changes to the encryption code (other than bug fixes).
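The underlying problem is a plumbing one: stdin is already consumed by the send stream, so the key has to arrive over a different channel. One general pattern (sketched here in Python; `run_with_key_fd` and the child convention of taking the key's fd number as an argument are hypothetical, not anything `zfs` actually supports) is to pass the secret over a separate inherited file descriptor while stdin carries the bulk data:

```python
import os
import subprocess

def run_with_key_fd(argv, stream: bytes, key: bytes) -> bytes:
    """Run a child process whose stdin carries the bulk stream,
    passing the secret over a separate inherited pipe. The child
    is told which fd number to read the key from via its last
    argument (a made-up convention for this sketch)."""
    r, w = os.pipe()
    child = subprocess.Popen(argv + [str(r)],
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE,
                             pass_fds=(r,))   # keep r open in the child
    os.write(w, key)   # hand over the secret...
    os.close(w)        # ...and signal EOF on the key channel
    out, _ = child.communicate(stream)  # stdin stays free for the stream
    os.close(r)
    return out
```

This is roughly the shape of the design space being discussed: either the receiving tool learns to read keys from somewhere other than stdin (an fd, a file, a prompt on the controlling tty), or the key has to be staged out-of-band before the receive starts.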

@Redsandro

Redsandro commented Feb 3, 2018

@tcaputi said:

However, if you want to receive a new filesystem and encrypt it this won't work because zfs recv already uses stdin for the send file.

My first thought was file descriptors and/or process substitution, but after hacking in bash for approximately the time between your comment and this one, I haven't been able to pipe data and import a password <(echo "like so") and keep them separate. I'm looking forward to hearing what you've come up with.

@tcaputi said:

<command that outputs key to stdout> | zfs 

Speaking of this, (how) can I use a key pipe with the mount command? I'm trying to figure out the best way to mount an encrypted dataset on a server from a laptop where the key is only on the laptop. But I can't seem to get it to work. The following non-working command illustrates what I'm trying to accomplish.

`<command that outputs key to stdout>` | ssh user@server zfs mount -l pool/encrypted

@sjau

sjau commented Feb 3, 2018

I somehow fail to see the issue here. If you can zfs send / recv then you already have access to both sides... why not just load-key first and then just issue zfs send / recv command?

@tcaputi
Contributor Author

tcaputi commented Feb 3, 2018

My first thought was file descriptors and/or process substitution, but after hacking in bash for approximately the time between your comment and this one, I haven't been able to pipe data and import a password <(echo "like so") and keep them separate. I'm looking forward to hear what you've come up with.

Yeah, I was able to do some things, but nothing with a clean user interface.

Speaking of this, (how) can I use a key pipe with the mount command? I'm tring to figure out the best way to mount an encrypted dataset on a server from a laptop where the key is only on the laptop. But I can't seem to get it to work. The following non-working command illustrates what I'm trying to accomplish.

I don't know what's up. This worked for me. (You should not use echo in production):

echo 'password' | ssh tom@localhost 'sudo zfs mount -l pool/encrypted'

I somehow fail to see the issue here. If you can zfs send / recv then you already have access to both sides... why not just load-key first and then just issue zfs send / recv command?

The problem is that receiving for the first time requires both the passphrase and the stream in the same command. I can't load the key first because the dataset for it doesn't exist yet.

@dswartz
Contributor

dswartz commented Feb 3, 2018 via email

Nasf-Fan pushed a commit to Nasf-Fan/zfs that referenced this issue Feb 13, 2018
The on-disk format for encrypted datasets protects not only
the encrypted and authenticated blocks themselves, but also
the order and interpretation of these blocks. In order to
make this work while maintaining the ability to do raw
sends, the indirect bps maintain a secure checksum of all
the MACs in the block below it along with a few other
fields that determine how the data is interpreted.

Unfortunately, the current on-disk format erroneously
includes some fields which are not portable and thus cannot
support raw sends. It is not possible to easily work around
this issue due to a separate and much smaller bug which
causes indirect blocks for encrypted dnodes to not be
compressed, which conflicts with the previous bug. In
addition, the current code generates incompatible on-disk
formats on big endian and little endian systems due to an
issue with how block pointers are authenticated. Finally,
raw send streams do not currently include dn_maxblkid when
sending both the metadnode and normal dnodes which are
needed in order to ensure that we are correctly maintaining
the portable objset MAC.

This patch zeros out the offending fields when computing
the bp MAC and ensures that these MACs are always
calculated in little endian order (regardless of the host
system's byte order). This patch also registers an errata
for the old on-disk format, which we detect by adding a
"version" field to newly created DSL Crypto Keys. We allow
datasets without a version (version 0) to only be mounted
for read so that they can easily be migrated. We also now
include dn_maxblkid in raw send streams to ensure the MAC
can be maintained correctly.

This patch also contains minor bug fixes and cleanups.

Reviewed-by: Jorgen Lundman <lundman@lundman.net>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed by: Matthew Ahrens <mahrens@delphix.com>
Signed-off-by: Tom Caputi <tcaputi@datto.com>
Closes openzfs#6845
Closes openzfs#6864
Closes openzfs#7052
jasonbking pushed a commit to TritonDataCenter/illumos-joyent that referenced this issue Mar 14, 2019
@Redsandro

Redsandro commented Feb 27, 2021

Edit: Nevermind, I'll destroy the pool and try something else.

Original message below.


@tcaputi @behlendorf I have this dataset from back in 2017 giving problems during transfer for the purpose of updating, and then I remember we discussed this issue above. The faulty received sets cannot be removed. Can this be fixed, or do I need to destroy the whole pool, including the datasets that are fine?

PS - The original data is fine and is also safely backed up. Just wondering if this is doable locally because it would safe me a lot of time.

For more details, see #11661
