Native encryption - datasets do not mount with newer version #6597
Same here, 100% reproducible after `zfs send | zfs recv` of an encrypted filesystem. `zpool scrub` comes back clean and all other operations work fine; only mount fails.
TL;DR More detail for developers: There was already a mechanism in place for masking out dn_flags that are not portable across sends/recvs, so we simply added this flag to the blacklist, which resolved the issue. Unfortunately this was an on-disk format change, but it was definitely the clean and "correct" solution, and since the code wasn't merged into master yet we felt OK making it. PS: The scrubs return clean because scrub does NOT check the MAC (which is what allows scrubs to work without loaded keys).
@tcaputi - a patch against latest master would be much appreciated, if you have the time. I looked for references to DNODE_FLAG_USED_BYTES, but couldn't find where it was being blacklisted in
@numinit I think this should do it (it's actually a whitelist instead of a blacklist):
Keep in mind, this should not be used as a long-term fix, as it will not be supported down the line.
@tcaputi thanks, I have a thin client I was testing ZFS encryption on that I'd like to get a couple of files off of :-)
Let me know if it doesn't work... (I can't say I have any pools from then lying around). |
Hi,
I have created a pool with an encrypted dataset using tcaputi@4188aa3
I experienced some issues and updated to tcaputi@5aef9be
Now I can't mount the dataset anymore. Loading the encryption keys seems to work, but when I try to actually mount the dataset, I get
After going back to the older version, the dataset mounts without problems, and I do have a backup ... but I would rather not move 12T of data again. So is there anything I can do?