Feature-request: native encryption with ZFS #177
For me this sounds like a feature request for the … As for encryption of the whole zpool, the user would have to enable encryption before …
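If I'm not mistaken, for the whole pool the property can only be chosen when the pool is created, for example (pool and disk names are only examples):

```sh
# encryption on the pool's root dataset has to be chosen at creation time;
# keyformat=passphrase prompts for the passphrase interactively
zpool create -O encryption=aes-256-gcm -O keyformat=passphrase tank /dev/sdb
```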
Yes, looks like …
So you also want to have a password? Well, in that case I have to agree, that makes it a bit more difficult. I am not a ZFS expert, but if I understand this ZFS encryption correctly (for some reason I could not test it myself for now, I'll have to look into my test setup) and you need to re-enter the password when you mount / start using the device, that would cause some problems in Linstor and require quite a lot of changes.

First, right now basically every layer in Linstor might be "on top" of the StorageLayer with a ZFS provider (except the StorageLayer itself).

Second, storing / managing the password itself is a problem. The LuksLayer requires the user to remember only one passphrase (the master passphrase), which Linstor never remembers and keeps in memory as short as possible. The actual LUKS passwords are generated, salted and encrypted (with the master passphrase) before being stored in the database.

All in all, I do understand that one might want to use the feature, especially "if it is already there". But on the other hand, Linstor already has a dedicated LuksLayer for encryption, which also works fine with ZFS. Unless someone explains to me what other differences there are besides "it is not the ZFS built-in encryption", I guess this issue will get such a low priority that it most likely will never get implemented.
Some small notes on the topic: …
Thanks for the info. The … Regarding the benefits: Linstor does not call / use scrub, but Linstor does use send/recv for snapshot shipping. But as you mentioned, if that also works without keys, …
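If I understand it correctly, the variant that works without loaded keys on the receiving side is the raw send, something like this (dataset and host names are just examples):

```sh
# a raw send (-w) ships the blocks still encrypted, so the receiving
# side never needs the encryption key loaded
zfs snapshot pool/vol1@ship
zfs send -w pool/vol1@ship | ssh backup-node zfs receive backuppool/vol1
```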
@kvaps What do you expect from encryption? I use encryption so that I can send disks out for repair, throw away broken disks, or protect data on stolen hardware.
Well, right now I don't need encryption for LINSTOR.
Only my root filesystem is LVM + LUKS + ext4, because I don't trust booting from ZFS and it's easier to repair GRUB and the initramfs with Debian's netinst.iso through an IPMI session. If you are so inclined, you can boot from an encrypted ZFS as well.
Another vote for ZFS encryption handling.
+1, as I think it would be faster than LUKS for the ZFS case.

IMO one key for everything is wrong. You encrypt ZFS datasets, not pools. Last time I checked, the encryption feature flag is turned on by default on all new zpools (zfs-2.2.2), so you only need to decide whether to turn it on or off at the dataset level. In this case it would make sense to have a per-PVC annotation along the lines of:
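Something hypothetical like this (the annotation key below is purely made up, it does not exist today):

```sh
# illustrative only: a per-PVC annotation requesting native ZFS encryption
kubectl annotate pvc my-data linstor.csi.linbit.com/zfs-encryption="aes-256-gcm"
```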
Assuming you don't allow passphrases (which doesn't make sense anyway, this is a key used by a machine), you are left with having to add the following things to your volume creation path:
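Roughly along these lines (a rough sketch; every name, path, and property value below is made up):

```sh
# 1. generate a per-volume key and keep it somewhere volatile on the node
mkdir -p /run/linstor-zfs-keys
head -c 32 /dev/urandom > /run/linstor-zfs-keys/pvc-1234.key

# 2. create the zvol with native encryption pointing at that key
zfs create -V 10G \
    -o encryption=aes-256-gcm \
    -o keyformat=raw \
    -o keylocation=file:///run/linstor-zfs-keys/pvc-1234.key \
    pool/pvc-1234

# 3. any other node that later attaches the volume loads the key first,
#    then mounts /dev/zvol/pool/pvc-1234 as usual
zfs load-key -L file:///run/linstor-zfs-keys/pvc-1234.key pool/pvc-1234
```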
This implementation would mean that your disks stay encrypted until they are mounted by a running …

This is just pseudocode. Of course I understand this is the k8s case and linstor-server also exists outside of k8s, but you get the idea. I think it would be a good start. We can also discuss later the security of having keys in tmpfs and in an annotation, or the optimization of loading the keys locally at container boot time instead of for each mount, but these are problems that can be solved subsequently. What do you think?
OpenZFS does support native encryption since 0.8.0.
Encryption can be enabled for the whole pool, a dataset, or per ZVOL (our case).
It would be nice to add support for it to the LINSTOR roadmap.
Here is an example:
https://blog.heckel.io/2017/01/08/zfs-encryption-openzfs-zfs-on-linux/
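For a single ZVOL it boils down to something like this (pool and volume names are just examples):

```sh
# create an encrypted zvol; the passphrase is prompted for interactively
zfs create -V 20G -o encryption=on -o keyformat=passphrase tank/encrypted_vol
zfs get encryption,keystatus tank/encrypted_vol
```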