
Mount generator fixes #10477

Closed · wants to merge 3 commits

Conversation

didrocks
Contributor

@didrocks didrocks commented Jun 19, 2020

This set of changes makes the units generated by the zfs mount generator more effective and more tightly coupled when encryption is in use.

Motivation and Context

The goal of this PR is to make the systemd zfs generator a little bit more robust against failure, especially when encryption is involved.

Those are done in multiple ways (each in separate commits):

  • Make the unload-key logic mirror the load-key logic, since unloading an already-unloaded key via the zfs command fails.
  • Tighten the dependency between the mount unit and its load-key unit, so that stopping the load-key unit, or its failure, directly affects the corresponding .mount unit.
  • Allow the mount unit to be started at any time (e.g. via a systemd automount for on-demand mounting, or via a service dependency) instead of forcing it before zfs-mount.service (which would otherwise silently skip the dataset because its key is not loaded).
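The first point can be sketched in shell; this is a hypothetical shape of the generator's logic, not the actual generator code (the `keystatus_of` stub and dataset names are illustrative):

```shell
# Query the dataset's keystatus; in the real generator this would be
# something like: zfs get -H -o value keystatus "$dataset".
# Stubbed here so the sketch is self-contained.
keystatus_of() {
  echo "${FAKE_KEYSTATUS:-available}"
}

# Only attempt the unload when the key is actually loaded, mirroring
# the check already done before load-key.
unload_key_if_loaded() {
  dataset="$1"
  if [ "$(keystatus_of "$dataset")" = "available" ]; then
    echo "unloading key for $dataset"   # would run: zfs unload-key "$dataset"
  else
    echo "key for $dataset already unloaded; nothing to do"
  fi
}
```

With this guard, stopping the unit after a manual `zfs unload-key` becomes a no-op instead of a failure.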

Description

Taking each point from the commit messages in turn:

  • Make unloading the key more robust

    The unit was failing instead of stopping if someone manually unloaded the key before stopping the unit (zfs unload-key fails on an unavailable key). Follow the same logic as for loading the key: check the key status before unloading it.

  • BindsTo the dataset's keyload unit from its associated mount unit

    We need a stronger dependency between the mount unit and its keyload unit when we know that the dataset is encrypted.
    If the keyload unit fails, Wants= will still try to mount the dataset, which will then fail.
    It’s better to show that the failure is due to a failing dependency, the keyload unit, by tightening the dependency. We can do this because we generate both units in the generator, so it’s not an optional dependency. BindsTo also ensures that if the keyload unit fails at any point, the associated mountpoint is then unmounted.
    Note: Requires= could be enough since this is a simple oneshot service, but if it ever evolves into a long-running service, the relationship is better expressed this way.

  • Let the mount unit decide when its ZFS key is loaded

    Drop the explicit Before=zfs.mount dependency on the generated load-key .service unit.
    Indeed, the associated mount unit is already ordered After= that .service.
    It is therefore the mount unit that controls when it wants to be mounted (Before=zfs-mount.service in the stock generator), but it can also be an automount point, or be triggered by another service.
    This additional ordering dependency on the key-load service is thus not needed.
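The resulting relationship can be illustrated with a sketch of the generated mount unit; the unit names are illustrative, not actual generator output (the real generator derives them from the dataset and mountpoint):

```ini
# home-user.mount (sketch): with BindsTo=, a stopped or failed keyload
# unit takes the mount down with it, while After= keeps the ordering.
# The keyload unit itself no longer carries a Before=, so the mount
# unit alone decides when it starts (boot, automount, or a service).
[Unit]
BindsTo=zfs-load-key-pool-user.service
After=zfs-load-key-pool-user.service
```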

How Has This Been Tested?

We checked after a systemctl daemon-reload that:

  • Stopping the load-key unit twice, or stopping it after a manual zfs unload-key, no longer fails.
  • Stopping the key-load unit will try to stop the mount unit.
  • If the load-key unit fails, the .mount unit does not try to start, reporting "Dependency failed for". Without the fix, it started and failed with "Failed to mount ...".
  • The result of stopping the load-key service now depends on the .mount unit’s state. Previously, if the dataset could not be unmounted (because it was busy, for instance), the key-load unit simply failed with "Key unload error: '...' is busy". Now, if the mount unit can’t be unmounted successfully, the key-load service fails with an error indicating that it couldn’t stop the mount unit.
  • The .mount units can now be started at any time: on boot, by a systemd service, or via zfs-mount.service if the system administrator adds a .conf.d/ drop-in requesting it.
  • Stopping the .mount unit still keeps the load-key service active, and thus the key loaded. This is unchanged compared to current behavior (note that in Ubuntu, for user home datasets only, this is not true: we link the load-key and mount unit lifecycles to get per-user home encryption and decryption on demand with a separate automount unit. We are happy to upstream such a feature with a dedicated user property if there is interest).
  • Regular mount units (unencrypted datasets) are not impacted.
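An administrator who still wants a dataset pulled in by zfs-mount.service could do so with a drop-in, as mentioned above; this is an illustrative sketch (the path and unit name are hypothetical):

```ini
# /etc/systemd/system/zfs-mount.service.d/local-wants.conf (hypothetical)
[Unit]
# Pull the generated mount unit back in at the usual point in boot.
Wants=home-user.mount
```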

These changes of course only impact systemd systems with some encrypted datasets, where the user has enabled the zfs mount generator via the list-cache zed hook.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (a change to man pages or other documentation)

Checklist:

  • My code follows the ZFS on Linux code style requirements.
  • I have updated the documentation accordingly.
  • I have read the contributing document.
  • I have added tests to cover my changes.
  • I have run the ZFS Test Suite with this change applied.
  • All commit messages are properly formatted and contain Signed-off-by.

Note: there are indeed multiple commits, but they are small and touch the same areas of code. We separated them to better articulate the changes.
The generator doesn’t have any automated tests, hence that box is not checked. However, as explained above, a lot of manual testing has been performed.
No documentation is impacted AFAIK.

jibel and others added 3 commits June 19, 2020 12:04
Drop the explicit Before=zfs.mount dependency on the generated key-load .service
unit.
Indeed, the associated mount unit is After=<dataset-key-load>.service.
It is therefore the mount unit that controls when it wants to be
mounted (Before=zfs-mount.service in the stock generator), but it can be
an automount point, or be triggered by another service.
This additional dependency on the key-load service is thus not needed.

Co-authored-by: Didier Roche <didrocks@ubuntu.com>
Signed-off-by: Didier Roche <didrocks@ubuntu.com>
We need a stronger dependency between the mount unit and its keyload unit
when we know that the dataset is encrypted.
If the keyload unit fails, Wants= will still try to mount the dataset,
which will then fail.
It’s better to show that the failure is due to a failing dependency, the
keyload unit, by tightening the dependency. We can do this because we
generate both units in the generator, so it’s not an optional dependency.
BindsTo also ensures that if the keyload unit fails at any point, the
associated mountpoint is then unmounted.

Co-authored-by: Didier Roche <didrocks@ubuntu.com>
Signed-off-by: Didier Roche <didrocks@ubuntu.com>
The unit was failing instead of stopping if someone manually unloaded
the key before stopping the unit (zfs unload-key fails on an
unavailable key).
Follow the same logic as for loading the key: check the key
status before unloading it.

Co-authored-by: Didier Roche <didrocks@ubuntu.com>
Signed-off-by: Didier Roche <didrocks@ubuntu.com>
@codecov

codecov bot commented Jun 19, 2020

Codecov Report

Merging #10477 into master will increase coverage by 0.11%.
The diff coverage is n/a.


@@            Coverage Diff             @@
##           master   #10477      +/-   ##
==========================================
+ Coverage   79.65%   79.77%   +0.11%     
==========================================
  Files         393      393              
  Lines      123861   123861              
==========================================
+ Hits        98663    98810     +147     
+ Misses      25198    25051     -147     
Flag | Coverage Δ
#kernel | 80.17% <ø> (+0.17%) ⬆️
#user | 66.44% <ø> (+0.46%) ⬆️

Impacted Files | Coverage Δ
module/zfs/vdev_indirect.c | 75.00% <0.00%> (-5.34%) ⬇️
lib/libzpool/kernel.c | 64.09% <0.00%> (-2.96%) ⬇️
module/os/linux/zfs/vdev_file.c | 82.24% <0.00%> (-1.87%) ⬇️
lib/libzfs/libzfs_changelist.c | 85.15% <0.00%> (-1.18%) ⬇️
module/icp/api/kcf_mac.c | 38.28% <0.00%> (-0.58%) ⬇️
module/zfs/vdev_label.c | 93.85% <0.00%> (-0.47%) ⬇️
cmd/zpool/zpool_iter.c | 86.69% <0.00%> (-0.36%) ⬇️
module/zfs/dnode.c | 94.78% <0.00%> (-0.28%) ⬇️
module/nvpair/nvpair.c | 83.26% <0.00%> (-0.18%) ⬇️
module/zfs/dsl_scan.c | 85.50% <0.00%> (-0.13%) ⬇️
... and 56 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 7564073...365c7d4.

@behlendorf behlendorf requested a review from rlaager June 19, 2020 16:56
@behlendorf
Contributor

cc: @aerusso

@behlendorf behlendorf added the Status: Code Review Needed Ready for review and testing label Jun 19, 2020
@rlaager
Member

rlaager commented Jun 19, 2020

I’m concerned about the idea of stopping the mount unit trying to stop the key-load unit. The key-load unit is for the encryption root, which may have many mounts under it. Stopping one mount unit should not cascade into stopping all the mounts under that encryption root. I think the desired behavior is that stopping the mount unit leaves the key unit running.

It is unclear which is the current behavior with this PR, as these seem to conflict:

  • “Stopping the mount unit still try to stop the load-key service”
  • “Stopping the .mount unit still keep the load-key service active, and thus the key loaded.”

@didrocks
Contributor Author

didrocks commented Jun 20, 2020

It is unclear which is the current behavior with this PR, as these seem to conflict:
“Stopping the mount unit still try to stop the load-key service”
“Stopping the .mount unit still keep the load-key service active, and thus the key loaded.”

Sorry, this is wrong (probably due to trying to be exhaustive :p) and I meant:

  • “Stopping the key-load unit will try to stop the mount service.”
  • “Stopping the .mount unit still keep the load-key service active, and thus the key loaded.” (note that in ubuntu: for user home datasets only, this is not true as we will link load-key and mount units lifecycle, to have per-user home encryption and decryption on demand with a separate automount unit. We are happy to upstream such a feature with a dedicated user property if interested).

I have fixed the above description.

I’m concerned about the idea of stopping the mount unit trying to stop the key-load unit. The key-load unit is for the encryption root, which may have many mounts under it. Stopping one mount unit should not cascade into stopping all the mounts under that encryption root. I think the desired behavior is that stopping the mount unit leaves the key unit running.
With the above correction, yeah, stopping the mount unit has no impact on the key-load state at all.
The idea is to make the reverse dependency explicit (this is why we bind the mount unit’s state TO the key-load state). This is unidirectional (like a strong Requires=) in systemd terminology: explicitly stopping the key-load service will try to unmount the associated unit. It means that:

  • the unmount will succeed if the directory is not busy (it’s not a lazy unmount), and if so, the key-load unit will then be stopped.
  • if the directory is busy (for instance if you try to stop the key-load unit for the pool while any dataset depending on it, such as /, is mounted under it), the unmount will fail (resource busy) and the key will not be unloaded, as is already the case today. However, the error is much clearer now:
  • the key-load unit will state that it can’t stop because a dependency could not be stopped.
  • previously, we would just get the zfs unload-key output error.

Hope this is more clear now :)

@@ -191,19 +198,19 @@ Documentation=man:zfs-mount-generator(8)
DefaultDependencies=no
Wants=${wants}
After=${after}
Before=${before}
Contributor

If we do this, should we also drop Wants=, After=, and Requires=?

The key load may apply to multiple encryption roots, right? So it's inappropriate for any of the particular mount x-systemd.* options to apply. So my vote is to just drop all of these.

Caveat: I just woke up. So read my arguments very critically.

Copy link
Contributor

FWIW we had some discussions about this way back when: #9649 (comment) (and at least the following comment by @rlaager)

Copy link
Member

@didrocks What is the effect of removing this Before= and why did you make that change?

Contributor Author

@didrocks What is the effect of removing this Before= and why did you make that change?
This was done in this commit (e80ded6). To expand a bit: the idea is to delay mounting the user’s home until the keyfile is accessible. The keyfile can be made accessible much later than zfs-mount.service (which is what the Before= here points at), for instance via a PAM module on login, but it can also be made accessible during the boot process (single user, plymouth asking for a passphrase), and so on.

Basically, the idea is to not necessarily tie those units to zfs-mount.service, which would try to mount the dataset and then, through the dependency chain, start the key-load unit, which would fail at that point and spam the logs.

@InsanePrawn
Contributor

* “Stopping the .mount unit still keep the load-key service active, and thus the key loaded.”

great! I was also unsure about that part.

(note that in ubuntu: for user home datasets only, this is not true as we will link load-key and mount units lifecycle, to have per-user home encryption and decryption on demand with a separate automount unit.
We are happy to upstream such a feature with a dedicated user property if interested).

How would this be implemented? Some changes to the generator that conditionally generate slightly different units depending on a control property? If so, then as a humble ZFS user, I'd very much like to see that upstreamed, even if only to stop other people from trying to reinvent that wheel.

Speaking of new properties to modify key-load unit behaviour: Back in January (simpler times!), when I was working on adding the initial control properties, I was considering whether we should add similar properties to control the key-load units' dependencies individually. Seeing how my PR dragged on and got bigger and bigger (we started with two(?) properties and ended with a bunch more), I decided to postpone the idea in the name of future work and niche interest that even I myself didn't have a concrete, real use for back then and settled for a simple solution instead of more 'property explosions'. Apparently it's time to at least reconsider it.
Are there good reasons for people to want to override the key-load unit's ordering/dependencies individually? Should we add properties for that? (Is property inheritance ever a real problem here?)

@didrocks
Contributor Author

didrocks commented Jun 22, 2020

How would this be implemented? Some changes to the generator that conditionally generate slightly different units depending on a control property? If so, then as a humble ZFS user, I'd very much like to see that upstreamed, even if only to stop other people from trying to reinvent that wheel.

Indeed, we already have some changes in the generator (like invalid cache updates and things hooking up with ZSys) that we want to upstream (for the upstreamable ones).

Right now, the idea is that we have a keystore holding all the keyfiles for encrypted datasets. This keystore is made accessible for the whole system in the initramfs, and the per-user child keyfiles are made accessible via PAM modules on login. Then, an automount unit triggers the mount unit, which triggers the key-load service (those are the 2 changes here: keeping a common mount and key-load service), which we BindsTo a service that makes the keyfile inaccessible again.
For instance, when the home isn’t in use (no more seats for this user nor processes running in it), after a timeout the automount stops the mount unit, which requests (via BindsTo) a stop of the key-load service, which requests (via BindsTo) a stop of the service that then makes the keyfile inaccessible.
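The on-demand lifecycle described above would rest on a systemd automount unit with an idle timeout; a minimal sketch follows (the unit name, path, and timeout value are illustrative, not part of this PR):

```ini
# home-user.automount (sketch of the separate automount unit)
[Unit]
Description=Automount for encrypted user home

[Automount]
Where=/home/user
# After this idle period, systemd stops home-user.mount, which (via the
# BindsTo chain) stops the key-load service and revokes the keyfile.
TimeoutIdleSec=10min
```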

Speaking of new properties to modify key-load unit behaviour: Back in January (simpler times!), when I was working on adding the initial control properties, I was considering whether we should add similar properties to control the key-load units' dependencies individually. Seeing how my PR dragged on and got bigger and bigger (we started with two(?) properties and ended with a bunch more), I decided to postpone the idea in the name of future work and niche interest that even I myself didn't have a concrete, real use for back then and settled for a simple solution instead of more 'property explosions'. Apparently it's time to at least reconsider it.
Are there good reasons for people to want to override the key-load unit's ordering/dependencies individually? Should we add properties for that? (Is property inheritance ever a real problem here?)

So, that’s the scheme described above. I would be glad if we could upstream this whole behavior if possible, so that we drive both it and the property side together (I indeed saw a big expansion of properties in 0.8.3 :)). The idea is really to allow a scheme where an encrypted dataset can be mounted on demand, with a timeout to unmount it (which shrinks the timeframe during which files are accessible to other users/processes on the system).

Edit: we only modify this mount generation logic for encrypted user home datasets controlled by ZSys, to limit the impact. Other datasets (encrypted or not) get the pure upstream dependencies.

Note: I will be away for the next 3 weeks, which gives time for a second thought on that, but jibel should be around if any further discussion is needed.

@didrocks
Contributor Author

After discussing with jibel, another way to think about it, without making it too easy for people to shoot themselves in the foot by tweaking systemd properties through zfs, is to consider some "modes" on the dataset. For instance, imagine one user property "mountmode" which would be:

  • automatic if not set (Wants=zfs-mount.service), which is the current behavior
  • on-demand, which would create the .automount unit, not generate Wants=zfs-mount.service, and do the necessary tweaking.

That way, we keep more control over the relationships between units system-wide, and it is more robust for users in general.
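A sketch of how such a "mountmode" property could steer the generator; the property name and helper are hypothetical, not an existing ZFS property or generator function:

```shell
# Decide which dependency lines the generator would emit for a dataset,
# based on a hypothetical user property (e.g. org.example:mountmode).
emit_mount_deps() {
  mountmode="$1"
  case "$mountmode" in
    on-demand)
      # Generate an .automount unit; do not tie the mount to
      # zfs-mount.service, so it is only started when accessed.
      echo "automount"
      ;;
    automatic|-|"")
      # Unset property: current behavior, mounted via zfs-mount.service.
      echo "Wants=zfs-mount.service"
      ;;
    *)
      echo "unknown mountmode '$mountmode'" >&2
      return 1
      ;;
  esac
}
```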

@behlendorf
Contributor

@didrocks @jibel I wanted to draw your attention to #9903 which adds to contrib/ a PAM module which loads the zfs encryption keys for home datasets. Did you have something like this in mind?

@didrocks
Contributor Author

@behlendorf Thanks for the link! We have a mixed setup for our PAM authentication (as we need to load from LUKS the keyfiles that are then used by native ZFS encryption), but I think we can converge the solutions. We went with writing a PAM module available at https://github.com/ubuntu/kstore. The idea is to converge our ZFS and non-ZFS encryption stories around a LUKS store (so that all the automation we already have for LUKS carries over to ZFS), but I think we can take and reuse part of the work done in contrib/ here. Thanks again for pointing that out.

Is there any chance to move forward with this branch (our feature freeze is soon, and I would like to avoid distro-patching if possible)?

@behlendorf
Contributor

@didrocks thanks for reminding me about this. Yes, I don't see an issue with merging this soon unless @aerusso @rlaager or @InsanePrawn have concerns about these changes.

@behlendorf behlendorf added Status: Accepted Ready to integrate (reviewed, tested) and removed Status: Code Review Needed Ready for review and testing labels Jul 19, 2020
behlendorf pushed a commit that referenced this pull request Jul 19, 2020
Reviewed-by: Richard Laager <rlaager@wiktel.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Co-authored-by: Didier Roche <didrocks@ubuntu.com>
Signed-off-by: Didier Roche <didrocks@ubuntu.com>
Closes #10477
behlendorf pushed a commit that referenced this pull request Jul 19, 2020
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Richard Laager <rlaager@wiktel.com>
Co-authored-by: Didier Roche <didrocks@ubuntu.com>
Signed-off-by: Didier Roche <didrocks@ubuntu.com>
Closes #10477
@didrocks
Contributor Author

Thanks a lot!

@aerusso aerusso mentioned this pull request Jul 31, 2020
12 tasks
tonyhutter pushed the same three commits to tonyhutter/zfs, referencing this pull request, Sep 22, 2020
jsai20 pushed the same three commits to jsai20/zfs, referencing this pull request, Mar 30, 2021
sempervictus pushed the same three commits to sempervictus/zfs, referencing this pull request, May 31, 2021
Labels
Status: Accepted Ready to integrate (reviewed, tested)
Projects
None yet
Development

6 participants