
OpenZFS 2.2.7 - cannot mount with zfs mount -v -l and no error message is printed #17033

Open
erikschul opened this issue Feb 6, 2025 · 4 comments
Labels
Type: Defect Incorrect behavior (e.g. crash, hang)

Comments


erikschul commented Feb 6, 2025

System information

Type Version/Name
Distribution Name debian
Distribution Version bookworm
Kernel Version 6.1.0-30-amd64
Architecture amd64
OpenZFS Version zfs-2.2.7-1~bpo12+1, zfs-kmod-2.2.7-1~bpo12+1

Describe the problem you're observing

I have created a pool with encryption and a normal dataset, e.g. mypool/normal. This pool is unlocked at boot.

I have also created a separate dataset on this pool with a different encryption key, e.g. mypool/mydataset.

I'm unable to mount this dataset, but there are no error messages.

Is it problematic that the pool and the dataset use different encryption keys? I read that a dataset inherits the pool's encryption settings unless they are set explicitly, so encrypting the dataset with its own key should be fine?

Scrub returns no errors.

zfs mount -v -l mypool/mydataset

returns nothing, and there are no messages in dmesg.
The mount directory (/mnt/mypool/mydataset) is created, but it remains empty, zfs get mounted mypool/mydataset returns mounted no, and keystatus is available.

I'm unsure how to debug this problem.
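For anyone debugging a similar case, a few read-only commands usually surface the failure. This is a sketch assuming root, the pool imported, and the dataset/unit names from this report:

```shell
# Key and mount state for the dataset.
zfs get -o name,property,value mounted,canmount,mountpoint,encryptionroot,keystatus,keylocation mypool/mydataset

# Recent systemd messages for the generated mount unit, if one exists.
journalctl -u mnt-mypool-mydataset.mount -b --no-pager | tail -n 20

# Kernel-side ZFS errors, if any.
dmesg | grep -i zfs | tail -n 20
```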

The bug I would report is that zfs mount -v doesn't print any message even though it is supposed to be verbose.

Describe how to reproduce the problem

Include any warning/errors/backtraces from the system logs

@erikschul erikschul added the Type: Defect Incorrect behavior (e.g. crash, hang) label Feb 6, 2025
@erikschul erikschul changed the title OpenZFS 2.2.7 OpenZFS 2.2.7 - cannot mount with zfs mount -v -l and no error message is printed Feb 6, 2025

erikschul commented Feb 6, 2025

Using journalctl -xe, I found that the systemd mount unit for the dataset was failing:

journalctl -u  mnt-mypool-mydataset.mount -n 1000 --no-pager | tail
-- Boot <...> --
Feb 06 17:47:04 mypc systemd[1]: Dependency failed for mnt-mypool-mydataset.mount - /mnt/mypool/mydataset.
Feb 06 17:47:04 mypc systemd[1]: mnt-mypool-mydataset.mount: Job mnt-mypool-mydataset.mount/start failed with result 'dependency'.
Feb 06 18:36:44 mypc systemd[1]: Unmounting mnt-mypool-mydataset.mount - /mnt/mypool/mydataset...
Feb 06 18:36:45 mypc systemd[1]: mnt-mypool-mydataset.mount: Deactivated successfully.
Feb 06 18:36:45 mypc systemd[1]: Unmounted mnt-mypool-mydataset.mount - /mnt/mypool/mydataset.
Feb 06 21:40:23 mypc systemd[1]: Unmounting mnt-mypool-mydataset.mount - /mnt/mypool/mydataset...
Feb 06 21:40:23 mypc systemd[1]: mnt-mypool-mydataset.mount: Deactivated successfully.
Feb 06 21:40:23 mypc systemd[1]: Unmounted mnt-mypool-mydataset.mount - /mnt/mypool/mydataset.
Feb 06 21:42:06 mypc systemd[1]: Unmounting mnt-mypool-mydataset.mount - /mnt/mypool/mydataset...
Feb 06 21:42:06 mypc systemd[1]: mnt-mypool-mydataset.mount: Deactivated successfully.
Feb 06 21:42:06 mypc systemd[1]: Unmounted mnt-mypool-mydataset.mount - /mnt/mypool/mydataset.

It seems related to /etc/zfs/zfs-list.cache/mypool, which contains:

...
mypool/mydataset   /mnt/mypool/mydataset      on      off     on      on      on      off     on      off     mypool/mydataset   prompt  -       -       -      --       -       -       -

I'm wondering if it fails to mount at boot because keylocation is prompt, and then gets stuck somehow?
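For reference, the cache file mirrors zfs list output; per zfs-mount-generator(8) the tab-separated columns begin with name, mountpoint, canmount, and a run of property flags, with encryptionroot and keylocation in columns 11 and 12. A sketch that pulls those two fields from a line shaped like the one above (the sample line here is hypothetical, built to match this report):

```shell
# Build a sample cache line (tab-separated, hypothetical) shaped like
# the entry in /etc/zfs/zfs-list.cache/mypool.
line=$(printf 'mypool/mydataset\t/mnt/mypool/mydataset\ton\toff\ton\ton\ton\toff\ton\toff\tmypool/mydataset\tprompt\t-\t-\t-\t-\t-\t-')

# Columns 11 and 12 carry encryptionroot and keylocation
# (per zfs-mount-generator(8)).
encroot=$(printf '%s' "$line" | awk -F'\t' '{print $11}')
keyloc=$(printf '%s' "$line" | awk -F'\t' '{print $12}')

echo "encryptionroot=$encroot keylocation=$keyloc"
```

If keylocation is prompt, systemd has no way to supply the passphrase non-interactively at boot, which would be consistent with the dependency failure in the journal above.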


If I remove the entry and restart, zfs mount works and the system boots much faster.
If I then restart again, it boots slowly (presumably a timeout after a few minutes), and I can no longer mount with zfs mount.

Can I prevent it from being entered into the list.cache?
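On the cachefile question: the cache is maintained by ZED, but zfs-mount-generator(8) documents a per-dataset user property that tells the generator to skip a dataset entirely. A hedged sketch, using the dataset name from this report:

```shell
# Per zfs-mount-generator(8): with this user property set, no mount unit
# is generated for the dataset (the cache entry itself may remain).
zfs set org.openzfs.systemd:ignore=on mypool/mydataset
```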

@erikschul

zfs set canmount=noauto mypool/mydataset did not help.

@erikschul

My conclusion so far is that:

  • zfs mount -v should provide an error message
  • canmount=noauto should maybe not be written to the cachefile?

I have a theory that zfs-mount-generator generates a mount unit for the dataset at /run/systemd/generator/mypool-mydataset.mount, which contains:

...

[Mount]
Where=/mypool/mydataset
What=mypool/mydataset
Type=zfs
Options=defaults,noatime,dev,exec,rw,suid,nomand,zfsutil

which seems to interfere with running zfs mount manually.

I got it to work by running

systemctl restart mypool-mydataset.mount

but boot time (IIUC) is still much slower.

Perhaps zfs-mount-generator should not generate mount units when canmount=noauto, and also not for keylocation=prompt datasets? (Or this could be configurable, so it can be disabled on headless systems.)

@erikschul

Setting zfs set mountpoint=legacy mypool/mydataset disabled the zfs-mount-generator service for mydataset.

This in turn made it possible to mount manually using

zfs load-key mypool/mydataset
mkdir /mnt/mypool/mydataset
mount -t zfs mypool/mydataset /mnt/mypool/mydataset

which is fine, but I would argue that there's a bug somewhere in the current design/implementation.
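The legacy-mount steps above could also be recorded in /etc/fstab so that plain mount(8) resolves the mountpoint; a sketch (noauto keeps it out of boot ordering, since the key still has to be loaded first):

```shell
# Hypothetical /etc/fstab entry for a mountpoint=legacy dataset;
# noauto because the encryption key must be loaded interactively first:
#   mypool/mydataset  /mnt/mypool/mydataset  zfs  noauto,defaults  0  0

# Then, after boot:
zfs load-key mypool/mydataset    # prompts for the passphrase
mount /mnt/mypool/mydataset      # resolved via the fstab entry
```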
