ZFS mount behavior #119

Closed
behlendorf opened this issue Feb 23, 2011 · 2 comments

Comments

@behlendorf
Contributor

It was noted on the mailing list that filesystems are not automatically mounted when the modules are loaded. Pools are imported, but datasets will not be mounted. The right thing to do here is to update the code to preserve the Solaris behavior. As described by Steve Costaras:

"From Solaris, the 'mountpoint' and 'canmount' properties are the indicators of what the user wants to do with a particular file system.

If 'mountpoint' is set to 'none', do not mount; if set to 'legacy', mount only via fstab/vfstab.

The 'canmount' property can be used in conjunction with the above to override mounting (for example, if you want to do some testing and don't want to change the mountpoint pointer for the file system, you can just change the 'canmount' property).

So my suggestion would be to follow ZFS standards: mount all file systems that have a valid 'mountpoint' property set and are also marked 'canmount'. Anything else won't be auto-mounted. That puts it directly on the user as to what action is to be taken."
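The policy described above can be sketched as a small shell predicate. This is a hypothetical helper, not part of the zfs tooling; the property values match what `zfs get mountpoint,canmount` reports:

```shell
#!/bin/sh
# Hypothetical sketch of the Solaris-style auto-mount policy described above.
# A dataset is auto-mounted only if its mountpoint is a real path (not 'none'
# or 'legacy') and its canmount property is 'on'.
should_automount() {
    mountpoint=$1
    canmount=$2
    case "$mountpoint" in
        none|legacy|-) return 1 ;;   # never auto-mount these
    esac
    [ "$canmount" = "on" ]           # 'off' suppresses mounting
}

# Example: walk all datasets and mount the eligible ones (requires a pool).
# zfs list -H -o name,mountpoint,canmount | while read name mp cm; do
#     should_automount "$mp" "$cm" && zfs mount "$name"
# done
```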

@behlendorf
Contributor Author

The ZFS mount behavior has been updated to be as Solaris-like as possible. This was achieved by using a mount.zfs helper, which is now installed as /sbin/mount.zfs. As part of this you should now never need to explicitly load the zfs modules. Running the zpool, zfs, or mount commands will automatically pull in the modules. This means that if you want your zfs pools mounted at boot you just need to run zfs mount -a somewhere in the boot process. Since init scripts are very distro-specific, I leave it up to the user to do this in a distro-approved way.
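As a minimal sketch of such a boot hook (assuming an LSB-style init script; the path and structure are illustrative, not distro guidance):

```shell
#!/bin/sh
# /etc/init.d/zfs-mount -- hypothetical boot hook, not shipped with zfs.
# Running the 'zfs' command auto-loads the kernel modules, so no explicit
# modprobe is needed before mounting.
case "$1" in
    start)
        zfs mount -a     # mount every dataset with canmount=on and a valid mountpoint
        ;;
    stop)
        zfs umount -a    # unmount them again on shutdown
        ;;
esac
```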

[behlendo@rhel-6-0-amd64 zfs]$ lsmod | grep zfs
[behlendo@rhel-6-0-amd64 zfs]$ df -t zfs
df: no file systems processed
[behlendo@rhel-6-0-amd64 zfs]$ sudo zfs mount -a
[behlendo@rhel-6-0-amd64 zfs]$ df -t zfs
Filesystem           1K-blocks      Used Available Use% Mounted on
tank                    478976         0    478976   0% /tank
tank/fish               478976         0    478976   0% /tank/fish
tank/fish2              478976         0    478976   0% /tank/fish2
tank/fish4              478976         0    478976   0% /tank/fish4

[behlendo@rhel-6-0-amd64 zfs]$ sudo zfs mount tank/fish3
cannot mount 'tank/fish3': legacy mountpoint
use mount(1M) to mount this filesystem

[behlendo@rhel-6-0-amd64 zfs]$ sudo mount -t zfs tank/fish3 /tank/fish3
[behlendo@rhel-6-0-amd64 zfs]$ df -t zfs
Filesystem           1K-blocks      Used Available Use% Mounted on
tank                    478976         0    478976   0% /tank
tank/fish               478976         0    478976   0% /tank/fish
tank/fish2              478976         0    478976   0% /tank/fish2
tank/fish4              478976         0    478976   0% /tank/fish4
tank/fish3              478976         0    478976   0% /tank/fish3

[behlendo@rhel-6-0-amd64 zfs]$ sudo zfs umount tank/fish4
[behlendo@rhel-6-0-amd64 zfs]$ sudo mount -t zfs tank/fish4 /tank/fish4
filesystem 'tank/fish4' cannot be mounted using 'mount'.
Use 'zfs set mountpoint=legacy' or 'zfs mount tank/fish4'.
See zfs(8) for more information.

[behlendo@rhel-6-0-amd64 zfs]$ sudo zfs mount tank/fish4
[behlendo@rhel-6-0-amd64 zfs]$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
tank                    478976         0    478976   0% /tank
tank/fish               478976         0    478976   0% /tank/fish
tank/fish2              478976         0    478976   0% /tank/fish2
tank/fish3              478976         0    478976   0% /tank/fish3
tank/fish4              478976         0    478976   0% /tank/fish4

[behlendo@rhel-6-0-amd64 zfs]$ sudo zfs set canmount=off tank/fish2
[behlendo@rhel-6-0-amd64 zfs]$ sudo zfs mount tank/fish2
cannot mount 'tank/fish2': 'canmount' property is set to 'off'
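For datasets with mountpoint=legacy (like tank/fish3 in the session above), the new /sbin/mount.zfs helper lets them be listed in /etc/fstab like any other filesystem. A sketch, using the dataset name from the session; the fields follow the usual fstab layout:

```shell
# /etc/fstab fragment -- legacy ZFS dataset mounted via the mount.zfs helper
# <device>     <mountpoint>   <type>  <options>   <dump> <pass>
tank/fish3     /tank/fish3    zfs     defaults    0      0
```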

@behlendorf
Contributor Author

Fix mount helper

Several issues related to strange mount/umount behavior were reported, and this commit should address most of them. The original idea was to put in place a zfs mount helper (mount.zfs). This helper is used to enforce 'legacy' mount behavior and to perform any extra mount argument processing (selinux, zfsutil, etc). The helper wasn't ready for the 0.6.0-rc1 release, but with this change it is functional; it still needs to be extensively tested.

This change addresses the following open issues.
Closed by 6adf458

This issue was closed.