
No automatic mounting when using a zdev.conf file. #904

Closed · Phoenixxl opened this issue Aug 28, 2012 · 7 comments
Phoenixxl commented Aug 28, 2012

@dajhorn in particular, since I think this could be mountall-related? Could be udev too, shrug.

Hello, let me mess up this otherwise perfectly fine Tuesday morning with an irritating bug report.

Context:

Yesterday I installed a fresh copy of Ubuntu Server 12.04.1 (the maintenance release) on an ALiveNF7G-HD720p R5.0 motherboard. I used an old Promise TX4000 PATA controller for a software RAID 1 boot array, and the internal SATA controller (nForce 630A) to attach four 320 GB Maxtor HDDs for use with ZFS. I installed with SSH as the only package, added the daily PPA, did an update/upgrade, then installed native ZFS. (It's not that this issue crept up due to using 12.04.1; this particular computer didn't have a previous Ubuntu 12.04 or ZFS install.)

The problem:

When I create a pool and then create a filesystem with mount point /storage1 (`zfs create -o mountpoint=/storage1 Tank_630a/storage1`), the mount point does not auto-mount on boot when I use an /etc/zfs/zdev.conf file, but it does auto-mount when I use the /dev/disk/by-path devices directly.

For example:

Working (auto-mounts):

```
zpool create -o ashift=12 Tank_630a raidz1 \
    /dev/disk/by-path/pci-0000:00:09.0-scsi-0:0:0:0 \
    /dev/disk/by-path/pci-0000:00:09.0-scsi-1:0:0:0 \
    /dev/disk/by-path/pci-0000:00:09.0-scsi-2:0:0:0 \
    /dev/disk/by-path/pci-0000:00:09.0-scsi-3:0:0:0
```

Not working (no auto-mount):

```
zpool create -o ashift=12 Tank_630a raidz1 630a-Ch-01 630a-Ch-02 630a-Ch-03 630a-Ch-04
```

In this case I use a zdev.conf that looks like this:

```
630a-Ch-01 pci-0000:00:09.0-scsi-0:0:0:0
630a-Ch-02 pci-0000:00:09.0-scsi-1:0:0:0
630a-Ch-03 pci-0000:00:09.0-scsi-2:0:0:0
630a-Ch-04 pci-0000:00:09.0-scsi-3:0:0:0
```
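For reference, each zdev.conf entry should become a /dev/disk/zpool/&lt;alias&gt; symlink via the package's udev rule. A minimal sketch of how to check that, assuming the stock rule shipped by the package (only standard udev tools are used here):

```sh
# Re-apply the udev rules after editing /etc/zfs/zdev.conf, then verify
# that the aliases defined above were created as symlinks.
sudo udevadm trigger --subsystem-match=block
ls -l /dev/disk/zpool/    # expect 630a-Ch-01 .. 630a-Ch-04 here
```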

In the "not working" case , mounting does happen when I manually type mountall. Zpools are there on boot , zfs shows filesystems.
I have tried adding a delay of one minute , 2 minutes and 5 minutes to mountall.conf , it doesn't change anything.

As requested here: https://github.com/dajhorn/pkg-zfs/wiki/Ubuntu-ZFS-mountall-FAQ-and-troubleshooting
I added the requested files to this report.

I made a working setup, using by-path for pool creation, and generated the files with "working" in the name. Then I destroyed the pool, made a zdev.conf file again, made a new pool, rebooted, and generated the files with "notworking" in their names.

I 7-zipped the lot and placed them on some FTP space:

http://users.skynet.be/bk318745/BugRepFiles.7z

Thank you in advance for looking into this.

dajhorn (Contributor) commented Aug 28, 2012

This is a duplicate of #811. @pdf proposes a fix in dajhorn/pkg-zfs#39.

Phoenixxl (Author) commented:

I'll use /dev/disk/by-path for now, I guess. I really don't see anything catastrophic that could happen, and the functionality of using the interfaces directly should be identical. The only drawback I see is that if one of the four ports dies, I can't transparently use another SATA port; but I would never keep using a piece of hardware with a quarter of its functionality compromised anyway.

Bottom line: that "fix" mentioned on the other thread doesn't seem to be a fix at all. I'll just wait until zdev.conf eventually gets handled at the initrd level, at which time I'll switch over again.

Thanks for the reply.

pdf commented Aug 29, 2012

@Phoenixxl, the fix proposed in dajhorn/pkg-zfs#39 is exactly that: move zdev.conf and the required udev helpers into the initrd. I want it primarily for systems with many ports on backplanes with weird port ordering, across multiple controllers (zdev.conf makes it vastly more convenient to work out which bay actually contains a failed disk).
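Roughly speaking, the change amounts to an initramfs-tools hook along these lines. This is only a minimal sketch under my assumptions, not the actual patch; the helper and rule paths (/lib/udev/zpool_id, /lib/udev/rules.d/60-zpool.rules) are the ones the package used at the time:

```sh
#!/bin/sh
# Sketch of an initramfs-tools hook: copy zdev.conf, its udev rule, and
# the zpool_id helper into the initrd so the /dev/disk/zpool/* aliases
# exist before mountall runs at boot.
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac

. /usr/share/initramfs-tools/hook-functions

mkdir -p "${DESTDIR}/etc/zfs" "${DESTDIR}/lib/udev/rules.d"
cp /etc/zfs/zdev.conf "${DESTDIR}/etc/zfs/"
cp /lib/udev/rules.d/60-zpool.rules "${DESTDIR}/lib/udev/rules.d/"
copy_exec /lib/udev/zpool_id /lib/udev    # binary helper the rule invokes
```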

Phoenixxl (Author) commented:

I meant it's not a "fix" I can slap onto my current config; it's something that has to be changed in the ZFS package and shipped with the next release. So I'll just have to wait.

Yes, I do use zdev.conf on a production system, which worked out of the box, unlike this one. It is convenient: if a controller dies, you can just reassign the ports in the file, and to ZFS it all still looks the same (a hypothetical sketch follows below). With only four ports here I'll live; the alias with a 3 in it is the bottom caddy and the one with a 0 is the top one.
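For illustration only (hypothetical paths, not my actual hardware), such a remap is a one-line edit to /etc/zfs/zdev.conf:

```
# Hypothetical: port scsi-1 died, so its disk was moved to spare port
# scsi-4. The alias stays the same, so the pool config never changes.
# Old line: 630a-Ch-02 pci-0000:00:09.0-scsi-1:0:0:0
630a-Ch-02 pci-0000:00:09.0-scsi-4:0:0:0
```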

pdf commented Oct 31, 2012

FYI, @dajhorn has just released this in daily builds for all Ubuntu releases. Thanks Darik!

Phoenixxl (Author) commented:

Nice to hear!
I'll test it out later today.
Thanks.

Edit:
Since I started this particular issue, I'll take pdf's word that it's fixed and mark it as closed. The machine in question is a two-hour drive from here; I'll test it when I'm there. I don't want to risk losing the rest of my day.

pdf commented Oct 31, 2012

I don't know if it's documented anywhere, but if you already have a zdev.conf in place, then after upgrading (and doing an export/import from the /dev/disk/zpool path) I think it should "just work". If you update zdev.conf down the track, `update-initramfs -k all -c` will populate the initramfs with your new values for the next boot. A sketch of that workflow follows below.
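Put together, a sketch of that workflow using the pool name from this thread (Tank_630a is taken from the report above; adjust for your own pool):

```sh
# One-time after the upgrade: re-import the pool via the zdev.conf aliases.
sudo zpool export Tank_630a
sudo zpool import -d /dev/disk/zpool Tank_630a

# After any later edit to /etc/zfs/zdev.conf, rebuild the initramfs so the
# new aliases are present at the next boot.
sudo update-initramfs -k all -c
```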
