
vdev_id and multiple disk enclosures #2074

Closed
ramonfernandez opened this issue Jan 22, 2014 · 21 comments
Labels
Type: Feature (Feature request or new feature)

Comments

@ramonfernandez

Greetings!

I've daisy-chained two MSA60 12-disk enclosures to a box running ZFS via a SAS1068E controller (the second port has a third MSA60 attached to it, so I've come to the point where I need the daisy-chaining). I use the vdev_id.conf displayed below. Using one enclosure per port I do get devices A1..A12 and B1..B12 in /dev/disk/by-vdev, but when the second enclosure is connected I don't get any more devices (ie: A13..A24).

The disks are recognized by the system (ie: I get /dev/sd?? nodes) and are accessible. I'm using 0.6.2-1~precise from the Ubuntu ppa on a 3.11.0-15-generic kernel (but this is not yet a production box so I'm open to trying other versions of ZFS).

Is this expected? Or am I doing something wrong? I can set up aliases for all new disks, but having vdev IDs is such a nice feature that I'm sad to not have it :-)

Thanks in advance for any help you can provide!
Cheers,
Ramón.

/etc/zfs/vdev_id.conf

multipath no
topology sas_direct
phys_per_port 4
channel 0c:00.0 0 A
channel 0c:00.0 1 B

@nedbass
Contributor

nedbass commented Jan 22, 2014

Hi @ramonfernandez, we've only tested vdev_id with a limited range of hardware and software configurations, so it's not surprising to find cases where it doesn't work as expected.

Can you post the output from this command in a gist:

sh -x /lib/udev/vdev_id -d <block_device>

where block_device is one of the disk names (just the name, not the full path, i.e. sda) that isn't getting a by-vdev entry?

@ramonfernandez
Author


Attached. This is for "sdy", which is on slot-1 of the second enclosure in
the chain, and gets mapped to A1 (as opposed to A13). I also checked
"sde", the disk on slot-1 on the first enclosure in the chain, which also
gets mapped to A1 -- let me know if you need that. Ah, and thanks for your
prompt response!

Cheers,
Ramón.

+ PATH=/bin:/sbin:/usr/bin:/usr/sbin
+ CONFIG=/etc/zfs/vdev_id.conf
+ PHYS_PER_PORT=
+ DEV=
+ MULTIPATH=
+ TOPOLOGY=
+ getopts c:d:g:mp:h OPTION
+ case ${OPTION} in
+ DEV=sdy
+ getopts c:d:g:mp:h OPTION
+ '[' '!' -r /etc/zfs/vdev_id.conf ']'
+ '[' -z sdy ']'
+ '[' -z '' ']'
++ awk '$1 == "topology" {print $2; exit}' /etc/zfs/vdev_id.conf
+ TOPOLOGY=sas_direct
++ alias_handler
++ local DM_PART=
++ echo
++ grep -q -E 'p[0-9][0-9]*$'
+ ID_VDEV=
+ '[' -z '' ']'
+ TOPOLOGY=sas_direct
+ case $TOPOLOGY in
++ sas_handler
++ '[' -z '' ']'
+++ awk '$1 == "phys_per_port" {print $2; exit}' /etc/zfs/vdev_id.conf
++ PHYS_PER_PORT=4
++ PHYS_PER_PORT=4
++ echo 4
++ grep -q -E '^[0-9]+$'
++ '[' -z '' ']'
+++ awk '$1 == "multipath" {print $2; exit}' /etc/zfs/vdev_id.conf
++ MULTIPATH_MODE=no
++ '[' no = yes ']'
++ echo sdy
++ grep -q '^/devices/'
+++ udevadm info -q path -p /sys/block/sdy
++ sys_path=/devices/pci0000:00/0000:00:04.0/0000:0c:00.0/host9/port-9:0/expander-9:0/port-9:0:9/expander-9:2/port-9:2:0/end_device-9:2:0/target9:0:22/9:0:22:0/block/sdy
+++ echo /devices/pci0000:00/0000:00:04.0/0000:0c:00.0/host9/port-9:0/expander-9:0/port-9:0:9/expander-9:2/port-9:2:0/end_device-9:2:0/target9:0:22/9:0:22:0/block/sdy
+++ tr / ' '
++ set -- devices pci0000:00 0000:00:04.0 0000:0c:00.0 host9 port-9:0 expander-9:0 port-9:0:9 expander-9:2 port-9:2:0 end_device-9:2:0 target9:0:22 9:0:22:0 block sdy
++ num_dirs=15
++ scsi_host_dir=/sys
++ i=1
++ '[' 1 -le 15 ']'
+++ eval echo '${1}'
++++ echo devices
++ d=devices
++ scsi_host_dir=/sys/devices
++ echo devices
++ grep -q -E '^host[0-9]+$'
++ i=2
++ '[' 2 -le 15 ']'
+++ eval echo '${2}'
++++ echo pci0000:00
++ d=pci0000:00
++ scsi_host_dir=/sys/devices/pci0000:00
++ echo pci0000:00
++ grep -q -E '^host[0-9]+$'
++ i=3
++ '[' 3 -le 15 ']'
+++ eval echo '${3}'
++++ echo 0000:00:04.0
++ d=0000:00:04.0
++ scsi_host_dir=/sys/devices/pci0000:00/0000:00:04.0
++ echo 0000:00:04.0
++ grep -q -E '^host[0-9]+$'
++ i=4
++ '[' 4 -le 15 ']'
+++ eval echo '${4}'
++++ echo 0000:0c:00.0
++ d=0000:0c:00.0
++ scsi_host_dir=/sys/devices/pci0000:00/0000:00:04.0/0000:0c:00.0
++ echo 0000:0c:00.0
++ grep -q -E '^host[0-9]+$'
++ i=5
++ '[' 5 -le 15 ']'
+++ eval echo '${5}'
++++ echo host9
++ d=host9
++ scsi_host_dir=/sys/devices/pci0000:00/0000:00:04.0/0000:0c:00.0/host9
++ echo host9
++ grep -q -E '^host[0-9]+$'
++ break
++ '[' 5 = 15 ']'
+++ eval echo '${4}'
+++ awk -F: '{print $2":"$3}'
++++ echo 0000:0c:00.0
++ PCI_ID=0c:00.0
++ port_dir=/sys/devices/pci0000:00/0000:00:04.0/0000:0c:00.0/host9
++ case $TOPOLOGY in
++ j=6
++ i=6
++ '[' 6 -le 6 ']'
+++ eval echo '${6}'
++++ echo port-9:0
++ port_dir=/sys/devices/pci0000:00/0000:00:04.0/0000:0c:00.0/host9/port-9:0
++ i=7
++ '[' 7 -le 6 ']'
+++ head -1
+++ awk -F: '{print $NF}'
++ PHY=0
++ '[' -z 0 ']'
++ PORT=0
++ end_device_dir=/sys/devices/pci0000:00/0000:00:04.0/0000:0c:00.0/host9/port-9:0
++ '[' 7 -lt 15 ']'
+++ eval echo '${7}'
++++ echo expander-9:0
++ d=expander-9:0
++ end_device_dir=/sys/devices/pci0000:00/0000:00:04.0/0000:0c:00.0/host9/port-9:0/expander-9:0
++ grep -q '^end_device'
++ i=8
++ '[' 8 -lt 15 ']'
+++ eval echo '${8}'
++++ echo port-9:0:9
++ d=port-9:0:9
++ end_device_dir=/sys/devices/pci0000:00/0000:00:04.0/0000:0c:00.0/host9/port-9:0/expander-9:0/port-9:0:9
++ echo port-9:0:9
++ grep -q '^end_device'
++ i=9
++ '[' 9 -lt 15 ']'
+++ eval echo '${9}'
++++ echo expander-9:2
++ d=expander-9:2
++ end_device_dir=/sys/devices/pci0000:00/0000:00:04.0/0000:0c:00.0/host9/port-9:0/expander-9:0/port-9:0:9/expander-9:2
++ echo expander-9:2
++ grep -q '^end_device'
++ i=10
++ '[' 10 -lt 15 ']'
+++ eval echo '${10}'
++++ echo port-9:2:0
++ d=port-9:2:0
++ end_device_dir=/sys/devices/pci0000:00/0000:00:04.0/0000:0c:00.0/host9/port-9:0/expander-9:0/port-9:0:9/expander-9:2/port-9:2:0
++ echo port-9:2:0
++ grep -q '^end_device'
++ i=11
++ '[' 11 -lt 15 ']'
+++ eval echo '${11}'
++++ echo end_device-9:2:0
++ d=end_device-9:2:0
++ end_device_dir=/sys/devices/pci0000:00/0000:00:04.0/0000:0c:00.0/host9/port-9:0/expander-9:0/port-9:0:9/expander-9:2/port-9:2:0/end_device-9:2:0
++ echo end_device-9:2:0
++ grep -q '^end_device'
++ end_device_dir=/sys/devices/pci0000:00/0000:00:04.0/0000:0c:00.0/host9/port-9:0/expander-9:0/port-9:0:9/expander-9:2/port-9:2:0/end_device-9:2:0/sas_device/end_device-9:2:0
++ break
+++ cat /sys/devices/pci0000:00/0000:00:04.0/0000:0c:00.0/host9/port-9:0/expander-9:0/port-9:0:9/expander-9:2/port-9:2:0/end_device-9:2:0/sas_device/end_device-9:2:0/bay_identifier
++ SLOT=1
++ '[' -z 1 ']'
+++ map_slot 1
+++ local LINUX_SLOT=1
+++ local MAPPED_SLOT=
++++ awk '$1 == "slot" && $2 == 1 { print $3; exit }' /etc/zfs/vdev_id.conf
+++ MAPPED_SLOT=
+++ '[' -z '' ']'
+++ MAPPED_SLOT=1
+++ printf %d 1
++ SLOT=1
+++ map_channel 0c:00.0 0
+++ local MAPPED_CHAN=
+++ local PCI_ID=0c:00.0
+++ local PORT=0
+++ case $TOPOLOGY in
++++ awk '$1 == "channel" && $2 == "0c:00.0" && $3 == 0 { print $4; exit }' /etc/zfs/vdev_id.conf
+++ MAPPED_CHAN=A
+++ printf %s A
++ CHAN=A
++ '[' -z A ']'
++ echo A1
+ ID_VDEV=A1
+ '[' -n A1 ']'
+ echo ID_VDEV=A1
ID_VDEV=A1
+ echo ID_VDEV_PATH=disk/by-vdev/A1
ID_VDEV_PATH=disk/by-vdev/A1

@nedbass
Contributor

nedbass commented Jan 22, 2014

I would think it would be more useful to have the new enclosure show up as another "channel" (i.e. C) rather than extending the slot numbers of A. After all, the idea is to simplify physically locating a disk, and it may not be obvious which enclosure has slots 1-12 and which has 13-24.

The "sas_direct" topology is based on the notion of one enclosure per HBA port. Since both of your enclosures appear on the same port, their disks are getting conflicting names. To define channels in daisy-chained topologies we need an additional piece of location information besides the PCI slot and port number; we need to know the position in the chain. To preserve backward compatibility with the existing "channel" syntax, I think we'd need to define a new topology to support this, e.g. "sas_daisychain".

I don't have access to a system like this, so I'm not sure how the daisy-chain architecture gets reflected under /sys. But please do also post the vdev_id trace output for sde, so I can see where the paths under /sys start to diverge.
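
A quick way to compare the /sys paths of all disks at once is a loop along these lines (just a sketch; it uses the same udevadm query that vdev_id itself relies on):

  # Print each sd* device next to its full kernel device path so the
  # expander-X:Y components of daisy-chained enclosures can be compared.
  for dev in /sys/block/sd*; do
      printf '%s\t%s\n' "$(basename "$dev")" "$(udevadm info -q path -p "$dev")"
  done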

@ramonfernandez
Author

I attached a third enclosure and I think I have it. All enclosures show up
under the following common path:

/sys/devices/pci0000:00/0000:00:04.0/0000:0c:00.0/host9/port-9:0/expander-9:0

After that each disk in the first enclosure is on a port number, and the
connection to the next enclosure also has its own port number. For
example:

udevadm info -q path -p /sys/block/sde

.../port-9:0:0/end_device-9:0:0/target9:0:0/9:0:0:0/block/sde

(where "..." is the common path above). This is disk 1 on enclosure 1,
sitting on port 0. The connection to the next enclosure shows up on port
9:

udevadm info -q path -p /sys/block/sdm

.../port-9:0:9/expander-9:1/port-9:1:0/end_device-9:1:0/target9:0:9/9:0:9:0/block/sdm

This is disk 1 on enclosure 2. Same thing for enclosure 3:

udevadm info -q path -p /sys/block/sdy

.../port-9:0:9/expander-9:1/port-9:1:13/expander-9:2/port-9:2:0/end_device-9:2:0/target9:0:22/9:0:22:0/block/sdy

It seems that the names expander-X:Y, end_device-X:Y:Z, and port-X:Y:W all
contain the enclosure number: Y. I'm attaching a quick-and-dirty script
that given a device name (eg: sde) produces an ID in the form of eYdX,
where Y is the enclosure number and X is the disk number, both starting at
0.

I fully agree that disk identification is the main goal, and increasing
the numbering as originally proposed (eg: A13..A24) won't be as useful as
having some sort of enclosure identifier... I guess I'd be happy if I
could have, eg, cXtYdZ, where X is the channel, Y is the enclosure, and Z
is the bay_identifier... too many years using Solaris perhaps? :))

Hope this helps. I'll be happy to send further information if needed, and
also test any patches to /lib/udev/vdev_id.

Cheers,
Ramón.
#!/bin/bash
# Quick and dirty: derive an e<enclosure>d<disk> id for a block device from
# the end_device-X:Y:Z component of its /sys path.

set -e
path="$(udevadm info -q path -p "/sys/block/$1")"

# Walk the path upwards until we hit the end_device-X:Y:Z component.
while [ "$path" != "/" ]; do
    name="$(basename "$path")"
    (echo "$name" | grep -q end_device) && break
    path="$(dirname "$path")"
done

case $name in
    end_device*) ;;
    *) exit 1 ;;
esac

# end_device-X:Y:Z -> eYdZ (Y = enclosure, Z = disk index)
echo "$name" | sed -e 's/.*-.:\(.\):\(.\)$/e\1d\2/'
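
Saved as, say, encid.sh (an arbitrary name) and run against the devices above, it should print e0d0 for sde and e2d0 for sdy, since their paths contain end_device-9:0:0 and end_device-9:2:0 respectively:

  bash encid.sh sde    # -> e0d0
  bash encid.sh sdy    # -> e2d0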

@ramonfernandez
Author

It seems that the names expander-X:Y, end_device-X:Y:Z, and port-X:Y:W all contain the enclosure number: Y.

When parsing the path provided by udevadm in reverse, that is :)

@nedbass
Contributor

nedbass commented Jan 23, 2014

Thanks, that helps a lot. In hindsight, the config file format I chose isn't very easily extensible 😃

But, I'm thinking we could add a new optional parameter for the "channel" keyword in vdev_id.conf. The script would just have to parse the fields according to the number of parameters present. So the current forms would continue to work as they do today, while the new form would support daisy-chained configurations:

#       PCI_ID  HBA PORT  ENCLOSURE NUMBER  CHANNEL NAME
channel 85:00.0 1         0                 A   
channel 85:00.0 1         1                 B
channel 85:00.0 0         0                 C
channel 86:00.0 0         1                 D

This would address the current case without overly-complicating the parsing logic. But I can see that vdev_id will need to move to a more flexible config file format if it is to be extended much further.
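
A rough sketch of how that variable-field lookup could branch (this is not the actual patch, and the helper shown here, a hypothetical variant of the map_channel lookup with an extra enclosure argument, is only illustrative):

  # Prefer the new 4-parameter channel form and fall back to the original
  # 3-parameter form when no enclosure-specific line matches.
  map_channel() {
      local PCI_ID="$1" PORT="$2" ENC="$3" CHAN=""
      # new form: channel PCI_ID PORT ENCLOSURE NAME (5 fields total)
      CHAN=$(awk -v p="$PCI_ID" -v o="$PORT" -v e="$ENC" \
          '$1 == "channel" && $2 == p && $3 == o && NF == 5 && $4 == e { print $5; exit }' \
          /etc/zfs/vdev_id.conf)
      # original form: channel PCI_ID PORT NAME (4 fields total)
      [ -z "$CHAN" ] && CHAN=$(awk -v p="$PCI_ID" -v o="$PORT" \
          '$1 == "channel" && $2 == p && $3 == o && NF == 4 { print $4; exit }' \
          /etc/zfs/vdev_id.conf)
      printf "%s" "$CHAN"
  }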

@ramonfernandez
Author

Hi Ned -- yes, I think that would work. You wrote:

PCI_ID HBA PORT ENCLOSURE NUMBER CHANNEL NAME

Wouldn't it be easier for backwards compatibility to swap enclosure number
and channel name? Just curious, I don't know enough about the vdev_id to
know whether that will break other topologies or not.

BTW, another thing I found out is that looking at the bay_identifier file
is still needed (in other words, the ID produced by the script I sent is
only 100% accurate if all bays are populated; and it doesn't support more
than 9 bays -- too quick and too dirty).

Anyway, thanks a lot for your help, much appreciated.

Cheers,
Ramón.

@nedbass
Contributor

nedbass commented Jan 23, 2014

Wouldn't it be easier for backwards compatibility to swap enclosure number and channel name?

Yes, it would be easier that way, but it seems more intuitive to me to keep the location "coordinates" grouped together, followed by the channel name. In hindsight, I should have made the channel name be the first parameter. (Well, in hindsight the order should have been insignificant, but this was a bit quick and dirty too.)

Anyway, thanks a lot for your help, much appreciated.

You're welcome. I'm happy to get the feedback. I'll post a patched vdev_id for testing shortly.

nedbass added a commit to nedbass/zfs that referenced this issue Jan 23, 2014
Disks in enclosures in daisy-chained configurations will currently get
conflicting names when using the sas_direct and sas_switch topologies.
This is because the "channel" keyword syntax lacks sufficient location
information to distinguish between enclosures connected to the same HBA
and port.  The channel keyword now supports an optional numeric
enclosure_id parameter to identify the position of an enclosure in a
daisy-chained configuration.  Daisy-chained enclosures are numbered
starting from 0.

Signed-off-by: Ned Bass <bass6@llnl.gov>

Fixes openzfs#2074
nedbass added a commit to nedbass/zfs that referenced this issue Jan 23, 2014
Disks in enclosures in daisy-chained configurations will currently get
conflicting names when using the sas_direct and sas_switch topologies.
This is because the "channel" keyword syntax lacks sufficient location
information to distinguish between enclosures connected to the same
physical port.  The channel keyword now supports an optional numeric
enclosure_id parameter to identify the position of an enclosure in a
daisy-chained configuration.  Daisy-chained enclosures are numbered
starting from 0.

Signed-off-by: Ned Bass <bass6@llnl.gov>

Fixes openzfs#2074
@nedbass
Contributor

nedbass commented Jan 23, 2014

@ramonfernandez please give 62f2d9e a try.

@nedbass
Contributor

nedbass commented Jan 23, 2014

btw, you can run vdev_id by hand to test it out before installing the patched version. e.g.

./vdev_id -d sdy -c /path/to/config/file

It should hopefully print something like

ID_VDEV=C1
ID_VDEV_PATH=disk/by-vdev/C1

@nedbass
Contributor

nedbass commented Jan 23, 2014

As for cXtYdZ naming, you could always simulate that by encoding the cXtYd part in your channel names.

@ramonfernandez
Author

Ned,

in 62f2d9e, line 132 ("sas_switch" in the case statement) of the original
vdev_id was removed by mistake; after adding it back I was able to get some
results with the following config file and three enclosures on the same
chain:

  multipath     no
  topology      sas_direct
  phys_per_port 4
  channel 0c:00.0  0    0    c0e1d
  channel 0c:00.0  0    1    c0e2d
  channel 0c:00.0  0    2    c0e3d

So far so good. I then connected the last enclosure of this chain to the
second controller port (so 2 enclosures on one chain, 1 on the other), and
used the following config file:

  multipath     no
  topology      sas_direct
  phys_per_port 4
  channel 0c:00.0  0    0    c0e1d
  channel 0c:00.0  0    1    c0e2d
  channel 0c:00.0  1    2    c1e3d

This did not work well, because the number assigned to the enclosures is
not always the same. I was able to get devices created by changing that "2"
to a "3", but upon reboot my enclosures went from:

  channel 0c:00.0  0    0    c0e1d
  channel 0c:00.0  0    1    c0e2d
  channel 0c:00.0  1    3    c1e3d

to

  channel 0c:00.0  0    0    c0e1d
  channel 0c:00.0  0    2    c0e2d
  channel 0c:00.0  1    1    c1e3d

Looks like the enclosure number vdev_id derives from /sys and the one in the
config file have to match exactly. I understand switching enclosures around is
probably not a common operation, so I think that if this is properly
documented then it's a good solution.

A more flexible option would be to add a new "enclosure" keyword to
vdev_id.conf, and then use the sas_address of the enclosure to produce a
name alias; for example, when I run "lsscsi -t | grep enclo" I get:

  [9:0:8:0]    enclosu sas:0x50014380004fa7a5          -
  [9:0:21:0]   enclosu sas:0x50014380004db4a5          -
  [9:0:34:0]   enclosu sas:0x50014380002d1ca5          -

so a vdev_id.conf like this:

  channel 0c:00.0  0    A-
  channel 0c:00.0  1    B-
  enclosure E1- 0x50014380004fa7a5
  enclosure E2- 0x50014380004db4a5
  enclosure E3- 0x50014380002d1ca5

could produce the right devices (eg: A-E1-1, A-E3-1, B-E2-1, etc.). I can
give this a bit more thought and even try to patch vdev_id, but that will
have to be some other day. In the meantime 62f2d9e allows me to move
forward.

Cheers,
Ramón.
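
As a very rough sketch of the lookup side of that idea (the "enclosure" keyword, the alias format, and the sysfs attribute path are all assumptions here; in particular it assumes the expander's sas_address attribute matches the address lsscsi reports for the enclosure, which is worth verifying since the SES device inside an enclosure can expose a slightly different address):

  # Given the expander directory from a disk's /sys path (e.g. the
  # .../expander-9:2 component), read its SAS address and translate it to
  # the alias configured on a hypothetical "enclosure <alias> <sas_address>"
  # line in vdev_id.conf.
  enclosure_alias() {
      local expander_dir="$1" rphy addr
      rphy=$(basename "$expander_dir")
      [ -r "$expander_dir/sas_device/$rphy/sas_address" ] || return 1
      addr=$(cat "$expander_dir/sas_device/$rphy/sas_address")
      awk -v a="$addr" '$1 == "enclosure" && $3 == a { print $2; exit }' \
          /etc/zfs/vdev_id.conf
  }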

@nedbass
Contributor

nedbass commented Jan 24, 2014

Thanks for testing it out.

This did not work well, because the number assigned to the enclosures is not always the same
...
I understand switching enclosures around is probably not a common operation

Actually I suspect this would also be a problem for things like power supply failure that can cause an enclosure to disappear and come back. This is not an uncommon situation for large scale production environments, so we'd need to come up with a more reliable means of identifying the enclosure before this change can land.

A more flexible option would be to add a new "enclosure" keyword to vdev_id.conf, and then use the sas_address of the enclosure

That's an interesting idea. I prefer solutions that use generic location information as opposed to unique hardware identifiers. This is because we run clusters of identical servers in my environment, and it's much easier to manage a single common configuration file. However, I'm not opposed to providing the functionality you suggest if you want to propose a patch. My suggestion would be to use a name=value syntax for the parameters, i.e.

enclosure E1 sas_addr=0x50014380004fa7a5

That would allow for new parameters to be added in the future to support other identification methods.
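
As a sketch of how such name=value parameters might be pulled apart (a hypothetical helper, not part of vdev_id):

  # Extract a key=value parameter (e.g. sas_addr) from an
  # "enclosure <alias> key=value ..." line in vdev_id.conf.
  enclosure_param() {
      local alias="$1" key="$2"
      awk -v a="$alias" -v k="$key" '
          $1 == "enclosure" && $2 == a {
              for (i = 3; i <= NF; i++) {
                  split($i, kv, "=")
                  if (kv[1] == k) { print kv[2]; exit }
              }
          }' /etc/zfs/vdev_id.conf
  }

  # e.g. enclosure_param E1 sas_addr  ->  0x50014380004fa7a5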

@behlendorf behlendorf removed this from the 0.6.4 milestone Oct 30, 2014
@dbdavids

I am having this issue as well with a Supermicro storage server. The second enclosure in the system is not being recognized by vdev_id as a second set of disks on the same channel and does not show up. I am going to try the patch above (out of the box it is crashing on line 144 as a drop-in vdev_id replacement) with the current 0.6.4 release.

@dbdavids

As Ramon said earlier, with this code, you need to add:
"sas_switch")
after line 133 to make this work. I will submit this change back later.

@DeHackEd
Contributor

Just to add my $0.02, this fixes my chained enclosure problems as well. Supermicro's "front and back" enclosures (eg: this beast) use distinct, chained expanders for the front and back sides, so this fix is needed.

For quick reference, 62f2d9e plus the syntax fix for the case statement works. I can make a quick patch that's rebased if needed.

@behlendorf
Contributor

@DeHackEd please do, let's get a PR open with the proposed fix.

DeHackEd pushed a commit to DeHackEd/zfs that referenced this issue Jul 26, 2016
Disks in enclosures in daisy-chained configurations will currently get
conflicting names when using the sas_direct and sas_switch topologies.
This is because the "channel" keyword syntax lacks sufficient location
information to distinguish between enclosures connected to the same
physical port.  The channel keyword now supports an optional numeric
enclosure_id parameter to identify the position of an enclosure in a
daisy-chained configuration.  Daisy-chained enclosures are numbered
starting from 0.

Original-version-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: DHE <git@dehacked.net>
Fixes openzfs#2074
shuttgl added a commit to shuttgl/zfs that referenced this issue Sep 7, 2018
DeHackEd pushed a commit to DeHackEd/zfs that referenced this issue Jun 29, 2019
@behlendorf
Contributor

The vdev_id support was extended some time ago to handle multiple enclosures in various arrangements. Closing.

@Nihlus

Nihlus commented Nov 3, 2024

How was this issue resolved? I'm looking at the current documentation for vdev_id to try and set up a daisy-chained enclosure configuration but I don't see any of the discussed solutions in there. I've been unable to find a proper configuration for this type of topology.

@behlendorf
Contributor

The changes in #11526 and #12660 were intended to support daisy-chained JBODs.

@Nihlus

Nihlus commented Nov 9, 2024

The changes in #11526 and #12660 were intended to support daisy-chained JBODs.

Thanks! I didn't see the multijbod option documented anywhere, but using it appears to work fine. I am hitting the same issue as #16572, however (I have two head nodes accessing daisy-chained enclosures from opposite ends of the redundant path), so I'll continue the discussion there.
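
For anyone landing here later, the rough shape of such a configuration is something like the following (the keywords are the ones discussed in this thread and documented in vdev_id.conf(5); the PCI IDs and ports are placeholders, and the exact interaction of multijbod with multipath and the channel lines should be checked against the man page for your release):

  multipath      yes
  multijbod      yes
  topology       sas_direct
  phys_per_port  4
  #       PCI_ID   HBA PORT  CHANNEL NAME
  channel 85:00.0  1         A
  channel 85:00.0  0         B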
