This repository has been archived by the owner on Apr 27, 2022. It is now read-only.
On logging into the node again, another shell script has to be executed that does the following:
1. multipath -a /dev/sdb (which is where my boot drive is)
2. dracut --force -H --add multipath
3. shutdown
4. reboot
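The steps above can be sketched as a single script. This is a minimal sketch, assuming /dev/sdb is the boot drive as in the steps; by default it only prints the commands (they require root and a real multipath setup), and APPLY=1 would execute them on the actual node:

```shell
#!/bin/sh
# Sketch of the post-login script described above. By default it only
# echoes each command instead of running it, since multipath/dracut
# need root; set APPLY=1 on the node to execute them for real.
set -eu

run() {
    if [ "${APPLY:-0}" = "1" ]; then
        "$@"
    else
        echo "$@"
    fi
}

run multipath -a /dev/sdb              # whitelist the boot drive (step 1)
run dracut --force -H --add multipath  # rebuild initramfs with multipath (step 2)
run shutdown                           # step 3 in the text
run reboot                             # step 4 in the text
```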
Multipathing gets established, and a successful handoff takes place in the event of any of the tgt servers going down. This is how it looks once we log in to the node:
[root@localhost ~]# multipath -l
mpatha (360000000000000000e00000000010001) dm-0 IET,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 7:0:0:1 sda 8:0  active undef unknown
  `- 8:0:0:1 sdb 8:16 active undef unknown
[root@localhost ~]# multipath -ll
mpatha (360000000000000000e00000000010001) dm-0 IET,VIRTUAL-DISK
size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 7:0:0:1 sda 8:0  active ready running
  `- 8:0:0:1 sdb 8:16 active ready running
[root@localhost ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda             8:0    0   10G  0 disk
└─mpatha      253:0    0   10G  0 mpath
  ├─mpatha1   253:1    0  476M  0 part  /boot
  └─mpatha2   253:2    0    5G  0 part  /
sdb             8:16   0   10G  0 disk
└─mpatha      253:0    0   10G  0 mpath
  ├─mpatha1   253:1    0  476M  0 part  /boot
  └─mpatha2   253:2    0    5G  0 part  /
At this point, a deep snapshot of the node can be taken from BMI to provision other nodes with multipathing enabled. "N" TGT servers can be multipathed simply by discovering and logging into them using iscsiadm commands; those targets automatically get multipathed.
Here are the steps to enable iSCSI multipathing:
1. Replicate the tgt target configuration file. For example, the file /etc/tgt/conf.d/kumo-dan-installscript-img49.conf with the configuration below will have to be replicated.
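The actual contents of that file were not included; as a minimal sketch, a tgt target definition typically looks like this (the IQN and backing-store path here are assumptions for illustration):

```
<target iqn.2017-01.org.example:img49>
    backing-store /var/lib/tgtd/img49.img
    initiator-address ALL
</target>
```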
2. Make changes in the /var/lib/tftpboot/*.ipxe file (add the 2nd target in the iPXE file).
For example, wherein 10.20.30.1 and 10.20.30.2 are the IP addresses of the respective tgt servers.
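The original example was not preserved; a hedged sketch of what a two-target iPXE stanza could look like, using the two server addresses from the text (the IQN is an assumption):

```
#!ipxe
sanhook --drive 0x80 iscsi:10.20.30.1::::iqn.2017-01.org.example:img49
sanhook --drive 0x81 iscsi:10.20.30.2::::iqn.2017-01.org.example:img49
sanboot --drive 0x80
```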
3. Boot the node with the above change (reprovision the node). On booting, the node will have entries for both tgt servers in /sys/firmware/ibft.
(uncommenting the defaults settings)
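The "(uncommenting the defaults settings)" note presumably refers to the multipath configuration; a minimal sketch of the relevant section of /etc/multipath.conf, assuming standard multipath-tools defaults (this interpretation is an assumption, not stated in the original):

```
defaults {
    user_friendly_names yes
    find_multipaths yes
}
```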
Problems with this approach:
1. Automating the shell scripts, since two reboots are needed.
2. Both TGT servers need to be up for the node to boot; otherwise the node does not boot and gets stuck in dracut (see https://docs.oracle.com/cd/E50245_01/E50246/html/ch06s101.html).
As a solution to the above problem, multipathing with three tgt servers instead of two has been suggested by @SahilTikale and @apoorvemohan.
However, there seems to be a limit on the number of arguments that can be passed in the ipxe file. #176