=================================================================

                Linux* Open-iSCSI

=================================================================

                                                   Jun 6, 2022
Contents
========

- 1. In This Release
- 2. Introduction
- 3. Installation
- 4. Open-iSCSI daemon
- 5. Open-iSCSI Configuration Utility
- 6. Configuration
- 7. Getting Started
- 8. Advanced Configuration
- 9. iSCSI System Info


1. In This Release
==================

This file describes the Linux* Open-iSCSI Initiator. The software was
tested on AMD Opteron (TM) and Intel Xeon (TM).

The latest development release is available at:

	https://github.com/open-iscsi/open-iscsi

For questions, comments, or contributions, post an issue on the GitHub
project page, or send e-mail to:

	open-iscsi@googlegroups.com


1.1. Features
=============

- highly optimized and very small-footprint data path
- persistent configuration database
- SendTargets discovery
- CHAP
- PDU header Digest
- multiple sessions


1.2  Licensing
==============

The daemon and other top-level commands are licensed as GPLv3, while the
libopeniscsiusr library used by some of those commands is licensed as LGPLv3.


2. Introduction
===============

The Open-iSCSI project is a high-performance, transport-independent,
multi-platform implementation of RFC 3720 iSCSI.

Open-iSCSI is partitioned into user and kernel parts.

The kernel portion of Open-iSCSI was originally part of this project
repository, but now is built into the Linux kernel itself. It
includes the loadable modules scsi_transport_iscsi.ko, libiscsi.ko and
scsi_tcp.ko. The kernel code handles the "fast" path, i.e. data flow.

User space contains the entire control plane: configuration
manager, iSCSI Discovery, Login and Logout processing,
connection-level error processing, Nop-In and Nop-Out handling,
and (perhaps in the future:) Text processing, iSNS, SLP, Radius, etc.

The user space Open-iSCSI consists of a daemon process called
iscsid and a management utility, iscsiadm. There are also helper
programs, including iscsiuio, which is needed for certain iSCSI adapters.


3. Installation
===============

NOTE:	You will need to be root to install the Open-iSCSI code, and
	you will also need to be root to run it.

As of today, the Open-iSCSI Initiator requires a host running the
Linux kernel.

The userspace components iscsid, iscsiadm and iscsistart require the
open-isns library, unless open-isns use is disabled when building (see
below).

If this package is not available for your distribution, you can download
and install it yourself.  To install the open-isns headers and library
required for Open-iSCSI, download the current release from:

	https://github.com/open-iscsi/open-isns

Then, from the top-level directory, run:

	./configure [<OPTIONS>]
	make
	make install

For the open-iscsi project and iscsiuio, the original build
system used make and autoconf to build the project. These
build systems are being deprecated in favor of meson (and ninja).
See below for how to build using make and autoconf, but
migrating as soon as possible to meson would be a good idea.

Building open-iscsi/iscsiuio using meson
----------------------------------------
For Open-iSCSI and iscsiuio, the system is built using meson and ninja
(see https://github.com/mesonbuild/meson). If these packages aren't
available to you on your Linux distribution, you can download
the latest release from https://github.com/mesonbuild/meson/releases.
The README.md file describes in detail how to build it yourself, including
how to get ninja.

To build the open-iscsi project, including iscsiuio, first run meson
to configure the build, from the top-level open-iscsi directory, e.g.:

	rm -rf builddir
	mkdir builddir
	meson [<MESON-OPTIONS>] builddir

Then, to build the code:

	ninja -C builddir

If you change any code and want to rebuild, you simply run ninja again.

When you are ready to install:

	[DESTDIR=<SOME-DIR>] ninja -C builddir install

This will install the iSCSI tools, configuration files, interfaces, and
documentation. If you do not set DESTDIR, it defaults to "/".


MESON-OPTIONS:
--------------
One can override several default values when building with meson:


Option			Description
=====================	=====================================================

--libdir=<LIBDIR>	Where library files go [/lib64]
--sbindir=<DIR>		Meson 0.63 or newer: Where binaries go [/usr/sbin]
-Dc_flags="<C-FLAGS>"	Add additional flags to the C compiler
-Dno_systemd=<BOOL>	Disable systemd usage [false]
			(set to "true" to disable systemd)
-Dsystemddir=<DIR>	Set systemd unit directory [/usr/lib/systemd]
-Dhomedir=<DIR>		Set config file directory [/etc/iscsi]
-Ddbroot=<DIR>		Set Database directory [/etc/iscsi]
-Dlockdir=<DIR>		Set Lock directory [/run/lock/iscsi]
-Drulesdir=<DIR>	Set udev rules directory [/usr/lib/udev/rules.d]
-Discsi_sbindir=<DIR>	Where binaries go [/usr/sbin]
			(for use when sbindir can't be set, in older versions
			 of meson)
-Disns_supported=<BOOL>	Enable/disable iSNS support [true]
			(set to "false" to disable use of open-isns)
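
As an illustration only, a configuration that keeps the node database
under /var/lib/iscsi and builds without open-isns support (both values
are just examples, not recommendations) could be set up and built with:

	rm -rf builddir
	mkdir builddir
	meson -Ddbroot=/var/lib/iscsi -Disns_supported=false builddir
	ninja -C builddir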


Building open-iscsi/iscsiuio using make/autoconf (Deprecated)
-------------------------------------------------------------
If you wish to build using the older deprecated system, you can
simply run:

	make [<MAKE-OPTIONS>]
	make [DESTDIR=<SOME-DIR>] install

Where MAKE-OPTIONS are from:
	* SBINDIR=<some-dir>  [/usr/bin]   for executables
	* DBROOT=<some-dir>   [/etc/iscsi] for iscsi database files
	* HOMEDIR=<some-dir>  [/etc/iscsi] for iscsi config files
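
For example, a build that keeps the defaults but installs into a staging
area (the staging directory here is only illustrative) could be run as:

	make
	make DESTDIR=/tmp/stage install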


4. Open-iSCSI daemon
====================

The iscsid daemon implements the control path of the iSCSI protocol, plus some
management facilities. For example, the daemon could be configured to
automatically re-start discovery at startup, based on the contents of the
persistent iSCSI database (see next section).

For help, run:

	iscsid --help

The output will be similar to the following (assuming a default install):

Usage: iscsid [OPTION]

  -c, --config=[path]     Execute in the config file (/etc/iscsi/iscsid.conf).
  -i, --initiatorname=[path]     read initiatorname from file (/etc/iscsi/initiatorname.iscsi).
  -f, --foreground        run iscsid in the foreground
  -d, --debug debuglevel  print debugging information
  -u, --uid=uid           run as uid, default is current user
  -g, --gid=gid           run as gid, default is current user group
  -n, --no-pid-file       do not use a pid file
  -p, --pid=pidfile       use pid file (default /run/iscsid.pid).
  -h, --help              display this help and exit
  -v, --version           display version and exit
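
For instance, during testing the daemon can be run in the foreground with
verbose debugging (the debug level here is only an example):

	iscsid -f -d 8

In normal use the daemon is instead started by the distribution's init or
systemd service, as described in section 7.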


5. Open-iSCSI Configuration and Administration Utility
======================================================

Open-iSCSI persistent configuration is stored in a number of
directories under a configuration root directory, using a flat-file
format. This configuration root directory is /etc/iscsi by default,
but may also commonly be in /var/lib/iscsi (see "dbroot" in the meson
options discussed earlier).

Configuration is contained in directories for:

	- nodes
	- isns
	- static
	- fw
	- send_targets
	- ifaces

The iscsiadm utility is a command-line tool to manage (update, delete,
insert, query) the persistent database, as well as to manage discovery,
session establishment (login), and ending sessions (logout).

This utility presents a set of operations that a user can perform
on iSCSI node, session, connection, and discovery records.

Open-iSCSI does not use the term node as defined by the iSCSI RFC,
where a node is a single iSCSI initiator or target. Open-iSCSI uses the
term node to refer to a portal on a target, so tools like iscsiadm
require that the '--targetname' and '--portal' arguments be used when
in node mode.
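
For example, to display the stored record for one portal of a target (the
target name and portal here are the same illustrative values used in the
examples later in this document):

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 -o show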

For session mode, a session id (sid) is used. The sid of a session can be
found by running:

	iscsiadm -m session -P 1

The session id is not currently persistent and is partially determined by
when the session is set up.

Note that some of the iSCSI Node and iSCSI Discovery operations
do not require the iSCSI daemon (iscsid) to be running.

For help on the command, run:

	iscsiadm --help

The output will be similar to the following.

iscsiadm -m discoverydb [-hV] [-d debug_level] [-P printlevel] [-t type -p ip:port -I ifaceN ... [-Dl]] | [[-p ip:port -t type] [-o operation] [-n name] [-v value] [-lD]]
iscsiadm -m discovery [-hV] [-d debug_level] [-P printlevel] [-t type -p ip:port -I ifaceN ... [-l]] | [[-p ip:port] [-l | -D]] [-W]
iscsiadm -m node [-hV] [-d debug_level] [-P printlevel] [-L all,manual,automatic,onboot] [-W] [-U all,manual,automatic,onboot] [-S] [[-T targetname -p ip:port -I ifaceN] [-l | -u | -R | -s]] [[-o operation ] [-n name] [-v value]]
iscsiadm -m session [-hV] [-d debug_level] [-P printlevel] [-r sessionid | sysfsdir [-R | -u | -s] [-o operation] [-n name] [-v value]]
iscsiadm -m iface [-hV] [-d debug_level] [-P printlevel] [-I ifacename | -H hostno|MAC] [[-o operation ] [-n name] [-v value]] [-C ping [-a ip] [-b packetsize] [-c count] [-i interval]]
iscsiadm -m fw [-d debug_level] [-l] [-W] [[-n name] [-v value]]
iscsiadm -m host [-P printlevel] [-H hostno|MAC] [[-C chap [-x chap_tbl_idx]] | [-C flashnode [-A portal_type] [-x flashnode_idx]] | [-C stats]] [[-o operation] [-n name] [-v value]]
iscsiadm -k priority


The first parameter specifies the mode to operate in:

  -m, --mode <op>	specify operational mode op =
			<discoverydb|discovery|node|session|iface|fw|host>

Mode "discoverydb"
------------------

  -m discoverydb --type=[type] --interface=[iface…] --portal=[ip:port] \
			--print=[N] \
			--op=[op]=[NEW | UPDATE | DELETE | NONPERSISTENT] \
			--discover

			  This command will use the discovery record settings
			  matching the record with type=type and
			  portal=ip:port. If a record does not exist, it will
			  create a record using the iscsid.conf discovery
			  settings.

			  By default, it will then remove records for
			  portals that are no longer returned, and
			  if a portal is returned by the target, the
			  discovery command will create a new record or modify
			  an existing one with values from iscsid.conf and the
			  command line.

			  [op] can be passed in multiple times to this
			  command, and it will alter the node DB manipulation.

			  If [op] is passed in and the value is
			  "new", iscsiadm will add records for portals that do
			  not yet have records in the db.

			  If [op] is passed in and the value is
			  "update", iscsiadm will update node records using
			  info from iscsid.conf and the command line for portals
			  that are returned during discovery and have
			  a record in the db.

			  If [op] is passed in and the value is "delete",
			  iscsiadm will delete records for portals that
			  were not returned during discovery.

			  If [op] is passed in and the value is
			  "nonpersistent", iscsiadm will not store
			  the portals found in the node DB. This is
			  only useful with the --login command.

			  See the example section for more info.

			  See below for how to setup iSCSI ifaces for
			  software iSCSI or override the system defaults.

			  Multiple ifaces can be passed in during discovery.

			  For the above commands, "print" is optional. If
			  used, N can be 0 or 1.
			  0 = The old flat style of output is used.
			  1 = The tree style with the interface info is used.

			  If print is not used, the old flat style is used.

  -m discoverydb --interface=[iface...] --type=[type] --portal=[ip:port] \
			--print=[N] \
			--op=[op]=[NEW | UPDATE | DELETE | NONPERSISTENT] \
			--discover --login

			  This works like the previous discoverydb command,
			  except that with the --login argument passed in it
			  will also log into the portals that are found.

  -m discoverydb --portal=[ip:port] --type=[type] \
			--op=[op] [--name=[name] --value=[value]]

			  Perform specific DB operation [op] for the
			  discovery portal. It could be one of:
			  [new], [delete], [update] or [show]. In case of
			  [update], you have to provide the [name] and [value]
			  you wish to update.

			  Setting op=NEW will create a new discovery record
			  using the iscsid.conf discovery settings. If it
			  already exists, it will be overwritten using
			  iscsid.conf discovery settings.

			  Setting op=DELETE will delete the discovery record
			  and records for the targets found through
			  that discovery source.

			  Setting op=SHOW will display the discovery record
			  values. The --show argument can be used to
			  force the CHAP passwords to be displayed.

Mode "discovery"
----------------

  -m discovery --type=[type] --interface=iscsi_ifacename \
			--portal=[ip:port] --login --print=[N] \
			--op=[op]=[NEW | UPDATE | DELETE | NONPERSISTENT]

			  Perform [type] discovery for target portal with
			  ip-address [ip] and port [port].

			  This command will not use the discovery record
			  settings. It will use the iscsid.conf discovery
			  settings and it will overwrite the discovery
			  record with iscsid.conf discovery settings if it
			  exists. By default, it will then remove records for
			  portals that are no longer returned, and
			  if a portal is returned by the target, the
			  discovery command will create a new record or modify
			  an existing one with values from iscsid.conf and the
			  command line.

			  [op] can be passed in multiple times to this
			  command, and it will alter the DB manipulation.

			  If [op] is passed in and the value is
			  "new", iscsiadm will add records for portals that do
			  not yet have records in the db.

			  If [op] is passed in and the value is
			  "update", iscsiadm will update node records using
			  info from iscsid.conf and the command line for portals
			  that are returned during discovery and have
			  a record in the db.

			  If [op] is passed in and the value is "delete",
			  iscsiadm will delete records for portals that
			  were not returned during discovery.

			  If [op] is passed in and the value is
			  "nonpersistent", iscsiadm will not store
			  the portals found in the node DB.

			  See the example section for more info.

			  See below for how to setup iSCSI ifaces for
			  software iSCSI or override the system defaults.

			  Multiple ifaces can be passed in during discovery.

  -m discovery --print=[N]

			  Display all discovery records from internal
			  persistent discovery database.

Mode "node"
-----------

  -m node		  display all discovered nodes from internal
			  persistent discovery database

  -m node --targetname=[name] --portal=[ip:port] \
			--interface=[iscsi_ifacename] \
			[--login|--logout|--rescan|--stats] [-W]

  -m node --targetname=[name] --portal=[ip:port]
			--interface=[driver,HWaddress] \
			--op=[op] [--name=[name] --value=[value]]

  -m node --targetname=[name] --portal=[ip:port]
			--interface=[iscsi_ifacename] \
			--print=[level]

			  Perform specific DB operation [op] for specific
			  interface on host that will connect to portal on
			  target. targetname, portal and interface are optional.
			  See below for how to setup iSCSI ifaces for
			  software iSCSI or override the system defaults.

			  The op could be one of [new], [delete], [update] or
			  [show]. In case of [update], you have to provide
			  [name] and [value] you wish to update.
			  For [delete], note that if a session is using the
			  node record, the session will be logged out then
			  the record will be deleted.

			  Using --rescan will perform a SCSI layer scan of the
			  session to find new LUNs.

			  Using --stats prints the iSCSI stats for the session.

			  Using --login sends a login request to the
			  specified target and normally waits for the result.
			  If -W/--no_wait is supplied, return success if we are
			  able to send the login request, and do not wait
			  for the response. The user will have to poll for
			  success.

			  Print level can be 0 to 1.

  -m node --logoutall=[all|manual|automatic]
			  Log out of "all" the running sessions or just the ones
			  with a node startup value of manual or automatic.
			  Nodes marked as ONBOOT are skipped.

  -m node --loginall=[all|manual|automatic] [-W]
			  Log in to "all" the nodes or just the ones
			  with a node startup value of manual or automatic.
			  Nodes marked as ONBOOT are skipped.

			  If -W is supplied then do not wait for the login
			  response for the target, returning success if we
			  are able to just send the request. The client
			  will have to poll for success.

Mode "session"
--------------

  -m session		  display all active sessions and connections

  -m session --sid=[sid] [ --print=level | --rescan | --logout ]
			--op=[op] [--name=[name] --value=[value]]

			  Perform operation for specific session with
			  session id sid. If no sid is given, the operation
			  will be performed on all running sessions if possible.
			  --logout and --op work like they do in node mode,
			  but in session mode targetname and portal info
			  is not passed in.

			  Print level can be 0 to 3.
			  0 = Print the running sessions.
			  1 = Print basic session info like node we are
			  connected to and whether we are connected.
			  2 = Print iSCSI params used.
			  3 = Print SCSI info like LUNs, device state.

			  If no sid and no operation are given, print out the
			  running sessions.

Mode "iface"
------------

  -m iface --interface=iscsi_ifacename --op=[op] [--name=[name] --value=[value]]
			--print=level

			  Perform operation on given interface with name
			  iscsi_ifacename.

			  See below for examples.

  -m iface --interface=iscsi_ifacename -C ping --ip=[ipaddr] --packetsize=[size]
			--count=[count] --interval=[interval]

Mode "host"
-----------

  -m host [--host=hostno|MAC] --print=level -C chap --op=[SHOW]

			  Display information for a specific host. The host
			  can be passed in by host number or by MAC address.
			  If a host is not passed in, then info
			  for all hosts is printed.

			  Print level can be 0 to 4.
			  1 = Print info for the host like its state, MAC, and
			      netinfo if possible.
			  2 = Print basic session info for nodes the host
			      is connected to.
			  3 = Print iSCSI params used.
			  4 = Print SCSI info like LUNs, device state.

  -m host --host=hostno|MAC -C chap --op=[DELETE] --index=[chap_tbl_idx]

			  Delete chap entry at the given index from chap table.

  -m host --host=hostno|MAC -C chap --op=[NEW | UPDATE] --index=[chap_tbl_idx] \
			--name=[name] --value=[value]

			  Add new or update existing chap entry at the given
			  index with given username and password pair. If index
			  is not passed then entry is added at the first free
			  index in chap table.

  -m host --host=hostno|MAC -C flashnode

			  Display list of all the targets in adapter's
			  flash (flash node), for the specified host,
			  with ip, port, tpgt and iqn.

  -m host --host=hostno|MAC -C flashnode --op=[NEW] --portal_type=[ipv4|ipv6]

			  Create new flash node entry for the given host of the
			  specified portal_type. This returns the index of the
			  newly created entry on success.

  -m host --host=hostno|MAC -C flashnode --index=[flashnode_index] \
			--op=[UPDATE] --name=[name] --value=[value]

			  Update the params of the specified flash node.
			  The [name] and [value] pairs must be provided for the
			  params that need to be updated. Multiple params can
			  be updated using a single command.

  -m host --host=hostno|MAC -C flashnode --index=[flashnode_index] \
			--op=[SHOW | DELETE | LOGIN | LOGOUT]

			  Setting op=DELETE|LOGIN|LOGOUT will perform the
			  deletion/login/logout operation on the specified
			  flash node.

			  Setting op=SHOW will list all params with the values
			  for the specified flash node. This is the default
			  operation.

			  See the iscsiadm example section below for more info.

Other arguments
---------------

  -d, --debug debuglevel  print debugging information

  -V, --version		  display version and exit

  -h, --help		  display this help and exit


5.1 iSCSI iface setup
=====================

The next sections describe how to set up iSCSI ifaces so you can bind
a session to a NIC port when using software iSCSI (section 5.1.1), and
how to set up ifaces for use with offload cards from Chelsio
and Broadcom (section 5.1.2).


5.1.1 How to setup iSCSI interfaces (iface) for binding
=======================================================

If you wish to allow the network subsystem to figure out
the best path/NIC to use, then you can skip this section. For example,
if you have set up your portals and NICs on different subnets, then
the following is not needed for software iSCSI.

Warning!!!!!!
This feature is experimental. The interface may change. When reporting
bugs, if you cannot do a "ping -I ethX target_portal", then check your
network settings first. Make sure the rp_filter setting is set to 0 or 2
(see Prep section below for more info). If you cannot ping the portal,
then you will not be able to bind a session to a NIC.

What is a scsi_host and iface for software, hardware and partial
offload iSCSI?

Software iSCSI, like iscsi_tcp and iser, allocates a scsi_host per session
and does a single connection per session. As a result
/sys/class/scsi_host and /proc/scsi will report a scsi_host for
each connection/session you have logged into. Offload iSCSI, like
Chelsio cxgb3i, allocates a scsi_host for each PCI device (each
port on an HBA will show up as a different PCI device, so you get
a scsi_host per HBA port).

To manage both types of initiator stacks, iscsiadm uses the interface (iface)
structure. For each HBA port, or for software iSCSI for each network
device (ethX) or NIC, that you wish to bind sessions to, you must create
an iface config in /etc/iscsi/ifaces.

Prep
----

The iface binding feature requires the sysctl setting
net.ipv4.conf.default.rp_filter to be set to 0 or 2.
This can be set in /etc/sysctl.conf by having the line:
	net.ipv4.conf.default.rp_filter = N

where N is 0 or 2. Note that when setting this you may have to reboot
for the value to take effect.
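
If you do not want to wait for a reboot, the same setting can usually be
applied at runtime with sysctl (a sketch only; the per-interface
net.ipv4.conf.<ethX>.rp_filter values may also need the same change):

	sysctl -w net.ipv4.conf.default.rp_filter=2

and the current value can then be checked with:

	sysctl net.ipv4.conf.default.rp_filter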


rp_filter information from Documentation/networking/ip-sysctl.txt:

rp_filter - INTEGER
	0 - No source validation.
	1 - Strict mode as defined in RFC3704 Strict Reverse Path
	    Each incoming packet is tested against the FIB and if the interface
	    is not the best reverse path the packet check will fail.
	    By default failed packets are discarded.
	2 - Loose mode as defined in RFC3704 Loose Reverse Path
	    Each incoming packet's source address is also tested against the FIB
	    and if the source address is not reachable via any interface
	    the packet check will fail.

Running
-------

The command:

	iscsiadm -m iface

will report the iface configurations that are set up in /etc/iscsi/ifaces:

	iface0 qla4xxx,00:c0:dd:08:63:e8,20.15.0.7,default,iqn.2005-06.com.redhat:madmax
	iface1 qla4xxx,00:c0:dd:08:63:ea,20.15.0.9,default,iqn.2005-06.com.redhat:madmax

The format is:

	iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname

For software iSCSI, you can create the iface configs by hand, but it is
recommended that you use iscsiadm's iface mode. There is an iface.example in
/etc/iscsi/ifaces which can be used as a template for the daring.

For each network object you wish to bind a session to, you must create
a separate iface config in /etc/iscsi/ifaces and each iface config file
must have a unique name which is less than or equal to 64 characters.

Example
-------

If you have NIC1 with MAC address 00:0F:1F:92:6B:BF and NIC2 with
MAC address 00:C0:DD:08:63:E7, and you wanted to do software iSCSI over
TCP/IP, then in /etc/iscsi/ifaces/iface0 you would enter:

	iface.transport_name = tcp
	iface.hwaddress = 00:0F:1F:92:6B:BF

and in /etc/iscsi/ifaces/iface1 you would enter:

	iface.transport_name = tcp
	iface.hwaddress = 00:C0:DD:08:63:E7

Warning: Do not name an iface config file  "default" or "iser".
They are special values/files that are used by the iSCSI tools for
backward compatibility. If you name an iface default or iser, then
the behavior is not defined.

To use iscsiadm to create an iface0 similar to the above example, run:

	iscsiadm -m iface -I iface0 --op=new

(This will create a new empty iface config. If there was already an iface
with the name "iface0", this command will overwrite it.)

Next, set the hwaddress:

	iscsiadm -m iface -I iface0 --op=update \
		-n iface.hwaddress -v 00:0F:1F:92:6B:BF

If there are sessions logged in using an iface, iscsiadm will not update or
overwrite that iface. You must log out first. If you have an iface bound to
a node/portal but you have not logged in, then iscsiadm will update the
config and all existing bindings.

You should now skip to 5.1.3 to see how to log in using the iface, and for
some helpful management commands.


5.1.2 Setting up an iface for an iSCSI offload card
===================================================

This section describes how to setup ifaces for use with Chelsio, Broadcom and
QLogic cards.

By default, iscsiadm will create an iface for each Broadcom, QLogic and Chelsio
port. The iface name will be of the form:

	$transport/driver_name.$MAC_ADDRESS

Running the following command:

	iscsiadm -m iface

will report the iface configurations that are set up in /etc/iscsi/ifaces:

	default tcp,<empty>,<empty>,<empty>,<empty>
	iser iser,<empty>,<empty>,<empty>,<empty>
	cxgb3i.00:07:43:05:97:07 cxgb3i,00:07:43:05:97:07,<empty>,<empty>,<empty>
	qla4xxx.00:0e:1e:04:8b:2e qla4xxx,00:0e:1e:04:8b:2e,<empty>,<empty>,<empty>

The format is:

	iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname

where:	iface_name:		name of iface
	transport_name:		name of driver
	hwaddress:		MAC address
	ipaddress:		IP address to use for this port
	net_iface_name:		will be <empty> because it can change between reboots.
				It is used for software iSCSI's vlan or alias binding.
	initiatorname:		Initiatorname to be used if you want to override the
				default one in /etc/iscsi/initiatorname.iscsi.

To display these values in a more friendly way, run:

	iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07

Example output:

	# BEGIN RECORD 2.0-871
	iface.iscsi_ifacename = cxgb3i.00:07:43:05:97:07
	iface.net_ifacename = <empty>
	iface.ipaddress = <empty>
	iface.hwaddress = 00:07:43:05:97:07
	iface.transport_name = cxgb3i
	iface.initiatorname = <empty>
	# END RECORD

Before you can use the iface, you must set the IP address for the port.
We determine the corresponding variable name that we want to update from
the output above, which is "iface.ipaddress".
Then we fill this empty variable with the value we desire, with this command:

	iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07 -o update \
		-n iface.ipaddress -v 20.15.0.66

Note for QLogic ports: After updating the iface record, you must apply or
applyall the settings for the changes to take effect:

	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2e -o apply
	iscsiadm -m iface -H 00:0e:1e:04:8b:2e -o applyall

With "apply", the network settings for the specified iface will take effect.
With "applyall", the network settings for all ifaces on a specific host will
take effect. The host can be specified using the -H/--host argument by either
the MAC address of the host or the host number.

Here is an example of setting multiple IPv6 addresses on a single iSCSI
interface port.
First interface (no need to set iface_num, it is 0 by default):

	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a -o update \
		 -n iface.ipaddress -v fec0:ce00:7014:0041:1111:2222:1e04:9392

Create the second interface if it does not exist (iface_num is mandatory here):

	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a.1 -o new
	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a.1 -o update \
		 -n iface.iface_num -v 1
	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a.1 -o update \
		 -n iface.ipaddress -v fec0:ce00:7014:0041:1111:2222:1e04:9393
	iscsiadm -m iface -H 00:0e:1e:04:8b:2a --op=applyall

Note: If there are common settings for multiple interfaces, then the
settings from the 0th iface would be considered valid.

Now, we can use this iface to log in to targets, which is described in the
next section.


5.1.3 Discovering iSCSI targets/portals
========================================

Be aware that iscsiadm will use the default route to do discovery. It will
not use the iface specified. So if you are using an offload card, you will
need a separate network connection to the target for discovery purposes.

*This should be fixed in some future version of Open-iSCSI*

For compatibility reasons, when you run iscsiadm to do discovery, it
will check for interfaces in /etc/iscsi/ifaces that are using
tcp for the iface.transport, and it will bind the portals that are discovered
so that they will be logged in through those ifaces. This behavior can also
be overridden by passing in the interfaces you want to use. For the case
of offload, like with cxgb3i and bnx2i, this is required because the transport
will not be tcp.

For example, if you have defined two interfaces but only want to use one,
you can use the --interface/-I argument:

	iscsiadm -m discoverydb -t st -p ip:port -I iface1 --discover -P 1

If you had defined interfaces but wanted the old behavior, where we do not
bind a session to an iface, then you can use the special iface "default":

	iscsiadm -m discoverydb -t st -p ip:port -I default --discover -P 1

And if you did not define any interfaces in /etc/iscsi/ifaces and do
not pass anything into iscsiadm, running iscsiadm will do the default
behavior, allowing the network subsystem to decide which device to use.

If you later want to remove the bindings for a specific target and
iface, then you can run:

	iscsiadm -m node -T my_target -I iface0 --op=delete

To do this for a specific portal on a target, run:

	iscsiadm -m node -T my_target -p ip:port -I iface0 --op=delete

If you wanted to delete all bindings for iface0, then you can run:

	iscsiadm -m node -I iface0 --op=delete

And for EqualLogic targets it is sometimes useful to remove just by portal:

	iscsiadm -m node -p ip:port -I iface0 --op=delete


Now logging into targets is the same as with software iSCSI. See section 7
for how to get started.


5.2 iscsiadm examples
=====================

Usage examples using the one-letter options (see iscsiadm man page
for long options):

Discovery mode
--------------

- SendTargets iSCSI Discovery using the default driver and interface and
		using the discovery settings for the discovery record with the
		ID [192.168.1.1:3260]:

	iscsiadm -m discoverydb -t st -p 192.168.1.1:3260 --discover

  This will search /etc/iscsi/send_targets for a record with the
  ID [portal = 192.168.1.1:3260, type = sendtargets]. If found, it
  will perform discovery using the settings stored in the record.
  If a record does not exist, it will be created using the iscsid.conf
  discovery settings.

  The argument to -p may also be a hostname instead of an address:

		iscsiadm -m discoverydb -t st -p somehost --discover

  For the ifaces, iscsiadm will first search /etc/iscsi/ifaces for
  interfaces using software iSCSI. If any are found, then nodes found
  during discovery will be set up so that they can be logged in through
  those interfaces. To specify a specific iface, pass the
  -I argument for each iface.

- SendTargets iSCSI Discovery updating existing target records:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o update --discover

  If there is a record for targetX, and portalY exists in the DB, and
  is returned during discovery, it will be updated with the info from
  iscsid.conf. No new portals will be added and stale portals
  will not be removed.

- SendTargets iSCSI Discovery deleting existing target records:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o delete --discover

  If there is a record for targetX, and portalY exists in the DB, but
  is not returned during discovery, it will be removed from the DB.
  No new portals will be added and existing portal records will not
  be changed.

  Note: If a session is logged into a portal we are going to delete
  a record for, it will be logged out and then the record will be
  deleted.

- SendTargets iSCSI Discovery adding new records:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o new --discover

  If there is targetX, and portalY is returned during discovery, and does
  not have a record, it will be added. Existing records are not modified.

- SendTargets iSCSI Discovery using multiple ops:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o new -o delete --discover

  This command will add new portals and delete records for portals
  no longer returned. It will not change the record information for
  existing portals.

- SendTargets iSCSI Discovery in nonpersistent mode:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o nonpersistent --discover

  This command will perform discovery, but not manipulate the node DB.

- SendTargets iSCSI Discovery with a specific interface.  If you wish
  to only use a subset of the interfaces in
  /etc/iscsi/ifaces, then you can pass them in during discovery:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		--interface=iface0 --interface=iface1 --discover

  Note that for software iSCSI, we let the network layer select
  which NIC to use for discovery, but for later logins iscsiadm
  will use the NIC defined in the iface configuration.

  qla4xxx support is very basic and experimental. It does not store
  the record info in the card's FLASH or the node DB, so you must
  rerun discovery every time the driver is reloaded.

- Manipulate SendTargets DB: Create new SendTargets discovery record or
  overwrite an existing discovery record with iscsid.conf
  discovery settings:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 -o new

- Manipulate SendTargets DB: Display discovery settings:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 -o show

- Manipulate SendTargets DB: Display hidden discovery settings like
		 CHAP passwords:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o show --show

- Manipulate SendTargets DB: Set discovery setting.

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o update -n name -v value

- Manipulate SendTargets DB: Delete discovery record. This will also delete
  the records for the targets found through the discovery source.

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 -o delete

- Show all records in discovery database:

	iscsiadm -m discovery

- Show all records in discovery database and show the targets that were
  discovered from each record:

	iscsiadm -m discovery -P 1

Node mode
---------

In node mode you can specify which records you want to log
into by specifying the targetname, ip address, port or interface
(if specifying the interface it must already be setup in the node db).
iscsiadm will search the node db for records which match the values
you pass in, so if you pass in the targetname and interface, iscsiadm
will search for records with those values and operate on only them.
Passing in none of them will result in all node records being operated on.

- iSCSI Login to all portals on every node/target through each interface
  set in the db:

	iscsiadm -m node -l

- iSCSI login to all portals on a node/target through each interface set
  in the db, but do not wait for the login response:

	iscsiadm -m node -T iqn.2005-03.com.max -l -W

- iSCSI login to a specific portal through each interface set in the db:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 -l

  To specify an IPv6 address, the following can be used:

	iscsiadm -m node -T iqn.2005-03.com.max \
		-p 2001:c90::211:9ff:feb8:a9e9 -l

  The above command would use the default port, 3260. To specify a
  port, use the following:

	iscsiadm -m node -T iqn.2005-03.com.max \
		-p [2001:c90::211:9ff:feb8:a9e9]:3260 -l

  To specify a hostname, the following can be used:

	iscsiadm -m node -T iqn.2005-03.com.max -p somehost -l

- iSCSI Login to a specific portal through the NIC set up as iface0:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-I iface0  -l

- iSCSI Logout of all portals on every node/target through each interface
  set in the db:

	iscsiadm -m node -u

  Warning: this does not check startup values like the logout/login all
  option. Do not use this if you are running iSCSI on your root disk.

- iSCSI logout of all portals on a node/target through each interface set
  in the db:

	iscsiadm -m node -T iqn.2005-03.com.max -u

- iSCSI logout of a specific portal through each interface set in the db:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 -u

- iSCSI Logout of a specific portal through the NIC set up as iface0:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-I iface0 -u

- Changing iSCSI parameter:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-o update -n node.conn[0].iscsi.MaxRecvDataSegmentLength -v 65536

  You can also change parameters for multiple records at once, by
  specifying different combinations of target, portal and interface
  like above.

- Adding custom iSCSI portal:

	iscsiadm -m node -o new -T iqn.2005-03.com.max \
		-p 192.168.0.1:3260,2 -I iface4

  The -I/--interface is optional. If not passed in, "default" is used.
  For tcp or iser, this would allow the network layer to decide what is
  best.

  Note that for this command, the Target Portal Group Tag (TPGT) should
  be passed in. If it is not passed in on the initial creation command,
  then the user must run iscsiadm again to set the value. Also,
  if the TPGT is not initially passed in, the old behavior of not
  tracking whether the record was statically or dynamically created
  is used.

- Adding custom NIC config to multiple targets:

	iscsiadm -m node -o new -I iface4

  This command will add an interface config using the iSCSI and SCSI
  settings from iscsid.conf to every target that is in the node db.

- Removing iSCSI portal:

	iscsiadm -m node -o delete -T iqn.2005-03.com.max -p 192.168.0.4:3260

  You can also delete multiple records at once, by specifying different
  combinations of target, portal and interface like above.

- Display iSCSI portal configuration:

	iscsiadm -m node [-o show] -T iqn.2005-03.com.max -p 192.168.0.4:3260

  You can also display multiple records at once, by specifying different
  combinations of target, portal and interface like above.

  Note: running "iscsiadm -m node" will only display the records. It
  will not display the configuration info. For the latter, run:

	iscsiadm -m node -o show

- Show all node records:

	iscsiadm -m node

  This will print the nodes using the old flat format where the
  interface and driver are not displayed. To display that info
  use the -P option with the argument "1":

	iscsiadm -m node -P 1

Session mode
------------

- Display session statistics:

	iscsiadm -m session -r 1 --stats

  This function also works in node mode. Instead of the "-r $sid"
  argument, you would pass in the node info like targetname and/or portal,
  and/or interface.

- Perform a SCSI scan on a session

	iscsiadm -m session -r 1 --rescan

  This function also works in node mode. Instead of the "-r $sid"
  argument, you would pass in the node info like targetname and/or portal,
  and/or interface.

  Note: Rescanning does not delete old LUNs. It will only pick up new
  ones.

- Display running sessions:

	iscsiadm -m session -P 1

Host mode with flashnode submode
--------------------------------

- Display list of flash nodes for a host

	iscsiadm -m host -H 6 -C flashnode

  This will print a list of all the flash node entries for the given host
  along with their ip, port, tpgt and iqn values.

- Display all parameters of a flash node entry for a host

	iscsiadm -m host -H 6 -C flashnode -x 0

  This will list all the parameter name,value pairs for the
  flash node entry at index 0 of host 6.

- Add a new flash node entry for a host

	iscsiadm -m host -H 6 -C flashnode -o new -A [ipv4|ipv6]

  This will add a new flash node entry for the given host 6 with portal
  type of either ipv4 or ipv6. The new operation returns the index of
  the newly created flash node entry.

- Update a flashnode entry

	iscsiadm -m host -H 6 -C flashnode -x 1 -o update \
		-n flashnode.conn[0].ipaddress -v 192.168.1.12 \
		-n flashnode.session.targetname \
		-v iqn.2002-03.com.compellent:5000d310004b0716

  This will update the values of ipaddress and targetname params of
  the flash node entry at index 1 of host 6.

- Login to a flash node entry

	iscsiadm -m host -H 6 -C flashnode -x 1 -o login

- Logout from a flash node entry
	Logout can be performed either using the flash node index:

	iscsiadm -m host -H 6 -C flashnode -x 1 -o logout

  or by using the corresponding session index:

	iscsiadm -m session -r $sid -u

- Delete a flash node entry

	iscsiadm -m host -H 6 -C flashnode -x 1 -o delete

Host mode with chap submode
---------------------------

- Display list of chap entries for a host

	iscsiadm -m host -H 6 -C chap -o show

- Delete a chap entry for a host

	iscsiadm -m host -H 6 -C chap -o delete -x 5

  This will delete any chap entry present at index 5.

- Add/Update a local chap entry for a host

	iscsiadm -m host -H 6 -C chap -o update -x 4 -n username \
			-v value -n password -v value

  This will update the local chap entry present at index 4. If index 4
  is free, then a new entry of type local chap will be created at that
  index with given username and password values.

- Add/Update a bidi chap entry for a host

	iscsiadm -m host -H 6 -C chap -o update -x 5 -n username_in \
		-v value -n password_in -v value

  This will update the bidi chap entry present at index 5. If index 5
  is free, then an entry of type bidi chap will be created at that index
  with the given username_in and password_in values.

Host mode with stats submode
----------------------------

- Display host statistics:

	iscsiadm -m host -H 6 -C stats

  This will print the aggregate statistics on the host adapter port.
  This includes MAC, TCP/IP, ECC & iSCSI statistics.


6. Configuration
================

The default configuration file is /etc/iscsi/iscsid.conf, but the
directory is configurable with the top-level build option "homedir".
The remainder of this document will assume the /etc/iscsi directory.
This file contains only configuration that could be overwritten by iSCSI
discovery, or manually updated via the iscsiadm utility. It is OK if this
file does not exist, in which case the compiled-in default configuration
will be used for newly discovered target nodes.
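
For example, an /etc/iscsi/iscsid.conf might override just a couple of
defaults (the values below are illustrative, not recommendations):

	node.startup = automatic
	node.session.timeo.replacement_timeout = 30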

See the man page and the example file for the current syntax.
The manual pages for iscsid and iscsiadm are in the doc subdirectory. If they
are not installed in the appropriate man page directories by the build, they
need to be manually copied into e.g. /usr/local/share/man/man8.


7. Getting Started
==================

There are three steps needed to set up a system to use iSCSI storage:

7.1. iSCSI startup using the systemd units or manual startup.
7.2. Discover targets.
7.3. Automate target logins for future system reboots.

The systemd startup units will start the iSCSI daemon and log into any
portals that are set up for automatic login (discussed in 7.3)
or discovered through the discovery daemon iscsid.conf params
(discussed in 7.4).

If your distro does not have systemd units for iSCSI, then you will have
to start the daemon and log into the targets manually.


7.1.1 iSCSI startup using the init script
=========================================

Red Hat or Fedora:
-----------------
To start Open-iSCSI in Red Hat/Fedora you can do:

	systemctl start open-iscsi

To get Open-iSCSI to start automatically at boot, you may have to
run:
	systemctl enable open-iscsi

And, to automatically mount a file system during startup
you must have the partition entry in /etc/fstab marked with the "_netdev"
option. For example this would mount an iSCSI disk sdb:

	/dev/sdb /mnt/iscsi ext3 _netdev 0 0

SUSE or Debian:
---------------
The Open-iSCSI service is socket activated, so there is no need to
enable the Open-iSCSI service. Likewise, the iscsi.service login
service is enabled automatically, so setting 'startup' to 'automatic'
will enable automatic login to Open-iSCSI targets.


7.1.2 Manual Startup
====================

7.1.2.1 Starting up the iSCSI daemon (iscsid) and loading modules
=================================================================

If there is no init script or systemd unit, you must start the tools by hand.
First load the iSCSI modules:

	modprobe -q iscsi_tcp

After that, start iSCSI as a daemon process:

	iscsid

or alternatively, start it with debug enabled, in a separate window,
which will force it into "foreground" mode:

	iscsid -d 8


7.1.2.2 Logging into Targets
============================

Use the configuration utility, iscsiadm, to add/remove/update Discovery
records, iSCSI Node records or monitor active iSCSI sessions (see above or the
iscsiadm man files and see section 7.2 below for how to discover targets):

	iscsiadm  -m node

This will print out the nodes that have been discovered as:

	10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311
	10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311

The format is:

	ip:port,target_portal_group_tag targetname

If you are using the iface argument or want to see the driver
info, use the following:

	iscsiadm -m node -P 1

Example output:

	Target: iqn.1992-08.com.netapp:sn.33615311
	        Portal: 10.15.84.19:3260,2
	                Iface Name: iface2
	        Portal: 10.15.85.19:3260,3
	                Iface Name: iface2

The format is:

	Target: targetname
		Portal: ip_address:port,tpgt
			Iface Name: ifacename

Here, targetname is the name of the target and ip_address:port
is the address and port of the portal. tpgt is the Target Portal Group
Tag of the portal, and is not used in iscsiadm commands except for static
record creation. ifacename is the name of the iSCSI interface
defined in /etc/iscsi/ifaces. If no interface was defined in
/etc/iscsi/ifaces or passed in, the default behavior is used.
Default here is iscsi_tcp/tcp to be used over whichever NIC the
network layer decides is best.

To login, take the ip, port and targetname from above and run:

	iscsiadm -m node -T targetname -p ip:port -l

In this example we would run:

	iscsiadm -m node -T iqn.1992-08.com.netapp:sn.33615311 \
		-p 10.15.84.19:3260 -l

Note: drop the portal group tag from the "iscsiadm -m node" output.

If you wish, for example, to log in to all targets represented in the node
database, but not wait for the login responses:

	iscsiadm -m node -l -W

After this, you can use "session" mode to detect when the logins complete:

	iscsiadm -m session


7.2. Discover Targets
=====================

Once the iSCSI service is running, you can perform discovery using
SendTargets with:

	iscsiadm -m discoverydb -t sendtargets -p ip:port --discover

Here, "ip" is the address of the portal and "port" is the port.

To use iSNS you can run the discovery command with the type as "isns"
and pass in the ip:port:

	iscsiadm -m discoverydb -t isns -p ip:port --discover

Both commands will print out the list of all discovered targets and their
portals, e.g.:

	iscsiadm -m discoverydb -t st -p 10.15.85.19:3260 --discover

This might produce:

	10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
	10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311

The format for the output is:

	ip:port,tpgt targetname

In this example, the first portal's IP address is 10.15.84.19 and its
port is 3260; its target portal group tag is 2. The target name
is iqn.1992-08.com.netapp:sn.33615311.

If you would also like to see the iSCSI interface which will be used
for each session, then use the --print=[N]/-P [N] option:

	iscsiadm -m discoverydb -t sendtargets -p ip:port -P 1 --discover

This might print:

    Target: iqn.1992-08.com.netapp:sn.33615311
        Portal: 10.15.84.19:3260,2
           Iface Name: iface2
        Portal: 10.15.85.19:3260,3
           Iface Name: iface2

In this example, the IP address of the first portal is 10.15.84.19, the
port is 3260, and the target portal group tag is 2. The target name
is iqn.1992-08.com.netapp:sn.33615311. The iface being used is iface2.

While the discovery records are kept in the discovery db, they are
useful only for re-discovery. The discovered targets (a.k.a. nodes)
are stored as records in the node db.

The discovered targets are not logged into yet. Rather than logging
into the discovered nodes (making LUs from those nodes available as
storage), it is better to automate the login to the nodes we need.

If you wish to log into a target manually now, see section
"7.1.2.2 Logging into Targets" above.


7.3. Automate Target Logins for Future System Startups
======================================================

Note: this may only work for distros with systemd iSCSI login scripts.

To automate login to a node, use the following with the record ID
(record ID is the targetname and portal) of the node discovered in the
discovery above:

	iscsiadm -m node -T targetname -p ip:port --op update -n node.startup -v automatic

To set the automatic setting for all portals on a target through every
interface set up for each portal, the following can be run:

	iscsiadm -m node -T targetname --op update -n node.startup -v automatic

Or to set the "node.startup" attribute to "automatic" as default for
all sessions add the following to the /etc/iscsi/iscsid.conf:

	node.startup = automatic

Setting this in iscsid.conf will not affect existing nodes. It will only
affect nodes that are discovered after setting the value.

To login to all automated nodes, simply restart the iSCSI login service, e.g. with:

	systemctl restart iscsi.service

On your next startup the nodes will be logged into automatically.


7.4 Automatic Discovery and Login
=================================

Instead of running the iscsiadm discovery command and editing the
startup setting, iscsid can be configured so that every X seconds
it performs discovery and logs in and out of the portals returned or
no longer returned. In this mode, when iscsid starts it will check the
discovery db for iSNS records with:

	discovery.isns.use_discoveryd = Yes

It will also check for SendTargets discovery records that have the
setting:

	discovery.sendtargets.use_discoveryd = Yes

If set, iscsid will perform discovery to the address every
discovery.isns.discoveryd_poll_inval or
discovery.sendtargets.discoveryd_poll_inval seconds,
and it will log into any portals found from the discovery source using
the ifaces in /etc/iscsi/ifaces.

Note that for iSNS the poll_interval does not have to be set. If not set,
iscsid will only perform rediscovery when it gets a SCN from the server.

iSNS Note:
	For servers like Microsoft's, which allow SCN registrations but do not
	send SCN events, discovery.isns.discoveryd_poll_inval should be set to
	a non-zero value to auto discover new targets. This is also useful for
	servers like linux-isns (SLES's iSNS server), which sometimes does not
	send SCN events in the proper format, so they may not get handled.

Examples
--------

SendTargets
-----------

- Create a SendTargets record by passing iscsiadm the "-o new" argument in
		discoverydb mode:

	iscsiadm -m discoverydb -t st -p 20.15.0.7:3260 -o new

  On success, this will output something like:

	New discovery record for [20.15.0.7,3260] added.

- Set the use_discoveryd setting for the record:

	iscsiadm -m discoverydb -t st -p 20.15.0.7:3260  -o update \
		-n discovery.sendtargets.use_discoveryd -v Yes

- Set the polling interval:

	iscsiadm -m discoverydb -t st -p 20.15.0.7:3260  -o update \
		-n discovery.sendtargets.discoveryd_poll_inval -v 30

To have the new settings take effect, restart iscsid by restarting the
iSCSI services.

NOTE:	When iscsiadm is run with the -o new argument, it will use the
	discovery.sendtargets.use_discoveryd and
	discovery.sendtargets.discoveryd_poll_inval
	settings in iscsid.conf for the record's initial settings. So if those
	are set in iscsid.conf, then you can skip the iscsiadm -o update
	commands.

iSNS
----

- Create an iSNS record by passing iscsiadm the "-o new" argument in
		discoverydb mode:

	iscsiadm -m discoverydb -t isns -p 20.15.0.7:3205 -o new

  Response on success:

	New discovery record for [20.15.0.7,3205] added.

- Set the use_discoveryd setting for the record:

	iscsiadm -m discoverydb -t isns -p 20.15.0.7:3205  -o update \
		-n discovery.isns.use_discoveryd -v Yes

- [OPTIONAL: see iSNS note above] Set the polling interval if needed:

	iscsiadm -m discoverydb -t isns -p 20.15.0.7:3205 -o update \
		-n discovery.isns.discoveryd_poll_inval -v 30

To have the new settings take effect, restart iscsid by restarting the
iscsi services.

Note:	When iscsiadm is run with the -o new argument, it will use the
	discovery.isns.use_discoveryd and discovery.isns.discoveryd_poll_inval
	settings in iscsid.conf for the record's initial settings. So if those
	are set in iscsid.conf, then you can skip the iscsiadm -o update
	commands.


8. Advanced Configuration
=========================

8.1 iSCSI settings for dm-multipath
===================================

When using dm-multipath, the iSCSI timers should be set so that commands
are quickly failed to the dm-multipath layer. For dm-multipath you should
then set values like no_path_retry/queue_if_no_path, so that IO errors are
retried and queued if all paths are failed in the multipath layer.
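
As a sketch only (assuming the standard /etc/multipath.conf location and
multipath-tools syntax; your defaults may differ), the queueing behavior
referred to above can be requested with a fragment such as:

	defaults {
		no_path_retry	queue
	}

so that IO is queued rather than failed while all paths to a LUN are down.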


8.1.1 iSCSI ping/Nop-Out settings
=================================
To quickly detect problems in the network, the iSCSI layer will send iSCSI
pings (iSCSI NOP-Out requests) to the target. If a NOP-Out times out, the
iSCSI layer will respond by failing the connection and starting the
replacement_timeout. It will then tell the SCSI layer to stop the device queues
so no new IO will be sent to the iSCSI layer and to requeue and retry the
commands that were running if possible (see the next section on retrying
commands and the replacement_timeout).

To control how often a NOP-Out is sent, the following value can be set:

	node.conn[0].timeo.noop_out_interval = X

Where X is in seconds and the default is 10 seconds. To control the
timeout for the NOP-Out the noop_out_timeout value can be used:

	node.conn[0].timeo.noop_out_timeout = X

Again X is in seconds and the default is 15 seconds.

Normally for these values you can use:

	node.conn[0].timeo.noop_out_interval = 5
	node.conn[0].timeo.noop_out_timeout = 10
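
As with other iscsid.conf settings, changing these values only affects node
records created afterwards. For an existing node record, the same values can
be set with iscsiadm, e.g. (using the illustrative target and portal from
earlier examples):

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-o update -n node.conn[0].timeo.noop_out_interval -v 5
	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-o update -n node.conn[0].timeo.noop_out_timeout -v 10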

If there are a lot of IO error messages like

	detected conn error (22)

in the kernel log then the above values may be too aggressive. You may need to
increase the values for your network conditions and workload, or you may need
to check your network for possible problems.


8.1.2 SCSI command retries
==========================

SCSI disk commands get 5 retries by default. In newer kernels this can be
controlled via the sysfs file:

	/sys/block/$sdX/device/scsi_disk/$host:$bus:$target:LUN/max_retries

by writing an integer lower than 5 to reduce retries or setting it to -1 for
infinite retries.
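
As a sketch (the device name and host:bus:target:LUN values below are
hypothetical; substitute those of your iSCSI disk), the current value can be
inspected and changed with:

	cat /sys/block/sdb/device/scsi_disk/2:0:0:1/max_retries
	echo 10 > /sys/block/sdb/device/scsi_disk/2:0:0:1/max_retries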

The number of actual retries a command gets may be less than 5 or what is
requested in max_retries if the replacement timeout expires. When that timer
expires it tells the SCSI layer to fail all new and queued commands.


8.1.3 replacement_timeout
=========================

The iSCSI layer timer:

	node.session.timeo.replacement_timeout = X

controls how long to wait for session re-establishment before failing all SCSI
commands:

	1. commands that have been requeued and awaiting a retry
	2. commands that are being operated on by the SCSI layer's error handler
	3. all new commands that are queued to the device

up to a higher level like multipath, filesystem layer, or to the application.

The setting is in seconds. Zero means to fail immediately. -1 means an infinite
timeout, which will wait until iscsid does a relogin, the user runs the iscsiadm
logout command, or the node.session.reopen_max limit is hit.
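
The timer can be changed on an existing node record with the usual
-o update mechanism (the target, portal and value below are only
illustrative):

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-o update -n node.session.timeo.replacement_timeout -v 30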

When this timer is started, the iSCSI layer will stop new IO from executing
and requeue running commands to the Block/SCSI layer. The new and requeued
commands will then sit in the Block/SCSI layer queue until the timeout has
expired, there is userspace intervention like an iscsiadm logout command, or
there is a successful relogin. If the command has run out of retries, the
command will be failed instead of being requeued.

After this timer has expired iscsid can continue to try to relogin. By default
iscsid will continue to try to relogin until there is a successful relogin or
until the user runs the iscsiadm logout command. The number of relogin retries
is controlled by the Open-iSCSI setting node.session.reopen_max. If that is set
too low, iscsid may give up and forcefully log out the session (equivalent to
running the iscsiadm logout command on a failed session) before
replacement_timeout seconds have elapsed. This will result in all commands
being failed at that time.
The user would then have to manually relogin.

This timer starts when you see the connection error message:

	detected conn error (%d)

in the kernel log. The %d will be an integer with the following mappings
and meanings:

Int     Kernel define           Description
value
------------------------------------------------------------------------------
1	ISCSI_ERR_DATASN	Low level iSCSI protocol error where a data
				sequence value did not match the expected value.
2	ISCSI_ERR_DATA_OFFSET	There was an error where we were asked to
				read/write past a buffer's length.
3	ISCSI_ERR_MAX_CMDSN	Low level iSCSI protocol error where we got an
				invalid MaxCmdSN value.
4	ISCSI_ERR_EXP_CMDSN	Low level iSCSI protocol error where the
				ExpCmdSN from the target didn't match the
				expected value.
5	ISCSI_ERR_BAD_OPCODE	The iSCSI Target has sent an invalid or unknown
				opcode.
6	ISCSI_ERR_DATALEN	The iSCSI target has sent a PDU with an invalid
				data length.
7	ISCSI_ERR_AHSLEN	The iSCSI target has sent a PDU with an invalid
				Additional Header Length.
8	ISCSI_ERR_PROTO		The iSCSI target has performed an operation that
				violated the iSCSI RFC.
9	ISCSI_ERR_LUN		The iSCSI target has requested an invalid LUN.
10	ISCSI_ERR_BAD_ITT       The iSCSI target has sent an invalid Initiator
				Task Tag.
11	ISCSI_ERR_CONN_FAILED   Generic error that can indicate that the
				transmission of a PDU, like a SCSI cmd or task
				management function, has timed out; that we
				were not able to transmit a PDU because the
				network layer returned an error; or that we
				detected a network error like a link down. It
				can also cover errors that do not fit the
				other error codes, e.g. a kernel function
				returned a failure and there is no way to
				recover other than killing the existing
				session and performing a relogin.
12	ISCSI_ERR_R2TSN		Low level iSCSI protocol error where the R2T
				sequence numbers do not match.
13	ISCSI_ERR_SESSION_FAILED
				Unused.
14	ISCSI_ERR_HDR_DGST	iSCSI Header Digest error.
15	ISCSI_ERR_DATA_DGST	iSCSI Data Digest error.
16	ISCSI_ERR_PARAM_NOT_FOUND
				Userspace has passed the kernel an unknown
				setting.
17	ISCSI_ERR_NO_SCSI_CMD	The iSCSI target has sent an ITT for an unknown
				task.
18	ISCSI_ERR_INVALID_HOST	The iSCSI Host is no longer present or being
				removed.
19	ISCSI_ERR_XMIT_FAILED	The software iSCSI initiator or cxgb was not
				able to transmit a PDU because of a network
				layer error.
20	ISCSI_ERR_TCP_CONN_CLOSE
				The iSCSI target has closed the connection.
21	ISCSI_ERR_SCSI_EH_SESSION_RST
				The SCSI layer's Error Handler has timed out
				the SCSI cmd, tried to abort it and possibly
				tried to send a LUN RESET, and it's now
				going to drop the session.
22	ISCSI_ERR_NOP_TIMEDOUT	An iSCSI Nop as a ping has timed out.
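
To check whether any of these errors have been logged, something like the
following can be used:

	dmesg | grep "detected conn error"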


8.1.4 Running Commands, the SCSI Error Handler, and replacement_timeout
=======================================================================

Each SCSI command has a timer controlled by:

	/sys/block/sdX/device/timeout

The value is in seconds and the default ranges from 30 - 60 seconds
depending on the distro's udev scripts.
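
To check the current value for a given device (the device name below is
illustrative):

	cat /sys/block/sda/device/timeout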

When a command is sent to the iSCSI layer the timer is started, and when it's
returned to the SCSI layer the timer is stopped. This could be for successful
completion or due to a retry/requeue caused by a conn error as described
previously. If a command is retried, the timer is reset.

When the command timer fires, the SCSI layer will ask the iSCSI layer to abort
the command by sending an ABORT_TASK task management request. If the abort
is successful the SCSI layer retries the command if it has enough retries left.
If the abort times out, the iSCSI layer will report failure to the SCSI layer
and will fire an ISCSI_ERR_SCSI_EH_SESSION_RST error. In the logs you will see:

	detected conn error (21)

The ISCSI_ERR_SCSI_EH_SESSION_RST will cause the connection/session to be
dropped and the iSCSI layer will start the replacement_timeout operations
described in that section.

The SCSI layer will then eventually call the iSCSI layer's target/session reset
callout which will wait for the replacement timeout to expire, a successful
relogin to occur, or for userspace to logout the session.

- If the replacement timeout fires, then commands will be failed upwards as
described in the replacement timeout section. The SCSI devices will be put
into an offline state until iscsid performs a relogin.

- If a relogin occurs before the timer fires, commands will be retried if
possible.

To check if the SCSI error handler is running, iscsiadm can be run as:

	iscsiadm -m session -P 3

and you will see:

	Host Number: X State: Recovery

To modify the timer that starts the SCSI EH, you can either write
directly to the device's sysfs file:

	echo X > /sys/block/sdX/device/timeout

where X is in seconds.
Alternatively, on most distros you can modify the udev rule.

To modify the udev rule, open /etc/udev/rules.d/50-udev.rules and find the
following lines:

	ACTION=="add", SUBSYSTEM=="scsi" , SYSFS{type}=="0|7|14", \
		RUN+="/bin/sh -c 'echo 60 > /sys$$DEVPATH/timeout'"

And change the "echo 60" part of the line to the value that you want.

The default timeout for normal File System commands is 30 seconds when udev
is not being used. If udev is used, the default is the above value, which
is normally 60 seconds.


8.1.5 Optimal replacement_timeout Value
=======================================

The default value for replacement_timeout is 120 seconds, but because
multipath's queue_if_no_path and no_path_retry settings can prevent IO errors
from being propagated to the application, replacement_timeout can be set to a
shorter value, like 5 to 15 seconds. By setting it lower, pending IO is quickly
sent to a new path and executed while the iSCSI layer attempts
re-establishment of the session. If all paths end up being failed, then the
multipath and device mapper layer will internally queue IO based on the
multipath.conf settings, instead of the iSCSI layer.
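
For example, to apply a shorter timeout to an existing node record (the target
name, portal and value below are illustrative; the new value is normally used
the next time the session is logged in):

	iscsiadm -m node -T iqn.2005-03.org.example:disk1 -p 192.168.1.1:3260 \
		-o update -n node.session.timeo.replacement_timeout -v 15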


8.2 iSCSI settings for iSCSI root
=================================

When accessing the root partition directly through an iSCSI disk, the
iSCSI timers should be set so that the iSCSI layer has several chances to try
to re-establish a session and so that commands are not quickly requeued to
the SCSI layer. Basically, you want the opposite of the dm-multipath setup.

For this setup, you can turn off iSCSI pings (NOPs) by setting:

	node.conn[0].timeo.noop_out_interval = 0
	node.conn[0].timeo.noop_out_timeout = 0

And you can set the replacement_timeout to a very long value:

	node.session.timeo.replacement_timeout = 86400


8.3 iSCSI settings for iSCSI tape
=================================

It is possible to use open-iscsi to connect to a remote tape drive, making it
available locally. In such a case, you need to disable NOP-Outs, since tape
drives do not handle them well at all. See section 8.2 above for how to
disable them.


9. iSCSI System Info
====================

To get information about the running sessions, including the session and
device state, session ids (sid) for session mode, and some of the
negotiated parameters, run:

	iscsiadm -m session -P 2

If you are looking for something shorter, like just the sid to node mapping,
run:

	iscsiadm -m session [-P 0]

This will print the list of running sessions with the format:

	driver [sid] ip:port,target_portal_group_tag targetname

Example output of "iscsiadm -m session":

	tcp [2] 10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
	tcp [3] 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311

To print the hw address info use the -P option with "1":

	iscsiadm -m session -P 1

This will print the sessions with the following format:

	Target: targetname
		Current Portal: portal currently logged into
		Persistent Portal: portal we would fall back to if we were
				   redirected during login
			Iface Transport: driver/transport_name
			Iface IPaddress: IP address of iface being used
			Iface HWaddress: HW address used to bind session
			Iface Netdev: netdev value used to bind session
			SID: iscsi sysfs session id
			iSCSI Connection State: iscsi state

Note: if an older kernel is being used or if the session is not bound,
then the keyword "default" is printed to indicate that the default
network behavior is being used.

Example output of "iscsiadm -m session -P 1":

	Target: iqn.1992-08.com.netapp:sn.33615311
		Current Portal: 10.15.85.19:3260,3
		Persistent Portal: 10.15.85.19:3260,3
			Iface Transport: tcp
			Iface IPaddress: 10.11.14.37
			Iface HWaddress: default
			Iface Netdev: default
			SID: 7
			iSCSI Connection State: LOGGED IN
			Internal iscsid Session State: NO CHANGE

The connection state is currently not available for qla4xxx.

To get an HBA/Host view of the session, there is the host mode:

	iscsiadm -m host

This prints the list of iSCSI hosts in the system with the format:

	driver [hostno] ipaddress,[hwaddress],net_ifacename,initiatorname

Example output:

	cxgb3i: [7] 10.10.15.51,[00:07:43:05:97:07],eth3 <empty>

To print this info in a more user-friendly way, the -P argument can be used:

	iscsiadm -m host -P 1

Example output:

	Host Number: 7
		State: running
		Transport: cxgb3i
		Initiatorname: <empty>
		IPaddress: 10.10.15.51
		HWaddress: 00:07:43:05:97:07
		Netdev: eth3

Here, you can also see the state of the host.

You can also pass in any value from 1 - 4 to print more info, like the
sessions running through the host, which ifaces are being used, and which
devices are accessed through it.
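
For example, to print the most detailed view for all iSCSI hosts:

	iscsiadm -m host -P 4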

To print the info for a specific host, you can pass in the -H argument
with the host number:

	iscsiadm -m host -P 1 -H 7
