
=================================================================

                Linux* Open-iSCSI

=================================================================

                                                   Jun 6, 2022
Contents
========

- 1. In This Release
- 2. Introduction
- 3. Installation
- 4. Open-iSCSI daemon
- 5. Open-iSCSI Configuration Utility
- 6. Configuration
- 7. Getting Started
- 8. Advanced Configuration
- 9. iSCSI System Info


1. In This Release
==================

This file describes the Linux* Open-iSCSI Initiator. The software was
tested on AMD Opteron (TM) and Intel Xeon (TM).

The latest development release is available at:

	https://github.com/open-iscsi/open-iscsi

For questions, comments, or contributions, open an issue on the GitHub
page, or send e-mail to the mailing list:

	open-iscsi@googlegroups.com


1.1. Features
=============

- highly optimized and very small-footprint data path
- persistent configuration database
- SendTargets discovery
- CHAP
- PDU header Digest
- multiple sessions


1.2  Licensing
==============

The daemon and other top-level commands are licensed as GPLv3, while the
libopeniscsiusr library used by some of those commands is licensed as LGPLv3.


2. Introduction
===============

The Open-iSCSI project is a high-performance, transport independent,
multi-platform implementation of RFC3720 iSCSI.

Open-iSCSI is partitioned into user and kernel parts.

The kernel portion of Open-iSCSI was originally part of this project
repository, but now is built into the linux kernel itself. It
includes loadable modules: scsi_transport_iscsi.ko, libiscsi.ko and
scsi_tcp.ko. The kernel code handles the "fast" path, i.e. data flow.

User space contains the entire control plane: configuration
manager, iSCSI Discovery, Login and Logout processing,
connection-level error processing, Nop-In and Nop-Out handling,
and (perhaps in the future:) Text processing, iSNS, SLP, Radius, etc.

The user space Open-iSCSI consists of a daemon process called
iscsid, and a management utility iscsiadm. There are also helper
programs, as well as iscsiuio, which is needed for certain iSCSI offload adapters.


3. Installation
===============

NOTE:	You will need to be root to install the Open-iSCSI code, and
	you will also need to be root to run it.

As of today, the Open-iSCSI Initiator requires a host running the
Linux operating system with a reasonably recent kernel.

The userspace components iscsid, iscsiadm and iscsistart require the
open-isns library, unless open-isns use is disabled when building (see
below).

If this package is not available for your distribution, you can download
and install it yourself.  To install the open-isns headers and library
required for Open-iSCSI, download the current release from:

	https://github.com/open-iscsi/open-isns

Then, from the top-level directory, run:

	./configure [<OPTIONS>]
	make
	make install

For the open-iscsi project and iscsiuio, the original build
system used make and autoconf to build the project. That build
system is being deprecated in favor of meson (and ninja).
See below for how to build using make and autoconf, but
migrating to meson as soon as possible would be a good idea.

Building open-iscsi/iscsiuio using meson
----------------------------------------
For Open-iSCSI and iscsiuio, the system is built using meson and ninja
(see https://github.com/mesonbuild/meson). If these packages aren't
available to you on your Linux distribution, you can download
the latest release from https://github.com/mesonbuild/meson/releases.
The README.md file describes in detail how to build it yourself, including
how to get ninja.

To build the open-iscsi project, including iscsiuio, first run meson
to configure the build, from the top-level open-iscsi directory, e.g.:

	rm -rf builddir
	mkdir builddir
	meson [<MESON-OPTIONS>] builddir

Then, to build the code:

	ninja -C builddir

If you change any code and want to rebuild, you simply run ninja again.

When you are ready to install:

	[DESTDIR=<SOME-DIR>] ninja -C builddir install

This will install the iSCSI tools, configuration files, interfaces, and
documentation. If you do not set DESTDIR, it defaults to "/".


MESON-OPTIONS:
--------------
One can override several default values when building with meson:


Option			Description
=====================	=====================================================

--libdir=<LIBDIR>	Where library files go [/lib64]
--sbindir=<DIR>		Meson 0.63 or newer: Where binaries go [/usr/sbin]
-Dc_flags="<C-FLAGS>"	Pass additional flags to the C compiler
-Dno_systemd=<BOOL>	Disable systemd usage [false]
			(set to "true" to disable systemd)
-Dsystemddir=<DIR>	Set systemd unit directory [/usr/lib/systemd]
-Dhomedir=<DIR>		Set config file directory [/etc/iscsi]
-Ddbroot=<DIR>		Set Database directory [/etc/iscsi]
-Dlockdir=<DIR>		Set Lock directory [/run/lock/iscsi]
-Drulesdir=<DIR>	Set udev rules directory [/usr/lib/udev/rules.d]
-Discsi_sbindir=<DIR>	Where binaries go [/usr/sbin]
			(for use when sbindir can't be set, in older versions
			 of meson)
-Disns_supported=<BOOL>	Enable/disable iSNS support [true]
			(set to "false" to disable use of open-isns)


Building open-iscsi/iscsiuio using make/autoconf (Deprecated)
-------------------------------------------------------------
If you wish to build using the older deprecated system, you can
simply run:

	make [<MAKE-OPTIONS>]
	make [DESTDIR=<SOME-DIR>] install

Where MAKE-OPTIONS are from:
	* SBINDIR=<some-dir>  [/usr/bin]   for executables
	* DBROOT=<some-dir>   [/etc/iscsi] for iscsi database files
	* HOMEDIR=<some-dir>  [/etc/iscsi] for iscsi config files
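
For example, to stage an install into a packaging directory while keeping
the database under /var/lib/iscsi (the paths here are only illustrative):

	make DBROOT=/var/lib/iscsi
	make DBROOT=/var/lib/iscsi DESTDIR=/tmp/iscsi-stage install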


4. Open-iSCSI daemon
====================

The iscsid daemon implements control path of iSCSI protocol, plus some
management facilities. For example, the daemon could be configured to
automatically re-start discovery at startup, based on the contents of
persistent iSCSI database (see next section).

For help, run:

	iscsid --help

The output will be similar to the following (assuming a default install):

Usage: iscsid [OPTION]

  -c, --config=[path]     Execute in the config file (/etc/iscsi/iscsid.conf).
  -i, --initiatorname=[path]     read initiatorname from file (/etc/iscsi/initiatorname.iscsi).
  -f, --foreground        run iscsid in the foreground
  -d, --debug debuglevel  print debugging information
  -u, --uid=uid           run as uid, default is current user
  -g, --gid=gid           run as gid, default is current user group
  -n, --no-pid-file       do not use a pid file
  -p, --pid=pidfile       use pid file (default /run/iscsid.pid).
  -h, --help              display this help and exit
  -v, --version           display version and exit


5. Open-iSCSI Configuration and Administration Utility
======================================================

Open-iSCSI persistent configuration is stored in a number of
directories under a configuration root directory, using a flat-file
format. This configuration root directory is /etc/iscsi by default,
but may also commonly be in /var/lib/iscsi (see "dbroot" in the meson
options discussed earlier).

Configuration is contained in directories for:

	- nodes
	- isns
	- static
	- fw
	- send_targets
	- ifaces
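
For example, on a typical install the layout under the configuration root
might look like the following (the exact directories and records present
will vary with what has been discovered and configured):

	# ls /etc/iscsi
	ifaces  initiatorname.iscsi  iscsid.conf  nodes  send_targets

	# ls /etc/iscsi/nodes
	iqn.1992-08.com.netapp:sn.33615311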

The iscsiadm utility is a command-line tool to manage (update, delete,
insert, query) the persistent database, as well manage discovery,
session establishment (login), and ending sessions (logout).

This utility presents a set of operations that a user can perform
on iSCSI node, session, connection, and discovery records.

Open-iSCSI does not use the term node as defined by the iSCSI RFC,
where a node is a single iSCSI initiator or target. Open-iSCSI uses the
term node to refer to a portal on a target, so tools like iscsiadm
require that the '--targetname' and '--portal' arguments be used when
in node mode.

For session mode, a session id (sid) is used. The sid of a session can be
found by running:

	iscsiadm -m session -P 1

The session id is not currently persistent and is partially determined by
when the session is setup.
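
For illustration, the flat output of "iscsiadm -m session" shows the sid
in brackets (the addresses and target names here are just examples):

	tcp: [1] 10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
	tcp: [2] 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311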

Note that some of the iSCSI Node and iSCSI Discovery operations
do not require the iSCSI daemon (iscsid) to be running.

For help on the command, run:

	iscsiadm --help

The output will be similar to the following.

iscsiadm -m discoverydb [-hV] [-d debug_level] [-P printlevel] [-t type -p ip:port -I ifaceN ... [-Dl]] | [[-p ip:port -t type] [-o operation] [-n name] [-v value] [-lD]]
iscsiadm -m discovery [-hV] [-d debug_level] [-P printlevel] [-t type -p ip:port -I ifaceN ... [-l]] | [[-p ip:port] [-l | -D]] [-W]
iscsiadm -m node [-hV] [-d debug_level] [-P printlevel] [-L all,manual,automatic,onboot] [-W] [-U all,manual,automatic,onboot] [-S] [[-T targetname -p ip:port -I ifaceN] [-l | -u | -R | -s]] [[-o operation ] [-n name] [-v value]]
iscsiadm -m session [-hV] [-d debug_level] [-P printlevel] [-r sessionid | sysfsdir [-R | -u | -s] [-o operation] [-n name] [-v value]]
iscsiadm -m iface [-hV] [-d debug_level] [-P printlevel] [-I ifacename | -H hostno|MAC] [[-o operation ] [-n name] [-v value]] [-C ping [-a ip] [-b packetsize] [-c count] [-i interval]]
iscsiadm -m fw [-d debug_level] [-l] [-W] [[-n name] [-v value]]
iscsiadm -m host [-P printlevel] [-H hostno|MAC] [[-C chap [-x chap_tbl_idx]] | [-C flashnode [-A portal_type] [-x flashnode_idx]] | [-C stats]] [[-o operation] [-n name] [-v value]]
iscsiadm -k priority


The first parameter specifies the mode to operate in:

  -m, --mode <op>	specify operational mode op =
			<discoverydb|discovery|node|session|iface|fw|host>

Mode "discoverydb"
------------------

  -m discoverydb --type=[type] --interface=[iface…] --portal=[ip:port] \
			--print=[N] \
			--op=[op]=[NEW | UPDATE | DELETE | NONPERSISTENT] \
			--discover

			  This command will use the discovery record settings
			  matching the record with type=type and
			  portal=ip:port. If a record does not exist, it will
			  create a record using the iscsid.conf discovery
			  settings.

			  By default, it will then remove records for
			  portals no longer returned. And,
			  if a portal is returned by the target, then the
			  discovery command will create a new record or modify
			  an existing one with values from iscsid.conf and the
			  command line.

			  [op] can be passed in multiple times to this
			  command, and it will alter the node DB manipulation.

			  If [op] is passed in and the value is
			  "new", iscsiadm will add records for portals that do
			  not yet have records in the db.

			  If [op] is passed in and the value is
			  "update", iscsiadm will update node records using
			  info from iscsid.conf and the command line for portals
			  that are returned during discovery and have
			  a record in the db.

			  If [op] is passed in and the value is "delete",
			  iscsiadm will delete records for portals that
			  were not returned during discovery.

			  If [op] is passed in and the value is
			  "nonpersistent", iscsiadm will not store
			  the portals found in the node DB. This is
			  only useful with the --login command.

			  See the example section for more info.

			  See below for how to setup iSCSI ifaces for
			  software iSCSI or override the system defaults.

			  Multiple ifaces can be passed in during discovery.

			  For the above commands, "print" is optional. If
			  used, N can be 0 or 1.
			  0 = The old flat style of output is used.
			  1 = The tree style with the interface info is used.

			  If print is not used, the old flat style is used.

  -m discoverydb --interface=[iface...] --type=[type] --portal=[ip:port] \
			--print=[N] \
			--op=[op]=[NEW | UPDATE | DELETE | NONPERSISTENT] \
			--discover --login

			  This works like the previous discoverydb command,
			  except that the --login argument causes it to also
			  log into the portals that are found.

  -m discoverydb --portal=[ip:port] --type=[type] \
			--op=[op] [--name=[name] --value=[value]]

			  Perform specific DB operation [op] for
			  discovery portal. It could be one of:
			  [new], [delete], [update] or [show]. In case of
			  [update], you have to provide [name] and [value]
			  you wish to update

			  Setting op=NEW will create a new discovery record
			  using the iscsid.conf discovery settings. If it
			  already exists, it will be overwritten using
			  iscsid.conf discovery settings.

			  Setting op=DELETE will delete the discovery record
			  and records for the targets found through
			  that discovery source.

			  Setting op=SHOW will display the discovery record
			  values. The --show argument can be used to
			  force the CHAP passwords to be displayed.

Mode "discovery"
----------------

  -m discovery --type=[type] --interface=iscsi_ifacename \
			--portal=[ip:port] --login --print=[N] \
			--op=[op]=[NEW | UPDATE | DELETE | NONPERSISTENT]

			  Perform [type] discovery for target portal with
			  ip-address [ip] and port [port].

			  This command will not use the discovery record
			  settings. It will use the iscsid.conf discovery
			  settings and it will overwrite the discovery
			  record with iscsid.conf discovery settings if it
			  exists. By default, it will then remove records for
			  portals no longer returned. And,
			  if a portal is returned by the target, then the
			  discovery command will create a new record or modify
			  an existing one with values from iscsid.conf and the
			  command line.

			  [op] can be passed in multiple times to this
			  command, and it will alter the DB manipulation.

			  If [op] is passed in and the value is
			  "new", iscsiadm will add records for portals that do
			  not yet have records in the db.

			  If [op] is passed in and the value is
			  "update", iscsiadm will update node records using
			  info from iscsid.conf and the command line for portals
			  that are returned during discovery and have
			  a record in the db.

			  If [op] is passed in and the value is "delete",
			  iscsiadm will delete records for portals that
			  were not returned during discovery.

			  If [op] is passed in and the value is
			  "nonpersistent", iscsiadm will not store
			  the portals found in the node DB.

			  See the example section for more info.

			  See below for how to setup iSCSI ifaces for
			  software iSCSI or override the system defaults.

			  Multiple ifaces can be passed in during discovery.

  -m discovery --print=[N]

			  Display all discovery records from internal
			  persistent discovery database.

Mode "node"
-----------

  -m node		  display all discovered nodes from internal
			  persistent discovery database

  -m node --targetname=[name] --portal=[ip:port] \
			--interface=[iscsi_ifacename] \
			[--login|--logout|--rescan|--stats] [-W]

  -m node --targetname=[name] --portal=[ip:port]
			--interface=[driver,HWaddress] \
			--op=[op] [--name=[name] --value=[value]]

  -m node --targetname=[name] --portal=[ip:port]
			--interface=[iscsi_ifacename] \
			--print=[level]

			  Perform specific DB operation [op] for specific
			  interface on host that will connect to portal on
			  target. targetname, portal and interface are optional.
			  See below for how to setup iSCSI ifaces for
			  software iSCSI or override the system defaults.

			  The op could be one of [new], [delete], [update] or
			  [show]. In case of [update], you have to provide
			  [name] and [value] you wish to update.
			  For [delete], note that if a session is using the
			  node record, the session will be logged out then
			  the record will be deleted.

			  Using --rescan will perform a SCSI layer scan of the
			  session to find new LUNs.

			  Using --stats prints the iSCSI stats for the session.

			  Using --login normally sends a login request to the
			  specified target and waits for the results.
			  If -W/--no_wait is supplied, success is returned if
			  the login request could be sent, without waiting
			  for the response. The user will have to poll for
			  success.

			  Print level can be 0 to 1.

  -m node --logoutall=[all|manual|automatic]
			  Logout "all" the running sessions or just the ones
			  with a node startup value manual or automatic.
			  Nodes marked as ONBOOT are skipped.

  -m node --loginall=[all|manual|automatic] [-W]
			  Login "all" the running sessions or just the ones
			  with a node startup value manual or automatic.
			  Nodes marked as ONBOOT are skipped.

			  If -W is supplied then do not wait for the login
			  response for the target, returning success if we
			  are able to just send the request. The client
			  will have to poll for success.

Mode "session"
--------------

  -m session		  display all active sessions and connections

  -m session --sid=[sid] [ --print=level | --rescan | --logout ]
			--op=[op] [--name=[name] --value=[value]]

			  Perform operation for specific session with
			  session id sid. If no sid is given, the operation
			  will be performed on all running sessions if possible.
			  --logout and --op work like they do in node mode,
			  but in session mode targetname and portal info
			  is not passed in.

			  Print level can be 0 to 3.
			  0 = Print the running sessions.
			  1 = Print basic session info like node we are
			  connected to and whether we are connected.
			  2 = Print iSCSI params used.
			  3 = Print SCSI info like LUNs, device state.

			  If no sid and no operation is given print out the
			  running sessions.

Mode "iface"
------------

  -m iface --interface=iscsi_ifacename --op=[op] [--name=[name] --value=[value]]
			--print=level

			  Perform operation on given interface with name
			  iscsi_ifacename.

			  See below for examples.

  -m iface --interface=iscsi_ifacename -C ping --ip=[ipaddr] --packetsize=[size]
			--count=[count] --interval=[interval]

Mode "host"
-----------

  -m host [--host=hostno|MAC] --print=level -C chap --op=[SHOW]

			  Display information for a specific host. The host
			  can be passed in by host number or by MAC address.
			  If a host is not passed in, then info
			  for all hosts is printed.

			  Print level can be 0 to 4.
			  1 = Print info for the host, like its state, MAC, and
			      netinfo if possible.
			  2 = Print basic session info for nodes the host
			      is connected to.
			  3 = Print iSCSI params used.
			  4 = Print SCSI info like LUNs, device state.

  -m host --host=hostno|MAC -C chap --op=[DELETE] --index=[chap_tbl_idx]

			  Delete chap entry at the given index from chap table.

  -m host --host=hostno|MAC -C chap --op=[NEW | UPDATE] --index=[chap_tbl_idx] \
			--name=[name] --value=[value]

			  Add new or update existing chap entry at the given
			  index with given username and password pair. If index
			  is not passed then entry is added at the first free
			  index in chap table.

  -m host --host=hostno|MAC -C flashnode

			  Display list of all the targets in adapter's
			  flash (flash node), for the specified host,
			  with ip, port, tpgt and iqn.

  -m host --host=hostno|MAC -C flashnode --op=[NEW] --portal_type=[ipv4|ipv6]

			  Create new flash node entry for the given host of the
			  specified portal_type. This returns the index of the
			  newly created entry on success.

  -m host --host=hostno|MAC -C flashnode --index=[flashnode_index] \
			--op=[UPDATE] --name=[name] --value=[value]

			  Update the params of the specified flash node.
			  The [name] and [value] pairs must be provided for the
			  params that need to be updated. Multiple params can
			  be updated using a single command.

  -m host --host=hostno|MAC -C flashnode --index=[flashnode_index] \
			--op=[SHOW | DELETE | LOGIN | LOGOUT]

			  Setting op=DELETE|LOGIN|LOGOUT will perform
			  deletion/login/logout operation on the specified
			  flash node.

			  Setting op=SHOW will list all params with the values
			  for the specified flash node. This is the default
			  operation.

			  See the iscsiadm example section below for more info.

Other arguments
---------------

  -d, --debug debuglevel  print debugging information

  -V, --version		  display version and exit

  -h, --help		  display this help and exit


5.1 iSCSI iface setup
=====================

The next sections describe how to setup iSCSI ifaces so you can bind
a session to a NIC port when using software iSCSI (section 5.1.1), and
how to setup ifaces for use with offload cards from Chelsio
and Broadcom (section 5.1.2).


5.1.1 How to setup iSCSI interfaces (iface) for binding
=======================================================

If you wish to allow the network subsystem to figure out
the best path/NIC to use, then you can skip this section. For example,
if you have setup your portals and NICs on different subnets, then
the following is not needed for software iSCSI.

Warning!!!!!!
This feature is experimental. The interface may change. When reporting
bugs, if you cannot do a "ping -I ethX target_portal", then check your
network settings first. Make sure the rp_filter setting is set to 0 or 2
(see Prep section below for more info). If you cannot ping the portal,
then you will not be able to bind a session to a NIC.

What is a scsi_host and iface for software, hardware and partial
offload iSCSI?

Software iSCSI, like iscsi_tcp and iser, allocates a scsi_host per session
and does a single connection per session. As a result
/sys/class_scsi_host and /proc/scsi will report a scsi_host for
each connection/session you have logged into. Offload iSCSI, like
Chelsio cxgb3i, allocates a scsi_host for each PCI device (each
port on a HBA will show up as a different PCI device so you get
a scsi_host per HBA port).

To manage both types of initiator stacks, iscsiadm uses the interface (iface)
structure. For each HBA port, or for software iSCSI for each network
device (ethX) or NIC that you wish to bind sessions to, you must create
an iface config in /etc/iscsi/ifaces.

Prep
----

The iface binding feature requires the sysctl setting
net.ipv4.conf.default.rp_filter to be set to 0 or 2.
This can be set in /etc/sysctl.conf by having the line:
	net.ipv4.conf.default.rp_filter = N

where N is 0 or 2. Note that when setting this you may have to reboot
for the value to take effect.
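
The setting can also be changed at runtime, without a reboot, using the
sysctl command, e.g. (replace ethX with the actual interface name):

	sysctl -w net.ipv4.conf.default.rp_filter=2
	sysctl -w net.ipv4.conf.ethX.rp_filter=2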


rp_filter information from Documentation/networking/ip-sysctl.txt:

rp_filter - INTEGER
	0 - No source validation.
	1 - Strict mode as defined in RFC3704 Strict Reverse Path
	    Each incoming packet is tested against the FIB and if the interface
	    is not the best reverse path the packet check will fail.
	    By default failed packets are discarded.
	2 - Loose mode as defined in RFC3704 Loose Reverse Path
	    Each incoming packet's source address is also tested against the FIB
	    and if the source address is not reachable via any interface
	    the packet check will fail.

Running
-------

The command:

	iscsiadm -m iface

will report iface configurations that are setup in /etc/iscsi/ifaces:

	iface0 qla4xxx,00:c0:dd:08:63:e8,20.15.0.7,default,iqn.2005-06.com.redhat:madmax
	iface1 qla4xxx,00:c0:dd:08:63:ea,20.15.0.9,default,iqn.2005-06.com.redhat:madmax

The format is:

	iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname

For software iSCSI, you can create the iface configs by hand, but it is
recommended that you use iscsiadm's iface mode. There is an iface.example in
/etc/iscsi/ifaces which can be used as a template for the daring.

For each network object you wish to bind a session to, you must create
a separate iface config in /etc/iscsi/ifaces and each iface config file
must have a unique name which is less than or equal to 64 characters.

Example
-------

If you have NIC1 with MAC address 00:0F:1F:92:6B:BF and NIC2 with
MAC address 00:C0:DD:08:63:E7, and you wanted to do software iSCSI over
TCP/IP, then in /etc/iscsi/ifaces/iface0 you would enter:

	iface.transport_name = tcp
	iface.hwaddress = 00:0F:1F:92:6B:BF

and in /etc/iscsi/ifaces/iface1 you would enter:

	iface.transport_name = tcp
	iface.hwaddress = 00:C0:DD:08:63:E7

Warning: Do not name an iface config file  "default" or "iser".
They are special values/files that are used by the iSCSI tools for
backward compatibility. If you name an iface default or iser, then
the behavior is not defined.

To use iscsiadm to create an iface0 similar to the above example, run:

	iscsiadm -m iface -I iface0 --op=new

(This will create a new empty iface config. If there was already an iface
with the name "iface0", this command will overwrite it.)

Next, set the hwaddress:

	iscsiadm -m iface -I iface0 --op=update \
		-n iface.hwaddress -v 00:0F:1F:92:6B:BF

If you have sessions logged in, iscsiadm will not update or overwrite
an iface. You must log out first. If you have an iface bound to a node/portal
but you have not logged in, then iscsiadm will update the config and
all existing bindings.

You should now skip to 5.1.3 to see how to log in using the iface, and for
some helpful management commands.


5.1.2 Setting up an iface for an iSCSI offload card
===================================================

This section describes how to setup ifaces for use with Chelsio, Broadcom and
QLogic cards.

By default, iscsiadm will create an iface for each Broadcom, QLogic and Chelsio
port. The iface name will be of the form:

	$transport/driver_name.$MAC_ADDRESS

Running the following command:

	iscsiadm -m iface

will report iface configurations that are setup in /etc/iscsi/ifaces:

	default tcp,<empty>,<empty>,<empty>,<empty>
	iser iser,<empty>,<empty>,<empty>,<empty>
	cxgb3i.00:07:43:05:97:07 cxgb3i,00:07:43:05:97:07,<empty>,<empty>,<empty>
	qla4xxx.00:0e:1e:04:8b:2e qla4xxx,00:0e:1e:04:8b:2e,<empty>,<empty>,<empty>

The format is:

	iface_name transport_name,hwaddress,ipaddress,net_ifacename,initiatorname

where:	iface_name:		name of iface
	transport_name:		name of driver
	hwaddress:		MAC address
	ipaddress:		IP address to use for this port
	net_iface_name:		will be <empty> because it can change between reboots.
				It is used for software iSCSI's vlan or alias binding.
	initiatorname:		Initiatorname to be used if you want to override the
				default one in /etc/iscsi/initiatorname.iscsi.

To display these values in a more friendly way, run:

	iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07

Example output:

	# BEGIN RECORD 2.0-871
	iface.iscsi_ifacename = cxgb3i.00:07:43:05:97:07
	iface.net_ifacename = <empty>
	iface.ipaddress = <empty>
	iface.hwaddress = 00:07:43:05:97:07
	iface.transport_name = cxgb3i
	iface.initiatorname = <empty>
	# END RECORD

Before you can use the iface, you must set the IP address for the port.
We determine the corresponding variable name that we want to update from
the output above, which is "iface.ipaddress".
Then we fill this empty variable with the value we desire, with this command:

	iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07 -o update \
		-n iface.ipaddress -v 20.15.0.66

Note for QLogic ports: After updating the iface record, you must apply or
applyall the settings for the changes to take effect:

	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2e -o apply
	iscsiadm -m iface -H 00:0e:1e:04:8b:2e -o applyall

With "apply", the network settings for the specified iface will take effect.
With "applyall", the network settings for all ifaces on a specific host will
take effect. The host can be specified using the -H/--host argument by either
the MAC address of the host or the host number.

Here is an example of setting multiple IPv6 addresses on a single iSCSI
interface port.
First interface (no need to set iface_num, it is 0 by default):

	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a -o update \
		 -n iface.ipaddress -v fec0:ce00:7014:0041:1111:2222:1e04:9392

Create the second interface if it does not exist (iface_num is mandatory here):

	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a.1 --op=new
	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a.1 -o update \
		 -n iface.iface_num -v 1
	iscsiadm -m iface -I qla4xxx.00:0e:1e:04:8b:2a.1 -o update \
		 -n iface.ipaddress -v fec0:ce00:7014:0041:1111:2222:1e04:9393
	iscsiadm -m iface -H 00:0e:1e:04:8b:2a --op=applyall

Note: If there are common settings for multiple interfaces then the
settings from 0th iface would be considered valid.

Now, we can use this iface to login into targets, which is described in the
next section.


5.1.3 Discovering iSCSI targets/portals
========================================

Be aware that iscsiadm will use the default route to do discovery. It will
not use the iface specified. So if you are using an offload card, you will
need a separate network connection to the target for discovery purposes.

*This should be fixed in some future version of Open-iSCSI*

For compatibility reasons, when you run iscsiadm to do discovery, it
will check for interfaces in /etc/iscsi/ifaces that are using
tcp for the iface.transport, and it will bind the portals that are discovered
so that they will be logged in through those ifaces. This behavior can also
be overridden by passing in the interfaces you want to use. For the case
of offload, like with cxgb3i and bnx2i, this is required because the transport
will not be tcp.

For example if you had defined two interfaces but only wanted to use one,
you can use the --interface/-I argument:

	iscsiadm -m discoverydb -t st -p ip:port -I iface1 --discover -P 1

If you had defined interfaces but wanted the old behavior, where we do not
bind a session to an iface, then you can use the special iface "default":

	iscsiadm -m discoverydb -t st -p ip:port -I default --discover -P 1

And if you did not define any interfaces in /etc/iscsi/ifaces and do
not pass anything into iscsiadm, running iscsiadm will do the default
behavior, allowing the network subsystem to decide which device to use.

If you later want to remove the bindings for a specific target and
iface, then you can run:

	iscsiadm -m node -T my_target -I iface0 --op=delete

To do this for a specific portal on a target, run:

	iscsiadm -m node -T my_target -p ip:port -I iface0 --op=delete

If you wanted to delete all bindings for iface0, then you can run:

	iscsiadm -m node -I iface0 --op=delete

And for EqualLogic targets it is sometimes useful to remove just by portal:

	iscsiadm -m node -p ip:port -I iface0 --op=delete


Now logging into targets is the same as with software iSCSI. See section 7
for how to get started.


5.2 iscsiadm examples
=====================

Usage examples using the one-letter options (see iscsiadm man page
for long options):

Discovery mode
--------------

- SendTargets iSCSI Discovery using the default driver and interface and
		using the discovery settings for the discovery record with the
		ID [192.168.1.1:3260]:

	iscsiadm -m discoverydb -t st -p 192.168.1.1:3260 --discover

  This will search /etc/iscsi/send_targets for a record with the
  ID [portal = 192.168.1.1:3260 and type = sendtargets]. If found, it
  will perform discovery using the settings stored in the record.
  If a record does not exist, it will be created using the iscsid.conf
  discovery settings.

  The argument to -p may also be a hostname instead of an address:

		iscsiadm -m discoverydb -t st -p somehost --discover

  For the ifaces, iscsiadm will first search /etc/iscsi/ifaces for
  interfaces using software iSCSI. If any are found then nodes found
  during discovery will be setup so that they can be logged in through
  those interfaces. To specify a specific iface, pass the
  -I argument for each iface.

- SendTargets iSCSI Discovery updating existing target records:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o update --discover

  If there is a record for targetX, and portalY exists in the DB, and
  is returned during discovery, it will be updated with the info from
  the iscsid.conf. No new portals will be added and stale portals
  will not be removed.

- SendTargets iSCSI Discovery deleting existing target records:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o delete --discover

  If there is a record for targetX, and portalY exists in the DB, but
  is not returned during discovery, it will be removed from the DB.
  No new portals will be added and existing portal records will not
  be changed.

  Note: If a session is logged into portal we are going to delete
  a record for, it will be logged out then the record will be
  deleted.

- SendTargets iSCSI Discovery adding new records:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o new --discover

  If there is targetX, and portalY is returned during discovery, and does
  not have a record, it will be added. Existing records are not modified.

- SendTargets iSCSI Discovery using multiple ops:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o new -o delete --discover

  This command will add new portals and delete records for portals
  no longer returned. It will not change the record information for
  existing portals.

- SendTargets iSCSI Discovery in nonpersistent mode:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o nonpersistent --discover

  This command will perform discovery, but not manipulate the node DB.

- SendTargets iSCSI Discovery with a specific interface.  If you wish
  to only use a subset of the interfaces in
  /etc/iscsi/ifaces, then you can pass them in during discovery:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		--interface=iface0 --interface=iface1 --discover

  Note that for software iSCSI, we let the network layer select
  which NIC to use for discovery, but for later logins iscsiadm
  will use the NIC defined in the iface configuration.

  qla4xxx support is very basic and experimental. It does not store
  the record info in the card's FLASH or the node DB, so you must
  rerun discovery every time the driver is reloaded.

- Manipulate SendTargets DB: Create new SendTargets discovery record or
  overwrite an existing discovery record with iscsid.conf
  discovery settings:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 -o new

- Manipulate SendTargets DB: Display discovery settings:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 -o show

- Manipulate SendTargets DB: Display hidden discovery settings like
		 CHAP passwords:

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o show --show

- Manipulate SendTargets DB: Set discovery setting.

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 \
		-o update -n name -v value

- Manipulate SendTargets DB: Delete discovery record. This will also delete
  the records for the targets found through the discovery source.

	iscsiadm -m discoverydb -t sendtargets -p 192.168.1.1:3260 -o delete

- Show all records in discovery database:

	iscsiadm -m discovery

- Show all records in discovery database and show the targets that were
  discovered from each record:

	iscsiadm -m discovery -P 1

Node mode
---------

In node mode you can specify which records you want to log
into by specifying the targetname, ip address, port or interface
(if specifying the interface it must already be setup in the node db).
iscsiadm will search the node db for records which match the values
you pass in, so if you pass in the targetname and interface, iscsiadm
will search for records with those values and operate on only them.
Passing in none of them will result in all node records being operated on.

- iSCSI Login to all portals on every node/target through each interface
  set in the db:

	iscsiadm -m node -l

- iSCSI login to all portals on a node/target through each interface set
  in the db, but do not wait for the login response:

	iscsiadm -m node -T iqn.2005-03.com.max -l -W

- iSCSI login to a specific portal through each interface set in the db:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 -l

  To specify an iPv6 address, the following can be used:

	iscsiadm -m node -T iqn.2005-03.com.max \
		-p 2001:c90::211:9ff:feb8:a9e9 -l

  The above command would use the default port, 3260. To specify a
  port, use the following:

	iscsiadm -m node -T iqn.2005-03.com.max \
		-p [2001:c90::211:9ff:feb8:a9e9]:3260 -l

  To specify a hostname, the following can be used:

	iscsiadm -m node -T iqn.2005-03.com.max -p somehost -l

- iSCSI Login to a specific portal through the NIC setup as iface0:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-I iface0  -l

- iSCSI Logout of all portals on every node/target through each interface
  set in the db:

	iscsiadm -m node -u

  Warning: this does not check startup values like the logout/login all
  option. Do not use this if you are running iSCSI on your root disk.

- iSCSI logout of all portals on a node/target through each interface set
  in the db:

	iscsiadm -m node -T iqn.2005-03.com.max -u

- iSCSI logout of a specific portal through each interface set in the db:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 -u

- iSCSI Logout of a specific portal through the NIC setup as iface0:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-I iface0 -u

- Changing iSCSI parameter:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-o update -n node.cnx[0].iscsi.MaxRecvDataSegmentLength -v 65536

  You can also change parameters for multiple records at once, by
  specifying different combinations of target, portal and interface
  like above.

- Adding custom iSCSI portal:

	iscsiadm -m node -o new -T iqn.2005-03.com.max \
		-p 192.168.0.1:3260,2 -I iface4

  The -I/--interface is optional. If not passed in, "default" is used.
  For tcp or iser, this would allow the network layer to decide what is
  best.

  Note that for this command, the Target Portal Group Tag (TPGT) should
  be passed in. If it is not passed in on the initial creation command,
  then the user must run iscsiadm again to set the value. Also,
  if the TPGT is not initially passed in, the old behavior of not
  tracking whether the record was statically or dynamically created
  is used.

- Adding custom NIC config to multiple targets:

	iscsiadm -m node -o new -I iface4

  This command will add an interface config using the iSCSI and SCSI
  settings from iscsid.conf to every target that is in the node db.

- Removing iSCSI portal:

	iscsiadm -m node -o delete -T iqn.2005-03.com.max -p 192.168.0.4:3260

  You can also delete multiple records at once, by specifying different
  combinations of target, portal and interface like above.

- Display iSCSI portal configuration:

	iscsiadm -m node [-o show] -T iqn.2005-03.com.max -p 192.168.0.4:3260

  You can also display multiple records at once, by specifying different
  combinations of target, portal and interface like above.

  Note: running "iscsiadm -m node" will only display the records. It
  will not display the configuration info. For the latter, run:

	iscsiadm -m node -o show

- Show all node records:

	iscsiadm -m node

  This will print the nodes using the old flat format where the
  interface and driver are not displayed. To display that info
  use the -P option with the argument "1":

	iscsiadm -m node -P 1

Session mode
------------

- Display session statistics:

	iscsiadm -m session -r 1 --stats

  This function also works in node mode. Instead of the "-r $sid"
  argument, you would pass in the node info like targetname and/or portal,
  and/or interface.

- Perform a SCSI scan on a session

	iscsiadm -m session -r 1 --rescan

  This function also works in node mode. Instead of the "-r $sid"
  argument, you would pass in the node info like targetname and/or portal,
  and/or interface.

  Note: Rescanning does not delete old LUNs. It will only pick up new
  ones.

- Display running sessions:

	iscsiadm -m session -P 1

Host mode with flashnode submode
--------------------------------

- Display list of flash nodes for a host

	iscsiadm -m host -H 6 -C flashnode

  This will print list of all the flash node entries for the given host
  along with their ip, port, tpgt and iqn values.

- Display all parameters of a flash node entry for a host

	iscsiadm -m host -H 6 -C flashnode -x 0

  This will list all the parameter name,value pairs for the
  flash node entry at index 0 of host 6.

- Add a new flash node entry for a host

	iscsiadm -m host -H 6 -C flashnode -o new -A [ipv4|ipv6]

  This will add new flash node entry for the given host 6 with portal
  type of either ipv4 or ipv6. The new operation returns the index of
  the newly created flash node entry.

- Update a flashnode entry

	iscsiadm -m host -H 6 -C flashnode -x 1 -o update \
		-n flashnode.conn[0].ipaddress -v 192.168.1.12 \
		-n flashnode.session.targetname \
		-v iqn.2002-03.com.compellent:5000d310004b0716

  This will update the values of ipaddress and targetname params of
  the flash node entry at index 1 of host 6.

- Login to a flash node entry

	iscsiadm -m host -H 6 -C flashnode -x 1 -o login

- Logout from a flash node entry
	Logout can be performed either using the flash node index:

	iscsiadm -m host -H 6 -C flashnode -x 1 -o logout

  or by using the corresponding session index:

	iscsiadm -m session -r $sid -u

- Delete a flash node entry

	iscsiadm -m host -H 6 -C flashnode -x 1 -o delete

Host mode with chap submode
---------------------------

- Display list of chap entries for a host

	iscsiadm -m host -H 6 -C chap -o show

- Delete a chap entry for a host

	iscsiadm -m host -H 6 -C chap -o delete -x 5

  This will delete any chap entry present at index 5.

- Add/Update a local chap entry for a host

	iscsiadm -m host -H 6 -C chap -o update -x 4 -n username \
			-v value -n password -v value

  This will update the local chap entry present at index 4. If index 4
  is free, then a new entry of type local chap will be created at that
  index with given username and password values.

- Add/Update a bidi chap entry for a host

	iscsiadm -m host -H 6 -C chap -o update -x 5 -n username_in \
		-v value -n password_in -v value

  This will update the bidi chap entry present at index 5. If index 5
  is free then entry of type bidi chap will be created at that index
  with given username_in and password_in values.

Host mode with stats submode
----------------------------

- Display host statistics:

	iscsiadm -m host -H 6 -C stats

  This will print the aggregate statistics on the host adapter port.
  This includes MAC, TCP/IP, ECC & iSCSI statistics.


6. Configuration
================

The default configuration file is /etc/iscsi/iscsid.conf, but the
directory is configurable with the build option "homedir" (see the
meson and make options above).
The remainder of this document will assume the /etc/iscsi directory.
This file contains only configuration that could be overwritten by iSCSI
discovery, or manually updated via the iscsiadm utility. It is OK if this
file does not exist, in which case the compiled-in default configuration is
used for newly discovered Target nodes.
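
A minimal excerpt from iscsid.conf might look like the following (the
values shown are only examples; see the installed iscsid.conf for the
full list of settings and their defaults):

	node.startup = manual
	node.session.timeo.replacement_timeout = 120
	node.conn[0].timeo.noop_out_interval = 5
	node.conn[0].timeo.noop_out_timeout = 10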

See the man page and the example file for the current syntax.
The manual pages for iscsid and iscsiadm are in the doc subdirectory. If they
are not installed automatically, they need to be manually copied into the
appropriate man page directory, e.g. /usr/local/share/man/man8.
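
For example, to copy the iscsiadm man page by hand (the destination
directory may differ on your distribution):

	install -m 644 doc/iscsiadm.8 /usr/local/share/man/man8/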


7. Getting Started
==================

There are three steps needed to set up a system to use iSCSI storage:

7.1. iSCSI startup using the systemd units or manual startup.
7.2. Discover targets.
7.3. Automate target logins for future system reboots.

The systemd startup units will start the iSCSI daemon and log into any
portals that are set up for automatic login (discussed in 7.2)
or discovered through the discover daemon iscsid.conf params
(discussed in 7.1.2).

If your distro does not have systemd units for iSCSI, then you will have
to start the daemon and log into the targets manually.


7.1.1 iSCSI startup using the systemd units
===========================================

Red Hat or Fedora:
-----------------
To start Open-iSCSI in Red Hat/Fedora you can do:

	systemctl start open-iscsi

To get Open-iSCSI to start automatically at boot time, you may have to
run:
	systemctl enable open-iscsi

And, to automatically mount a file system during startup
you must have the partition entry in /etc/fstab marked with the "_netdev"
option. For example this would mount an iSCSI disk sdb:

	/dev/sdb /mnt/iscsi ext3 _netdev 0 0

SUSE or Debian:
---------------
The Open-iSCSI service is socket activated, so there is no need to
enable the Open-iSCSI service. Likewise, the iscsi.service login
service is enabled automatically, so setting 'node.startup' to 'automatic'
will enable automatic login to Open-iSCSI targets.


7.1.2 Manual Startup
====================

7.1.2.1 Starting up the iSCSI daemon (iscsid) and loading modules
=================================================================

If there is no init script or systemd unit, you must start the tools by hand.
First load the iSCSI modules:

	modprobe -q iscsi_tcp

After that, start iSCSI as a daemon process:

	iscsid

or alternatively, start it with debug enabled, in a separate window,
which will force it into "foreground" mode:

	iscsid -d 8


7.1.2.2 Logging into Targets
============================

Use the configuration utility, iscsiadm, to add/remove/update Discovery
records, iSCSI Node records or monitor active iSCSI sessions (see above or the
iscsiadm man files and see section 7.2 below for how to discover targets):

	iscsiadm  -m node

This will print out the nodes that have been discovered as:

	10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311
	10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311

The format is:

	ip:port,target_portal_group_tag targetname

If you are using the iface argument or want to see the driver
info, use the following:

	iscsiadm -m node -P 1

Example output:

	Target: iqn.1992-08.com.netapp:sn.33615311
	        Portal: 10.15.84.19:3260,2
	                Iface Name: iface2
	        Portal: 10.15.85.19:3260,3
	                Iface Name: iface2

The format is:

	Target: targetname
		Portal ip_address:port,tpgt
			Iface: ifacename

Here, targetname is the name of the target and ip_address:port
is the address and port of the portal. tpgt is the Target Portal Group
Tag of the portal, and is not used in iscsiadm commands except for static
record creation. ifacename is the name of the iSCSI interface
defined in /etc/iscsi/ifaces. If no interface was defined in
/etc/iscsi/ifaces or passed in, the default behavior is used.
Default here is iscsi_tcp/tcp to be used over whichever NIC the
network layer decides is best.

To login, take the ip, port and targetname from above and run:

	iscsiadm -m node -T targetname -p ip:port -l

In this example we would run:

	iscsiadm -m node -T iqn.1992-08.com.netapp:sn.33615311 \
		-p 10.15.84.19:3260 -l

Note: drop the portal group tag from the "iscsiadm -m node" output.

If you wish, for example, to log into all targets represented in the node
database but not wait for the login responses:

	iscsiadm -m node -l -W

After this, you can use "session" mode to detect when the logins complete:

	iscsiadm -m session


7.2. Discover Targets
=====================

Once the iSCSI service is running, you can perform discovery using
SendTargets with:

	iscsiadm -m discoverydb -t sendtargets -p ip:port --discover

Here, "ip" is the address of the portal and "port" is the port.

To use iSNS you can run the discovery command with the type as "isns"
and pass in the ip:port:

	iscsiadm -m discoverydb -t isns -p ip:port --discover

Both commands will print out the list of all discovered targets and their
portals, e.g.:

	iscsiadm -m discoverydb -t st -p 10.15.85.19:3260 --discover

This might produce:

	10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
	10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311

The format for the output is:

	ip:port,tpgt targetname

In this example, the first portal listed has the IP address 10.15.84.19
and port 3260, with target portal group tag 2. The target name
is iqn.1992-08.com.netapp:sn.33615311.

If you would also like to see the iSCSI interface which will be used
for each session then use the --print=[N]/-P [N] option:

	iscsiadm -m discoverydb -t sendtargets -p ip:port -P 1 --discover

This might print:

    Target: iqn.1992-08.com.netapp:sn.33615311
        Portal: 10.15.84.19:3260,2
           Iface Name: iface2
        Portal: 10.15.85.19:3260,3
           Iface Name: iface2

In this example, the IP address of the first portal is 10.15.84.19, and
the port is 3260. The target portal group tag is 2. The target name
is iqn.1992-08.com.netapp:sn.33615311. The iface being used is iface2.

While discovery targets are kept in the discovery db, they are
useful only for re-discovery. The discovered targets (a.k.a. nodes)
are stored as records in the node db.

The discovered targets are not logged into yet. Rather than logging
into the discovered nodes (making LUs from those nodes available as
storage), it is better to automate the login to the nodes we need.

If you wish to log into a target manually now, see section
"7.1.2.2 Logging in targets" above.


7.3. Automate Target Logins for Future System Startups
======================================================

Note: this may only work for distros with systemd iSCSI login scripts.

To automate login to a node, use the following with the record ID
(record ID is the targetname and portal) of the node discovered in the
discovery above:

	iscsiadm -m node -T targetname -p ip:port --op update -n node.startup -v automatic

To set the automatic setting to all portals on a target through every
interface setup for each portal, the following can be run:

	iscsiadm -m node -T targetname --op update -n node.startup -v automatic

Or to set the "node.startup" attribute to "automatic" as default for
all sessions add the following to the /etc/iscsi/iscsid.conf:

	node.startup = automatic

Setting this in iscsid.conf will not affect existing nodes. It will only
affect nodes that are discovered after setting the value.

To login to all automated nodes, simply restart the iSCSI login service, e.g. with:

	systemctl restart iscsi.service

On your next startup the nodes will be logged into automatically.


7.4 Automatic Discovery and Login
=================================

Instead of running the iscsiadm discovery command and editing the
startup setting, iscsid can be configured so that every X seconds
it performs discovery and logs in and out of the portals returned or
no longer returned. In this mode, when iscsid starts, it will check the
discovery db for iSNS records with the setting:

	discovery.isns.use_discoveryd = Yes

and for SendTargets discovery records that have the
setting:

	discovery.sendtargets.use_discoveryd = Yes

If set, iscsid will perform discovery to the address every
discovery.isns.discoveryd_poll_inval or
discovery.sendtargets.discoveryd_poll_inval seconds,
and it will log into any portals found from the discovery source using
the ifaces in /etc/iscsi/ifaces.

Note that for iSNS the poll_interval does not have to be set. If not set,
iscsid will only perform rediscovery when it gets a SCN from the server.

#   iSNS Note:
#   For servers like Microsoft's where they allow SCN registrations, but do not
#   send SCN events, discovery.isns.poll_interval should be set to a non zero
#   value to auto discover new targets. This is also useful for servers like
#   linux-isns (SLES's iSNS server) where it sometimes does not send SCN
#   events in the proper format, so they may not get handled.

Examples
--------

SendTargets
-----------

- Create a SendTargets record by passing iscsiadm the "-o new" argument in
		discoverydb mode:

	iscsiadm -m discoverydb -t st -p 20.15.0.7:3260 -o new

  On success, this will output something like:

  New discovery record for [20.15.0.7,3260] added.

- Set the use_discoveryd setting for the record:

	iscsiadm -m discoverydb -t st -p 20.15.0.7:3260  -o update \
		-n discovery.sendtargets.use_discoveryd -v Yes

- Set the polling interval:

	iscsiadm -m discoverydb -t st -p 20.15.0.7:3260  -o update \
		-n discovery.sendtargets.discoveryd_poll_inval -v 30

To have the new settings take effect, restart iscsid by restarting the
iSCSI services.

NOTE:	When iscsiadm is run with the -o new argument, it will use the
	discovery.sendtargets.use_discoveryd and
	discovery.sendtargets.discoveryd_poll_inval
	settings in iscsid.conf for the records initial settings. So if those
	are set in iscsid.conf, then you can skip the iscsiadm -o update
	commands.

iSNS
----

- Create an iSNS record by passing iscsiadm the "-o new" argument in
		discoverydb mode:

	iscsiadm -m discoverydb -t isns -p 20.15.0.7:3205 -o new

  Response on success:

	New discovery record for [20.15.0.7,3205] added.

- Set the use_discoveryd setting for the record:

	iscsiadm -m discoverydb -t isns -p 20.15.0.7:3205  -o update \
		-n discovery.isns.use_discoveryd -v Yes

- [OPTIONAL: see iSNS note above] Set the polling interval if needed:

	iscsiadm -m discoverydb -t st -p 20.15.0.7:3205  -o update \
		-n discovery.isns.discoveryd_poll_inval -v 30

To have the new settings take effect, restart iscsid by restarting the
iscsi services.

Note:	When iscsiadm is run with the -o new argument, it will use the
	discovery.isns.use_discoveryd and discovery.isns.discoveryd_poll_inval
	settings in iscsid.conf for the record's initial settings. So if those
	are set in iscsid.conf, then you can skip the iscsiadm -o update
	commands.


8. Advanced Configuration
=========================

8.1 iSCSI settings for dm-multipath
===================================

When using dm-multipath, the iSCSI timers should be set so that commands
are quickly failed up to the dm-multipath layer. For dm-multipath you should
then set options like "queue if no path", so that IO errors are retried and
queued in the multipath layer if all paths have failed.
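
The "queue if no path" behavior is configured on the dm-multipath side,
typically in /etc/multipath.conf. A minimal sketch (consult the
multipath.conf documentation for your distribution):

	defaults {
		no_path_retry	queue
	}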


8.1.1 iSCSI ping/Nop-Out settings
=================================
To quickly detect problems in the network, the iSCSI layer will send iSCSI
pings (iSCSI NOP-Out requests) to the target. If a NOP-Out times out, the
iSCSI layer will respond by failing the connection and starting the
replacement_timeout. It will then tell the SCSI layer to stop the device queues
so no new IO will be sent to the iSCSI layer and to requeue and retry the
commands that were running if possible (see the next section on retrying
commands and the replacement_timeout).

To control how often a NOP-Out is sent, the following value can be set:

	node.conn[0].timeo.noop_out_interval = X

Where X is in seconds and the default is 10 seconds. To control the
timeout for the NOP-Out the noop_out_timeout value can be used:

	node.conn[0].timeo.noop_out_timeout = X

Again X is in seconds and the default is 15 seconds.

Normally for these values you can use:

	node.conn[0].timeo.noop_out_interval = 5
	node.conn[0].timeo.noop_out_timeout = 10
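
For node records that already exist in the db, the same settings can be
changed with iscsiadm (the target and portal below are just examples);
the new values apply to sessions created after the update:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-o update -n node.conn[0].timeo.noop_out_interval -v 5
	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-o update -n node.conn[0].timeo.noop_out_timeout -v 10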

If there are a lot of IO error messages like

	detected conn error (22)

in the kernel log then the above values may be too aggressive. You may need to
increase the values for your network conditions and workload, or you may need
to check your network for possible problems.


8.1.2 SCSI command retries
==========================

SCSI disk commands get 5 retries by default. In newer kernels this can be
controlled via the sysfs file:

	/sys/block/$sdX/device/scsi_disk/$host:$bus:$target:LUN/max_retries

by writing an integer lower than 5 to reduce retries, or -1 for
infinite retries.
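
For example, for a hypothetical disk sdb at SCSI address 2:0:0:1 (the
actual device name and address will differ on your system), retries
could be reduced to 2 with:

	echo 2 > /sys/block/sdb/device/scsi_disk/2:0:0:1/max_retries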

The number of actual retries a command gets may be less than 5 or what is
requested in max_retries if the replacement timeout expires. When that timer
expires it tells the SCSI layer to fail all new and queued commands.


8.1.3 replacement_timeout
=========================

The iSCSI layer timer:

	node.session.timeo.replacement_timeout = X

controls how long to wait for session re-establishment before failing all SCSI
commands:

	1. commands that have been requeued and awaiting a retry
	2. commands that are being operated on by the SCSI layer's error handler
	3. all new commands that are queued to the device

up to a higher level like multipath, filesystem layer, or to the application.

The setting is in seconds. Zero means to fail immediately. -1 means an infinite
timeout, which will wait until iscsid does a relogin, the user runs the iscsiadm
logout command, or the node.session.reopen_max limit is hit.
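
As with other node settings, replacement_timeout can be changed for an
existing record with iscsiadm (the target and portal below are just
examples); a lower value such as 15 is commonly used with dm-multipath:

	iscsiadm -m node -T iqn.2005-03.com.max -p 192.168.0.4:3260 \
		-o update -n node.session.timeo.replacement_timeout -v 15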

When this timer is started, the iSCSI layer will stop new IO from executing
and requeue running commands to the Block/SCSI layer. The new and requeued
commands will then sit in the Block/SCSI layer queue until the timeout has
expired, there is userspace intervention like a iscsiadm logout command, or
there is a successful relogin. If the command has run out of retries, the
command will be failed instead of being requeued.

After this timer has expired iscsid can continue to try to relogin. By default
iscsid will continue to try to relogin until there is a successful relogin or
until the user runs the iscsiadm logout command. The number of relogin retries
is controlled by the Open-iSCSI setting node.session.reopen_max. If that is set
too low, iscsid may give up and forcefully logout the session (equivalent to
running the iscsiadm logout command on a failed session) before replacement
timeout seconds. This will result in all commands being failed at that time.
The user would then have to manually relogin.

This timer starts when you see the connection error message:

	detected conn error (%d)

in the kernel log. The %d will be an integer with the following mappings
and meanings:

Int     Kernel define           Description
value
------------------------------------------------------------------------------
1	ISCSI_ERR_DATASN	Low level iSCSI protocol error where a data
				sequence value did not match the expected value.
2	ISCSI_ERR_DATA_OFFSET	There was an error where we were asked to
				read/write past a buffer's length.
3	ISCSI_ERR_MAX_CMDSN	Low level iSCSI protocol error where we got an
				invalid MaxCmdSN value.
4	ISCSI_ERR_EXP_CMDSN	Low level iSCSI protocol error where the
				ExpCmdSN from the target didn't match the
				expected value.
5	ISCSI_ERR_BAD_OPCODE	The iSCSI Target has sent an invalid or unknown
				opcode.
6	ISCSI_ERR_DATALEN	The iSCSI target has sent a PDU with a data
				length that is invalid.
7	ISCSI_ERR_AHSLEN	The iSCSI target has sent a PDU with an invalid
				Additional Header Length.
8	ISCSI_ERR_PROTO		The iSCSI target has performed an operation that
				violated the iSCSI RFC.
9	ISCSI_ERR_LUN		The iSCSI target has requested an invalid LUN.
10	ISCSI_ERR_BAD_ITT       The iSCSI target has sent an invalid Initiator
				Task Tag.
11	ISCSI_ERR_CONN_FAILED   Generic error that can indicate the transmission
				of a PDU, like a SCSI cmd or task management
				function, has timed out. Or, we are not able to
				transmit a PDU because the network layer has
				returned an error, or we have detected a
				network error like a link down. It can
				sometimes be an error that does not fit the
				other error codes, e.g. a kernel function has
				returned a failure and there is no other way
				to recover from it except to kill the
				existing session and relogin.
12	ISCSI_ERR_R2TSN		Low level iSCSI protocol error where the R2T
				sequence numbers do not match.
13	ISCSI_ERR_SESSION_FAILED
				Unused.
14	ISCSI_ERR_HDR_DGST	iSCSI Header Digest error.
15	ISCSI_ERR_DATA_DGST	iSCSI Data Digest error.
16	ISCSI_ERR_PARAM_NOT_FOUND
				Userspace has passed the kernel an unknown
				setting.
17	ISCSI_ERR_NO_SCSI_CMD	The iSCSI target has sent an ITT for an unknown
				task.
18	ISCSI_ERR_INVALID_HOST	The iSCSI Host is no longer present or being
				removed.
19	ISCSI_ERR_XMIT_FAILED	The software iSCSI initiator or cxgb was not
				able to transmit a PDU because of a network
				layer error.
20	ISCSI_ERR_TCP_CONN_CLOSE
				The iSCSI target has closed the connection.
21	ISCSI_ERR_SCSI_EH_SESSION_RST
				The SCSI layer's Error Handler has timed out
				the SCSI cmd, tried to abort it and possibly
				tried to send a LUN RESET, and it's now
				going to drop the session.
22	ISCSI_ERR_NOP_TIMEDOUT	An iSCSI Nop as a ping has timed out.


8.1.4 Running Commands, the SCSI Error Handler, and replacement_timeout
=======================================================================

Each SCSI command has a timer controlled by:

	/sys/block/sdX/device/timeout

The value is in seconds and the default ranges from 30 - 60 seconds
depending on the distro's udev scripts.

When a command is sent to the iSCSI layer the timer is started, and when it's
returned to the SCSI layer the timer is stopped. This could be for successful
completion, or because the command was requeued/retried due to a conn error as
described previously. If a command is retried the timer is reset.

When the command timer fires, the SCSI layer will ask the iSCSI layer to abort
the command by sending an ABORT_TASK task management request. If the abort
is successful the SCSI layer retries the command if it has enough retries left.
If the abort times out, the iSCSI layer will report failure to the SCSI layer
and will fire an ISCSI_ERR_SCSI_EH_SESSION_RST error. In the logs you will see:

	detected conn error (21)

The ISCSI_ERR_SCSI_EH_SESSION_RST will cause the connection/session to be
dropped and the iSCSI layer will start the replacement_timeout operations
described in that section.

The SCSI layer will then eventually call the iSCSI layer's target/session reset
callout which will wait for the replacement timeout to expire, a successful
relogin to occur, or for userspace to logout the session.

- If the replacement timeout fires, then commands will be failed upwards as
described in the replacement timeout section. The SCSI devices will be put
into an offline state until iscsid performs a relogin.

- If a relogin occurs before the timer fires, commands will be retried if
possible.

To check if the SCSI error handler is running, iscsiadm can be run as:

	iscsiadm -m session -P 3

and you will see:

	Host Number: X State: Recovery

To modify the timer that starts the SCSI EH, you can either write
directly to the device's sysfs file:

	echo X > /sys/block/sdX/device/timeout

where X is in seconds.
Alternatively, on most distros you can modify the udev rule.

To modify the udev rule open /etc/udev/rules.d/50-udev.rules, and find the
following lines:

	ACTION=="add", SUBSYSTEM=="scsi" , SYSFS{type}=="0|7|14", \
		RUN+="/bin/sh -c 'echo 60 > /sys$$DEVPATH/timeout'"

And change the "echo 60" part of the line to the value that you want.

The default timeout for normal File System commands is 30 seconds when udev
is not being used. If udev is used the default is the above value which
is normally 60 seconds.


8.1.5 Optimal replacement_timeout Value
=======================================

The default value for replacement_timeout is 120 seconds, but because
multipath's queue_if_no_path and no_path_retry setting can prevent IO errors
from being propagated to the application, replacement_timeout can be set to a
shorter value like 5 to 15 seconds. By setting it lower, pending IO is quickly
sent to a new path and executed while the iSCSI layer attempts
re-establishment of the session. If all paths end up being failed, then the
multipath and device mapper layer will internally queue IO based on the
multipath.conf settings, instead of the iSCSI layer.
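
For example, to set a 15 second replacement_timeout on an existing node
record (the target name and portal are placeholders), you could run:

	iscsiadm -m node -T <targetname> -p <ip:port> -o update \
		-n node.session.timeo.replacement_timeout -v 15

The record value is applied the next time the session is logged in.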


8.2 iSCSI settings for iSCSI root
=================================

When accessing the root partition directly through an iSCSI disk, the
iSCSI timers should be set so that the iSCSI layer has several chances to
re-establish the session, and so that commands are not quickly requeued to
the SCSI layer. Basically, you want the opposite of when using dm-multipath.

For this setup, you can turn off iSCSI pings (NOPs) by setting:

	node.conn[0].timeo.noop_out_interval = 0
	node.conn[0].timeo.noop_out_timeout = 0

And you can set the replacement_timeout to a very long value:

	node.session.timeo.replacement_timeout = 86400


8.3 iSCSI settings for iSCSI tape
=================================

It is possible to use open-iscsi to connect to a remote tape drive,
making it available locally. In such a case, you need to disable NOP-Outs,
since tape drives don't handle those well at all. See above (section 8.2)
for how to disable these NOPs.


9. iSCSI System Info
====================

To get information about the running sessions, including the session and
device state, session ids (sid) for session mode, and some of the
negotiated parameters, run:

	iscsiadm -m session -P 2

If you are looking for something shorter, like just the sid to node mapping,
run:

	iscsiadm -m session [-P 0]

This will print the list of running sessions with the format:

	driver [sid] ip:port,target_portal_group_tag targetname

Example output of "iscsiadm -m session":

	tcp [2] 10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
	tcp [3] 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311

To print the hw address info use the -P option with "1":

	iscsiadm -m session -P 1

This will print the sessions with the following format:

	Target: targetname
		Current Portal: portal currently logged into
		Persistent Portal: portal we would fall back to if we were
				   redirected during login
			Iface Transport: driver/transport_name
			Iface IPaddress: IP address of iface being used
			Iface HWaddress: HW address used to bind session
			Iface Netdev: netdev value used to bind session
			SID: iscsi sysfs session id
			iSCSI Connection State: iscsi state

Note: if an older kernel is being used or if the session is not bound,
then the keyword "default" is printed to indicate that the default
network behavior is being used.

Example output of "iscsiadm -m session -P 1":

	Target: iqn.1992-08.com.netapp:sn.33615311
		Current Portal: 10.15.85.19:3260,3
		Persistent Portal: 10.15.85.19:3260,3
			Iface Transport: tcp
			Iface IPaddress: 10.11.14.37
			Iface HWaddress: default
			Iface Netdev: default
			SID: 7
			iSCSI Connection State: LOGGED IN
			Internal iscsid Session State: NO CHANGE

The connection state is currently not available for qla4xxx.

To get a HBA/Host view of the session, there is the host mode:

	iscsiadm -m host

This prints the list of iSCSI hosts in the system with the format:

	driver [hostno] ipaddress,[hwaddress],net_ifacename,initiatorname

Example output:

	cxgb3i: [7] 10.10.15.51,[00:07:43:05:97:07],eth3 <empty>

To print this info in a more user-friendly way, the -P argument can be used:

	iscsiadm -m host -P 1

Example output:

	Host Number: 7
		State: running
		Transport: cxgb3i
		Initiatorname: <empty>
		IPaddress: 10.10.15.51
		HWaddress: 00:07:43:05:97:07
		Netdev: eth3

Here, you can also see the state of the host.

You can also pass in any value from 1 to 4 to print more info, like the
sessions running through the host, which ifaces are being used, and which
devices are accessed through it.

To print the info for a specific host, you can pass in the -H argument
with the host number:

	iscsiadm -m host -P 1 -H 7

open-iscsi's People

Contributors

akrog, cathay4t, chris-se, cleech, ddiss, ekigwana, erezzi, ericwmackay, ew929595, eworm-de, ffontaine, frank-fegert, fredvx, gonzoleeman, gulams, hreinecke, igo95862, kraj, mdrustad, mikechristie, njavali, nwnk, phmccarty, rickysarraf, scop, smoser, thesamesam, wenchao-hao, wubo009, zdzichu


open-iscsi's Issues

kernel libiscsi queuecommand failure may corrupt ITT pool

I've observed (via ethernet analyzer) rare but occasional duplicate ITTs sent to my target from the same initiator session before receiving a response. When tracking down the cause, it appears in iscsi_queuecommand, when the transport fails to take the command (when session->tt->xmit_task(task) fails) that we jump to iscsi_complete_task and reschedule the command for later. However for this particular case when calling iscsi_complete_task the session back_lock is not held. So, on occasion, when kfifo_in gets called there's opportunity for the ITT pool to corrupt.

In my particular setup, I am using the Mellanox iSER transport. It's somewhat common to see the xmit_task function (in this case iscsi_iser_task_xmit) fail under very high IOP conditions. The failures reported by iSER seem to coincide with the duplicate ITTs I also sometimes see very shortly after. Note though that the xmit_task failures do not cause a problem in general, because the rescheduled I/O is submitted just fine on the second go-around.

To test my theory I reworked iscsi_queuecommand to release the frwd_lock and grab the back_lock before calling iscsi_complete_task. It seems to resolve my duplicate ITT problem. However I am not very familiar with the libiscsi code in general to propose a more permanent fix. So I wanted to get your ideas on this and if you agree there's a bug not holding back_lock when calling iscsi_complete_task.

Thank you

Can't discover more than 1 iSER device

I made a similar bug report to the linux-rdma mailing list about a year ago, and never followed up here. I got a response that this is an open-iscsi issue not a kernel issue. (See http://www.spinics.net/lists/linux-rdma/msg27533.html)

Below is the same bug report, updated now that it's a year later.

Two up to date arch systems. Kernel 4.6.4 (Arch -1).

2 Mellanox MT25418 [ConnectX VPI PCIe 2.0 2.5GT/s - IB DDR / 10GigE](rev a0) running mlx4_core driver v2.2-1 (Feb, 2014.) Both on most
recent firmware for PSID MT_04A0110002, FW Version 2.9.1000. Systems
directly connected, no switches. InfiniBand otherwise works great,
through VERY extensive testing.

Running OpenFabrics most recent releases of everything (release
versions, not git versions.)

Open-iscsi 2.0_873-7.

targetcli-fb 2.1.fb43-1, python-rtslib-fb 2.1.fb60-1, and
python-configshell-fb 1.1.fb20-1.

I can't discover more than 1 iSER device working at a time. Using
IPoIB lets me discover as many as I want.

At the very end is a workaround - not a fix.

I start with 3 disks working through iSCSI over IPoIB, with
targetcli's (-fb version) ls looking like:

o- / ..................................................................... [...]
o- backstores .......................................................... [...]
| o- block .............................................. [Storage Objects: 3]
| | o- sda4 ........................ [/dev/sda4 (4.4TiB) write-thru activated]
| | o- sdb4 ........................ [/dev/sdb4 (4.4TiB) write-thru activated]
| | o- sdc4 ........................ [/dev/sdc4 (4.4TiB) write-thru activated]
| o- fileio ............................................. [Storage Objects: 0]
| o- pscsi .............................................. [Storage Objects: 0]
| o- ramdisk ............................................ [Storage Objects: 0]
| o- user ............................................... [Storage Objects: 0]
o- iscsi ........................................................ [Targets: 3]
| o- iqn.2003-01.org.linux-iscsi.terra.x8664:sn.2549ae938766 ....... [TPGs: 1]
| | o- tpg1 ........................................... [no-gen-acls, no-auth]
| | o- acls ...................................................... [ACLs: 1]
| | | o- iqn.2005-03.org.open-iscsi:c04e8f17af18 .......... [Mapped LUNs: 1]
| | | o- mapped_lun0 .............................. [lun0 block/sda4 (rw)]
| | o- luns ...................................................... [LUNs: 1]
| | | o- lun0 ..................................... [block/sda4 (/dev/sda4)]
| | o- portals ................................................ [Portals: 1]
| | o- 0.0.0.0:3260 ................................................. [OK]
| o- iqn.2003-01.org.linux-iscsi.terra.x8664:sn.8518b92b052d ....... [TPGs: 1]
| | o- tpg1 ........................................... [no-gen-acls, no-auth]
| | o- acls ...................................................... [ACLs: 1]
| | | o- iqn.2005-03.org.open-iscsi:c04e8f17af18 .......... [Mapped LUNs: 1]
| | | o- mapped_lun0 .............................. [lun0 block/sdb4 (rw)]
| | o- luns ...................................................... [LUNs: 1]
| | | o- lun0 ..................................... [block/sdb4 (/dev/sdb4)]
| | o- portals ................................................ [Portals: 1]
| | o- 0.0.0.0:3260 ................................................. [OK]
| o- iqn.2003-01.org.linux-iscsi.terra.x8664:sn.d4603198ba50 ....... [TPGs: 1]
| o- tpg1 ........................................... [no-gen-acls, no-auth]
| o- acls ...................................................... [ACLs: 1]
| | o- iqn.2005-03.org.open-iscsi:c04e8f17af18 .......... [Mapped LUNs: 1]
| | o- mapped_lun0 .............................. [lun0 block/sdc4 (rw)]
| o- luns ...................................................... [LUNs: 1]
| | o- lun0 ..................................... [block/sdc4 (/dev/sdc4)]
| o- portals ................................................ [Portals: 1]
| o- 0.0.0.0:3260 ................................................. [OK]
o- loopback ..................................................... [Targets: 0]
o- sbp .......................................................... [Targets: 0]
o- srpt ......................................................... [Targets: 0]
o- vhost ........................................................ [Targets: 0]

On the initiator system, I clear everything. Log out via iscsiadm -m
node -U all. Disconnect via iscsiadm -m discovery -t sendtargets -p
IP -o delete.

On the target system, i go into each of the
iscsi/iqn/tpg1/portals/0.0.0.0:3260 directories and run "enable_iser
true". Each time it says "iSER enable now: True". / saveconfig and
exit.

target-cli now changes to:
| | o- 0.0.0.0:3260 ............................................... [iser]
...
| | o- 0.0.0.0:3260 ............................................... [iser]
...
| o- 0.0.0.0:3260 ............................................... [iser]

On the initiator system, I discover via iscsiadm -m discovery -t
sendtargets -p IP -I iser, and it says:

iscsiadm: recv's end state machine bug?
iscsiadm: Could not perform SendTargets discovery: iSCSI PDU timed out

The target's dmesg added:

[ 4430.623627] isert: isert_print_wc: send failure: invalid request error (9) vend_err 8a
--- a year ago on kernel 4.1.2, open-iscsi 2.0_873-6, it gave this ---
[80296.332049] isert: isert_handle_wc: wr id ffff8800a78f1c18 status 9
vend_err 8a

The initiator's dmesg added:

[64793.054230] scsi host7: iSCSI Initiator over iSER
[64793.057292] iser: iser_err_comp: task_rsp failure: local length error (1) vend_err d7
[64793.057309] connection27:0: detected conn error (1011)
--- a year ago on kernel 4.1.2, open-iscsi 2.0_873-6, it gave this ---
[10868.076407] scsi host25: iSCSI Initiator over iSER
[10868.078969] iser: iser_handle_wc: wr id ffff8807f7ee4000 status 1 vend_err d7
[10868.078982] connection7:0: detected conn error (1011)

Now, on the target machine, if I run "enable_iser false" (turning iser OFF) on two of the
iqn's portals, saveconfig, and exit... Then run iscsiadm -m discovery
-t sendtargets -p IP -I iser, it gives:

192.168.2.1:3260,1 iqn.2003-01.org.linux-iscsi.terra.x8664:sn.2549ae938766

Target's dmesg has nothing new, initiator's has:

[11067.116617] scsi host27: iSCSI Initiator over iSER

On the initiator, I can log into the node, mount it, and use it just
fine. I can even discover and log into the other two nodes, using
iSCSI over IPoIB rather than iSER for those 2, and use all 3.

But, I can't get more than 1 iSER to discover at a time.

... Not sure if this is a kernel issue, a Mellanox issue, an
OpenFabrics issue, an open-iscsi issue, or a targetcli (-fb version)
issue.

In the linux-rdma thread I linked to, a Mellanox employee said this is an open-iscsi issue.

My work-around, not a fix:

I found the only difference in open-iscsi's node configuration files
is iface.iscsi_ifacename = iser, and iface.transport_name = iser.
(Rather than default and tcp.) And, the files are called iser rather
than default.

If I discover the targets with the targets having enable_iser false,
then stop the initiator's open-iscsi.service, update all the node
config files to iser, rename them to iser, change all the targets to
enable_iser true, and start the initiator's open-iscsi.service, it
works. (I can log in at that point, mount them, whatever.)

So, the issue is in discovery, not logging in or using.

"iscsiadm -m discovery..." command forcibly reverts node.startup value to default value

Hello,

I'm using open-iscsi 2.0.873 on Ubuntu 16.04.

In that environment, all node.startup values will be forcibly reverted to the default value[1] if I run the "iscsiadm -m discovery..." command.
e.g.
$ sudo iscsiadm -m discovery -t sendtargets -p 172.90.128.16:3260

Is this the correct behavior of iscsiadm?
I'm worried about this behavior, because I'd like to set the node.startup value node by node.

[1]
Default value means the value set in iscsid.conf.

Misleading error message when network port is locked

A few weeks back I encountered the following error message:

somehost:~ # iscsiadm -m iface -I qla4xxx.84:xx:xx:xx:xx:82.ipv4.0 -o update -n iface.ipaddress -v 10.0.1.60 -n iface.subnet_mask -v 255.255.255.0 -n iface.mtu -v 9000
qla4xxx.84:xx:xx:xx:xx:82.ipv4.0 updated.
somehost:~ # iscsiadm -m iface -I qla4xxx.84:xx:xx:xx:xx:82.ipv4.0 -o apply
qla4xxx.84:xx:xx:xx:xx:82.ipv4.0 applied.
somehost:~ # iscsiadm -m iface -I qla4xxx.84:xx:xx:xx:xx:83.ipv4.0 -o update -n iface.ipaddress -v 10.0.1.61 -n iface.subnet_mask -v 255.255.255.0 -n iface.mtu -v 9000
qla4xxx.84:xx:xx:xx:xx:83.ipv4.0 updated.
somehost:~ # iscsiadm -m iface -I qla4xxx.84:xx:xx:xx:xx:83.ipv4.0 -o apply
qla4xxx.84:xx:xx:xx:xx:83.ipv4.0 applied.
somehost:~ # iscsiadm -m iface -I qla4xxx.84:xx:xx:xx:xx:82.ipv4.0 -C ping -a 10.0.1.2 -b 56 -c 3 -i 1 
Ping 1 failed: invalid link local IPv6 address
Ping 2 failed: invalid link local IPv6 address
Ping 3 failed: invalid link local IPv6 address
somehost:~ # iscsiadm -m iface -I qla4xxx.84:xx:xx:xx:xx:83.ipv4.0 -C ping -a 10.0.1.2 -b 56 -c 3 -i 1 
Ping 1 failed: invalid link local IPv6 address
Ping 2 failed: invalid link local IPv6 address
Ping 3 failed: invalid link local IPv6 address
somehost:~ # 

The above commands are basically the first and only config-related stuff I had done up to that point, so there was no IPv6 involved in my iscsi config whatsoever (the host had IPv6 addresses including a valid link-local address, though).

The reason the ping failed was because the switch ports had been disabled by another admin, but the error message did not suggest that at all.

Centos7 iscsi stopping before unmounting on shutdown

G'day All,

I have installed and set up an iSCSI volume, created an LVM LV on it, done a mkfs on it, and
mounted it. No problem.

It's when I shut the server down and the iscsi volume is mounted, I get issues.

My fstab entry is:

/dev/Owncloud/Owncloud /raid/Owncloud ext4 defaults,auto,_netdev,user_xattr,acl 0 0

I've looked at various links on this, and supposedly the issue was fixed in Centos 7.1. However, I've just installed a fresh Centos 7.2 box (CentOS Linux release 7.2.1511 (Core)) and the issue exists.

see: https://groups.google.com/forum/#!topic/open-iscsi/8U9mAXutlyE

I have one other, possibly related, issue upon start up. The iscsi
connection is not automatically "logged" into. I have to do this manually.
Not sure why, as the iscsi systemd files have lines in them saying to auto
login. I get this message when the server is starting up:

NFSD: starting 90-second grace period (net ffffffff81a25e00)

I think this is where it's trying to mount the iscsi volume, and cannot.

Any help with resolving this is very much appreciated.

Race condition between access & mkdir

From the time it takes you to get the result of access to when you do the mkdir, the state of the FS may have changed. Suggest alternative implementations which don't suffer from this.

int idbm_lock(void)
{
        int fd, i, ret;

        if (db->refs > 0) {
                db->refs++;
                return 0;
        }

        if (access(LOCK_DIR, F_OK) != 0) {
                if (mkdir(LOCK_DIR, 0660) != 0) {
                        log_error("Could not open %s: %s", LOCK_DIR,
                                  strerror(errno));
                        return ISCSI_ERR_IDBM;
                }
        }
...
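
A race-free variant, sketched only as a suggestion (it would replace the
access()/mkdir() block inside the quoted function, leaving the rest
unchanged), attempts the mkdir unconditionally and treats EEXIST as success,
so the window between the check and the creation disappears:

        /* Create the lock dir unconditionally; an existing dir is fine. */
        if (mkdir(LOCK_DIR, 0660) != 0 && errno != EEXIST) {
                log_error("Could not create %s: %s", LOCK_DIR,
                          strerror(errno));
                return ISCSI_ERR_IDBM;
        }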

iscsistart not cleaning up session if login fails

I call iscsistart on boot from an initramfs.
For reasons I have not figured out yet, connecting to my iSCSI target sometimes fails.

Shows up in dmesg like:

connection1:0: detected conn error (1011)

If iscsistart returns an exit code, I simply let my script sleep for a second, and try it a second time.
This used to work at some point in the past (using an ancient version of iscsistart).

But when I try it now using latest code from git, subsequent tries always seems to return: iscsistart: initator reported error (15 - session exists)
Like it fails to clean up the session if logging in fails.

shouldn't iscsiadm logout be aborted if scsi devices still present?

I am investigating kworker CPU spin on iscsi session logout described here coreos/bugs#2357 , after all experiments I found that logout completes successfully if I delete all SCSI devices created on session login.

In a nutshell, if server comes up, discovers disks, then if I do following without even mounting any of them:

iscsiadm -m node --logoutall=all

then 4 kworkers are spinning on CPU (1 per multipath session).

If I do following instead:

multipath -F
for d in $(iscsiadm -m session -P3 |awk '/scsi disk/ {print $4}'); do echo 1 > /sys/block/$d/device/delete; done
iscsiadm -m node --logoutall=all

then it is all happy.

Not saying that the kernel behaves correctly when it happens, but it begs the question whether this is a legitimate action at all. Is it OK by design to log out of a session while SCSI devices are present (but not used)? If it is not desirable, shouldn't iscsiadm detect it and warn the user instead of attempting to log out?

Extra info:

$ systemctl status iscsid
   Loaded: loaded (/usr/lib/systemd/system/iscsid.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/iscsid.service.d
           └─40-discovery.conf, 50-debug.conf
   Active: active (running) since Wed 2018-02-14 19:07:48 UTC; 55min ago
     Docs: man:iscsid(8)
           man:iscsiuio(8)
           man:iscsiadm(8)
  Process: 1445 ExecStartPost=/sbin/iscsiadm -m node -L all -p 10.1.72.201 -p 10.1.74.204 --login (code=exited, status=0/SUCCESS)
  Process: 995 ExecStartPost=/sbin/iscsiadm -m discovery -t st -p 10.1.72.201 -p 10.1.74.204 (code=exited, status=0/SUCCESS)
  Process: 902 ExecStartPost=/sbin/iscsiadm -m node --loginall=automatic (code=exited, status=21)
  Process: 856 ExecStart=/usr/sbin/iscsid -d8 (code=exited, status=0/SUCCESS)
 Main PID: 864 (iscsid)
    Tasks: 2 (limit: 32768)
   CGroup: /system.slice/iscsid.service
           ├─863 /usr/sbin/iscsid -d8
           └─864 /usr/sbin/iscsid -d8

iscsiadm login twice and duplicated LUN occur

Hi, recently we found that if we log in simultaneously, the iSCSI target will be connected twice.
I am just wondering whether this can be considered a bug?

open-iscsi version:
centos7.4
device-mapper-multipath-0.4.9-111.el7_4.2.x86_64

[root@sds-10-20 ~]# cat test.sh
iscsiadm -m node --loginall=automatic &
iscsiadm -m node --loginall=automatic &
[root@sds-10-20 ~]#
[root@sds-10-20 ~]# iscsiadm -m discovery -t st -p 10.128.3.94 3260
10.128.3.94:3260,1 iqn.2003-01.com.redhat.iscsi-gw0
10.128.2.104:3260,2 iqn.2003-01.com.redhat.iscsi-gw0
10.128.3.94:3260,1 iqn.2003-01.com.redhat.iscsi-gw1
10.128.2.104:3260,2 iqn.2003-01.com.redhat.iscsi-gw1
[root@sds-10-20 ~]#
[root@sds-10-20 ~]# sh iscsiadm.sh
[root@sds-10-20 ~]# Logging in to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw1, portal: 10.128.3.94,3260] (multiple)
Logging in to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw1, portal: 10.128.2.104,3260] (multiple)
Logging in to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw0, portal: 10.128.3.94,3260] (multiple)
Logging in to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw0, portal: 10.128.2.104,3260] (multiple)
iscsiadm: could not read session targetname: 5
iscsiadm: could not find session info for session170
Logging in to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw0, portal: 10.128.3.94,3260] (multiple)
Logging in to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw0, portal: 10.128.2.104,3260] (multiple)
Login to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw1, portal: 10.128.3.94,3260] successful.
Login to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw1, portal: 10.128.2.104,3260] successful.
Login to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw0, portal: 10.128.3.94,3260] successful.
Login to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw0, portal: 10.128.2.104,3260] successful.
Login to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw0, portal: 10.128.3.94,3260] successful.
Login to [iface: default, target: iqn.2003-01.com.redhat.iscsi-gw0, portal: 10.128.2.104,3260] successful.
[root@sds-10-20 ~]#
[root@sds-10-20 ~]# iscsiadm -m node
10.128.3.94:3260,1 iqn.2003-01.com.redhat.iscsi-gw1
10.128.2.104:3260,2 iqn.2003-01.com.redhat.iscsi-gw1
10.128.3.94:3260,1 iqn.2003-01.com.redhat.iscsi-gw0
10.128.2.104:3260,2 iqn.2003-01.com.redhat.iscsi-gw0
[root@sds-10-20 ~]#
[root@sds-10-20 ~]#
[root@sds-10-20 ~]# iscsiadm -m session
tcp: [169] 10.128.3.94:3260,1 iqn.2003-01.com.redhat.iscsi-gw1 (non-flash)
tcp: [170] 10.128.2.104:3260,2 iqn.2003-01.com.redhat.iscsi-gw1 (non-flash)
tcp: [171] 10.128.3.94:3260,1 iqn.2003-01.com.redhat.iscsi-gw0 (non-flash)
tcp: [172] 10.128.2.104:3260,2 iqn.2003-01.com.redhat.iscsi-gw0 (non-flash)
tcp: [173] 10.128.3.94:3260,1 iqn.2003-01.com.redhat.iscsi-gw0 (non-flash)
tcp: [174] 10.128.2.104:3260,2 iqn.2003-01.com.redhat.iscsi-gw0 (non-flash)
[root@sds-10-20 ~]#
[root@sds-10-20 ~]# multipath -ll
mpathb (360014059d34c74db4044d1a840043560) dm-1 LIO-ORG ,TCMU device
size=10G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| `- 180:0:0:0 sdh 8:112 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  `- 179:0:0:0 sdf 8:80 active ready running
mpatha (36001405f8cd12e866a2475cabc7c0679) dm-0 LIO-ORG ,TCMU device
size=100G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='queue-length 0' prio=50 status=active
| `- 182:0:0:0 sdi 8:128 active ready running
|-+- policy='queue-length 0' prio=50 status=enabled
| `- 184:0:0:0 sdk 8:160 active ready running
|-+- policy='queue-length 0' prio=10 status=enabled
| `- 181:0:0:0 sdg 8:96 active ready running
`-+- policy='queue-length 0' prio=10 status=enabled
  `- 183:0:0:0 sdj 8:144 active ready running

Delayed-OPT-ISCSI start.

My Linux distro now forces open-iscsi in along with new kernels and patches.

The service loader won't fail gracefully, and the service can't be disabled.

Request: at the login screen, or as X loads, load the service in the background; if the kernel module fails to load
(e.g. sys-block / iscsitarget builds for newer kernels are still pending),
send a DBUS message; if the service fails, send a DBUS message to root / the username.

Since iSCSI won't fail, my laptop is a nice brick with the stock kernel;
I'm using genkernel-next to make one that systemd will load, so removing it is not good either.

But since the service won't gracefully fail, my laptop loops on and on infinitely, trying to load it.

Previously I could remove open-iscsi and the kernel-module package and go for a fresh install. (Problem solved.)

Suggested behaviour: try N times to load the kernel module;
if the initiator is not found, report that iSCSI can't be loaded;
else load the kernel module, and if iSCSI is broken (e.g. the build from binary isn't working and fails), try N times,
then on failure send an alert.

Since my root is on a physical drive, on my laptop I don't care about boot-time loading.
Now if I build up a FreeNAS box to keep my backups, repos (e.g. github) and client shares, then yes, iSCSI is quite good for that.

ISSUE="-Boot-start +Delayed-start"

Sabayon/Gentoo isn't the only distro suffering; Red Hat, from which Sabayon ports things to the Gentoo land, is also affected.

If you're a mobile user with quirky hardware and a bleeding-edge distro, things break, like the Nvidia drivers; downloading the *.run and installing it works every time. But open-iscsi breaks, and when it breaks it really breaks, and it's now forced in, whereas previously I could remove it temporarily.

iscsistart not installed

--- /usr/src/open-iscsi/Makefile 2018-03-01 12:13:51.482042302 +0600
+++ Makefile 2018-05-21 13:46:20.000000000 +0600
@@ -18,7 +18,8 @@
MANPAGES = doc/iscsid.8 doc/iscsiadm.8 doc/iscsi_discovery.8
iscsiuio/docs/iscsiuio.8 doc/iscsi_fw_login.8 doc/iscsi-iname.8
doc/iscsistart.8
-PROGRAMS = usr/iscsid usr/iscsiadm utils/iscsi-iname iscsiuio/src/unix/iscsiuio
+PROGRAMS = usr/iscsid usr/iscsiadm utils/iscsi-iname iscsiuio/src/unix/iscsiuio usr/iscsistart
SCRIPTS = utils/iscsi_discovery utils/iscsi_fw_login utils/iscsi_offload
utils/iscsi-gen-initiatorname
INSTALL = install

Provide a way to see the throughput of individual iscsi targets

Under a given load, we're trying to figure where the IO load is coming from and I can't seem to find a way to see which iscsi target is using up the devices.

One approach was to check the network bandwidth, get the port on which most of the data is going through, and find the corresponding info in iscsiadm, but no luck.

Is this info in iscsiadm -m session -P 3 ?

Command Output not displaying correct information

Hello,
When running iscsiadm some fields are not being populated correctly (empty), even if they do have information in SYSFS.

Below is an example for "ipaddress" under "iscsi_host":

[root@lnx01 ~]# iscsiadm -m host -P3
Host Number: 0
State: running
Transport:
Initiatorname:
IPaddress:
HWaddress:
Netdev:

[root@lnx01 ~]# systool -c iscsi_host -av
Class = "iscsi_host"

Class Device = "host3"
Class Device path = "/sys/devices/platform/host3/iscsi_host/host3"
hwaddress = "(null)"
initiatorname = "(null)"
ipaddress = "10.XX.XX.XX"
netdev = "(null)"
uevent =

Device = "host3"
Device path = "/sys/devices/platform/host3"
  uevent              = "DEVTYPE=scsi_host"

Bigger iscsi_segment.size

In the struct iscsi_segment, there is an attribute named "size". I have found that the value of "size" is always 4096, but I can't find the assignment. I want to replace it with a bigger value, such as 16384. How can I do this?

Quick Question on "flash vs Non-flash"

Hi,

Just had a quick question. I was wondering if you could tell me, in this function, what is flash vs non-flash referring to? It doesn't appear to be referring to the actual media type.

static char *get_iscsi_node_type(struct session_info *info)
{
	int pid = iscsi_sysfs_session_user_created(info->sid);

	if (!pid)
		return "flash";
	else
		return "non-flash";
}

Example:
[root@iop061046 ~]# iscsiadm -m session -P 3 | grep -i flash
Target: iqn.2003-01.org.linux-iscsi.iop061048.x8664:sn.ceedd03dc66d (non-flash)

Thank you in advance,
Alan

libiscsi

Hi,

I'd like to ask for formal requirements, so that you (as upstream) would include libiscsi into the source tree? I am talking about libiscsi which currently lives only in Fedora's dist git (this repo also contains Fedora's patches).

Thank you very much.

The README is quite outdated

Related to Issue#35: The README file is outdated, particularly with respect to the kernel part of the code, which has long since been integrated.

lun not attached if no luns during target login

[root@blocksync bin]# iscsiadm -m session --rescan
Rescanning session [sid: 1, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.224.184,3260]
Rescanning session [sid: 2, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.224.185,3260]
Rescanning session [sid: 3, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.223.184,3260]
Rescanning session [sid: 4, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.223.185,3260]
Rescanning session [sid: 5, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.223.132,3260]
Rescanning session [sid: 6, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.224.132,3260]
Rescanning session [sid: 7, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.224.131,3260]
Rescanning session [sid: 8, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.223.131,3260]
[root@blocksync bin]#

mpatha (or possibly another letter) should be here, but it is not...

[root@blocksync bin]# multipath -ll
mpathinetgw2 (3600c0ff00026940805c1005801000000) dm-1 HP ,MSA 2040 SAN
size=300G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 7:0:0:1 sdf 8:80 active ready running
| `- 6:0:0:1 sdd 8:48 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 8:0:0:1 sde 8:64 active ready running
  `- 9:0:0:1 sdn 8:208 active ready running
mpathinetgw1 (3600c0ff00026940808c1005801000000) dm-2 HP ,MSA 2040 SAN
size=300G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 6:0:0:2 sdg 8:96 active ready running
| `- 7:0:0:2 sdi 8:128 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 8:0:0:2 sdh 8:112 active ready running
  `- 9:0:0:2 sdo 8:224 active ready running
mpathinetgw1inetgw2 (3600c0ff0002694081dc1005801000000) dm-3 HP ,MSA 2040 SAN
size=150G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 7:0:0:3 sdl 8:176 active ready running
| `- 6:0:0:3 sdj 8:144 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 8:0:0:3 sdk 8:160 active ready running
  `- 9:0:0:3 sdp 8:240 active ready running
mpathzabbixroot (3600c0ff000269408711efd5701000000) dm-0 HP ,MSA 2040 SAN
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 6:0:0:0 sda 8:0 active ready running
| `- 7:0:0:0 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 8:0:0:0 sdc 8:32 active ready running
  `- 9:0:0:0 sdm 8:192 active ready running
[root@blocksync bin]#

relogging in helps:

[root@blocksync bin]# ./iscsi-logoff
Logging out of session [sid: 1, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.224.184,3260]
Logging out of session [sid: 2, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.224.185,3260]
Logging out of session [sid: 3, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.223.184,3260]
Logging out of session [sid: 4, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.223.185,3260]
Logging out of session [sid: 5, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.223.132,3260]
Logging out of session [sid: 6, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.224.132,3260]
Logging out of session [sid: 7, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.224.131,3260]
Logging out of session [sid: 8, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.223.131,3260]
Logout of [sid: 1, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.224.184,3260] successful.
Logout of [sid: 2, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.224.185,3260] successful.
Logout of [sid: 3, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.223.184,3260] successful.
Logout of [sid: 4, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.223.185,3260] successful.
Logout of [sid: 5, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.223.132,3260] successful.
Logout of [sid: 6, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.224.132,3260] successful.
Logout of [sid: 7, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.224.131,3260] successful.
Logout of [sid: 8, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.223.131,3260] successful.

[root@blocksync bin]# ./iscsi-login
Logging in to iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.223.184,3260
Login to [iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.223.184,3260] successful.
Logging in to iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.223.185,3260
Login to [iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.223.185,3260] successful.
Logging in to iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.224.184,3260
Login to [iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.224.184,3260] successful.
Logging in to iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.224.185,3260
Login to [iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268533, portal: 192.168.224.185,3260] successful.
Logging in to iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.223.131,3260
Login to [iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.223.131,3260] successful.
Logging in to iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.223.132,3260
Login to [iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.223.132,3260] successful.
Logging in to iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.224.131,3260
Login to [iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.224.131,3260] successful.
Logging in to iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.224.132,3260
Login to [iface: default, target: iqn.1986-03.com.hp:storage.msa2040.1533268539, portal: 192.168.224.132,3260] successful.

[root@blocksync bin]# multipath -ll
mpathe (3600c0ff000267d5355e8055801000000) dm-0 HP ,MSA 2040 SAN
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 2:0:0:0 sda 8:0 active ready running
| `- 4:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:0:0 sdb 8:16 active ready running
  `- 5:0:0:0 sdd 8:48 active ready running

This does not happen every time, though.
If a LUN from the target is present during login, then it looks like new LUNs are recognized OK.
Linux 4.8.1.

mlockall still necessary (containers)

Hello,
On Ubuntu (since 16.04, when open-iscsi became installed by default on server images), open-iscsi.service (and iscsid.service) will fail to start in LXD containers. This is because iscsid itself will fail to start as it tries to mlockall(MCL_CURRENT | MCL_FUTURE) at initialization time and fails if it can't. Given user namespaces, I believe the host kernel is denying the request, as the regular user is either exceeding their ulimit or doesn't have CAP_IPC_LOCK (most likely).

I think the mlockall is needed to ensure that the (potentially root) disk's information is not swapped out -- or that iscsid itself is not? Just trying to gather some information from a design perspective on whether it is reasonable to run open-iscsi/iscsid in containers and what is necessary (for instance, we can state that only privileged containers can do so).

Thanks!
-Nish

Please cut a release

I am fixing bugs in the nixos packaging of open-iscsi that would depend on fixes already in HEAD but are not released. Could somebody tag a release? Even an rc is fine. Thanks!

open-iscsi test code needs to be updated

There is a "test" subdirectory that contains some useful self tests, but it's seriously out of date. It uses commands that are hard to find or no longer present (such as fdisk, which predates GPT and parted).

These tests have value and should be updated and then run periodically/regularly.

In running them I found some serious issues with respect to NOPs and login/logout timeouts under load.

open-iscsi does not automatically login into portal

Description of problem:

Although "node.startup = automatic" and "node.conn[0].startup = automatic" are configured in a specific portal to start automatically after reboot no login is done although iscsid is running.

Steps to Reproduce:

  1. In /etc/iscsi/iscsi.conf the correct path for the iscsi daemon has to be specified. In Mageia 6 (mga6) the daemon (iscsid) is installed in /sbin. So, the part of the daemon configuration has to be adjusted:

Default for upstream open-iscsi scripts (uncomment to activate).
iscsid.startup = /sbin/iscsid

If this is not done iscsiadm cannot start the daemon:

iscsiadm -m discovery -t st -p 192.168.10.10
sh: /etc/rc.d/init.d/iscsid: No such file or directory
iscsiadm: can not connect to iSCSI daemon (111)!
sh: /etc/rc.d/init.d/iscsid: No such file or directory
iscsiadm: can not connect to iSCSI daemon (111)!
iscsiadm: Cannot perform discovery. Initiatorname required.
iscsiadm: Could not perform SendTargets discovery: could not connect to iscsid

  1. start the daemon:
    systemctl start open-iscsi.service

  2. configure nodes:
    iscsiadm -m discovery -t sendtargets -p 192.168.10.10:3260

  3. login into specific portal:
    iscsiadm --mode node --targetname iqn.2010-08.eu.n-tec:icebox-rix6r-000905600:dev0.ctr1 --portal 192.168.10.10:3260 --login

  4. After login the new device can be configured. I created a volume group and some logical volumes which have been formatted with xfs and can be mounted
    via an entry in /etc/fstab:
    ...
    /dev/vg_icebox/lv_projects /projects xfs _netdev,rw,inode64,logbufs=8 1 2
    ...

  5. logout of the portal:
    iscsiadm --mode node --targetname iqn.2010-08.eu.n-tec:icebox-rix6r-000905600:dev0.ctr1 --portal 192.168.10.10:3260 --logout

  6. update iface parameters for automatic login:
    iscsiadm -m node -T iqn.2010-08.eu.n-tec:icebox-rix6r-000905600:dev0.ctr1 -p 192.168.10.10 --op update -n node.startup -v automatic
    iscsiadm -m node -T iqn.2010-08.eu.n-tec:icebox-rix6r-000905600:dev0.ctr1 -p 192.168.10.10 --op update -n node.conn[0].startup -v automatic

  7. reboot the system

  8. no portal login ...


Hardware:
BCM57840 NetXtreme II 10 Gigabit Ethernet (QLogic 3442)

Software:
open-iscsi Version: 2.0-872.13.mga6

System:
Linux thalamus_nfs 4.9.56-server-1.mga6 #1 SMP Thu Oct 12 23:46:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

iscsiadm and openSUSE

Not sure if this is an openSUSE bug, or could be handled better by open-iscsi, or both.

We saw the following behavior on openSUSE Leap 42.2:

somehost:~ # iscsiadm -m host
qla4xxx: [1] 10.0.1.58,[84:xx:xx:xx:xx:83],<empty>
iqn.2000-04.com.qlogic:isp8214.00xxx.4
bnx2i: [10] <empty>,[84:xx:xx:xx:xx:7e],eth0 <empty>
qla4xxx: [8] 10.0.1.59,[84:xx:xx:xx:xx:84],<empty>
iqn.2000-04.com.qlogic:isp8214.00xxx.5
bnx2i: [9] <empty>,[84:xx:xx:xx:xx:47],eth1 <empty>
somehost:~ # iscsiadm -m host -H 1 -C flashnode
iscsiadm: Invalid host no 1. No such file or directory.
somehost:~ # iscsiadm -m host -H 8 -C flashnode
iscsiadm: Invalid host no 8. No such file or directory.
somehost:~ # iscsiadm -m host -H 84:xx:xx:xx:xx:83 -C flashnode
qla4xxx: [0] 0.0.0.0:3260,0 <empty>
qla4xxx: [1] 0.0.0.0:3260,0 <empty>
qla4xxx: [2] 10.0.1.4:3260,0 iqn.2001-05.com.equallogic:0-[…]-v-sto[…]-000001

This seems to be caused by the following puzzle pieces:

  • Open-iSCSI links against libopenssl1_0_0/libgcrypt20,
  • libopenssl/libgcrypt are compiled with FIPS support (probably for SLES),
  • openSUSE kernels don't have FIPS support.

The following actions fixed the problem:

  • Rebuild the kernel with CONFIG_MODULE_SIG=y and CONFIG_CRYPTO_FIPS=y (not sure if the latter is necessary, will check). (This creates the file /proc/sys/crypto/fips_enabled.)
  • Install the RPMs libgcrypt20-hmac and libopenssl1_0_0-hmac.

centos7 iscsi is adding ipv6 nodes on ipv4 only system

I have had some boot problems on a CentOS 7 host.

Apparently, when doing iscsiadm discovery, the following is added to /var, and it prevented proper automatic start on boot, as IPv6 is firewalled within our network. The iscsi service hung forever on boot, as there was also no timeout defined in the systemd configuration:

./lib/iscsi/nodes
./lib/iscsi/nodes/iqn.2000-01.com.synology:DiskStation4.Target-1.3ea3d7a31e
./lib/iscsi/nodes/iqn.2000-01.com.synology:DiskStation4.Target-1.3ea3d7a31e/192.168.1.87,3260,1
./lib/iscsi/nodes/iqn.2000-01.com.synology:DiskStation4.Target-1.3ea3d7a31e/192.168.1.87,3260,1/default
./lib/iscsi/nodes/iqn.2000-01.com.synology:DiskStation4.Target-1.3ea3d7a31e/fe80::211:32ff:fe0c:d12,3260,1
./lib/iscsi/nodes/iqn.2000-01.com.synology:DiskStation4.Target-1.3ea3d7a31e/fe80::211:32ff:fe0c:d12,3260,1/default
./lib/iscsi/nodes/iqn.2000-01.com.synology:DiskStation4.Target-11.3ea3d7a31e
./lib/iscsi/nodes/iqn.2000-01.com.synology:DiskStation4.Target-11.3ea3d7a31e/192.168.1.87,3260,1
./lib/iscsi/nodes/iqn.2000-01.com.synology:DiskStation4.Target-11.3ea3d7a31e/192.168.1.87,3260,1/default
./lib/iscsi/nodes/iqn.2000-01.com.synology:DiskStation4.Target-11.3ea3d7a31e/fe80::211:32ff:fe0c:d12,3260,1
./lib/iscsi/nodes/iqn.2000-01.com.synology:DiskStation4.Target-11.3ea3d7a31e/fe80::211:32ff:fe0c:d12,3260,1/default
./lib/iscsi/send_targets
./lib/iscsi/send_targets/192.168.1.87,3260
./lib/iscsi/send_targets/192.168.1.87,3260/st_config
./lib/iscsi/send_targets/192.168.1.87,3260/iqn.2000-01.com.synology:DiskStation4.Target-1.3ea3d7a31e,192.168.1.87,3260,1,default
./lib/iscsi/send_targets/192.168.1.87,3260/iqn.2000-01.com.synology:DiskStation4.Target-1.3ea3d7a31e,fe80::211:32ff:fe0c:d12,3260,1,default
./lib/iscsi/send_targets/192.168.1.87,3260/iqn.2000-01.com.synology:DiskStation4.Target-11.3ea3d7a31e,192.168.1.87,3260,1,default
./lib/iscsi/send_targets/192.168.1.87,3260/iqn.2000-01.com.synology:DiskStation4.Target-11.3ea3d7a31e,fe80::211:32ff:fe0c:d12,3260,1,default

Manually removing the IPv6 entries from the filesystem fixed the problem; reboot works now.

I'm curious why iscsi is adding IPv6 information when I have even disabled IPv6 on the initiator host (I don't know how to disable it on the Synology NAS).

I'm missing at least a configuration switch in iscsid.conf to disable IPv6.

In the iscsiadm man page, I found the --portal_type option, which looks like it can force IPv4 - but it doesn't work on CentOS 7:

iscsiadm -m discovery -t sendtargets -p 192.168.1.87:3260 --portal_type=ipv4

iscsiadm: discovery mode: option '-A' is not allowed/supported

iscsiadm -m node -T iqn.2000-01.com.synology:DiskStation4.Target-1.3ea3d7a31e -p 192.168.1.87 -A ipv4 --login

iscsiadm: node mode: option '-A' is not allowed/supported

iscsiadm -m node -T iqn.2000-01.com.synology:DiskStation4.Target-1.3ea3d7a31e -p 192.168.1.87 --portal_type=ipv4 --login

iscsiadm: node mode: option '-A' is not allowed/supported

I'm asking here because I'm unsure whether this is a CentOS or an upstream issue.

Man page sucks

As pointed out recently by a contributor, we all know the iscsi man page (and interactive help information) is not exceedingly clear.

A rewrite, perhaps by a real doc writer, might be very helpful.

IPC qtask write failed: Broken pipe

Hi,

Each time when I reboot the open-iscsi (2.0-876) service I get error about IPC qtask write failed:

May 28 12:59:12 [daemon.warning] Connection1:0 to [target: sdd, portal: 15.15.15.196,3261] through [iface: eth1] is operational now
May 28 12:59:12 [daemon.warning] Connection2:0 to [target: sde, portal: 15.15.15.196,3261] through [iface: eth1] is operational now
May 28 12:59:12 [daemon.err] IPC qtask write failed: Broken pipe
May 28 12:59:12 [daemon.err] IPC qtask write failed: Broken pipe
May 28 12:59:12 [daemon.warning] Connection3:0 to [target: sdf, portal: 15.15.15.196,3261] through [iface: eth1] is operational now
May 28 12:59:12 [daemon.warning] Connection4:0 to [target: sdg, portal: 15.15.15.196,3261] through [iface: eth1] is operational now
May 28 12:59:12 [daemon.warning] Connection5:0 to [target: sdb, portal: 15.15.15.196,3261] through [iface: eth1] is operational now
May 28 12:59:12 [daemon.warning] Connection6:0 to [target: sdc, portal: 15.15.15.196,3261] through [iface: eth1] is operational now
May 28 12:59:12 [daemon.err] IPC qtask write failed: Broken pipe
May 28 12:59:12 [daemon.err] IPC qtask write failed: Broken pipe
May 28 12:59:12 [daemon.err] IPC qtask write failed: Broken pipe
May 28 12:59:12 [daemon.err] IPC qtask write failed: Broken pipe

It seems that the initiator works fine, but why does such an error go to the system logs even though the disks are visible correctly (is the severity of the log message too high?):

Debug:

iscsid: Connection15:0 to [target: sdf, portal: 15.15.15.196,3261] through [iface: eth1] is operational now
iscsid: thread 55c11d774ca0 done
iscsid: exec thread 55c11d81d9e0 callback
iscsid: in krecv_pdu_begin
iscsid: recv PDU began, pdu handle 0x55c11d81da60
iscsid: in kread 0 48 0x55c11d8169d0 0x55c11d81da60
iscsid: read 48 bytes of PDU header
iscsid: read 48 PDU header bytes, opcode 0x23, dlength 0, data 0x55c11d816a40, max 8192
iscsid: in krecv_pdu_end
iscsid: recv PDU finished for pdu handle 0x0x55c11d81da60
iscsid: put ev context 0x55c11d81d9e0
iscsid: login response status 0000
iscsid: sending login PDU with current stage 1, next stage 3, transit 0x80, isid 0x00023d000011 exp_statsn 1834574272
iscsid: >    HeaderDigest=None
iscsid: >    DataDigest=None
iscsid: >    DefaultTime2Wait=0
iscsid: >    DefaultTime2Retain=0
iscsid: >    IFMarker=No
iscsid: >    OFMarker=No
iscsid: >    ErrorRecoveryLevel=0
iscsid: >    InitialR2T=No
iscsid: >    ImmediateData=Yes
iscsid: >    MaxBurstLength=16776192
iscsid: >    FirstBurstLength=262144
iscsid: >    MaxOutstandingR2T=1
iscsid: >    MaxConnections=1
iscsid: >    DataPDUInOrder=Yes
iscsid: >    DataSequenceInOrder=Yes
iscsid: >    MaxRecvDataSegmentLength=262144
iscsid: in ksend_pdu_begin
iscsid: send PDU began for hdr 48 bytes and data 312 bytes
iscsid: in kwritev
iscsid: wrote 48 bytes of PDU header
iscsid: in kwritev
iscsid: wrote 312 bytes of PDU data
iscsid: in ksend_pdu_end
iscsid: in __kipc_call
iscsid: in kwritev
iscsid: scanning host13 completed
iscsid: mgmt_ipc_write_rsp: rsp to fd 6
iscsid: pid 16424 caught signal 13
**iscsid: IPC qtask write failed: Broken pipe**
iscsid: set queue depth for 13:0:0:0 to 32
iscsid: scanning host14 completed
iscsid: mgmt_ipc_write_rsp: rsp to fd 8
iscsid: pid 16425 caught signal 13
**iscsid: IPC qtask write failed: Broken pipe**
iscsid: set queue depth for 14:0:0:0 to 32
iscsid: in nlpayload_read
iscsid: expecting event 19, got 101, handling...
iscsid: in ctldev_handle
iscsid: in nl_read
iscsid: ctldev_handle got event type 101
iscsid: get ev context 0x55c11d6b8830
iscsid: message real length is 184 bytes, recv_handle 0x55c11d6b8878
iscsid: in nlpayload_read
iscsid: sched conn context 0x55c11d6b8830 event 1, tmo 0
iscsid: thread 0x55c11d6b8830 schedule: delay 0 state 3
iscsid: in nlpayload_read
iscsid: in nlpayload_read
iscsid: send PDU finished for conn 17:0
iscsid: thread 55c11d81d9e0 done

iscsi redirect - misleading error messages "login authentication failure"

It has been observed with various iSCSI targets that make use of the redirect feature and CHAP authentication that,
while establishing a session with the target virtual IP and being redirected to the actual node IP, the daemon logs:
Login authentication failed with target....
Kernel reported iSCSI connection 14:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (1)

This is followed by a successful authentication and an established connection.
Although this can be considered a cosmetic issue and expected behaviour, it is reported as an error in the log, and it is not possible to tell which node IP it is connecting to.
Is there any plan to better integrate the iSCSI redirect behaviour with authentication?

command "iscsiadm --mode session -P 3" segfaults

Hi, I'm running Fedora 28 as the client, connecting to a CentOS 7 target. iscsiadm version 6.2.0.874 generates a segfault on the command above, whether the integer is 1, 2, or 3. I can't tell if this is the issue mentioned in your release notes.
Kernel is 4.16.6-302

When adding --debug 4 I get this output:

iscsiadm --mode session --print 3 -d 4
iscsiadm: Max file limits 1024 4096
iscsiadm: sysfs_attr_get_value: open '/module/scsi_transport_iscsi'/'version'
iSCSI Transport Class version 2.0-870
version 6.2.0.874
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_session/session1'/'targetname'
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_session/session1'/'username'
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_session/session1'/'password'
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_session/session1'/'username_in'
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_session/session1'/'password_in'
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_session/session1'/'recovery_tmo'
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_session/session1'/'lu_reset_tmo'
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_session/session1'/'tgt_reset_tmo'
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_session/session1'/'abort_tmo'
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_session/session1'/'tpgt'
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_connection/connection1:0'/'persistent_address'
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_connection/connection1:0'/'address'
iscsiadm: Could not read attr address on path /class/iscsi_connection/connection1:0
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_connection/connection1:0'/'persistent_port'
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_connection/connection1:0'/'port'
iscsiadm: Could not read attr port on path /class/iscsi_connection/connection1:0
iscsiadm: sysfs_device_get: open '/class/iscsi_session/session1'
iscsiadm: sysfs_resolve_link: path link '/class/iscsi_session/session1' points to '../../devices/platform/host6/session1/iscsi_session/session1'
iscsiadm: sysfs_resolve_link: base '/class/iscsi_session/session1', tail 'devices/platform/host6/session1/iscsi_session/session1', back 2
iscsiadm: sysfs_resolve_link: after moving back ''

  • There is more output, but I am not sure whether it is helpful.

iscsiadm

Hi,
I am using iscsiadm to create many connections.
I can see that ISID fields a and b contain the driver ID, ISID field c holds a value in the range 0-0xFF, but ISID field d is always 0.
So if I create more than 256 logins on the same session, new logins get the same ISID, since only field c changes and it is only 1 byte.
Can you please tell me whether this is a misconfiguration? From the code it seems the ID should be written into fields c and d together, so why is it always in the range 0-0xFF?
Thanks, Roy.
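For reference, RFC 3720 splits the 6-byte ISID into fields T/A (byte 0), B (bytes 1-2), C (byte 3) and D (bytes 4-5). A small sketch (hypothetical, not the iscsiadm allocation code) shows why updating only the single-byte C field repeats after 256 logins unless the counter is also carried into D:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* ISID bytes: [0]=T/A, [1..2]=B, [3]=C, [4..5]=D (RFC 3720 layout) */
        uint8_t isid[6] = { 0x00, 0x02, 0x3d, 0x00, 0x00, 0x00 };
        unsigned n;

        for (n = 250; n < 260; n++) {
            isid[3] = n & 0xff;              /* only field C is updated...           */
            /* isid[4] = (n >> 8) & 0xff;       ...unless the overflow goes into D   */
            printf("login %3u -> ISID ...%02x:%02x%02x\n",
                   n, isid[3], isid[4], isid[5]);
        }
        return 0;    /* with D left at zero, login 256 reuses the ISID of login 0 */
    }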

What is the purpose of __idbm_rec_read?

It seems to allocate an info object, populate it with values from out_rec and the portal file, and then just free it.

Perhaps the intention was the opposite: to read the portal into info and then populate out_rec from it?

Also, the idbm locking doesn't seem to be necessary.
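To paraphrase the report (with hypothetical types and names, not the actual open-iscsi source), the pattern being questioned looks roughly like this: a record is allocated, populated, and freed without anything ever being copied back to the caller:

    #include <stdlib.h>
    #include <string.h>

    struct rec { char target[224]; char portal[64]; };   /* hypothetical record type */

    static int rec_read_as_reported(struct rec *out_rec, const char *portal)
    {
        struct rec *info = calloc(1, sizeof(*info));
        if (!info)
            return -1;

        memcpy(info, out_rec, sizeof(*info));                     /* fill from out_rec */
        strncpy(info->portal, portal, sizeof(info->portal) - 1);  /* ...and the portal */

        free(info);   /* freed without copying anything back into out_rec, */
        return 0;     /* which is what the reporter suggests is backwards  */
    }

    int main(void)
    {
        struct rec r = { "iqn.2003-01.example:disk0", "" };
        return rec_read_as_reported(&r, "192.0.2.1:3260");
    }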

When writing data, a series of read requests is issued.

When I launch a write request, I find that the target receives a series of read requests, then the write request, and then the read requests again. Why?
The sequence appears to match the read requests issued when the initiator logs in to the target.

./configure script missing (referenced in README)

The README file instructs users to run a ./configure script to set up the build.
No configure script is present in either the release or the git repo itself.

A configure script should be added to the build.

Update FSF addresses

Update the FSF addresses in utils/iscsi_discovery and COPYING. The address can just be left out of the COPYING file, as per updated FSF standards, and can be updated in the iscsi_discovery script to Free Software Foundation, Inc., 51 Franklin St, Boston, MA 02110 USA

Cannot set MaxRecvDataSegmentLength to max.

In iscsi_copy_operational_params, conn->max_recv_dlength is set to _padding(conn_conf->MaxRecvDataSegmentLength).

_padding rounds the value up. If the configured MaxRecvDataSegmentLength is set to the protocol maximum (2^24 - 1 = 16777215), _padding changes it to 16777216, which causes the range check to fail and the value to be reset to the default minimum (262144).

The usable maximum is really 16777212, since 16777212 & 0x03 == 0, so it passes through _padding unchanged.
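A small arithmetic sketch (assuming _padding rounds its argument up to the next multiple of 4, as described above; not copied from the source tree) shows both cases:

    #include <stdio.h>

    #define PADDING(x)  (((x) + 3u) & ~3u)   /* round up to the next 4-byte multiple */

    int main(void)
    {
        printf("%u -> %u\n", 16777215u, PADDING(16777215u)); /* 2^24-1 rounds up to     */
                                                             /* 16777216: range check   */
                                                             /* fails, default restored */
        printf("%u -> %u\n", 16777212u, PADDING(16777212u)); /* already aligned, kept   */
        return 0;
    }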

iscsiadm sometimes does not show an attached device (iscsiadm -m session), but the device really exists and is attached

Hi colleagues.

I have some trouble with iscsiadm: it sometimes creates the LUN but does not show the attached disk. My device really exists.

Login:

>sudo -s iscsiadm --mode node --portal 192.168.1.86 --targetname iqn.2010-06.com.nutanix:vg-restore-99079f45-f62d-4df5-8955-d3f6ffa27879 --login 2>&1

Logging in to [iface: default, target: iqn.2010-06.com.nutanix:vg-restore-99079f45-f62d-4df5-8955-d3f6ffa27879, portal: 192.168.1.86,3260] (multiple)
Login to [iface: default, target: iqn.2010-06.com.nutanix:vg-restore-99079f45-f62d-4df5-8955-d3f6ffa27879, portal: 192.168.1.86,3260] successful.

Check result:

>sudo -s iscsiadm -m session -P 3 2>&1

Target: iqn.2010-06.com.nutanix:vg-backup-3943a3b9-3745-4dd7-ba0e-72f25b4a417b
    Current Portal: 192.168.1.86:3260,1
    Persistent Portal: 192.168.1.86:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:5bb3c3d5f39
        Iface IPaddress: 192.168.0.43
        Iface HWaddress: <empty>
        Iface Netdev: <empty>
        SID: 10
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 120
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username: <empty>
        password: ********
        username_in: <empty>
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 1048576
        FirstBurstLength: 262144
        MaxBurstLength: 16776192
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 12	State: running
        scsi12 Channel 00 Id 0 Lun: 0
            Attached scsi disk sdc		State: running
Target: iqn.2010-06.com.nutanix:vg-restore-99079f45-f62d-4df5-8955-d3f6ffa27879
    Current Portal: 192.168.1.86:3260,1
    Persistent Portal: 192.168.1.86:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:5bb3c3d5f39
        Iface IPaddress: 192.168.0.43
        Iface HWaddress: <empty>
        Iface Netdev: <empty>
        SID: 40
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 120
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username: <empty>
        password: ********
        username_in: <empty>
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 1048576
        FirstBurstLength: 262144
        MaxBurstLength: 16776192
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 42	State: running
        scsi42 Channel 00 Id 0 Lun: 0
Target: iqn.2010-06.com.nutanix:vg-backup-2c392880-0708-4911-86ed-16dfd4ff47f6
    Current Portal: 192.168.1.86:3260,1
    Persistent Portal: 192.168.1.86:3260,1
        **********
        Interface:
        **********
        Iface Name: default
        Iface Transport: tcp
        Iface Initiatorname: iqn.1993-08.org.debian:01:5bb3c3d5f39
        Iface IPaddress: 192.168.0.43
        Iface HWaddress: <empty>
        Iface Netdev: <empty>
        SID: 6
        iSCSI Connection State: LOGGED IN
        iSCSI Session State: LOGGED_IN
        Internal iscsid Session State: NO CHANGE
        *********
        Timeouts:
        *********
        Recovery Timeout: 120
        Target Reset Timeout: 30
        LUN Reset Timeout: 30
        Abort Timeout: 15
        *****
        CHAP:
        *****
        username: <empty>
        password: ********
        username_in: <empty>
        password_in: ********
        ************************
        Negotiated iSCSI params:
        ************************
        HeaderDigest: None
        DataDigest: None
        MaxRecvDataSegmentLength: 262144
        MaxXmitDataSegmentLength: 1048576
        FirstBurstLength: 262144
        MaxBurstLength: 16776192
        ImmediateData: Yes
        InitialR2T: Yes
        MaxOutstandingR2T: 1
        ************************
        Attached SCSI devices:
        ************************
        Host Number: 8	State: running
        scsi8 Channel 00 Id 0 Lun: 0
            Attached scsi disk sdb		State: running

There is no line after "scsi42 Channel 00 Id 0 Lun: 0":

Attached scsi disk sdd		State: running

But the device really exists:

> ll /dev/disk/by-path/

Result:

lrwxrwxrwx 1 root root   9 сер 22 15:18 ip-192.168.1.86:3260-iscsi-iqn.2010-06.com.nutanix:vg-backup-2c392880-0708-4911-86ed-16dfd4ff47f6-lun-0 -> ../../sdb
lrwxrwxrwx 1 root root   9 сер 23 11:52 ip-192.168.1.86:3260-iscsi-iqn.2010-06.com.nutanix:vg-backup-3943a3b9-3745-4dd7-ba0e-72f25b4a417b-lun-0 -> ../../sdc
lrwxrwxrwx 1 root root   9 сер 23 19:02 ip-192.168.1.86:3260-iscsi-iqn.2010-06.com.nutanix:vg-restore-99079f45-f62d-4df5-8955-d3f6ffa27879-lun-0 -> ../../sdd
lrwxrwxrwx 1 root root   9 сер 22 14:17 pci-0000:00:10.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root  10 сер 22 14:17 pci-0000:00:10.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root  10 сер 22 14:17 pci-0000:00:10.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root  10 сер 22 14:17 pci-0000:00:10.0-scsi-0:0:0:0-part5 -> ../../sda5

As we can see, my target/LUN is connected to sdd.

If I do a full re-login, everything is OK:

> sudo iscsiadm -m node --logout
> ll /dev/disk/by-path/

lrwxrwxrwx 1 root root   9 сер 22 14:17 pci-0000:00:10.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root  10 сер 22 14:17 pci-0000:00:10.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root  10 сер 22 14:17 pci-0000:00:10.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root  10 сер 22 14:17 pci-0000:00:10.0-scsi-0:0:0:0-part5 -> ../../sda5

> sudo -s iscsiadm --mode node --portal 192.168.1.86 --targetname iqn.2010-06.com.nutanix:vg-restore-99079f45-f62d-4df5-8955-d3f6ffa27879 --login 2>&1

Logging in to [iface: default, target: iqn.2010-06.com.nutanix:vg-restore-99079f45-f62d-4df5-8955-d3f6ffa27879, portal: 192.168.1.86,3260] (multiple)
Login to [iface: default, target: iqn.2010-06.com.nutanix:vg-restore-99079f45-f62d-4df5-8955-d3f6ffa27879, portal: 192.168.1.86,3260] successful.

Result:

> ll /dev/disk/by-path/

lrwxrwxrwx 1 root root   9 сер 24 14:38 ip-192.168.1.86:3260-iscsi-iqn.2010-06.com.nutanix:vg-restore-99079f45-f62d-4df5-8955-d3f6ffa27879-lun-0 -> ../../sdb
lrwxrwxrwx 1 root root   9 сер 22 14:17 pci-0000:00:10.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root  10 сер 22 14:17 pci-0000:00:10.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root  10 сер 22 14:17 pci-0000:00:10.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root  10 сер 22 14:17 pci-0000:00:10.0-scsi-0:0:0:0-part5 -> ../../sda5

> sudo -s iscsiadm -m session -P 3 2>&1

Target: iqn.2010-06.com.nutanix:vg-restore-99079f45-f62d-4df5-8955-d3f6ffa27879
Current Portal: 192.168.1.86:3260,1
Persistent Portal: 192.168.1.86:3260,1
	**********
	Interface:
	**********
	Iface Name: default
	Iface Transport: tcp
	Iface Initiatorname: iqn.1993-08.org.debian:01:5bb3c3d5f39
	Iface IPaddress: 192.168.0.43
	Iface HWaddress: <empty>
	Iface Netdev: <empty>
	SID: 41
	iSCSI Connection State: LOGGED IN
	iSCSI Session State: LOGGED_IN
	Internal iscsid Session State: NO CHANGE
	*********
	Timeouts:
	*********
	Recovery Timeout: 120
	Target Reset Timeout: 30
	LUN Reset Timeout: 30
	Abort Timeout: 15
	*****
	CHAP:
	*****
	username: <empty>
	password: ********
	username_in: <empty>
	password_in: ********
	************************
	Negotiated iSCSI params:
	************************
	HeaderDigest: None
	DataDigest: None
	MaxRecvDataSegmentLength: 262144
	MaxXmitDataSegmentLength: 1048576
	FirstBurstLength: 262144
	MaxBurstLength: 16776192
	ImmediateData: Yes
	InitialR2T: Yes
	MaxOutstandingR2T: 1
	************************
	Attached SCSI devices:
	************************
	Host Number: 43	State: running
	scsi43 Channel 00 Id 0 Lun: 0
		Attached scsi disk sdb		State: running

After re-login the line with the device name is present:

Attached scsi disk sdb		State: running
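As a side note, when a session is logged in but the disk line is missing, forcing a SCSI rescan of the logged-in sessions usually makes the missed LUN appear without a full re-login (assuming the target exposed the LUN after the initial scan), for example:

	iscsiadm -m session -R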

iscsiadm cannot connect to a target

On a fresh installation of Mageia 6, iscsiadm cannot connect to a target that is known to be reachable from a different system. The ping command works and there is no firewall on the Linux client. The target is configured without authentication. What could be the problem?

ping icebox_iscsi_1
PING icebox_iscsi_1 (192.168.10.10) 56(84) bytes of data.
64 bytes from icebox_iscsi_1 (192.168.10.10): icmp_seq=1 ttl=64 time=0.027 ms
64 bytes from icebox_iscsi_1 (192.168.10.10): icmp_seq=2 ttl=64 time=0.013 ms
64 bytes from icebox_iscsi_1 (192.168.10.10): icmp_seq=3 ttl=64 time=0.011 ms
^C
--- icebox_iscsi_1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2080ms
rtt min/avg/max/mdev = 0.011/0.017/0.027/0.007 ms


iscsiadm -m discovery -t st -p icebox_iscsi_1 -d5
iscsiadm: Max file limits 1024 4096

iscsiadm: Matched transport tcp

iscsiadm: sysfs_attr_get_value: open '/class/iscsi_transport/tcp'/'handle'

iscsiadm: sysfs_attr_get_value: new uncached attribute
'/sys/class/iscsi_transport/tcp/handle'

iscsiadm: sysfs_attr_get_value: add to cache
'/sys/class/iscsi_transport/tcp/handle'

iscsiadm: sysfs_attr_get_value: cache '/sys/class/iscsi_transport/tcp/handle'
with attribute value '18446744072643412000'

iscsiadm: sysfs_attr_get_value: open '/class/iscsi_transport/tcp'/'caps'

iscsiadm: sysfs_attr_get_value: new uncached attribute
'/sys/class/iscsi_transport/tcp/caps'

iscsiadm: sysfs_attr_get_value: add to cache
'/sys/class/iscsi_transport/tcp/caps'

iscsiadm: sysfs_attr_get_value: cache '/sys/class/iscsi_transport/tcp/caps'
with attribute value '0x39'

iscsiadm: starting sendtargets discovery, address icebox_iscsi_1:3260, 
iscsiadm: sysfs_attr_get_value: open '/class/iscsi_transport/tcp'/'handle'

iscsiadm: sysfs_attr_get_value: found in cache
'/class/iscsi_transport/tcp/handle'

iscsiadm: sysfs_attr_get_value: open '/class/iscsi_transport/tcp'/'caps'

iscsiadm: sysfs_attr_get_value: found in cache
'/class/iscsi_transport/tcp/caps'

iscsiadm: sendtargets discovery to icebox_iscsi_1:3260 using isid
0x00023d000000
iscsiadm: resolved icebox_iscsi_1 to 192.168.10.10
iscsiadm: discovery timeouts: login 15, reopen_cnt 6, auth 45.
iscsiadm: Binding session -1 to lan_iscsi_1
iscsiadm: connecting to 192.168.10.10:3260
iscsiadm: sysfs_attr_get_value: open '/class/net/lan_iscsi_1'/'iflink'

iscsiadm: sysfs_attr_get_value: new uncached attribute
'/sys/class/net/lan_iscsi_1/iflink'

iscsiadm: sysfs_attr_get_value: add to cache
'/sys/class/net/lan_iscsi_1/iflink'

iscsiadm: sysfs_attr_get_value: cache '/sys/class/net/lan_iscsi_1/iflink' with
attribute value '4'

iscsiadm: No priority for lan_iscsi_1
iscsiadm: connect to 192.168.10.10 timed out
iscsiadm: socket 3 connect timed out
iscsiadm: discovery session to 192.168.10.10:3260 sleeping for 1 seconds
before next login attempt
iscsiadm: Binding session -1 to lan_iscsi_1
iscsiadm: connecting to 192.168.10.10:3260
iscsiadm: sysfs_attr_get_value: open '/class/net/lan_iscsi_1'/'iflink'

iscsiadm: sysfs_attr_get_value: found in cache '/class/net/lan_iscsi_1/iflink'

iscsiadm: No priority for lan_iscsi_1
iscsiadm: connect to 192.168.10.10 timed out
iscsiadm: socket 4 connect timed out
iscsiadm: discovery session to 192.168.10.10:3260 sleeping for 1 seconds
before next login attempt
iscsiadm: Binding session -1 to lan_iscsi_1
iscsiadm: connecting to 192.168.10.10:3260
iscsiadm: sysfs_attr_get_value: open '/class/net/lan_iscsi_1'/'iflink'

iscsiadm: sysfs_attr_get_value: found in cache '/class/net/lan_iscsi_1/iflink'

iscsiadm: No priority for lan_iscsi_1
iscsiadm: connect to 192.168.10.10 timed out
iscsiadm: socket 5 connect timed out
iscsiadm: discovery session to 192.168.10.10:3260 sleeping for 1 seconds
before next login attempt
iscsiadm: Binding session -1 to lan_iscsi_1
iscsiadm: connecting to 192.168.10.10:3260
iscsiadm: sysfs_attr_get_value: open '/class/net/lan_iscsi_1'/'iflink'

iscsiadm: sysfs_attr_get_value: found in cache '/class/net/lan_iscsi_1/iflink'

iscsiadm: No priority for lan_iscsi_1
iscsiadm: connect to 192.168.10.10 timed out
iscsiadm: socket 6 connect timed out
iscsiadm: discovery session to 192.168.10.10:3260 sleeping for 1 seconds
before next login attempt
iscsiadm: Binding session -1 to lan_iscsi_1
iscsiadm: connecting to 192.168.10.10:3260
iscsiadm: sysfs_attr_get_value: open '/class/net/lan_iscsi_1'/'iflink'

iscsiadm: sysfs_attr_get_value: found in cache '/class/net/lan_iscsi_1/iflink'

iscsiadm: No priority for lan_iscsi_1
iscsiadm: connect to 192.168.10.10 timed out
iscsiadm: socket 7 connect timed out
iscsiadm: discovery session to 192.168.10.10:3260 sleeping for 1 seconds
before next login attempt
iscsiadm: Binding session -1 to lan_iscsi_1
iscsiadm: connecting to 192.168.10.10:3260
iscsiadm: sysfs_attr_get_value: open '/class/net/lan_iscsi_1'/'iflink'

iscsiadm: sysfs_attr_get_value: found in cache '/class/net/lan_iscsi_1/iflink'

iscsiadm: No priority for lan_iscsi_1
iscsiadm: connect to 192.168.10.10 timed out
iscsiadm: socket 8 connect timed out
iscsiadm: connection login retries (reopen_max) 5 exceeded
iscsiadm: No portals found
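Worth noting: a successful ping only proves ICMP reachability, while the log above shows plain TCP connects to port 3260 timing out, so something between the hosts (routing for the bound interface lan_iscsi_1, a filter on the target side, or the target not listening on that portal) is dropping TCP. A minimal connect test (a sketch, not part of iscsiadm) can confirm this independently of the iSCSI layer:

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in sa;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(3260);                           /* default iSCSI port        */
        inet_pton(AF_INET, "192.168.10.10", &sa.sin_addr);   /* target from the log above */

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
            printf("TCP 3260 reachable\n");
        else
            perror("connect");        /* ETIMEDOUT matches the behaviour seen above */
        close(fd);
        return 0;
    }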

Installation problem on Ubuntu 17.10

Hi,

I cannot make the installation work through the repository. The OS is Ubuntu 17.10.
For some unknown reason, the installer does not copy the template config files.
Any help is appreciated.

pczekalski@n317-pc01:~$ sudo apt-get install open-iscsi
[sudo] password for pczekalski:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libisns0
The following NEW packages will be installed:
libisns0 open-iscsi
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/392 kB of archives.
After this operation, 2419 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Preconfiguring packages ...
Selecting previously unselected package libisns0:amd64.
(Reading database ... 240358 files and directories currently installed.)
Preparing to unpack .../libisns0_0.97-2_amd64.deb ...
Unpacking libisns0:amd64 (0.97-2) ...
Selecting previously unselected package open-iscsi.
Preparing to unpack .../open-iscsi_2.0.874-4ubuntu3_amd64.deb ...
Unpacking open-iscsi (2.0.874-4ubuntu3) ...
Processing triggers for ureadahead (0.100.0-20) ...
Setting up libisns0:amd64 (0.97-2) ...
Processing triggers for initramfs-tools (0.125ubuntu12) ...
update-initramfs: Generating /boot/initrd.img-4.13.0-16-generic
cp: cannot stat '/etc/iscsi/initiatorname.iscsi': No such file or directory
Setting up open-iscsi (2.0.874-4ubuntu3) ...
Job for iscsid.service failed because the control process exited with error code.
See "systemctl status iscsid.service" and "journalctl -xe" for details.
invoke-rc.d: initscript iscsid, action "restart" failed.
● iscsid.service - iSCSI initiator daemon (iscsid)
Loaded: loaded (/lib/systemd/system/iscsid.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2017-11-22 15:46:40 CET; 6ms ago
Docs: man:iscsid(8)
Process: 10191 ExecStartPre=/lib/open-iscsi/startup-checks.sh (code=exited, status=1/FAILURE)

lis 22 15:46:40 n317-pc01 systemd[1]: Starting iSCSI initiator daemon (iscsid)...
lis 22 15:46:40 n317-pc01 startup-checks.sh[10191]: Error: configuration file /etc/iscsi/iscsid.conf is missing!
lis 22 15:46:40 n317-pc01 startup-checks.sh[10191]: The iSCSI driver has not been correctly installed and cannot start.
lis 22 15:46:40 n317-pc01 systemd[1]: iscsid.service: Control process exited, code=exited status=1
lis 22 15:46:40 n317-pc01 systemd[1]: Failed to start iSCSI initiator daemon (iscsid).
lis 22 15:46:40 n317-pc01 systemd[1]: iscsid.service: Unit entered failed state.
lis 22 15:46:40 n317-pc01 systemd[1]: iscsid.service: Failed with result 'exit-code'.
dpkg: error processing package open-iscsi (--configure):
subprocess installed post-installation script returned error exit status 1
Processing triggers for libc-bin (2.26-0ubuntu2) ...
Processing triggers for systemd (234-2ubuntu12.1) ...
Processing triggers for man-db (2.7.6.1-2) ...
Processing triggers for initramfs-tools (0.125ubuntu12) ...
update-initramfs: Generating /boot/initrd.img-4.13.0-16-generic
cp: cannot stat '/etc/iscsi/initiatorname.iscsi': No such file or directory
Errors were encountered while processing:
open-iscsi
E: Sub-process /usr/bin/dpkg returned an error code (1)

pczekalski@n317-pc01:~$ systemctl status iscsid.service
● iscsid.service - iSCSI initiator daemon (iscsid)
Loaded: loaded (/lib/systemd/system/iscsid.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2017-11-22 15:46:40 CET; 5min ago
Docs: man:iscsid(8)

lis 22 15:46:40 n317-pc01 systemd[1]: Starting iSCSI initiator daemon (iscsid)...
lis 22 15:46:40 n317-pc01 startup-checks.sh[10191]: Error: configuration file /etc/iscsi/iscsid.conf is missing!
lis 22 15:46:40 n317-pc01 startup-checks.sh[10191]: The iSCSI driver has not been correctly installed and cannot start.
lis 22 15:46:40 n317-pc01 systemd[1]: iscsid.service: Control process exited, code=exited status=1
lis 22 15:46:40 n317-pc01 systemd[1]: Failed to start iSCSI initiator daemon (iscsid).
lis 22 15:46:40 n317-pc01 systemd[1]: iscsid.service: Unit entered failed state.
lis 22 15:46:40 n317-pc01 systemd[1]: iscsid.service: Failed with result 'exit-code'.
