OpenSource For You

Using the iSCSI Protocol to Provide Remote Block Storage

The iSCSI protocol functions in a familiar client-server configuration. Client systems configure initiator software to send SCSI commands to remote server storage targets. The accessed iSCSI target appears on the client system as local, unformatted SCSI block devices.

By: Kshitij Upadhyay. The author is RHCSA and RHCE certified, and loves to write about new technologies. He can be reached at upadhyayk04@gmail.com.

The Internet Small Computer System Interface (iSCSI) is a TCP/IP based protocol for emulating a SCSI high-performance local storage bus over IP networks, providing data transfer and management to remote block storage devices. As a storage area network (SAN) protocol, iSCSI extends SANs across local and wide area networks (LANs, WANs and the Internet), providing location-independent data storage and retrieval with distributed servers and arrays.

The SCSI protocol suite provides the Command Descriptor Block (CDB) command set over a device bus communication protocol. The original SCSI topology used physical cabling with a 20 metre limitation for all devices per channel (cabled bus). Devices used unique numeric target IDs (0-7 or 0-15, with dual channels). Physical SCSI disks were made obsolete by the popular implementation of fibre channel (FC), which retained the SCSI CDB command set but replaced the disk and bus communication with protocols for longer and faster optical cabling.

The iSCSI protocols also retain the CDB command set, performing bus communication between iSCSI systems that is encapsulated over standard TCP/IP. iSCSI servers emulate SCSI devices using files, logical volumes, or disks of any type as the underlying storage (backstore) presented as targets. An iSCSI service is typically implemented in software above either an operating system TCP/IP stack or a TCP offload engine (TOE)—a specialised Ethernet network interface card (NIC) that includes the TCP/IP network layers to increase performance. iSCSI can also be implemented in hardware as a host bus adaptor (HBA) for better performance.

Enterprise-grade SANs require dedicated traffic infrastructure. FC's independent optical cabling and switches guarantee isolation. iSCSI should also be implemented on cabling that is independent of standard LAN traffic, since performance can fall due to bandwidth congestion on shared networks. Both Ethernet and FC now offer copper and optical cabling options, allowing network consolidation combined with traffic classification.

Storage area network traffic is typically unencrypted, since physical server-to-storage cabling is normally enclosed within secure data centres. For WAN security, iSCSI and fibre channel over Ethernet (FCoE) can utilise Internet Protocol Security (IPSec), which is a protocol suite for securing IP network traffic. Select networking hardware (preferred NICs, TOEs and HBAs) can provide encryption. iSCSI offers the Challenge-Handshake Authentication Protocol (CHAP) user names and passwords as an authentication mechanism to limit connectivity between chosen initiators and targets.

Until recently, iSCSI was not considered an enterprise-grade storage option, primarily due to the use of slower 100Mbps and 1000Mbps Ethernet, compared to 4Gbps FC optical infrastructure. With current 10Gbps and 40Gbps Ethernet, bandwidth availability is now similar for both.

The use of iSCSI extends a SAN beyond the limit of local cabling, facilitating storage consolidation in local or remote data centres. Because iSCSI structures are logical, new storage allocations are made using only software configurations, without the need for additional cable or physical disks. iSCSI also simplifies data replication, migration and disaster recovery using multiple remote data centres.

iSCSI fundamentals

The iSCSI protocol functions in a familiar client-server configuration. Client systems configure initiator software to send SCSI commands to remote server storage targets. The accessed iSCSI target appears on the client system as local, unformatted SCSI block devices, identical to devices connected with SCSI cabling, FC direct attached, or FC switched fabric.

iSCSI component terminology

1. Initiator: This is an iSCSI client, typically available as software but also implemented as iSCSI HBAs. Initiators must be given unique names.

2. Target: This is an iSCSI storage resource, configured for connection from an iSCSI server. Targets must be given unique names. A target provides one or more numbered block devices called logical units. An iSCSI server can provide many targets concurrently.

3. ACL: An Access Control List (entry) is an access restriction using the node IQN (commonly the iSCSI initiator name) to validate access permissions for an initiator.

4. Discovery: This involves querying a target server to list configured targets. Using targets requires additional access steps.

5. IQN: An iSCSI Qualified Name, this is a worldwide unique name used to identify both initiators and targets, in the following mandated naming format:

iqn.YYYY-MM.com.reversed.domain[:optional_string]

IQN signifies that this name uses a domain as its identifier. YYYY-MM is the first month in which the domain name was owned. com.reversed.domain is the reversed domain name of the organisation creating the iSCSI name. optional_string is an optional, colon-prefixed string assigned by the domain owner as desired, while remaining unique worldwide. It may include colons to separate organisation boundaries.
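For example, an organisation that registered example.com in January 2018 might use names such as the following for its target and client nodes (these are the same example IQNs used later in this article):

iqn.2018-01.com.example:server

iqn.2018-01.com.example:client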

6. Login: This authentica­tes to a target or to a LUN to begin the client’s block device use.

7. LUN: A Logical Unit Number, it numbers block devices attached to and available through a target. One or more LUNs may be attached to a single target although, typically, a target provides only one LUN.

8. Node: This is an iSCSI initiator or iSCSI target identified by its IQN.

9. Portal: This is an IP address and port on a target or initiator used to establish connections. Some iSCSI implementations use the terms portal and node interchangeably.

10. TPG or Target Portal Group: This is the set of interface IP addresses and TCP ports to which a specific iSCSI target will listen. Target configuration can be added to the TPG to coordinate settings for multiple LUNs.

iSCSI uses ACLs to perform LUN masking, managing the accessibility of appropriate targets and LUNs to initiators. Access to targets may also be limited with CHAP authentication. iSCSI ACLs are similar to FC’s use of a device’s worldwide numbers (WWNs) for soft zoning management restrictions, although FC switch-level compulsory port restrictions (hard zoning) have no comparable iSCSI mechanism. Ethernet VLANs could provide similar isolation security.

Unlike local block devices, iSCSI network-accessed block devices are discoverable from many remote initiators. Typical local file systems (ext4, XFS) do not support concurrent multi-system mounting, which can result in significant file system corruption. Clustered file systems resolve multiple-system access through the Global File System 2 (GFS2), which is designed to provide distributed file locking and concurrent multi-node file system mounting. An attached iSCSI block device appears as a local SCSI block device (sdx) for use underneath a local file system, swap space or a raw database installation.

An overview of the iSCSI target

In original SCSI protocol terminology, a target is a single connectible storage or output device uniquely identified on a SCSI bus. In iSCSI, in which the SCSI bus is emulated across an IP network, a target can be a dedicated physical device in a network attached storage (NAS) enclosure or an iSCSI software-configured logical device on a networked storage server. Targets, like HBAs and initiators, are end points in SCSI bus communication, passing command descriptor blocks to request or provide storage transactions.

To provide access to the storage or output device, a target is configured with one or more logical unit numbers (LUNs). In iSCSI, LUNs appear as the target’s sequentially numbered disk drives, although targets typically have only one LUN. An initiator performs SCSI negotiations with a target to establish a connection to the LUN. The LUN responds as an emulated SCSI disk block device, which can be used in raw form or formatted with a client-supported file system.

Warning: Do not mount single-system file systems on more than one system at a time. iSCSI allows shared target and LUN access from multiple initiator nodes, requiring the use of a cluster-capable file system such as GFS2. Mounting standard file systems designed for local single-system access (e.g., ext3, ext4, FAT32, XFS and ZFS) from more than one system concurrently will cause file system corruption.

iSCSI provides for LUN masking by using ACLs to restrict LUN accessibility to specific initiators, except when shared access is intended. ACLs can ensure that only a designated client node can log in to a specific target. On the target server, ACLs can be set at the TPG level to secure groups of LUNs, or can be set individually per LUN.

iSCSI target configuration

Let’s go through a demonstration of target server configuration.

targetcli is both a command line utility and an interactive shell in which to create, delete and configure iSCSI target components. Target stacks are grouped into a hierarchical tree of objects, allowing easy navigation and contextual configuration. Familiar Linux commands are used in this shell. These include cd, ls, pwd and set.

The targetcli shell also supports TAB completion. An administrator can use TAB to either complete partially-typed commands or view a list of acceptable keywords at the current location in a command.
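For instance, a brief navigation sequence inside the shell looks roughly like the following (the prompt reflects the current location in the object tree):

/> ls

/> cd /backstores

/backstores> pwd

/backstores> cd /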

Configuring a target server

Install targetcli by using the following command:

# yum -y install targetcli

Run targetcli with no options to enter interactiv­e mode:

# targetcli

Create backing storage (backstores). There are several types of backing storage, as listed below:

block: A block device is defined on the server. It can be a disk drive, a disk partition, a logical volume, a multipath device, or any device file defined on the server that is of type ‘b’ (block).

fileio: This creates a file of a specified size in the file system of the server. This method is similar to using image files as storage for virtual machine disk images.

pscsi: Physical SCSI; this permits pass-through to a physical SCSI device connected to the server. This backstore type is not typically used.

ramdisk: This creates a ramdisk device of a specified size in memory on the server. This type of storage does not store data persistently. When the server is rebooted, the ramdisk definition will return when the target is instantiated, but all data will have been lost.

Note: For this demonstration, a logical volume (/dev/myvg/mylvm) has been created as backing storage, although a simple partition could also be used.

Examples include using an existing logical volume, a disk partition, and a new file of a specified size, as sketched below. The backstores are displayed as deactivated.
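As a rough sketch, assuming backstore names block1 and file1 and the file path /root/disk1.img (all illustrative choices, not requirements), the logical volume and a new 100MB file could be added as backstores like this inside targetcli:

/> /backstores/block create block1 /dev/myvg/mylvm

/> /backstores/fileio create file1 /root/disk1.img 100M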

Next, create an IQN for the target. This step will also create a default TPG underneath the IQN.

An administrator can use the create command without specifying an IQN; targetcli will then generate an IQN similar to the following:

iqn.2018-01.com.example:server

Specifying the IQN value allows the administrator to use a meaningful namespace for their IQNs.
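Using the example namespace from earlier, the target IQN might be created like this, which also creates the default TPG (tpg1) beneath it:

/> /iscsi create iqn.2018-01.com.example:server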

Next, in the TPG, create an ACL for the client node to be used later. Because the global auto_add_mapped_luns parameter is set to true (default), any existing LUNs in the TPG are mapped to each ACL as it is created.

The ACL configures the target to only accept initiator connections from a client presenting iqn.2018-01.com.example:client as its initiator IQN, also known as the initiator name.
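Continuing the same sketch, the ACL entry could be created within the default TPG as follows:

/> /iscsi/iqn.2018-01.com.example:server/tpg1/acls create iqn.2018-01.com.example:client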

Next, in the TPG, create a LUN for each existing backstore. This step also activates each backstore. Because ACLs exist for the TPG, they will automatically be assigned to each LUN created.

If n LUNs are assigned to a target, an initiator that connects to that target will receive n SCSI devices.
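For the backstores sketched earlier, the LUNs might be created along these lines:

/> /iscsi/iqn.2018-01.com.example:server/tpg1/luns create /backstores/block/block1

/> /iscsi/iqn.2018-01.com.example:server/tpg1/luns create /backstores/fileio/file1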

Now, still inside the TPG, create a portal configuration to designate the listening IP address and ports. Create a portal using the system’s public network interface. Without specifying the TCP ports to be used, the portal creation will default to the standard iSCSI port 3260.
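Assuming the server listens on the placeholder address 172.25.X.XX used elsewhere in this article, the portal could be created like this (some targetcli versions create a default 0.0.0.0:3260 portal with the TPG, which may need to be deleted first if a specific address is wanted):

/> /iscsi/iqn.2018-01.com.example:server/tpg1/portals create 172.25.X.XX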

Next, view the entire configuration, then exit targetcli, which will automatically save upon exit. The resulting persistent configuration file is stored in JavaScript Object Notation (JSON) format.
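A minimal end to the session might look like the following; on Red Hat based systems the saved configuration typically ends up in /etc/target/saveconfig.json:

/> ls

/> exit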

Now add a port exemption to the default firewall for port 3260, the standard iSCSI port, as follows:

# firewall-cmd --add-port=3260/tcp

# firewall-cmd --add-port=3260/tcp --permanent

In the final step, enable the target.service systemd unit. The target.service unit will recreate the target configuration from the JSON file at boot. If this step is skipped, any configured target will work until the machine is rebooted; after a reboot, however, no target will be offered by the server.

# systemctl enable target

Authentication

In addition to ACL node verification, password based authentication can be implemented. Authentication may be required during the iSCSI discovery phase, and it can be unidirectional or bi-directional.

CHAP authentication does not use strong encryption for the passing of credentials. While CHAP does offer an additional factor of authentication beyond a correctly configured initiator name in an ACL, it should not be considered sufficient if the security of iSCSI data is a concern. Controlling the network side of the protocol is a better way to ensure security: providing a dedicated, isolated network or VLANs to carry the iSCSI traffic is a more secure implementation of the protocol.
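If CHAP is still wanted despite these caveats, it can be set per ACL in targetcli and mirrored on the client in /etc/iscsi/iscsid.conf. The user name and password below are placeholders, not defaults. On the target:

/> /iscsi/iqn.2018-01.com.example:server/tpg1/acls/iqn.2018-01.com.example:client set auth userid=iscsiuser password=iscsipass

And on the initiator, in /etc/iscsi/iscsid.conf:

node.session.auth.authmethod = CHAP
node.session.auth.username = iscsiuser
node.session.auth.password = iscsipass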

An introduction to iSCSI initiators

In Red Hat Enterprise Linux, an iSCSI initiator is typically implemented in software and functions similarly to a hardware iSCSI HBA to access targets from a remote storage server. Using software based iSCSI initiators requires connecting to an existing Ethernet network of sufficient bandwidth to carry the expected storage traffic.

iSCSI can also be implemented using a hardware initiator that includes the required protocols in a dedicated host bus adaptor. iSCSI HBAs and TCP offload engines, which include the TCP network stack on an Ethernet NIC, move the processing of iSCSI or TCP overhead and Ethernet interrupts to hardware, easing the load on system CPUs.

Accessing iSCSI storage

1.) Prepare the client system to become an iSCSI initiator node by installing the initiator utilities, setting the unique iSCSI client name and starting the iSCSI client service. Install the iscsi-initiator-utils RPM, if not already installed, as follows:

# yum -y install iscsi-initiator-utils

Create a unique iSCSI qualified name for the client initiator by modifying the InitiatorName setting in /etc/iscsi/initiatorname.iscsi. Use the client system name as the optional string after the colon.
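For example, if the client host is named ‘client’ in the example.com namespace used earlier, the file would contain a single line like this:

InitiatorName=iqn.2018-01.com.example:client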

Enable and start the iSCSI client service, as follows:

# systemctl enable iscsi

# systemctl start iscsi

2.) Discover and log into the configured target from the iSCSI target server.

Discover and configure the iSCSI target provided by the iSCSI target server portal, as follows:

# iscsiadm -m discovery -t st -p 172.25.X.XX

Log into the presented iSCSI target, as follows:

# iscsiadm -m node -T iqn.2018-01.com.example:server -p 172.25.X.XX -l

Identify the newly available block device created by the iSCSI target login, as follows:

# lsblk

Figure 1: targetcli interactive mode shown with the ls command
Figure 2: A block backstore has been created for the logical volume /dev/myvg/mylvm
Figure 3: An IQN has been created for the target
Figure 4: An Access Control List has been created for the client
Figure 5: A LUN created for the backstore
Figure 6: A portal has been created for the listening IP address and port
Figure 7: Final configuration
Figure 8: The client IQN has been updated
Figure 9: The target has been discovered and the login has succeeded
Figure 10: Newly available target
