Using the iSCSI Protocol to Provide Remote Block Storage


OpenSource For You | By: Kshitij Upadhyay. The author is RHCSA and RHCE certified, and loves to write about new technologies. He can be reached at upad[email protected]

The Internet Small Computer System Interface (iSCSI) is a TCP/IP-based protocol for emulating a SCSI high-performance local storage bus over IP networks, providing data transfer and management to remote block storage devices. As a storage area network (SAN) protocol, iSCSI extends SANs across local and wide area networks (LANs, WANs and the Internet), providing location-independent data storage and retrieval with distributed servers and arrays.

The SCSI protocol suite provides the Command Descriptor Block (CDB) command set over a device bus communication protocol. The original SCSI topology used physical cabling with a 20 metre limitation for all devices per channel (cabled bus). Devices used unique numeric target IDs (0-7 or 0-15, with dual channels). Physical SCSI disks were made obsolete by the popular implementation of fibre channel (FC), which retained the SCSI CDB command set but replaced the disk and bus communication with protocols for longer and faster optical cabling.

The iSCSI protocols also retain the CDB command set, performing bus communication between iSCSI systems that is encapsulated over standard TCP/IP. iSCSI servers emulate SCSI devices using files, logical volumes, or disks of any type as the underlying storage (backstore) presented as targets. An iSCSI service is typically implemented in software above either an operating system TCP/IP stack or a TCP offload engine (TOE), a specialised Ethernet network interface card (NIC) that includes the TCP/IP network layers to increase performance. iSCSI can also be implemented in hardware as a host bus adaptor (HBA) for better performance.

Enterprise-grade SANs require dedicated traffic infrastructure. FC's independent optical cabling and switches guarantee isolation. iSCSI should also be implemented on cabling that is independent of standard LAN traffic, since performance can fall due to bandwidth congestion on shared networks. Both Ethernet and FC now offer copper and optical cabling options, allowing network consolidation combined with traffic classification.

Storage area network traffic is typically unencrypted, since physical server-to-storage cabling is normally enclosed within secure data centres. For WAN security, iSCSI and fibre channel over Ethernet (FCoE) can utilise Internet Protocol Security (IPSec), a protocol suite for securing IP network traffic. Select networking hardware (preferred NICs, TOEs and HBAs) can provide encryption. iSCSI offers Challenge-Handshake Authentication Protocol (CHAP) user names and passwords as an authentication mechanism to limit connectivity between chosen initiators and targets.

Until recently, iSCSI was not considered an enterprise-grade storage option, primarily due to the use of slower 100Mbps and 1000Mbps Ethernet, compared to 4Gbps optical FC infrastructure. With current 10Gbps and 40Gbps Ethernet, bandwidth availability is now similar for both.

The use of iSCSI extends a SAN beyond the limits of local cabling, facilitating storage consolidation in local or remote data centres. Because iSCSI structures are logical, new storage allocations are made using only software configuration, without the need for additional cabling or physical disks. iSCSI also simplifies data replication, migration and disaster recovery using multiple remote data centres.

iSCSI fundamentals

The iSCSI protocol functions in a familiar client-server configuration. Client systems configure initiator software to send SCSI commands to remote server storage targets. The accessed iSCSI target appears on the client system as local, unformatted SCSI block devices, identical to devices connected with SCSI cabling, FC direct attached, or FC switched fabric.

iSCSI component terminology

1. Initiator: This is an iSCSI client, typically available as software but also implemented as iSCSI HBAs. Initiators must be given unique names.

2. Target: This is an iSCSI storage resource, configured for connection from an iSCSI server. Targets must be given unique names. A target provides one or more numbered block devices called logical units. An iSCSI server can provide many targets concurrently.

3. ACL: An Access Control List (entry) is an access restriction using the node IQN (commonly the iSCSI initiator name) to validate access permissions for an initiator.

4. Discovery: This involves querying a target server to list configured targets. Using targets requires additional access steps.

5. IQN: An iSCSI Qualified Name, this is a worldwide unique name used to identify both initiators and targets, in the following mandated naming format: iqn.YYYY-MM.com.reversed_domain[:optional_string]

iqn signifies that this name will use a domain as its identifier. YYYY-MM is the first month in which the domain name was owned. com.reversed_domain is the reversed domain name of the organisation creating the iSCSI name. optional_string is an optional, colon-prefixed string assigned by the domain owner as desired while remaining worldwide unique. It may include colons to separate organisation boundaries.

6. Login: This authenticates to a target or to a LUN to begin the client's block device use.

7. LUN: A Logical Unit Number; it numbers block devices attached to and available through a target. One or more LUNs may be attached to a single target although, typically, a target provides only one LUN.

8. Node: This is an iSCSI initiator or iSCSI target, identified by its IQN.

9. Portal: This is an IP address and port on a target or initiator used to establish connections. Some iSCSI implementations use the terms portal and node interchangeably.

10. TPG or Target Portal Group: This is the set of interface IP addresses and TCP ports to which a specific iSCSI target will listen. Target configuration can be added to the TPG to coordinate settings for multiple LUNs.

iSCSI uses ACLs to perform LUN masking, managing the accessibility of appropriate targets and LUNs to initiators. Access to targets may also be limited with CHAP authentication. iSCSI ACLs are similar to FC's use of a device's worldwide numbers (WWNs) for soft zoning management restrictions, although FC's switch-level compulsory port restrictions (hard zoning) have no comparable iSCSI mechanism. Ethernet VLANs could provide similar isolation security.
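The mandated IQN format described in the terminology above can be illustrated with a quick shell check. The sample IQN and the regular expression below are illustrative assumptions, not part of any iSCSI tooling; the pattern only loosely mirrors the iqn.YYYY-MM.reversed_domain[:optional_string] shape.

```shell
# A hypothetical IQN: domain example.com, first owned January 2018,
# with an optional string naming the resource.
iqn="iqn.2018-01.com.example:storage.disk1"

# Loose sanity check of the iqn.YYYY-MM.reversed_domain[:optional_string] shape.
echo "$iqn" | grep -Eq '^iqn\.[0-9]{4}-(0[1-9]|1[0-2])\.[a-z0-9.-]+(:.+)?$' \
  && echo "valid format" || echo "invalid format"
```

Running this prints "valid format"; a string such as "not-an-iqn" would print "invalid format".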

Unlike local block devices, iSCSI network-accessed block devices are discoverable from many remote initiators. Typical local file systems (ext4, XFS) do not support concurrent multi-system mounting, which can result in significant file system corruption. Clustered file systems resolve multiple-system access by the use of the Global File System 2 (GFS2), which is designed to provide distributed file locking and concurrent multi-node file system mounting. An attached iSCSI block device appears as a local SCSI block device (sdx) for use underneath a local file system, swap space or raw database installation.

An overview of the iSCSI target

In original SCSI protocol terminology, a target is a single connectible storage or output device uniquely identified on a SCSI bus. In iSCSI, in which the SCSI bus is emulated across an IP network, a target can be a dedicated physical device in a network attached storage (NAS) enclosure or an iSCSI software-configured logical device on a networked storage server. Targets, like HBAs and initiators, are endpoints in SCSI bus communication, passing command descriptor blocks to request or provide storage transactions.

To provide access to the storage or output device, a target is configured with one or more logical unit numbers (LUNs). In iSCSI, LUNs appear as the target's sequentially numbered disk drives, although targets typically have only one LUN. An initiator performs SCSI negotiations with a target to establish a connection to the LUN, which responds as an emulated SCSI disk block device that can be used in raw form or formatted with a client-supported file system.

Warning: Do not mount single-system file systems on more than one system at a time. iSCSI allows shared target and LUN access from multiple initiator nodes, requiring the use of a cluster-capable file system such as GFS2. Mounting standard file systems designed for local single-system access (e.g., ext3, ext4, FAT32, XFS and ZFS) from more than one system concurrently will cause file system corruption.

iSCSI provides for LUN masking by using ACLs to restrict LUN accessibility to specific initiators, except when shared access is intended. ACLs can ensure that only a designated client node can log in to a specific target. On the target server, ACLs can be set at the TPG level to secure groups of LUNs, or can be set individually per LUN.

iSCSI target configuration

Let’s go through a demonstration of target server configuration.

targetcli is both a command line utility and an interactive shell in which to create, delete and configure iSCSI target components. Target stacks are grouped into a hierarchical tree of objects, allowing easy navigation and contextual configuration. Familiar Linux commands are used in this shell, including cd, ls, pwd and set.

The targetcli shell also supports TAB completion. An administrator can use TAB to either complete partially typed commands or view a list of acceptable keywords at the current location in a command.

Configuring a target server

Install targetcli by using the following command:

# yum -y install targetcli

Run targetcli with no options to enter interactive mode:

# targetcli

Create backing storage (backstores). There are several types of backing storage, as listed below:

1. block: A block device defined on the server. It can be a disk drive, disk partition, logical volume, multipath device, or any device file defined on the server that is of type b (block).

2. fileio: This creates a file of a specified size in the file system of the server. This method is similar to using image files as storage for virtual machine disk images.

3. pscsi: Physical SCSI; this permits pass-through to a physical SCSI device connected to the server. This backstore type is not typically used.

4. ramdisk: This creates a ramdisk device of a specified size, in memory on the server. This type of storage does not store data persistently. When the server is rebooted, the ramdisk definition will return when the target is instantiated, but all data will have been lost.

Note: For this demonstration, a logical volume (/dev/myvg/mylvm) has been created, although a simple partition can also be used.

Examples include using an existing logical volume, a disk partition, and a new file of a specified size. The backstores are displayed as deactivated.
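As a sketch of this step, backstores can be created inside the targetcli shell roughly as follows. The backstore names block1 and file1, the file path and the size are illustrative assumptions; only /dev/myvg/mylvm comes from this demonstration.

```shell
# Inside the targetcli interactive shell:
/> /backstores/block create block1 /dev/myvg/mylvm
/> /backstores/fileio create file1 /root/diskfile1.img 100M
/> ls /backstores
```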

Next, create an IQN for the target. This step will also create a default TPG underneath the IQN.

An administrator can use create without specifying the IQN, in which case targetcli will generate an IQN similar to the following: iqn.2018-01.com.example:server

Specifying the IQN value provides the administrator with the ability to use a meaningful namespace for their IQNs.
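This step can be sketched in the targetcli shell as follows, using the example target IQN referred to throughout this demonstration:

```shell
/> /iscsi create iqn.2018-01.com.example:server
/> ls /iscsi
```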

Next, in the TPG, create an ACL for the client node to be used later. Because the global auto_add_mapped_luns parameter is set to true (the default), any existing LUNs in the TPG are mapped to each ACL as it is created.

The ACL configures the target to only accept initiator connections from a client presenting iqn.2018-01.com.example:client as its initiator IQN, also known as the initiator name.
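A sketch of the ACL creation, using the example target and client IQNs from this demonstration:

```shell
/> /iscsi/iqn.2018-01.com.example:server/tpg1/acls create iqn.2018-01.com.example:client
```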

Next, in the TPG, create a LUN for each existing backstore. This step also activates each backstore. Because ACLs exist for the TPG, they will automatically be assigned to each LUN created.

When n LUNs are assigned to a target, an initiator connecting to that target will receive n SCSI devices.
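Continuing the sketch, LUNs can be created from the backstores; the backstore names block1 and file1 are the assumed names from the earlier backstore step:

```shell
/> /iscsi/iqn.2018-01.com.example:server/tpg1/luns create /backstores/block/block1
/> /iscsi/iqn.2018-01.com.example:server/tpg1/luns create /backstores/fileio/file1
```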

Now, still inside the TPG, create a portal configuration to designate the listening IP address and ports. Create a portal using the system's public network interface. Without specifying the TCP port to be used, portal creation will default to the standard iSCSI port 3260.
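A sketch of the portal creation; the IP address is the placeholder used later in this article, and the port defaults to 3260 when omitted:

```shell
# On some targetcli versions a default 0.0.0.0:3260 portal already
# exists and may need to be deleted before adding a specific one.
/> /iscsi/iqn.2018-01.com.example:server/tpg1/portals create 172.25.X.XX
```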

Next, view the entire configuration, then exit targetcli, which will automatically save the configuration upon exit. The resulting persistent configuration file is stored in JavaScript Object Notation (JSON) format.
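Viewing and saving can be sketched as follows; the saveconfig.json path reflects standard targetcli behaviour:

```shell
/> ls
/> exit
# The persistent configuration is written in JSON, typically to:
# /etc/target/saveconfig.json
```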

Now add a port exemption to the default firewall for port 3260, the standard iSCSI port, as follows:

# firewall-cmd --add-port=3260/tcp

# firewall-cmd --add-port=3260/tcp --permanent

In the final step, enable the target.service systemd unit. target.service will recreate the target configuration from the JSON file at boot. If this step is skipped, any configured target will work until the machine is rebooted; after a reboot, however, no targets will be offered by the server.

# systemctl enable target


In addition to ACL node verification, password-based authentication can be implemented. Authentication may be required during the iSCSI discovery phase, and it can be unidirectional or bidirectional.

CHAP authentication does not use strong encryption for the passing of credentials. While CHAP does offer an additional factor of authentication besides a correctly configured initiator name in an ACL, it should not be considered secure enough if the security of iSCSI data is a concern. Controlling the network side of the protocol is a better method to ensure security: providing a dedicated, isolated network or VLANs to pass the iSCSI traffic will be a more secure implementation of the protocol.
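If CHAP is nevertheless wanted, it can be sketched in the targetcli shell roughly as follows; the user name and password are placeholders, while the target and client IQNs are the example names used in this demonstration:

```shell
# Require authentication on the TPG, then set per-ACL CHAP credentials.
/> /iscsi/iqn.2018-01.com.example:server/tpg1 set attribute authentication=1
/> /iscsi/iqn.2018-01.com.example:server/tpg1/acls/iqn.2018-01.com.example:client set auth userid=iscsiuser password=iscsipass
```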

An introduction to iSCSI initiators

In Red Hat Enterprise Linux, an iSCSI initiator is typically implemented in software, and functions similarly to a hardware iSCSI HBA to access targets from a remote storage server. Using software-based iSCSI initiators requires connecting to an existing Ethernet network of sufficient bandwidth to carry the expected storage traffic.

iSCSI can also be implemented using a hardware initiator that includes the required protocols in a dedicated host bus adaptor. iSCSI HBAs and TCP offload engines, which include the TCP network stack on an Ethernet NIC, move the processing of iSCSI or TCP overhead and Ethernet interrupts to hardware, easing the load on system CPUs.

Accessing iSCSI storage

1. Prepare the client system to become an iSCSI initiator node by installing the initiator utilities, setting the unique iSCSI client name and starting the iSCSI client service. Install the iscsi-initiator-utils RPM, if not already installed, as follows:

# yum -y install iscsi-initiator-utils

Create a unique iSCSI qualified name for the client initiator by modifying the InitiatorName setting in /etc/iscsi/initiatorname.iscsi. Use the client system's name as the optional string after the colon.
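The resulting file can be sketched as follows; the client IQN shown matches the ACL created on the target earlier in this demonstration:

```shell
# /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2018-01.com.example:client
```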

Enable and start the iSCSI client service, as follows:

# systemctl enable iscsi
# systemctl start iscsi

2. Discover and log into the configured target from the iSCSI target server.

Discover and configure the iSCSI target provided by the iSCSI target server portal, as follows:

# iscsiadm -m discovery -t st -p 172.25.X.XX

Log into the presented iSCSI target, as follows:

# iscsiadm -m node -T iqn.2018-01.com.example:server -p 172.25.X.XX -l

Identify the newly available block devices created by the iSCSI target login, as follows:

# lsblk
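Once the new device is visible, it can be formatted and mounted like any local disk. A minimal sketch, assuming the device appeared as /dev/sda (the actual device name will vary):

```shell
# Format with a single-system file system and mount it
# (do NOT mount this from more than one initiator at a time):
mkfs.xfs /dev/sda
mkdir -p /mnt/iscsi
mount /dev/sda /mnt/iscsi
```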

Figure 1: targetcli interactive mode shown with the ls command
Figure 2: A block backstore has been created for the logical volume /dev/myvg/mylvm
Figure 3: An IQN has been created for the target
Figure 4: An Access Control List has been created for the client
Figure 5: LUN created for backstore
Figure 6: A portal has been created for the listening IP address and port
Figure 7: Final configuration
Figure 8: Client IQN has been updated
Figure 9: The target has been discovered and the login has succeeded
Figure 10: Newly available target
