Hyper-Converged Infrastructure is Transforming Data Centres

OpenSource For You

Hyper-convergence is the current and potentially future course of a journey that began years ago with server and storage virtualisation, in a bid to optimise the traditional siloed approach to information technology.

Hyper-convergence is basically an emerging technology in which the server, storage and networking equipment are all combined into one software-driven appliance. Hyper-converged infrastructure (HCI) can be managed comfortably through an easy-to-use software interface. Typically, a hyper-converged platform uses standard off-the-shelf servers and storage, and a virtualisation hypervisor that helps abstract and manage these resources conveniently. A simple operational structure makes this infrastructure flexible, and easy to scale and manage.

HCI is gaining traction not just in enterprises but also with others who can benefit from heavy computing without complex management, such as professional racers! Training and racing are usually data-intensive. Racers need information about their speed, acceleration and torque in real time to understand how they are performing and how they need to proceed. One mistake could cost them not just a victory, but a life too! Racing speeds depend on the speed and reliability of the data processing and analytics that takes place behind the scenes. Last year, Formula One racing team Red Bull Racing replaced its legacy systems with HPE SimpliVity hyper-converged infrastructure, achieving 4.5 times faster performance, increased agility and lower TCO as a result.

At the outset, hyper-convergence might sound similar to all the virtualisation stuff you have read about in the past. So, let us first set out some facts about HCI before rounding up the latest updates:

1. Nutanix, a leader in HCI, states that the technology streamlines deployment, management and scaling of data centre resources by combining x86-based server and storage resources with intelligent software in a turnkey, software-defined solution.

2. HCI combines compute, storage and the network together as a single appliance. It can be scaled out or expanded using additional nodes. That is, instead of scaling up the traditional way by adding more drives, memory or CPUs to the base system, you scale out by adding more nodes. Groups of such nodes are known as clusters. Each node runs a hypervisor like Nutanix Acropolis, VMware ESXi, Microsoft Hyper-V or Kernel-based Virtual Machine (KVM), and the HCI control features run as a separate virtual machine on every node, forming a fully distributed fabric that can scale resources with the addition of new nodes.

3. Since most modern HCI are entirely software-defined, there is no dependence on proprietary hardware.

4. HCI does not need a separate team as it can be managed by any IT professional. This makes it ideal for small and medium enterprises.

5. HCI is different from the traditional server and shared-storage architectures. It also differs from the previous generation of converged infrastructure.

6. HCI is not the same as cloud computing!

Scale-out and shared-core are two of the keywords you are bound to encounter when reading about HCI. Basically, most HCI implementations involve multiple servers or nodes in a cluster, with the storage resources distributed across the nodes. This provides resilience against any component or node failure. Plus, by placing storage at the nodes, the data is closer to compute than in a traditional storage area network. So, you can actually get the most out of faster storage technologies like non-volatile memory express (NVMe) or the non-volatile dual in-line memory module (NVDIMM). The scale-out architecture also enables a company to add nodes as and when required rather than investing in everything upfront. Just add a node and connect it to the network, and you are ready to go. The resources are automatically rebalanced and ready to use. A lot of hyper-converged implementations also have a shared core, that is, the storage and virtual machines compete for the same processors and memory. This reduces wastage and optimises resource usage. However, some experts feel that in special cases, users might have to buy more equipment to run the same workloads.
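To make the scale-out idea concrete, here is a small illustrative Python sketch of a cluster that spreads data blocks across its nodes and rebalances them when a node is added. This is a toy model, not any vendor's actual placement logic; the node names and block counts are invented:

```python
# Toy model of HCI scale-out: data blocks are spread evenly across
# the nodes of a cluster, and rebalanced when a node is added.

class Cluster:
    def __init__(self):
        self.nodes = {}          # node name -> list of block ids

    def add_node(self, name):
        self.nodes[name] = []
        self._rebalance()

    def store(self, block_id):
        # Place each new block on the least-loaded node.
        target = min(self.nodes, key=lambda n: len(self.nodes[n]))
        self.nodes[target].append(block_id)

    def _rebalance(self):
        # Collect every block and redistribute round-robin, so a
        # freshly added node immediately shares the load.
        blocks = [b for blks in self.nodes.values() for b in blks]
        names = list(self.nodes)
        self.nodes = {n: [] for n in names}
        for i, b in enumerate(blocks):
            self.nodes[names[i % len(names)]].append(b)

cluster = Cluster()
cluster.add_node("node-1")
for i in range(90):
    cluster.store(i)
cluster.add_node("node-2")   # scale out: load spreads automatically
cluster.add_node("node-3")
print({n: len(blks) for n, blks in cluster.nodes.items()})
# each of the three nodes now holds 30 blocks
```

Real HCI software uses far more sophisticated placement (replication, data locality, failure domains), but the principle is the same: adding a node automatically adds usable, balanced capacity.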

It’s true that HCI is great for small enterprises as it can be used without fuss and hassle, but it can be used by really large data centres too. Leading companies have published hyper-convergence case studies in which more than a thousand HCI nodes are installed in a single data centre.

In a recent survey, research leader Forrester found that the most common workloads being run on hyper-converged systems are: databases, such as Oracle or SQL Server (cited by 50 per cent); file and print services (40 per cent); collaboration, such as Exchange or SharePoint (38 per cent); virtual desktops (34 per cent); commercial packaged software such as SAP and Oracle (33 per cent); analytics (25 per cent); and Web-facing workloads such as the LAMP stack or Web servers (17 per cent).

During a recent talk, Dell EMC’s European chief technology officer for converged and hyper-converged infrastructure, Nigel Moulton, said that “…mission-critical applications live best in converged infrastructures.” He justified his view by saying that many mission-critical applications such as ERP are architected to rely on the underlying hardware (in the storage area network) for encryption, replication and high availability. With HCI, on the other hand, many of these things come from the software stack. Instead of looking at the move to HCI as all-or-none, he suggested that organisations take a practical approach to it. If an application can work on HCI, companies should make the move as and when their existing hardware reaches end of life. The rest can remain on traditional hardware or converged systems. He also added that by the end of this year, the feature sets of traditional hardware, converged and hyper-converged systems will align, and it will be difficult to even see the differences.

How HCI differs from traditional virtualisation, converged infrastructure and cloud computing

In the beginning, there was just IT infrastructure and no virtualisation. Enterprises spent a lot on buying the best IT components for each function, and spent even more on people to implement and manage these silos of technology.

Then came virtualisation, which greatly optimised the way resources were shared and used without wastage by multiple departments or functions.

In the last decade, we have seen converged infrastructure (CI) emerging as a popular option. In CI, the server, storage and networking hardware are delivered as a single rack, and sold as a single product, after being pre-tested and validated by the vendor. The system may include software to easily manage hardware components and virtual machines (VMs), or support third-party hypervisors like VMware vSphere, Microsoft Hyper-V and KVM. As far as the customer is concerned, CI is convenient because there is just a single point of contact and a single bill. Chris Evans, a UK-based consultant in virtualisation and cloud technologies, describes it humorously in one of his blogs, where he says that in CI there is just one throat to choke!

While this single product stack approach is ideal for most enterprises, there are some who want to do it themselves. This could be because they need heavy customisation or have some perfectly serviceable legacy infrastructure that they do not want to discard. Given the required time and resources, one can also build a CI out of reference architectures. Here, the vendor gives you the reference architecture, which is something like an instruction manual that tells you which supplier platforms can work together well and in what ways. So, the vendor tests, validates and certifies the interoperability of products and platforms, giving you more than one option to choose from. Thereafter, you can procure the resources and put them together to work as CI.

Enter HCI. What makes the latest technology ‘hyper-converged’ is that it brings together a variety of technologies while remaining vendor-agnostic. Here, multiple off-the-shelf data centre components are brought together under a hypervisor. Along with the hardware, HCI also packages the server and storage virtualisation features.

The term ‘hyper’ also stems from the fact that the hypervisor is more tightly integrated into a hyper-converged infrastructure than a converged one.

In a hyper-converged infrastructure, all key data centre functions run as software on the hypervisor.

The physical form of an HCI appliance is the server or node that includes a processor, memory and storage. Each node has individual storage resources, which improves the resilience of the system. A hypervisor or a virtual machine running on the node delivers storage services seamlessly, hiding all complexity from the administrator. Much of the traffic between internal VMs also flows over software-based networking in the hypervisor. HCI systems usually come with features like data de-duplication, compression, data protection, snapshots, wide-area network optimisation, backup and disaster recovery options.
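Of the storage services mentioned above, data de-duplication is the easiest to illustrate. The idea is to identify identical blocks by a content hash and store the payload only once. The following is a minimal sketch of that principle, not the implementation used by any HCI product:

```python
import hashlib

class DedupStore:
    """Content-addressed block store: duplicate blocks share one copy."""

    def __init__(self):
        self.blocks = {}     # sha256 digest -> block bytes (stored once)
        self.refs = []       # logical write order, recorded as digests

    def write(self, data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)   # keep payload only once
        self.refs.append(digest)

    def physical_size(self):
        # Space actually consumed: unique blocks only.
        return sum(len(d) for d in self.blocks.values())

    def logical_size(self):
        # Space the client thinks it wrote.
        return sum(len(self.blocks[r]) for r in self.refs)

store = DedupStore()
for block in [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]:
    store.write(block)
print(store.logical_size(), store.physical_size())   # 16384 8192
```

Here four 4KB writes, three of them identical, consume only two blocks of physical space, which is exactly the saving de-duplication delivers at data centre scale.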

Deploying an HCI appliance is a simple plug-and-play procedure. You just add a node and start using it. There is no complex configuration involved. Scaling is also easy because you can add nodes on the go, as and when required. This makes it a good option for SMEs.

Broadly speaking, CI can be considered a hardware-focused approach, while HCI is largely or entirely software-defined. CI works better in some scenarios, while HCI works for others. Some enterprises might have a mix of traditional, converged and hyper-converged infrastructure. The common rule of thumb today is that when organisations don’t have a big IT management team and want to invest only in a phased manner, HCI is an ideal option. They can add nodes gradually, depending on their needs, and manage the infrastructure with practically no knowledge of server, storage or network management. However, they lose the option of choosing specific storage or server products and have to settle for what their vendor offers. For organisations with more critical applications that require greater control over configuration, a converged system is better. CI offers more choice and control, but users might have to purchase the software themselves, which means more time, money and effort.

Finally, even though HCI is a software-defined architecture that offers the convenience of the cloud to your enterprise infrastructure, it is not a term that can be used synonymously with cloud computing. The two are quite different. The cloud is a service: it provides infrastructure, platform and software as services to those who need them. A cloud could be private, public or hybrid, and herein starts the confusion, as many people mistakenly think that hyper-convergence and the private cloud are one and the same. Hyper-convergence is merely a technology that enables easy deployment of Infrastructure as a Service in your private cloud by addressing many operational issues and improving cost-efficiency. It is an enabler, which prepares your organisation for the cloud era, and not the cloud itself!

The bring-your-own-hardware kind

According to a recent Gartner report, software-only ‘bring-your-own-hardware’ hyper-converged systems have become significant and are increasingly competing with hyper-converged hardware appliances.

Gartner research director Jon MacArthur explained in a Web report: “Most of the core technology is starting to shift to software. If you buy a vSAN ReadyNode from Lenovo, or Cisco, or HPE, or your pick of server platform, they are all pretty much the same. Customers are evaluating vSAN and ReadyNode, not so much the hardware. Dell EMC VxRail is a popular deployment model for vSAN, but so is putting VMware on your own choice of hyper-converged hardware.”

Special needs like those of the military, and the sophistication level of users, are some of the factors that influence the decision in favour of implementing HCI as a software stack rather than a prepackaged solution.

Major players

Gartner predicts that the market for hyper-converged integrated systems (HCIS) will reach nearly US$ 5 billion (24 per cent of the overall market for integrated systems) by 2019 as the technology becomes mainstream. So, apart from specialists like Nutanix and Pivot3, we also see industry majors like Cisco, Dell and HPE entering the space with their own or acquired technologies.

Nutanix, a pioneer in this space, offers a range of hardware systems with different options for the number of nodes, memory/storage capacity, processors, etc. These can run VMware vSphere or Nutanix’s own hypervisor system called Acropolis. The company excels at supporting large HCI deployments, with clusters scaling up to 100 nodes, which are easy to use and manage.

Nutanix tempts customers with the idea of full-stack infrastructure and platform services with the promise of ‘One OS, One Click’. Prism is a comprehensive management solution, while Acropolis is a turnkey platform that combines server, storage, virtualisation and networking resources. Calm is for application automation and life cycle management in Nutanix and public clouds, while Xpress is designed for smaller IT organisations. Xi Cloud Services help extend your data centre to an integrated public cloud environment. Nutanix also has a community edition that lets you evaluate the Nutanix Enterprise Cloud Platform at no cost!

Companies like Cisco and HPE are strengthening their foothold by acquiring startups in the space.

Cisco bought Springpath, whose technology powers HyperFlex, Cisco’s fully-integrated hyper-convergence infrastructure system. HPE acquired SimpliVity, one of the major competitors of Nutanix. HPE SimpliVity products come with a strong, custom-designed platform called OmniStack, which includes a host of features like multi-site data management, global deduplication, backup, snapshots, clones, multi-site data replication, disaster recovery and WAN optimisation.

Pivot3 also finds a place amongst the most popular HCI players. Last year, the company launched Acuity, which it claims to be the first priority-aware, performance-architected HCI solution with policy-based management. Acuity’s advanced quality of service (QoS) makes it possible to simply and predictably run multiple, mixed applications on a single platform.

Dell EMC offers the VxRail and XC series of HCI solutions based on its PowerEdge servers powered by Intel processors. Last year, it released hyper-converged appliances that include Intel’s 14-nanometre Xeon SP processors, along with VMware’s vSAN and EMC’s ScaleIO. Dell EMC offers NVIDIA GPU accelerators in both VxRail and XC solutions.

Apart from the regular HCI infrastructure, some startups are also coming up with innovative solutions, which some trend-watchers describe as HCI 2.0! NetApp, for example, has an option for those who want to keep their servers and storage separate, either to share the storage with non-HCI systems or to offload certain tasks to dedicated servers. NetApp HCI uses SolidFire technology to deliver clusters with dedicated storage and compute nodes.

Companies like Pivot3, Atlantis Computing and Maxta also offer pure software HCI solutions.

AI and machine learning may create demand

It is interesting to note that Dell EMC is pushing NVIDIA GPU accelerators in its HCI solutions not just for video processing but also for running machine learning algorithms. Chad Dunn, who leads product management and marketing for Dell EMC’s HCI portfolio, explains in a media interview, “All the HCI solutions have a hypervisor and generally, in HPC, you’re going for bare-metal performance and you want to be as close to real-time operations as you possibly can. Where that could start to change is in machine learning and artificial intelligence (AI). You typically think of the Internet-of-Things intelligent edge use cases. There’s so much data being generated by the IoT that the data itself is not valuable. The insight that the data provides you is exceptionally valuable, so it makes a lot of sense not to bring that data all the way back to the core. You want to put the data analytics and decision-making as close to the devices as you can, and then take the valuable insight and leave the particularly worthless data out where it is. What becomes interesting is that the form factors that HCI can get to are relatively small. Where the machine learning piece comes in is what we expect to see and what we’re starting to see: people looking to leverage the graphical processor units in these platforms.”
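Dunn’s point about leaving the raw data at the edge and shipping only the insight is easy to demonstrate. In this illustrative Python sketch (the sensor model, sampling rate and alert threshold are all invented for the example), a day of per-second readings is reduced to a tiny summary before anything crosses the network:

```python
import json
import random
import statistics

random.seed(1)

# One day of simulated sensor samples collected at the "edge",
# one reading per second (86,400 seconds in a day).
samples = [random.gauss(70.0, 5.0) for _ in range(86_400)]

# Option 1: ship every raw sample back to the core data centre.
raw_payload = json.dumps(samples)

# Option 2: analyse at the edge and ship only the insight.
summary = {
    "count": len(samples),
    "mean": round(statistics.fmean(samples), 2),
    "stdev": round(statistics.stdev(samples), 2),
    "max": round(max(samples), 2),
    "alerts": sum(1 for s in samples if s > 85.0),  # threshold crossings
}
summary_payload = json.dumps(summary)

print(len(raw_payload), "bytes raw vs", len(summary_payload), "bytes summarised")
```

The summary is several orders of magnitude smaller than the raw stream, which is why edge analytics on small-form-factor HCI nodes makes economic sense.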

Incidentally, in November 2017, Nutanix’s president Sudheesh Nair also spoke about how AI and machine learning are becoming extremely important for customers, at the company’s .Next user conference. He explained: “If we have a customer who is prototyping an autonomous car, that car generates almost 16TB of data per day, per city. An oil rig generates around 100TB per rig in the middle of an ocean with no connectivity. There is no way you can bring this data to the cloud – you have to take the cloud to the data, to where the data is being created. But moving information from the data centre to the cloud creates manageability issues, and AI can be a better option. That’s where machine learning and artificial intelligence have a big part to play.”
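The 16TB-per-day figure makes Nair’s ‘take the cloud to the data’ argument easy to check with back-of-the-envelope arithmetic; the link speeds below are illustrative assumptions, not anything quoted in the talk:

```python
# How long does it take to upload one day's worth of data over a given link?
TB = 10**12  # terabyte in bytes (decimal convention)

daily_bytes = 16 * TB  # the autonomous-car figure quoted above

def upload_hours(num_bytes, link_bits_per_sec):
    """Hours needed to push num_bytes through a link of the given speed."""
    return num_bytes * 8 / link_bits_per_sec / 3600

for name, speed in [("100 Mbit/s", 100e6), ("1 Gbit/s", 1e9), ("10 Gbit/s", 10e9)]:
    print(f"{name}: {upload_hours(daily_bytes, speed):.1f} hours per day of data")
```

At 1 Gbit/s, one day of data takes about 35.6 hours to upload, so the link can never catch up; even at 10 Gbit/s, roughly 3.6 hours of every day would go just to moving raw data. Hence the case for processing it where it is generated.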

Companies have to work out ways to store all that information in a way that can be easily accessed, and run analytical models to deliver predictive behaviour. This surge in demand for AI and machine learning provides a real opportunity for HCI.

Five good reasons why CIOs are moving to HCI and why IT professionals should stay on top of the tech

Although hyper-convergence as a concept has been around for more than five years, it is obviously gaining a lot of momentum now, with everybody talking about it and the recent spate of acquisitions. And it comes with some real benefits, such as:

1. It is really quick and easy

2. It does not require a big team to manage

3. It reduces the cost of ownership

4. It makes it easy to launch and scale on the go

5. It improves the reliability and flexibility of the data centre

As mentioned earlier, HCI is not the panacea for all your infrastructure management problems. But if you need new infrastructure or have to replace existing systems, do ask yourself whether an HCI appliance can do the job. If it can, go for it, because it can ease your admin and cost headaches quite a bit!

By: Janani Gopalkrishnan Vikram

The author is a technically qualified freelance writer and editor based in Chennai. She can be contacted at gjanani@gmail.com.

Figure 1: The Dell EMC VxRail hyper-converged infrastructure appliance with Intel Xeon processors (Courtesy: Dell EMC)

Figure 2: Hyper-convergence combines computing, storage, networking and virtualisation into one easy-to-handle software-defined system (Courtesy: Nutanix)

Figure 3: The growth in software-based hyper-convergence has re-positioned many players in the Gartner Magic Quadrant this year (Source: Gartner)
