Surfing the Digital Wave with ADDC

The application driven data centre (ADDC) is a design whereby all the components of the data centre can communicate directly with an application layer. As a result, applications can directly control the data centre components for better performance and availability.

OpenSource For You - Admin - By: Abhradip Mukherjee, Jayasundar Sankaran and Venkatachalam Subramanian. Abhradip Mukherjee is a solutions architect at Global Infrastructure Services, Wipro Technologies. Jayasundar Sankaran is a principal architect.

Worldwide, the IT industry is undergoing a tectonic shift. To Indian IT service providers, this shift offers both new opportunities and challenges. For long, the Indian IT industry has enjoyed the privilege of being a supplier of an English-speaking, intelligent workforce that meets the global demand for IT professionals. Till now, India could leverage the people cost arbitrage between the developed and developing countries. The basic premise was that IT management would always require skilled professionals. Therefore, the operating model of the Indian IT industry has so far been headcount based.

Today, that fundamental premise has given way to automation and artificial intelligence (AI). This has resulted in more demand for automation solutions and a reduction in headcount, challenging the traditional operating model. The new solutions in demand require different skillsets. The Indian IT workforce is now struggling to meet these new skillset requirements.

Earlier, the industry's dependence on people also meant time-consuming manual labour and delays caused by manual errors. The new solutions instead offer the benefits of automation, such as speeding up IT operations by replacing people. This is similar to the time when computers started replacing mathematicians.

But just as computers replaced mathematicians yet created new jobs in the IT sector, this new wave of automation is also creating jobs for a new generation with new skillsets. In today's world, infrastructure management and process management professionals are being replaced by developers writing code for automation.

These new coding languages manage infrastructure in a radically different way. Traditionally, infrastructure was managed by the operations teams and developers never got involved. But now, the new management principles talk about managing infrastructure through automation code. This changes the role of sysadmins and developers.

Developers need to understand infrastructure operations and use these languages to control the data centre. Therefore, they can now potentially start getting into the infrastructure management space. This is a threat to the existing infrastructure operations workforce, unless they themselves skill up as infrastructure developers.

So does it mean that by learning to code, one can secure jobs in this turbulent job market? The answer is both 'Yes' and 'No'. 'Yes', because in the coming days everyone needs to be a developer.

And it's also a 'No' because in order to get into the infrastructure management space, one needs to master the new infrastructure coding languages even if one is an expert developer in other languages.

New trends in IT infrastructure

The new age infrastructure is built to be managed by code. Developers can benefit from this new architecture by controlling infrastructure from the applications layer. In this new model, an application can interact with the infrastructure and shape it the way required. It is not about designing the infrastructure with the application's requirement as the central theme (application-centric infrastructure); rather, it is about designing the infrastructure in a way that the application can drive it (application-driven infrastructure). We are not going to build infrastructure to host a group of applications but rather, we will create applications that can control various items of the infrastructure. Some of the prominent use cases involve applications being able to automatically recover from infrastructure failures. Also, scaling to achieve the best performance-to-cost ratio is achieved by embedding business logic in the application code that drives infrastructure consumption.

In today's competitive world, these benefits can provide a winning edge to a business against its competitors. While IT leaders such as Google, Amazon, Facebook and Apple are already operating in these ways, traditional enterprises are only starting to think and move into these areas. They are embarking on a journey to reach the ADDC nirvana state by taking small steps towards it. Each of these small steps is transforming traditional enterprise data centres, block by block, to be more compatible with an application-driven data centre design.

The building blocks of ADDC

For applications to be able to control anything, they require the data centre components to be available with an application programming interface (API). So the first thing enterprises need to do with their infrastructure is to convert every component's control interface into an API. Also, traditional programming languages sometimes lack the right structural support for controlling these infrastructure components; hence, new programming languages with infrastructure domain-specific structural support need to be used. These languages should be able to understand infrastructure components such as the CPU, disk, memory, file, package, service, etc. If we are tasked with transforming a traditional data centre into an ADDC, we first have to understand the building blocks of the latter, which we then have to put in place, one by one. Let's take a look at how each traditional management building block of an enterprise data centre maps into an ADDC set-up.

1. The Bare-metal-as-a-Service API

Bare metal physical hardware has traditionally been managed through vendor-specific firmware interfaces. Nowadays, open standard firmware interfaces have emerged, which allow one to write code in any application coding language and interact through an HTTP REST API. One example of an open standard Bare-metal-as-a-Service API is Redfish. Most of the popular hardware vendors now allow their firmware to be controlled through a Redfish API implementation. Redfish-compatible hardware can be controlled directly by a general application over HTTP, without necessarily going through any operating system interpreted layer.
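As a small sketch of what this looks like, the snippet below builds the URL path and JSON body for the standard `ComputerSystem.Reset` power action defined by the Redfish specification. The BMC hostname and credentials in the commented-out call are placeholders, not real endpoints.

```python
import json

REDFISH_BASE = "/redfish/v1"  # the standard Redfish service root

def power_action_request(system_id, reset_type="GracefulRestart"):
    """Build the URL path and JSON body for a Redfish ComputerSystem.Reset action."""
    path = f"{REDFISH_BASE}/Systems/{system_id}/Actions/ComputerSystem.Reset"
    body = json.dumps({"ResetType": reset_type})
    return path, body

# Sending the request needs the third-party `requests` package and a reachable
# BMC; the host and credentials below are placeholders.
# import requests
# path, body = power_action_request("1")
# requests.post("https://bmc.example.com" + path, data=body,
#               auth=("admin", "password"), verify=False)
```

Because the whole exchange is plain HTTP plus JSON, any application language can drive the hardware this way, with no OS-level agent in between.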

2. The software defined networking API

A traditional network layer uses specialised appliances such as switches, firewalls and load balancers. Such appliances have built-in control and data planes. Now, the network layer is transforming into a software defined solution, which separates the control plane from the data plane.

In software defined solutions for networking, there are mainly two approaches. The first is called a software defined network (SDN). Here, a central software control layer installed on a computer controls several of the network's physical hardware components to provide specific network functionality such as routing, firewalling and load balancing. The second is the virtual network function (VNF). Here, the approach is to replace hardware components on a real network with software solutions on a virtual network. The process of creating virtual network functions is called network function virtualisation (NFV). The software control layers are exposed as APIs, which can be used by software/application code. This provides the ability to control networking components from the application layer.
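To make the idea concrete, here is a minimal sketch of composing a firewall rule for a hypothetical SDN controller's REST API. The field names are purely illustrative; real controllers (OpenDaylight, Ryu and others) each define their own schema.

```python
def firewall_rule(src_cidr, dst_port, action="ALLOW"):
    """Compose a JSON-ready rule for a hypothetical SDN controller API.

    The field names here are illustrative only; every real controller
    defines its own request schema.
    """
    return {
        "match": {"source": src_cidr, "dst_port": dst_port},
        "action": action,
    }

# An application could POST this dict as JSON to the controller's rules
# endpoint, reshaping the network's behaviour at runtime.
rule = firewall_rule("10.0.0.0/24", 443)
```

The point is the shape of the interaction: the application expresses intent as data and the control plane pushes it down to the data plane, rather than an operator configuring each appliance by hand.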

3. The software defined storage API

Traditional storage systems such as SAN and NAS have now transformed into software defined storage solutions, which can offer both block and file system capabilities. These software defined storage solutions are purpose-built operating systems that can make a standard physical server exhibit the properties of a storage device. We can format a standard x86 server with one of these specialised operating systems to create a storage solution out of a general-purpose server. Depending on the software, the storage solution can exhibit the behaviour of SAN block storage, NAS file storage or even object storage. Ceph, for example, can create all three types of storage out of the same server. In these cases, the disk devices attached to the servers operate as the storage blocks. The disks can be standard direct attached storage (like the one in your laptop) or a number of disks daisy-chained to the server system.

The software defined solutions can be extended and controlled through the software libraries and APIs that they expose. Typically available over a REST API and running on UNIX/Linux based operating systems, these are easy to integrate with other orchestration solutions. For example, OpenStack exposes Cinder for block storage, Manila for file storage and Swift for object storage. An application can either run management commands on the natively supported CLI shell or call the native/orchestration APIs.
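As one illustration, an application that already holds an authentication token can upload an object to Swift with a plain HTTP PUT. The helper below only assembles the URL and headers; the storage URL and token are assumptions that would come from a prior authentication step.

```python
def swift_put_object(storage_url, container, obj, token):
    """URL and headers for uploading an object via Swift's REST API
    (PUT {storage_url}/{container}/{object})."""
    url = f"{storage_url}/{container}/{obj}"
    headers = {"X-Auth-Token": token}
    return url, headers

# With the third-party `requests` package, the upload itself would be:
# import requests
# url, headers = swift_put_object("https://swift.example.com/v1/AUTH_demo",
#                                 "backups", "db.tar.gz", token="...")
# requests.put(url, headers=headers, data=open("db.tar.gz", "rb"))
```

Storage operations thus become ordinary HTTP calls that any application code can make, which is exactly what an ADDC needs from its storage layer.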

4. The Compute-as-a-Service API

Compute-as-a-Service is the ability to serve bare metal, virtual machines or containers on an on-demand basis over API endpoints or through self-service portals. It is built mostly on top of virtualisation or containerisation platforms. A Compute-as-a-Service model may or may not be a cloud solution. Hypervisors that can be managed through a self-service portal and API endpoint can be considered Compute-as-a-Service. For example, a VMware vSphere implementation with a self-service portal and API endpoint is such a solution. Similarly, on the containerisation front, container orchestration tools like Kubernetes are not a cloud solution but a good example of Compute-as-a-Service with an API and self-service GUI. Typical cloud solutions that allow one to provision virtual machines (like AWS EC2), containers (like AWS ECS) and in some cases even physical machines (like SoftLayer) are examples of compute power provided as a service.
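With AWS EC2, for instance, application code can compute the parameters of a provisioning call at runtime. The sketch below builds a parameter dict whose keys match boto3's `run_instances` keyword arguments; the AMI ID is a placeholder, and this is a fragment, not a complete provisioning workflow.

```python
def ec2_run_params(ami_id, instance_type="t2.micro", count=1):
    """Parameter dict matching boto3's ec2.run_instances() keyword arguments."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": count,
        "MaxCount": count,
    }

# With the boto3 package installed and AWS credentials configured:
# import boto3
# boto3.client("ec2").run_instances(**ec2_run_params("ami-0123456789abcdef0"))
```

An application embedding logic like this can scale its own compute footprint up or down in response to load, which is the essence of compute as a service.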

5. The infrastructure orchestration API

Infrastructure orchestration is the Infrastructure-as-a-Service cloud solution that can offer infrastructure components on demand, as a service, over an API. Infrastructure orchestration is not only about VM provisioning; it is about orchestrating various infrastructure components across storage, networking and compute in an optimised manner. This helps in provisioning and de-provisioning components as per the demands of the business. Cloud solutions typically offer control over such orchestration through some programming language used to configure the orchestration logic. For example, AWS provides CloudFormation and OpenStack provides the Heat language for this. Nowadays, however, with multi-cloud strategies, new languages have come up for hybrid cloud orchestration. Terraform and Cloudify are two prime examples.
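As a sketch of orchestration over an API, OpenStack Heat accepts stack-creation requests as JSON over REST (POST /v1/{tenant_id}/stacks). The minimal template below declares a single 1 GB Cinder volume; the endpoint and tenant details that a real call needs are left out as assumptions.

```python
def heat_create_stack_body(name, template):
    """Request body for Heat's stack-create call (POST /v1/{tenant_id}/stacks)."""
    return {"stack_name": name, "template": template}

# A minimal Heat template: one 1 GB Cinder block-storage volume.
minimal_template = {
    "heat_template_version": "2016-04-08",
    "resources": {
        "data_volume": {
            "type": "OS::Cinder::Volume",
            "properties": {"size": 1},
        }
    },
}

body = heat_create_stack_body("demo-stack", minimal_template)
```

The same template could grow to describe networks, servers and storage together, which is what distinguishes orchestration from plain VM provisioning.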

6. Configuration management as code and API

In IT, change and configuration management are the traditional ITIL processes that track every change in the configuration of systems. Typically, the process is reactive, whereby a change is performed on the systems and then recorded in a central configuration management database.

However, nowadays, changes are first recorded in a database as per the need. These changes are then applied to systems using automation tools, to bring them to the desired state as recorded in the database. This new-age model is known as desired state configuration management. CFEngine, Puppet, Chef, etc, are well known configuration management tools in the market.

These tools configure the target systems as per the desired configuration mentioned in text files. Since this is done by writing text files with a syntax and some logical constructs, these files are known as infrastructure configuration code. Using such code to manage infrastructure is known as 'configuration management as code' or 'infrastructure as code'. These tools typically expose an API endpoint to create the desired configuration on target servers.
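The desired-state idea itself can be sketched in a few lines: compare the recorded desired state with the system's current state and emit only the actions needed to converge. This is a toy model of the principle, not how Puppet or Chef are implemented internally.

```python
def converge(current, desired):
    """Return the (key, value) changes needed to move `current` to `desired`.

    Keys already at their desired value produce no action, which is the
    idempotence property that desired-state tools rely on.
    """
    return [(key, want) for key, want in desired.items()
            if current.get(key) != want]

# First run: nginx is absent, so one corrective action is emitted.
actions = converge({"nginx": "absent"}, {"nginx": "installed"})
# A repeat run against the converged state emits nothing: no drift, no work.
repeat = converge({"nginx": "installed"}, {"nginx": "installed"})
```

Idempotence is the key design choice: the same code can be applied repeatedly and only ever changes what has drifted from the recorded desired state.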

7. The Platform-as-a-Service API

Platform-as-a-Service (PaaS) solutions provide platform components such as an application runtime, middleware or a database, on demand. These solutions hide the complexity of the infrastructure at the backend. At the frontend, they expose a simple GUI or API to provision, de-provision or scale the platforms on which applications run.

So instead of saying, "I need a Linux server for installing MySQL," the developer just has to say, "I need a MySQL instance." In a PaaS solution, deploying a database means the solution will deploy a new VM, install the required software, open up firewall ports and also provision the other dependencies needed to access the database. It does all of this at the backend, abstracting the complexities from the developers, who only need to ask for the database instance to get its details. Hence, developers can focus on building applications without worrying about the underlying complexities.
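A request of that shape might look like the following, aimed at a hypothetical PaaS provisioning endpoint; the service names, plan names and field names are all illustrative assumptions, since every PaaS defines its own API.

```python
def provision_request(service, plan="small", name=None):
    """Body for a hypothetical PaaS provisioning API: the developer asks for
    an instance of a managed service, not for the servers underneath it."""
    return {
        "service": service,
        "plan": plan,
        "instance_name": name or f"{service}-instance",
    }

# "I need a MySQL instance" becomes one declarative request:
req = provision_request("mysql", plan="small", name="orders-db")
```

Everything below that request (the VM, the package install, the firewall ports) is the platform's problem, which is precisely the abstraction PaaS sells.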

The APIs of a PaaS solution can be used by the application to scale itself. Most PaaS solutions are based on containers, which can run on any VM, be it within the data centre or in the public cloud. So PaaS solutions can stretch across private and public cloud environments.

Therefore, in the case of PaaS, cloudbursting is much easier than in IaaS. (Cloudbursting is the process of scaling out from private cloud to public cloud resources as per the load/demand on the application.)

8. DevOps orchestration and the API

DevOps can be defined in two ways:

1. It is a new name for automating the release management process, which makes developers and the operations team work together.

2. The operations team manages operations by writing code, just like developers.

In DevOps, application release management and the application's resource demand management are of primary importance.

Traditional workflow tools like Jenkins have taken on a new role as orchestrators of all data centre components in an automated workflow. In this age of DevOps and ADDC, product vendors release Jenkins plugins for their products as soon as they release the products or their updates. This enables all of these ADDC components and their API endpoints to be orchestrated through a tool like Jenkins.

Apart from Jenkins, open source configuration management automation tools like Puppet and Chef can also easily integrate with the other layers of ADDC to create a set of programmatic orchestration jobs exposed over API calls. These jobs can then be invoked via the API to orchestrate the data centre through all the other API layers.
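For example, Jenkins jobs can be triggered remotely over its REST API. The helper below only builds the trigger URL; the Jenkins base URL and job name are placeholders, and a real call would also need credentials and, typically, a CSRF crumb.

```python
from urllib.parse import urlencode

def jenkins_trigger_url(base_url, job, params=None):
    """URL that triggers a Jenkins job via its remote-access API:
    /job/{name}/build, or /job/{name}/buildWithParameters when
    parameters are supplied."""
    if params:
        return f"{base_url}/job/{job}/buildWithParameters?{urlencode(params)}"
    return f"{base_url}/job/{job}/build"

# POSTing to this URL (with authentication) would start the job:
url = jenkins_trigger_url("https://jenkins.example.com", "deploy-app",
                          {"ENV": "staging"})
```

Chained this way, one HTTP call into Jenkins can fan out into calls against every other API layer described above.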

ADDC is therefore an approach to combining various independent technology solutions to create API endpoints for everything in a data centre. The benefit is the programmability of the entire data centre. Theoretically, a program can be written to do all the jobs that are done by people in a traditional data centre. That is the automation nirvana: free of human errors and fully optimised, because it removes the human element from data centre management completely. However, such a holistic app has not arrived yet. Various new age tools are coming up every day to take advantage of these APIs for specific use cases. So, once the data centre has been converted into an ADDC, how much can be automated is left only to the developers' imagination: there is nothing that cannot be done.

Coming back to what we started with: the move towards architectures like ADDC is surely going to impact jobs, as humans are replaced by automation. However, there is the opportunity to become automation experts instead of sticking to manual labour profiles. Hence, to meet the new automation job role demands of the market, one needs to specialise in one or more of these ADDC building blocks to stay relevant. Hopefully, this article will help you build a mind map of all the domains you can try to skill up in.

Figure 1: Application-centric infrastructure
Figure 2: Application-driven infrastructure
Figure 3: Traditional data centre mapped to ADDC
Figure 4: A traditional network vs a software defined network
