beyond the blame game


Automation and AI need not necessarily be viewed as forces obviating the need for human expertise, opines Rajesh Maurya, Fortinet.

In an article in The Guardian, Stephen Hawking wrote, “The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative, or supervisory roles remaining.” Here, the author holds a slightly contrarian view, explaining how AI would create more jobs rather than eliminate them, and why human insight would remain crucial to success.

Rajesh Maurya is Regional Vice President, India & SAARC at Fortinet.

Adapting to the new digital economy requires organizations to retool not just their networks but, in many cases, their core business processes as well. The creation, exchange, and analysis of data—about customers, products, and their usage—enables organizations to gain the insights they need to improve operational efficiency, business agility, and customer experience. The three pillars of digital business are automation, agility, and analytics. As the speed of business accelerates, critical processes need to occur at digital speeds, which means that human beings, and human error, need to be removed from many of the basic operations that support the organization. The first steps towards AI and automation are, indeed, within reach. If organizations quickly prioritize the strategic enablers—speed, integration, advanced analytics, and risk-based decision engines—they are primed to create a highly efficient business model that utilizes both human and machine resources for what each does best, and does so with extraordinary agility.

Our best talent must be focused on the most critical decisions, while automated systems handle lower-order decisions and processing. That means we need to develop and deploy risk-based decision-making engines that take humans out of the loop and, instead, put them above the loop. After fast, specialized analysis and integration, risk engines are the third major step toward AI. They will execute the ‘OODA loop’ (observe, orient, decide, and act) for the vast majority of situations. Pre-planned Courses of Action (COAs) will free up valuable cybersecurity experts to concentrate on the more difficult decisions, where human cognition and intervention are most required. The most sophisticated of these engines will actually suggest COAs rather than rely only on predefined ones.

So far, so good, on the case for AI and automation to invade all spheres of business and replace routine human decision-making. On the flip side, there are arguments that automation is set to replace human beings and will usher in an era of increased job loss and unemployment. Over the past two years, this debate has grown increasingly heated, and the automation argument has often been introduced as a counter to some of the more abrasive stances in the immigration debate. At first glance, it may have appeared to be a fact-based response to the animosity and divisiveness that defined that debate. For those of us with deeper, first-hand knowledge, though, it was just as fear-based and misinformed. The argument was most succinctly summed up by an op-ed in the Los Angeles Times, ‘Robots, not immigrants, are taking American jobs’. It states, “A White House report released in December says 83 per cent of US jobs in which people make less than $20 per hour are now, or soon will be, subject to automation… and warns Americans to get ready for an era of 60 per cent unemployment.”

But here is the truth and the counterargument. At Fortinet, we have been investing in AI for years. It is an incredible technology that presents extraordinary opportunities for protecting networks and, ultimately, the internet. As AI becomes more common and more sophisticated, it consistently clarifies an important truth: the value, power, and efficiency of AI do not arise from its ability to replace human beings. In fact, AI does just the opposite. Both automation and AI underscore how central and critical human insight and expertise are to success. Arguments and headlines that cast technology as an encroaching threat, widening social divides and limiting opportunities, may provoke stronger reactions (and more clicks), but in general, innovation is not additive or subtractive—it is multiplicative. It creates exponentially more opportunities for more people in more ways than even those most directly impacted by it can often imagine at first.

Has email replaced the post office? While the number of total career employees declined from 2007 to 2016, it remains slightly higher than it was in 1965. The volume of marketing mail and first-class mail has decreased, but the total number of packages shipped has increased from 3.3 billion to 5.2 billion.
Delivery points have increased from 148 million to 156 million, and there are also thousands of additional delivery trucks on the roads.
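
Returning to the risk-based decision engines described earlier, the following is a minimal, hypothetical sketch of the pattern they imply: routine, well-understood events are matched to pre-planned Courses of Action and handled automatically, while anything unfamiliar or high-risk is escalated to an analyst sitting above the loop. The event fields, threshold, and COA names here are assumptions made for illustration, not a description of Fortinet's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str        # e.g., "firewall", "endpoint" (illustrative fields)
    category: str      # e.g., "malware", "intrusion", "phishing"
    risk_score: float  # 0.0 (benign) to 1.0 (critical); assumed to come from upstream analytics

# Pre-planned Courses of Action keyed by event category (hypothetical examples)
PREPLANNED_COAS = {
    "malware": "quarantine endpoint and roll back the binary",
    "intrusion": "block source IP and reset affected sessions",
    "phishing": "strip the message and warn recipients",
}

AUTO_HANDLE_THRESHOLD = 0.7  # at or above this score, the engine hands off to a human

def decide(event: Event) -> str:
    """Toy 'decide/act' step: apply a pre-planned COA for routine events,
    escalate everything else to the analyst queue."""
    coa = PREPLANNED_COAS.get(event.category)
    if coa is not None and event.risk_score < AUTO_HANDLE_THRESHOLD:
        return f"AUTO: {coa}"
    return "ESCALATE: route to analyst queue for human judgement"

if __name__ == "__main__":
    for ev in [
        Event("firewall", "intrusion", 0.35),
        Event("endpoint", "malware", 0.55),
        Event("endpoint", "ransomware", 0.92),  # no pre-planned COA, so a human decides
    ]:
        print(ev.category, "->", decide(ev))
```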


Did ATMs replace banks? No—by lowering the cost of opening a branch, ATMs helped increase the number of bank branches by more than 40 per cent. In fact, they did not even replace bank tellers, whose ranks increased to meet the demand from more branches. Over the decades, we have seen that where one avenue closes in the jobs market, others open. An example given by Deloitte economists hits the nail on the head: they believe that rising incomes from the adoption of technology have allowed consumers to spend more on personal services, such as grooming. That, in turn, has driven employment of hairdressers. So, while in 1871 there was one hairdresser or barber for every 1,793 citizens of England and Wales, today there is one for every 287 people.

It is especially important to recognize these facts in light of the particularly callous argument that the only jobs AI kills are the ones nobody would want. All of us value the job that provides for our families and lives. If anything, the rise and spread of AI force us to take a closer look at how confining employees to the only kinds of tasks that AI is good at—repetitive, precise, controlled tasks that require no reasoning, higher-order thinking, or even common sense—represents an outdated, divisive management style.

It is hard to imagine an industry more heavily reliant on digital technology than cybersecurity. As of Q3 2017, our cybersecurity tools and technologies were responsible for neutralizing 91,000 malware programs, blocking access to 150,000 malicious websites, and resisting 4.4 million network intrusion attempts—per minute.
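
Volumes like these are only manageable when machines make the first pass over the data. The short sketch below is purely illustrative of that division of labour: automated parsing and scoring of log lines, with only the suspicious remainder queued for a human analyst. The log format, keywords, and threshold are assumptions for the example, not Fortinet's products or detection logic.

```python
import re

# Assumed log format for this example: "<timestamp> <source-ip> <action> <detail>"
LOG_PATTERN = re.compile(r"^(?P<ts>\S+)\s+(?P<ip>\S+)\s+(?P<action>\S+)\s+(?P<detail>.*)$")

# Toy scoring rules; a real system would rely on learned models and threat intelligence.
SUSPICIOUS_KEYWORDS = {"denied": 1, "malware": 3, "exfiltration": 5, "bruteforce": 4}

def score(entry: dict) -> int:
    """Assign a crude risk score based on keywords in the parsed log entry."""
    text = f"{entry['action']} {entry['detail']}".lower()
    return sum(weight for kw, weight in SUSPICIOUS_KEYWORDS.items() if kw in text)

def triage(lines, threshold: int = 3):
    """Machines parse and score everything; humans see only what crosses the threshold."""
    for line in lines:
        match = LOG_PATTERN.match(line)
        if not match:
            continue  # unparseable lines could be routed to a separate queue
        entry = match.groupdict()
        if score(entry) >= threshold:
            yield entry  # escalate to the analyst queue

if __name__ == "__main__":
    sample = [
        "2017-09-01T10:00:01 10.0.0.4 allowed normal web traffic",
        "2017-09-01T10:00:02 10.0.0.9 denied repeated bruteforce login attempts",
        "2017-09-01T10:00:03 10.0.0.7 blocked malware download",
    ]
    for hit in triage(sample):
        print("ANALYST REVIEW:", hit["ip"], hit["action"], hit["detail"])
```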


In a digitally driven world that is teeming with threat actors—from malicious pranksters to criminals, ideologically motivated sects to state-sponsored cyber terrorists, threatening everything from our individual identities to the critical infrastructure of our society—there is no way to protect data without self-learning AI and automation. For effective cybersecurity, we must utilize AI for time-consuming tasks, such as data mining and parsing data logs, while allowing cybersecurity teams to focus on the much higher-order tasks of threat identification and elimination.

And yet, one of the gravest challenges our industry faces is a shortage of talent. Our industry’s unemployment rate stands at zero per cent. In 2016, one million new cybersecurity jobs were created, and estimates project an increase of five or six million over the next few years. In 2015, there was a 74 per cent increase in cybersecurity job postings, half of which went unfilled. Across industries, 45 per cent of organizations claim to experience a problematic shortage of cybersecurity skills. As a result, cybersecurity teams must race from one crisis or breach to the next, with little time for strategic planning or continued learning to keep up with threat sophistication.

These are certainly business challenges, and increasingly costly ones at that. The demand itself is driving an expensive bidding war for talent, and the cost of cybercrime is estimated to reach $2.1 trillion globally by next year. These are also national and global security risks, with everything from financial systems to healthcare to critical infrastructure in the crosshairs.

Automation and AI are not eliminating jobs. They are creating them—high-paying, high-level, and secure ones at that—at an unprecedented rate. As the levels of data continue to grow, the demand will grow even greater. We will never be able to fill these jobs without greater awareness of the need for them, early training in middle school and high school, and more outreach to veterans and college students—particularly women, who presently comprise just 14 per cent of the cybersecurity workforce.

There are certainly clear causes of wage stagnation and job loss—from who disproportionately benefits from economic gains, to the impact of wealth creation through capital management rather than goods or services, to the funding and priorities of our educational system, to an increasingly volatile financial system, to the impact of rapid globalization. Those are all extraordinary challenges, and it is far easier to scapegoat technology than to address them. Blaming innovation will not solve real problems or prevent crises. It will only drive misunderstandings and clicks in an increasingly unsafe digital landscape.

