The Future of Growth: AI Comes of Age

A new factor of production is on the horizon, and it promises to transform economic growth for countries around the world.

Rotman Management Magazine, by Jodie Wallis and Deborah Santiago

IN THE MODERN ECONOMY, there are two traditional drivers of production: increases in capital investment and labour. However, the decades-long ability of these drivers to propel economic progress in most developed countries is on the cusp of a massive change.

With the recent convergence of a transformative set of technologies, economies are entering a new era in which artificial intelligence (AI) has the potential to overcome the physical limitations of capital and labour and open up new sources of value and growth. AI is, without question, the single most disruptive technology the world has experienced since the Industrial Revolution. In this article we will discuss some of the implications, challenges and opportunities of this new fact of economic life.

A New Factor of Production

For three decades, rates of gross domestic product (GDP) growth have been shrinking across the globe. Key measures of economic efficiency are trending sharply downward, while labour-force growth across the developed world is largely stagnant.

Are we experiencing the end of growth and prosperity as we know it? The short answer is an emphatic no, because the data misses an important part of the story: how new technologies affect growth in the economy. Traditionally, growth has occurred when the stock of capital or labour increased, or when they were used more efficiently. The growth that comes from innovation and technological change in the economy is captured in total factor productivity (TFP). Economists have always thought of new technologies as driving growth through their ability to enhance TFP, and this made sense for the technologies that we have seen, until now.
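The distinction the authors draw can be made precise with the standard growth-accounting identity. The illustration below uses a conventional Cobb-Douglas production function; the symbols and the capital share $\alpha$ are textbook conventions, not figures from the article:

$$Y = A\,K^{\alpha}L^{1-\alpha} \quad\Longrightarrow\quad g_Y = g_A + \alpha\,g_K + (1-\alpha)\,g_L$$

Here $Y$ is output, $K$ is capital, $L$ is labour, $A$ is total factor productivity and $g$ denotes a growth rate. When capital and labour growth stall ($g_K \approx g_L \approx 0$), output growth rests entirely on TFP growth $g_A$. Treating AI as a new factor of production amounts to adding a separate AI term to the right-hand side, rather than folding its contribution into $g_A$.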

What if AI has the potential to be not just another driver of TFP, but an entirely new factor of production that can replicate labour activities at much greater scale and speed, and even perform some tasks beyond the capabilities of humans?

For example, Meta (now part of the Chan Zuckerberg Initiative) uses AI to read, understand and act on the thousands of scientific papers that are published daily. For context, over 4,000 papers are published daily in the field of biomedicine alone. Meta's system helps scientists access these mountains of research to learn from real-time insights and unlock scientific discoveries years in advance.

Unlike traditional capital such as machines and buildings, AI can improve over time, thanks to its self-learning capabilities. The Spanish AI start-up NEM Solutions, using an algorithm based on the human immune system, is targeting wind-farm productivity by predicting and preventing failures. The platform first analyzes instances of wind turbine failure to learn what the symptoms are, then monitors the turbines in real time to detect the symptoms and flag any potential problems.
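NEM Solutions' actual algorithm is proprietary, so the following is only a minimal sketch of the two-step pattern described above: learn the symptoms of past failures, then score live readings and flag turbines at risk. The file names, sensor columns, model choice and alert threshold are illustrative assumptions.

```python
# Minimal sketch of the "learn the symptoms, then monitor" pattern described
# above. File names, sensor columns, model choice and the alert threshold are
# illustrative assumptions, not NEM Solutions' actual system.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["vibration", "gearbox_temp", "rotor_speed", "power_output"]

# Step 1: learn the symptoms from historical sensor snapshots, each labelled
# with whether a failure followed within the next 48 hours.
history = pd.read_csv("turbine_history.csv")      # hypothetical training data
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(history[FEATURES], history["failure_within_48h"])

# Step 2: monitor in real time by scoring the latest readings and flagging
# any turbine whose estimated failure risk crosses the chosen threshold.
def flag_at_risk(latest: pd.DataFrame, threshold: float = 0.7) -> pd.DataFrame:
    scored = latest.assign(failure_risk=model.predict_proba(latest[FEATURES])[:, 1])
    return scored[scored["failure_risk"] >= threshold]

live = pd.read_csv("live_readings.csv")           # hypothetical live feed
print(flag_at_risk(live)[["turbine_id", "failure_risk"]])
```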

Of course, AI is not a new field; much of its theoretical and technological underpinning was developed over the past 70 years. Its applicability, though, is a relatively modern development. AI went out of favour in the 1970s and 1980s because technological constraints, such as limited computing power, fundamentally limited what researchers could achieve. That changed in the early 2000s, when three Canadian-based researchers, Geoffrey Hinton, Yoshua Bengio and Rich Sutton, made breakthroughs that re-popularized the study of AI.

Over the last ten years, increases in efficient computing power, data quality and data quantity have redefined how we look at AI. Today, the term refers to multiple technologies that can be combined in different ways to sense, comprehend and act. All three capabilities are underpinned by the ability to learn from experience and adapt over time.

SENSE. Computer vision and audio processing, for example, perceive the world by acquiring and processing images, sounds and speech to develop enhanced data.

COMPREHEND. Natural language processing and recommendation engines, for instance, can analyze and understand the data collected by generating meaning and insights.

ACT. One of the key components of AI systems is their ability to use the information generated to take action, as in the case of augmented reality or, more simply, chatbots.

Five Levers of AI-Led Growth

So, how can organizations drive value from AI? Businesses that successfully apply it could increase profitability by an average of 38 per cent by 2035, according to a recent research report we did in conjunction with Frontier Economics. That's a compelling case. We think about AI delivering value in terms of five levers:

1. INTELLIGENT AUTOMATION. This involves deploying cognitive capabilities on top of traditional automation technologies to achieve self-learning, greater autonomy and flexibility. Results include more efficient processes, activities, and services beyond what traditional automation will deliver.

2. IMPROVED INTERACTIONS. This involves delivering superior experiences to customers and users based on hyper-personalization and the curation of real-time information. On top of overall satisfaction improvement, this can also generate greater acquisition and retention rates among customers.

3. ENHANCED JUDGMENT. Leveraging AI capabilities to augment human analytical and management capabilities. Results include improved quality and effectiveness of prediction and decision making.

4. DEEPENED TRUST. AI can be used to build trust with customers and within the organization by more effectively preventing and detecting anomalies. It also provides the ability to significantly reduce false positives, which further improves efficiency.

5. INNOVATION DIFFUSION. Deploying AI to enable a new class of products and services that use AI to enhance the product development lifecycle and create new businesses. Results include increased speed with which new products and services are designed and delivered.

Let's look at a few examples. The aircraft manufacturer Airbus was looking for ways to achieve more accuracy and quality in cabin furnishing. Accenture worked with Airbus to develop a solution involving smart glasses. Using contextual marking instructions, the smart glasses display all required information for an operator to help mark the floor faster and reduce errors to zero, with a built-in ability to validate the work and provide real-time feedback to users along the way. You can imagine the applicability to many operations that require precision in the set-up or implementation of equipment.

In California, AI start-up Elementum generates real-time insights when incidents or disruptions threaten a supplier, helping its clients understand where every component and finished good is supplied, manufactured and distributed. Rather than simply automating supplier-management processes, clients of Elementum can get early warnings of potential problems and alternative solutions, so they can react before production is impacted. For instance, in 2014 a fire in a Chinese DRAM (dynamic random-access memory) chip factory put a 25 per cent squeeze on world supply. Whereas most equipment manufacturers only found out days later, Elementum's customers knew about the incident within minutes and secured their supply of DRAM before prices reacted to the shortage.

It is not just production and supply chains that can benefit from intelligent automation. One of Accenture's clients, a global insurance company, wanted to automate its claims processing for auto insurance. We worked with them to develop an algorithm using a data set of toy-car images. The solution enables customers to send their own pictures of the damaged car to the insurer, and the algorithm classifies the damage, replicating the work of an adjuster, with 90 per cent accuracy. In addition to reducing the effort of humans in assessing the damage, the solution can also be extended to requisition parts and detect potential fraud cases.
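The insurer's actual model is not public; what follows is only a minimal sketch of the standard pattern for a task like this, transfer learning on a small set of labelled damage photos (toy-car images, in the article's example). The directory layout, class labels and hyperparameters are invented.

```python
# Minimal transfer-learning sketch for classifying photos of vehicle damage.
# Directory layout, class labels and hyperparameters are illustrative only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder of labelled photos, one sub-folder per damage class
# (e.g. minor/, moderate/, severe/, total_loss/).
train_set = datasets.ImageFolder("damage_photos/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and retrain only the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```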

Elsewhere, in March, Capital One revealed Eno, the first natural-language chatbot of its kind for banking. During the pilot phase, customers could text Eno anytime to review their accounts, pay their credit card bill, or just ask general questions. As of this month, Eno is available to communicate by text with millions of Capital One credit card and bank customers. Capital One has revealed three surprising things it learned in the pilot process:

• Every customer has their own language and conversational style. Therefore, the agent had to learn how different people like to text about their money. This includes the use of emojis. For example, some customers like to use a thumbs-up emoji to confirm their payment;

• Language, tone and meaning trainers have been required to help Eno interpret the 2,200 different ways customers may ask for their balance (a rough sketch of this kind of intent mapping follows this list);

• And chatbots actually need empathy! People will tend to build relationships with them even while knowing that they are talking to a bot.
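Capital One has not published Eno's internals; the snippet below is only a rough sketch of how varied phrasings can be mapped onto a small set of banking intents, using scikit-learn with made-up training phrases and intent labels.

```python
# Rough sketch of mapping varied customer phrasings onto banking intents.
# The phrases, intent labels and model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    ("what's my balance", "check_balance"),
    ("how much money do i have", "check_balance"),
    ("am i broke", "check_balance"),
    ("pay my credit card bill", "pay_bill"),
    ("make a payment on my card", "pay_bill"),
    ("when is my payment due", "payment_due_date"),
]
texts, intents = zip(*training_phrases)

# Word and bigram TF-IDF features feeding a simple linear classifier.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(texts, intents)

# A phrasing the model has never seen; a real system would route low-confidence
# predictions to a human or ask a clarifying question.
print(classifier.predict(["do i have enough money for lunch"]))
```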

However, improved interaction is not just about interacting with customers. One of the world's largest oilfield services companies, which creates products and services to analyze, drill, evaluate, complete and produce oil and gas reserves and then transport and refine the hydrocarbons, wanted a way to respond more efficiently to its vendors' inquiries about their invoices and payments. Vendors can interact with a digital assistant and receive information about the status of their invoices. This includes checking invoice status and searching for invoices in back-end systems. Vendors can also use the virtual agent to upload missing invoices and log trouble tickets.

A key use case for enhanced judgment is in recommender systems. Machine-learning and deep-learning models have been used to personalize recommendations for movies, research articles and products in general, and there are now recommender systems for experts, collaborators, job candidates and romantic partners. Canadian company Layer 6 recently won an international challenge for its work on 'cold-start' recommendation systems: cases where there is no interaction history to draw from. Layer 6's deep learning platform allows users to leverage a wide variety of historical data and solves the cold-start problem by incorporating data from the current user session and context.
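Layer 6's platform is proprietary; the toy sketch below illustrates only the general cold-start idea described above: when a user has no interaction history, score catalogue items against signals from the current session and context instead. The item catalogue, feature names and session vector are invented.

```python
# Toy cold-start recommendation: with no interaction history, rank items by
# similarity to a vector built from the current session and context.
# Item catalogue and feature dimensions are invented for illustration.
import numpy as np

# Feature dimensions (hypothetical): [action, romance, documentary]
ITEMS = {
    "action_movie": np.array([1.0, 0.0, 0.2]),
    "rom_com":      np.array([0.1, 1.0, 0.0]),
    "documentary":  np.array([0.0, 0.1, 1.0]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recommend_cold_start(session_vector: np.ndarray, top_k: int = 2) -> list[str]:
    """Rank items for a brand-new user using only current-session signals
    (e.g. device, time of day, pages viewed so far) encoded as a vector."""
    scores = {name: cosine(session_vector, vec) for name, vec in ITEMS.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# First clicks in this session skew toward documentaries.
print(recommend_cold_start(np.array([0.2, 0.0, 0.9])))
```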

AI is also spreading to areas where intellect and critical thinking have long dominated. For instance, start-up Narrative Science is 'humanizing' data with technology that interprets an organization's data, then transforms it into intelligent narratives in a style that a human might write. Take, for example, the suspicious-activity reporting that AML (anti-money-laundering) investigators are required to do. For a large bank that averages 4,000 alerts a year, typically over 150 cases need to be filed with regulatory bodies. Narrative Science's platform, Quill, can reduce the time it takes to file cases by automating the narrative required to explain the suspicious transactions, saving 2.5 hours per case.
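Using the article's own figures, the time saved adds up quickly:

$$150\ \text{cases per year} \times 2.5\ \text{hours per case} \approx 375\ \text{investigator hours per year}$$

and more as alert volumes grow.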

California AI company Autodesk is pioneering this kind of AI-enabled generative design with its computer-aided design system, Dreamcatcher. Using AI to mimic the generative design work of nature, Dreamcatcher creates thousands of virtual prototype iterations and compares their function, cost and material according to specified criteria. In the healthcare industry, Dreamcatcher has already been used to design a facial implant that accelerates recovery and tissue regrowth.
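Dreamcatcher's internals belong to Autodesk; the toy loop below sketches only the generate-evaluate-select pattern the paragraph describes, with invented design parameters and an invented scoring function standing in for real engineering criteria (function, cost, material).

```python
# Toy generative-design loop: propose many candidate designs, score each
# against stated criteria, keep the best. Parameters and weights are invented.
import random

def random_candidate() -> dict:
    return {
        "thickness_mm": random.uniform(1.0, 10.0),
        "lattice_density": random.uniform(0.1, 0.9),
    }

def score(design: dict) -> float:
    # Crude stand-ins: stiffness as a proxy for function, a linear material
    # cost, and a weighted trade-off between the two.
    stiffness = design["thickness_mm"] * design["lattice_density"]
    material_cost = 2.0 * design["thickness_mm"] + 5.0 * design["lattice_density"]
    return stiffness - 0.3 * material_cost

candidates = [random_candidate() for _ in range(10_000)]
best = max(candidates, key=score)
print(best, round(score(best), 2))
```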

Factoring in AI

To understand the value of AI as a new factor of production, Accenture, in association with Frontier Economics, modelled the potential impact of AI for 12 developed economies that together generate more than half of the world's economic output. Our results reveal unprecedented opportunities for value creation: AI has the potential to double annual economic growth rates across these countries. In Canada, the increased labour productivity that AI offers could shave 13 years off the time required for the country to double the size of its economy, if Canada achieves an AI steady state by 2035.
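The 'years to double' framing follows directly from compound growth. The 13-year figure comes from Accenture's model, not from this arithmetic, but a worked illustration with hypothetical growth rates shows the mechanism:

$$T_{\text{double}} = \frac{\ln 2}{\ln(1+g)} \approx \frac{70}{100\,g}$$

An economy growing at 2 per cent a year doubles in roughly $\ln 2 / \ln(1.02) \approx 35$ years; if AI lifted trend growth to 4 per cent, the doubling time would fall to about $\ln 2 / \ln(1.04) \approx 18$ years.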

AI also has the potential to boost labour productivity by up to 40 per cent by 2035 in the countries we studied. Optimal labour productivity will not be driven by longer hours, though, but by innovative technologies enabling people to efficiently use their time. This labour productivity increase dramatically reduces the number of years required for our analyzed countries' economies to double in size. The results are primarily driven by a country's ability to diffuse technological innovations into its wider economic infrastructure. While the gains vary in each country surveyed, our research shows AI can transcend regional and structural disparities, enabling huge, rapid leaps in labour productivity.

AI can boost labour productivity, though, only if companies are willing to disrupt their legacy models. An Accenture study found that companies that optimally use AI will generate higher shareholder value. However, less than a fifth of leading companies that leverage AI have achieved this performance. Accenture's research found that only 17 per cent of Canadian companies leverage AI successfully, demonstrating the ability to innovate from within and collaborate externally. The research shows that companies must converge and integrate technology, data and people to improve what we call their 'AIQ'.


Clearing the Path to an AI Future

Entrepreneur Elon Musk has warned that AI could become humanity's 'biggest existential threat'. The more optimistic view of futurist Ray Kurzweil is that AI can help us to make 'major strides in addressing the [world's] grand challenges'.

The truth is, it all depends on how we manage the transition to an AI economy. To fulfill the promise of AI as a new factor of production that can reignite economic growth, relevant stakeholders must be thoroughly prepared (intellectually, technologically, politically, ethically and socially) to address the challenges that arise as AI becomes more integrated into our lives. A good starting point is understanding the complexity of the following issues.

PREPARING THE NEXT GENERATION FOR THE AI FUTURE. There are three things we need to do to create the AI workforce of the future: accelerate the re-skilling of employees; unlock human potential; and strengthen the talent pipeline. These actions will enable leaders to build on a workforce that is already highly engaged with new technologies in their daily lives. And these leaders will reshape their organizations to allow workers to flourish in an AI economy in a way that drives real business value as well as innovation and creativity.

An example of using AI to power re-skilling is Montreal's Erudite AI, which is tackling the human issue of academic and career stagnation due to a lack of productivity and learning. Erudite uses AI to augment human collaboration and knowledge sharing at work or school. Unlike other such tools, its knowledge management system enables individuals to amplify and share their expertise through the power of human collaboration. It optimizes knowledge transfer and skill augmentation by mapping, in real time, the unique knowledge and skill profiles of learners and matching them with the right expert at the right time. It also provides coaching and collaboration among experts within the platform to instantly enhance the quality of responses. Ultimately, companies must make radical changes to their training, performance and talent acquisition strategies. Re-skilling should be viewed as a new way of thinking about continuous education, as one third of the skills that will be required in three years are not yet considered crucial.

AI will be instrumental not only in making existing workers more productive, but also in helping them deliver better work. This involves fostering a culture of lifelong learning, much of it enabled by technology, such as personalized online courses that replace traditional classroom curricula and wearable applications such as smart glasses that improve workers' knowledge and skills as they carry out their work. Success will also depend on partnerships with start-ups, universities and individual experts to access knowledge and skills at scale.

In preparation for the AI economy of the future, countries need to do better in aligning their education systems with the needs of the new economy and forging partnerships between institutions and industry. This means enhancing primary and secondary programs, college programs and undergraduate programs with content in critical thinking, creativity, math, robotics and human-machine interaction, as well as continuing to grow post-graduate programming. This will require extending the learning cycle beyond traditional timeframes and into the workplace.

Last fall, Quebec's Quartier Innovation and the École de Technologie Supérieure announced a partnership with Vidéotron and Ericsson to create an 'open laboratory' for smart technology to unite the telecom and manufacturing industries, municipalities and advanced learning in the creation of new technologies. Governments are also beginning to understand the importance of collaboration in AI. Earlier this year, the governments of Ontario and Quebec signed a memorandum of understanding (MOU) to work together to foster AI development. The MOU aims to keep Ontario, Quebec and, more generally, Canada competitive among other jurisdictions both in the expansion of fundamental knowledge and in the widespread development and application of these technologies. One of the important ways they want to achieve this is by bolstering ties between research and industry and between technology companies and start-ups.

ENCOURAGING AI-POWERED REGULATION. As autonomous machines take over traditionally human tasks, current laws will need to be revisited. For instance, the State of New York's 1967 law that requires drivers to keep one hand on the wheel was designed to improve safety, but may inhibit the uptake of semi-autonomous safety features such as automatic lane centering. In other cases, new regulation is called for. For example, though AI could be enormously beneficial in aiding medical diagnoses, if physicians avoid using these technologies, fearing that they will be exposed to accusations of malpractice, this uncertainty could inhibit uptake and hinder innovation.

AI itself can be part of the solution, though, creating adaptive, self-improving regulations that close the gap between the pace of technological change and that of the regulatory response. For example, AI could be used to update regulations in light of new cost-benefit evaluations.

ADVOCATING A CODE OF ETHICS FOR AI. Intelligent systems are rapidly moving into social environments that were previously exclusively human, opening up ethical and societal issues that could slow AI's progress. These range from how to respond to racially-biased algorithms to whether autonomous cars should give preference to their driver's life over others in the case of an accident. Given AI's rapid growth, policymakers need to ensure the development of a code of ethics for the AI ecosystem, and ethical debates need to be supplemented by tangible standards and best practices in the development of intelligent machines.

ADDRESSING THE REDISTRIBUTION EFFECTS. Many people are concerned that AI will eliminate jobs, worsen inequality and erode incomes. This explains the rise in protests around the world and the discussions taking place in several countries around the introduction of a universal basic income. Policymakers must recognize that these apprehensions are valid. Their response should be twofold.

First, policymakers should highlight how AI can result in tangible benefits. For instance, an Accenture survey highlighted that 84 per cent of managers believe machines will make them more effective and their work more interesting. Beyond the workplace, AI promises to alleviate serious global issues such as climate change and poor access to healthcare. Benefits like these should be clearly articulated to encourage a more positive outlook on AI's potential.

Second, policymakers need to address and pre-empt the downsides of AI. Some groups will be affected disproportionately by these changes. To prevent a backlash, policymakers should identify the groups at high risk of displacement and create strategies that focus on reintegrating them into the economy.

In Closing

Increases in capital and labour are no longer driving the levels of economic growth that the world has become accustomed to. As indicated herein, a new factor of production is on the horizon. AI promises to transform the basis of economic growth for countries around the world.

To avoid missing out on this opportunity, policymakers and business leaders alike must work towards a future with artificial intelligence. They must do so not with the idea that AI is simply another productivity enhancer; rather, they must see AI as a tool that will transform our thinking about how growth is created.

Jodie Wallis is Managing Director and Artificial Intelligence Lead at Accenture Canada. Deborah Santiago is Global Legal Lead of Accenture's Digital & Strategic Offerings legal team, which includes Analytics, Interactive, Mobility, Cloud and Software.
