Governments must wake up to the risks of new technologies

The Star Early Edition – OPINION & ANALYSIS

Increasing automation of processes may worsen inequalities

TECHNOLOGICAL advances associated with the fourth industrial revolution – including artificial intelligence – allow the automation of an increasingly wide array of processes in increasingly interactive and sophisticated ways. These advances will likely give rise to many opportunities for economic and social development in developing countries, for instance by increasing food production.

But the new technologies also involve important risks, which have special significance in developing countries. They may build upon and exacerbate existing inequalities – both within developing countries and between developing and more developed regions.

Three of these interrelated risks are worsening unemployment, increasing concentration of economic power and wealth, and the spread of biases in influential algorithms. They will manifest in different ways and require different responses in diverse contexts. A cross-cutting problem is that too few developing country governments are giving these risks serious attention.

Risk 1: Worsening unemployment

The concern that new technologies – especially artificial intelligence – will lead to widespread job losses has been widely discussed. But historically, new technologies have often given rise to more new jobs than the ones they automated away.

What’s perhaps different now is that the new, interconnected digital technologies will likely have a broader and more far-reaching array of abilities. And so the prospect of new kinds of jobs may well be diminished, or limited to increasingly sophisticated domains such as machine learning.

In addition, new technologies are now not just replacing jobs; they are also enabling the disruption and restructuring of entire industries. Uber, for instance, has already pulled the rug from under the conventional taxi industry in many places. Imagine the possible consequences of Uber’s shift to driverless cars.

Several aspects of developing countries’ contexts increase the possible severity of this disruption.

First, the dearth of effective education systems and skills in countries like South Africa will make it more difficult for people to be retrained for the technology-intensive new jobs that will become available.

Second, all governments are struggling with the implications of new technologies and their associated business models. This struggle is particularly acute in developing countries.

Risk 2: Increasing concentration of wealth

Many developing countries are characterised by high levels of inequality within their populations. Elites within these countries will be more likely to make use of AI and other new technologies. This will further increase returns to capital, widening the gap between elites’ productive capacity and that of everyone else.

A similar effect is likely at a global level. It’s no coincidence that Russia’s President Vladimir Putin has identified AI as the new terrain for global competition between nations.

New technologies’ advantages for capital stem not just from increased productivity, but also from enabling new business models that may control entire sub-sectors and stifle competition. For instance, it could become possible for a single company to control large fleets of automated vehicles across large areas.

Again, much will depend on whether states can keep up with these developments and respond effectively.

Risk 3: Bias baked into algorithms

Finally, the AI algorithms at the centre of the fourth industrial revolution will reflect and perpetuate the contexts and biases of those who create them. Difficulties faced by voice recognition software in recognising particular accents are a relatively innocuous example. Of course, the promise is that AI will enable such systems to learn to address such issues. But the learning process itself might be influenced by racial, gender, or other prejudices.

AI algorithms are developed almost entirely in developed regions. Thus they may not sufficiently reflect the contexts and priorities of developing countries.

Ensuring that AI algorithms are appropriately trained and adapted for different contexts is part of the required response.

It would be even better if developing countries became more engaged in the development of new technological systems from the get-go.

These three risks require that academics, businesses, and civil society actors attend to the role of new technologies in developing countries. But a special responsibility lies with governments. – The Conversation
