Truth be told

Failure to curb the power of Artificial Intelligence will lead to a web of lies

By Robin Pagnamenta (Ben Marlow is away)

In 2017 – just a few months before his death – Stephen Hawking described the emergence of artificial intelligence as possibly “the worst event in the history of our civilisation”. The advent of GPT-3 – a highly accurate synthetic human language generator powered by AI – is the kind of technology that might just prove him right. As Donald Trump and Joe Biden square off in what is likely to be a brutal US election campaign, experts in the dark arts of misinformation have warned of the risk that “deepfakes” could be used to manipulate public opinion.

At a critical moment, so the theory goes, a doctored video – perhaps depicting one of the candidates saying or doing something unconscionable – could be released into the ether to manufacture a scandal designed to dominate the news cycle during the climax of the campaign and swing the vote.

It’s a credible risk, and one we should be prepared for, especially after widespread evidence of meddling in the last presidential campaign.

In the longer run, however, it is GPT-3 and technologies like it that pose the bigger threat, and about which we should be truly worried.

AI-powered synthetic human writing is becoming uncannily accurate and ever more difficult to distinguish from the real thing.

It raises the prospect of a dystopian future where much of the written text we read on the web is generated by algorithms – cheap to churn out in high volume and a propagandist’s dream.

Generative Pre-trained Transformer 3 (GPT-3), a language model that uses deep learning to produce highly credible human writing, is at the cutting edge of contemporary AI.

It is the third generation of a technology developed by OpenAI, a San Francisco AI research lab founded as a non-profit and backed by Elon Musk, Peter Thiel and a string of other Silicon Valley luminaries.

Microsoft and Infosys, the Indian tech giant, have also helped fund the group, which started in 2015. The model – 115 times more powerful than its predecessor GPT-2, and trained to read and write using millions of pages of text drawn from the internet – was introduced in May 2020 and entered beta testing last month.

OpenAI wants to commercialise it by the end of this year – and points to its potential for building better chatbots, for example.


An article in MIT Technology Review summarised the quality of its writing by describing it as “shockingly good – and completely mindless”.

David Chalmers, an Australian philosopher, has described GPT-3 as “one of the most interesting and important AI systems ever produced”.

As is so often the case, its developers are all too aware of its potential for misuse but have pressed ahead regardless.

“Any socially harmful activity that relies on generating text could be augmented by powerful language models,” the researchers behind the technology warned in a paper that was published in May.

“Examples include misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting.

“Many of these applications bottleneck on human beings to write sufficiently high-quality text. Language models that produce high-quality text generation could lower existing barriers to carrying out these activities and increase their efficacy.”

In other words, if you think fake news is a problem now, wait until it can be manufactured in bulk by machines and pumped out into our information ecosystem on an industrial scale.

Unlike deepfake videos, which can be exposed, undetectable textfakes masquerading as ordinary chat on social media platforms such as Twitter or Facebook could influence us in subtler and more dangerous ways.

The aim will be to shift the way we think by weaving an elaborate web of lies designed to deceive and manipulate, immersing us in a soup of pervasive misinformation.

Toss in a plethora of other synthetic material – fake videos, images and audio – and it becomes increasingly difficult to trust anything at all on the internet, eroding confidence even further.

The technology also poses a new kind of challenge for the social media companies, of course. As neutral “platform operators” rather than publishers, they have always argued that it is not their job to judge whether people are using their services to tell the truth or not.

What happens if the lies are being produced, spread and commented upon entirely by algorithms? Do they have a responsibility to shut these down?

Hawking’s concern about artificial intelligence revolved around the lack of rules or any kind of governance over a powerful new technology – and the urgent need to set standards to supervise its use. “We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance,” Hawking remarked.

There can be few areas where rules and standards are more urgently needed than here.
