The threat from artificial intelligence may already be here

The Washington Post Sunday - Sunday Opinion - Anne Applebaum (applebaumletters@washpost.com)

You know the scenario from 19th-century fiction and Hollywood movies: Mankind has invented a computer, or a robot, or another artificial thing that has taken on a life of its own. In “Frankenstein,” the monster is built from corpses; in “2001: A Space Odyssey,” it’s an all-seeing computer with a human voice; in “Westworld,” the robots are lifelike androids that begin to think for themselves. But in almost every case, the out-of-control artificial life form is anthropomorphic. It has a face or a body, or at least a human voice and a physical presence in the real world.

But what if the real threat from “artificial life” doesn’t look or act human at all? What if it’s just a piece of computer code that can affect what you see and therefore what you think and feel? In other words — what if it’s a bot, not a robot?

For those who don’t know (and apologies to those who are wearily familiar), a bot really is just a piece of computer code that can do things that humans can do. Wikipedia uses bots to correct spelling and grammar in its articles; bots can also play computer games or place gambling bets on behalf of human controllers. Notoriously, bots are now a major force on social media, where they can “like” people and causes, post comments, and react to others. Bots can be programmed to tweet out insults in response to particular words, to share Facebook pages, to repeat slogans, to sow distrust.
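To make that concrete, here is a minimal, purely illustrative sketch of the keyword-triggered logic such a bot might run. Nothing in it comes from any real campaign: the trigger words, canned replies and function names are all hypothetical, and a real bot would sit behind a social network’s posting API rather than printing to a screen.

    import random
    from typing import Optional

    # Hypothetical trigger words and canned replies, for illustration only.
    TRIGGERS = {"protest", "election", "reform"}
    CANNED_REPLIES = [
        "Paid actors, all of them.",
        "Don't believe everything you read.",
        "Who is funding this, exactly?",
    ]

    def should_respond(post_text: str) -> bool:
        """Return True if the post contains any trigger word."""
        words = {w.strip(".,!?").lower() for w in post_text.split()}
        return not TRIGGERS.isdisjoint(words)

    def compose_reply(post_text: str) -> Optional[str]:
        """Pick a canned reply for a triggering post; otherwise None."""
        if should_respond(post_text):
            return random.choice(CANNED_REPLIES)
        return None

    # The bot reacts to one post and ignores the other.
    print(compose_reply("Huge protest downtown today"))  # a canned reply
    print(compose_reply("Lovely weather this morning"))  # None

The point of the sketch is how little intelligence is required: a word list and a handful of prewritten sentences are enough to flood a conversation.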

Slowly, their influence is growing. One tech executive told me he reckons that half of the users on Twitter are bots, created by companies that either sell them or use them to promote various causes. The Computational Propaganda Research Project at the University of Oxford has described how bots are used to promote either political parties or government agendas in 28 countries. They can harass political opponents or their followers, promote policies, or simply seek to get ideas into circulation.

About a week ago, for example, sympathizers of the Polish government — possibly alt-right Americans — launched a coordinated Twitter bot campaign with the hashtag “#astroturfing” (not exactly a Polish word) that sought to convince Poles that anti-government demonstrators were fake, outsiders or foreigners paid to demonstrate. An investigation by the Atlantic Council’s Digital Forensic Research Lab pointed out the irony: An artificial Twitter campaign had been programmed to smear a genuine social movement by calling it . . . artificial.

That particular campaign failed. But others succeed — or at least they seem to. The question now is whether, given how many different botnets are running at any given moment, we even know what that means. It’s possible for computer scientists to examine and explain each one individually. It’s possible for psychologists to study why people react the way they do to online interactions — why fact-checking doesn’t work, for example, or why social media increases aggression.

But no one is really able to explain the way they all interact, or what the impact of both real and artificial online campaigns might be on the way people think or form opinions. Another Digital Forensic Research Lab investigation into pro-Trump and anti-Trump bots showed the extraordinary number of groups that are involved in these dueling conversations — some commercial, some political, some foreign. The conclusion: They are distorting the conversation, but toward what end, nobody knows.

Which is my point: Maybe we’ve been imagining this scenario incorrectly all this time. Maybe this is what “computers out of control” really looks like. There’s no giant spaceship, nor are there armies of lifelike robots. Instead, we have created a swamp of unreality, a world where you don’t know whether the emotions you are feeling are manipulated by men or machines, and where — once all news moves online, as it surely will — it will soon be impossible to know what’s real and what’s imagined. Isn’t this the dystopia we have so long feared?

ILLUSTRATION BY KACPER PEMPEL/REUTERS
