Arkansas Democrat-Gazette

Is AI threat real?

- ANNE APPLEBAUM

You know the scenario from 19th-century fiction and Hollywood movies: Mankind has invented a computer or a robot or another artificial thing that has taken on a life of its own. Frankenstein's monster is built from corpses. In 2001: A Space Odyssey, it's an all-seeing computer with a human voice. In Westworld, the robots are lifelike androids that begin to think for themselves. But in almost every case, the out-of-control artificial life form is anthropomorphic. It has a face or a body, or at least a human voice and a physical presence in the real world.

But what if the real threat from artificial life doesn’t look or act human at all? What if it’s just a piece of computer code that can affect what you see and therefore what you think and feel? What if it’s a bot, not a robot?

For those who don't know (and apologies to those who are wearily familiar), a bot really is just a piece of computer code that can do things that humans can do. Wikipedia uses bots to correct spelling and grammar in its articles; bots can also play computer games or place gambling bets on behalf of human controllers. Notoriously, bots are now a major force on social media, where they can "like" people and causes, post comments, and react to others. Bots can be programmed to tweet out insults in response to particular words, to share Facebook pages, to repeat slogans, to sow distrust.

Slowly, their influence is growing. One tech executive told me he reckons that half of the users on Twitter are bots, created by companies that either sell them or use them to promote various causes. The Computational Propaganda Research Project at the University of Oxford has described how bots are used to promote either political parties or government agendas in 28 countries. They can harass political opponents or their followers, promote policies, or simply seek to get ideas into circulation.

About a week ago, for example, sympathizers of the Polish government, possibly alt-right Americans, launched a coordinated Twitter bot campaign with the hashtag "#astroturfing" (not exactly a Polish word) that sought to convince Poles that anti-government demonstrators were fake, outsiders or foreigners paid to demonstrate. An investigation by the Atlantic Council's Digital Forensic Research Lab pointed out the irony: An artificial Twitter campaign had been programmed to smear a genuine social movement by calling it … artificial.

That particular campaign failed. But others succeed, or at least they seem to. The question now is whether, given how many different botnets are running at any given moment, we even know what that means. It's possible for computer scientists to examine and explain each one individually. It's possible for psychologists to study why people react the way they do to online interactions: why fact-checking doesn't work, for example, or why social media increases aggression.

But no one is really able to explain the way they all interact, or what the impact of both real and artificial online campaigns might be on the way people think or form opinions. Another Digital Forensic Research Lab investigation into pro-Trump and anti-Trump bots showed the extraordinary number of groups that are involved in these dueling conversations: some commercial, some political, some foreign. The conclusion: They are distorting the conversation, but toward what end, nobody knows.

Which is my point: Maybe we've been imagining this scenario incorrectly all this time. Maybe this is what "computers out of control" really look like. There is no giant spaceship, no army of lifelike robots. Instead, we have created a swamp of unreality, a world where you don't know whether the emotions you are feeling are manipulated by men or machines, and where, once all news moves online, it will soon be impossible to know what's real and what's imagined. Isn't this the dystopia we have so long feared?
