Der Standard

Please Prove You’re Not a Robot


When science fiction writers first imagined robot invasions, the idea was that bots would become smart and powerful enough to take over the world by force, whether on their own or as directed by some evildoer. In reality, something only slightly less scary is happening. Robots are getting better, every day, at impersonating humans.

When directed by opportunists, malefactors and sometimes even nation-states, they pose a particular threat to democratic societies, which are premised on being open to the people.

Robots posing as people have become a menace. For popular Broadway shows (need we say “Hamilton”?), it is actually bots, not humans, who do much and maybe most of the ticket buying. Shows sell out immediately, and the middlemen (quite literally, evil robot masters) reap millions in ill-gotten gains.

Philip Howard, who runs the Computational Propaganda Research Project at Oxford, studied the deployment of propaganda bots during voting on Brexit, and the recent American and French presidential elections. Twitter is particularly distorted by its millions of robot accounts; during the French election, it was principally Twitter robots that were trying to make #MacronLeaks into a scandal. Facebook has admitted it was essentially hacked during the American election in November. In Michigan, Mr. Howard notes, “junk news was shared just as widely as professional news in the days leading up to the election.”

Robots are also being used to attack the democratic features of the administrative state. This spring, America’s Federal Communications Commission put its proposed revocation of net neutrality up for public comment. In previous years such proceedings attracted millions of (human) commentators. This time, someone with an agenda but no actual public support unleashed robots that impersonated (via stolen identities) hundreds of thousands of people, flooding the system with fake comments against federal net neutrality rules.

To be sure, today’s impersonation bots are different from the robots imagined in science fiction: They aren’t sentient, don’t carry weapons and don’t have physical bodies. Instead, fake humans just have whatever is necessary to make them seem human enough to “pass”: a name, perhaps a virtual appearance, a credit-card number and, if necessary, a profession, birthday and home address. They are brought to life by programs or scripts that give one person the power to imitate thousands.

The problem is almost certain to get worse, spreading to even more areas of life as bots are trained to become better at mimicking humans.

Given the degree to which product reviews have been swamped by robots (which tend to hand out five stars with abandon), commercial sabotage in the form of negative bot reviews is not hard to predict. In coming years, campaign finance limits will be (and maybe already are) evaded by robot armies posing as “small” donors. And actual voting is another obvious target — perhaps the ultimate target.

So far, we’ve been content to leave the problem to the tech industry, where the focus has been on building defenses, usually in the form of Captchas (“completely automated public Turing test to tell computers and humans apart”), those annoying “type this” tests to prove you are not a robot. But leaving it all to industry is not a long-term solution. For one thing, the defenses don’t actually deter impersonation bots, but perversely reward whoever can beat them. And perhaps the greatest problem for a democracy is that companies like Facebook and Twitter lack a serious financial incentive to do anything about matters of public concern, like the millions of fake users who are corrupting the democratic process. Twitter estimates there are at least 27 million likely fake accounts; researchers suggest the real number is closer to 48 million, yet the company does little about the problem.

The problem is a public as well as a private one, and impersonat­ion robots should be considered what the law calls “hostis humani generis”: enemies of mankind, like pirates and other outlaws. That would allow for a better offensive strategy: bringing the power of the state to bear on the people deploying the robot armies to attack commerce or democracy.

The ideal anti-robot campaign would employ a mixed technological and legal approach. Improved robot detection might help us find the robot masters or potentially help national security unleash counterattacks, which can be necessary when attacks come from overseas. There may be room for deputizing private parties to hunt down bad robots. A simple legal remedy would be a “Blade Runner” law that makes it illegal to deploy any program that hides its real identity to pose as a human. Automated processes should be required to state, “I am a robot.” When dealing with a fake human, it would be nice to know.

Using robots to fake support, steal tickets or crash democracy really is the kind of evil that science fiction writers were warning about. The use of robots takes advantage of the fact that political campaigns, elections and even open markets make humanistic assumptions, trusting that there is wisdom or at least legitimacy in crowds and value in public debate.

But when support and opinion can be manufactured, bad or unpopular arguments can win not by logic but by a novel, dangerous form of force — the ultimate threat to every democracy.
