The Guardian (USA)

The Guardian view on the future of AI: great power, great irresponsibility

- Editorial

Looking over the year that has passed, it is a nice question whether human stupidity or artificial intelligence has done more to shape events. Perhaps it is the convergence of the two that we really need to fear.

Artificial intelligence is a term whose meaning constantly recedes. Computers, it turns out, can do things that only the cleverest humans once could. But at the same time they fail at tasks that even the stupidest humans accomplish without conscious difficulty.

At the moment the term is mostly used to refer to machine learning: the techniques that enable computer networks to discover patterns hidden in gigantic quantities of messy, real-world data. It’s something close to what parts of biological brains can do. Artificial intelligence in this sense is what enables self-driving cars, which have to be able to recognise and act appropriately towards their environment. It is what lies behind the eerie skills of face-recognition programs and what makes it possible for personal assistants such as smart speakers in the home to pick out spoken requests and act on them. And, of course, it is what powers the giant advertising and marketing industries in their relentless attempts to map and exploit our cognitive and emotional vulnerabilities.

Changing the game

The Chinese government’s use of machine learning for political repression has gone much further than surveillance cameras. A recent report from a government thinktank praised the software’s power to “predict the development trajectory for internet incidents … pre-emptively intervene in and guide public sentiment to avoid mass online public opinion outbreaks, and improve social governance capabilities”.

Last year saw some astonishing breakthroughs, whose consequences will become clearer and more important. The first was conceptual: Google’s DeepMind subsidiary, which had already shattered expectations of what a computer could achieve at Go, built a machine, AlphaZero, that can teach itself the rules of games of that sort and then, after two or three days of concentrated learning, beat every human and every other computer player there has ever been.

AlphaZero cannot, however, master the rules of just any game. It works only for games with “perfect information”, where all the relevant facts are known to all the players. There is nothing in principle hidden on a chessboard – the blunders are all there, waiting to be made, as one grandmaster observed – but it takes a remarkable and, as it turns out, inhuman intelligence to see what’s contained in that simple pattern.

Computers that can teach themselves from scratch, as AlphaZero does, are a significant milestone in the progress of intelligent life on this planet. And there is a rather unnerving sense in which this kind of artificial intelligence seems already alive.

Compared with conventional computer programs, it acts for reasons incomprehensible to the outside world. It can be trained, as a parrot can, by rewarding the desired behaviour; in fact, this describes the whole of its learning process. But it can’t be consciously designed in all its details, in the way that a passenger jet can be. If an airliner crashes, it is in theory possible to reconstruct all the little steps that led to the catastrophe and to understand why each one happened, and how each led to the next. Conventional computer programs can be debugged that way. This is true even when they interact in baroquely complicated ways. But neural networks, the kind of software used in almost everything we call AI, can’t even in principle be debugged that way. We know they work, and can, by training, encourage them to work better. But in their natural state it is quite impossible to reconstruct the process by which they reach their (largely correct) conclusions.

Friend or foe?

It is possible to make them represent their reasoning in ways that humans can understand. In fact, in the EU and Britain it may be illegal not to do so in certain circumstances: the General Data Protection Regulation (GDPR) gives people the right to know on what grounds computer programs make decisions that affect their future, although this has not been tested in practice. This kind of safety check is not just a precaution against the propagation of bias and wrongful discrimination: it’s also needed to make the partnership between humans and their newest tools productive.

One of the least controversial uses of machine learning is in the interpretation of medical data: for some kinds of cancers and other disorders computers are already better than humans at spotting the dangerous patterns in a scan. But it is possible to train them further, so that they also output a checklist of factors which, taken together, lead to their conclusions, and humans can learn from these. It’s unlikely that these are really the features that the program bases its decisions on: there is also a growing field of knowledge about how to fool image classification with tiny changes invisible to humans, so that a simple schematic picture of a fish can be specked with dots, at which point it is classified as a cat.

More worryingly, the apparently random defacement of a stop sign can cause a computer vision system to suppose that it is a speed limit. Sound files can also be deliberately altered so that speech recognition systems will misinterpret them. With the growing use of voice assistants, this offers obvious targets to criminals. And, while machine learning makes fingerprint recognition possible, it also enables the construction of artificial fingerprints that act as skeleton keys to unlock devices.

Power struggle

The second great development of the last year makes bad outcomes much more likely. This is the much wider availability of powerful software and hardware. Although vast quantities of data and computing power are needed to train most neural nets, once trained a net can run on very cheap and simple hardware. This is often called the democratisation of technology but it is really the anarchisation of it. Democracies have means of enforcing decisions; anarchies have no means even of making them. The spread of these powers to authoritarian governments on the one hand and criminal networks on the other poses a double challenge to liberal democracies. Technology grants us new and almost unimaginable powers but at the same time it takes away some powers, and perhaps some understanding too, that we thought we would always possess.

‘Last year saw some astonishing breakthroughs, whose consequences will become clearer and more important.’ Photograph: Getty/Science Photo Library
