The Daily Telegraph

Move fast and break things? Not with AI

Artificial intelligence could change many aspects of our lives, but it will also expose us to new dangers

- Stephen Cave

Everybody is talking about AI (Artificial Intelligence) – and from Beijing to Ottawa, governments are planning their strategies for harnessing this transformative technology. In the last Budget, the UK Government joined this bandwagon, declaring its intention to become “a world leader”, and announcing investment of more than £75 million to develop the sector.

This is all very welcome: AI really does promise new levels of health and prosperity. But it won’t fulfil this promise if it is allowed to develop unfettered. The Silicon Valley culture of “move fast and break things” fitted an industry in its infancy, but not one that stands to transform almost every aspect of our lives. If we are going to join a race to develop intelligent machines, we must be prepared to race just as fast to develop the codes and standards that will also preserve what we hold most precious.

We have seen what happens when we permit an exciting new technology to develop unchecked: social media platforms have brought many benefits, but also fake news, filter bubbles and echo chambers; they have sucked up our personal data and become tools of foreign powers. Now companies such as Facebook are trying – too little, too late – to tame this genie.

But the algorithms behind Google and Facebook are just the first taste of the AI revolution. Smart machines will soon be driving our cars, diagnosing us at the doctor’s, managing the flows of goods and money around the world, educating our kids, and much, much more. All of this should bring new efficiencies and insights, improving outcomes and freeing up more of our time for what matters in life.

But it will also expose us to new dangers: new ways in which we can be manipulated by programmes that know what phrases will make us press “buy”, or vote “yes”; or in which the underprivileged can be discriminated against by unaccountable algorithms; or in which we can become dependent on systems we do not understand, losing bit by bit our skills, our jobs, and our dignity.

As this technology becomes more powerful – which it surely will, given the amount of money and talent pouring into the field – its capacity to break things will grow further. Machines are already smarter than us in many narrow domains; now researchers are working towards more human-like general intelligence, or even super-intelligence. Many experts believe we can achieve this within decades. If we do, we will have just one shot to get it right: we might tame a genie, but not a god.

So, as governments invest their billions in AI, it is essential that they also ask what we want from it. Just because we can hand over a task to a smart machine does not mean we should. And where we do, we need to know that its decisions are fair and transparent. And when the machines make mistakes – as they will, potentially in technologies used by millions – we need to know where to look for responsibility and redress. And as AI grows in power, we need to know how to stay in control.

We don’t have all the answers to these questions. Far from it: some involve complex technical challenges that might take years of research; others require the broadest consultation about what kind of society we want. There is much hard work to be done, nationally and internationally. But it is clear that, just as we do not “move fast and break things” with civil engineering, or drug development, or nuclear energy, we shouldn’t with AI.

It is therefore very welcome that the Government has announced a new Centre for Data Ethics and Innovation, with the task of ensuring that AI is developed safely and ethically. With bodies like the Human Fertilisation and Embryology Authority, and the Nuffield Council on Bioethics, the UK has a proud tradition of fostering thoughtful, responsible innovation. From Beijing to Ottawa, the world needs this as much as it needs new programmes and processors.

So yes, we have an opportunity to become world leaders: but not just in apps and algorithms. We can aspire to set the standard for what it means to live well and safely in the age of intelligent machines.

Stephen Cave is the executive director of the Leverhulme Centre for the Future of Intelligence at Cambridge University
