The Guardian (USA)

Artificial intelligence holds huge promise – and peril. Let’s choose the right path

- Michael Osborne

The last few months have been by far the most exciting of my 17 years working on artificial intelligence. Among many other advances, OpenAI’s ChatGPT – a type of AI known as a large language model – smashed records in January to become the fastest-growing consumer application of all time, achieving 100 million users in two months.

No one knows for certain what’s going to happen next with AI. There’s too much going on, on too many fronts, behind too many closed doors. However, we do know that AI is now in the hands of the world, and, as a consequence, the world seems likely to be transformed.

Such transformational potential is due to the fact that AI is a general-purpose technology, both adaptive and autonomous, bottling some of the magic that has led humans to reshape the Earth.

AI is one of the few practical technologies that may allow us to reengineer our economies wholesale to achieve net zero. For instance, collaborators and I have been using AI to help predict the output of intermittent renewable energy sources (like solar, tide and wind), to optimise the placement of electric vehicle chargers for equitable access, and to better manage and control batteries.

Even if AI leads to great economic gains, however, some may lose out. AI is currently being used to automate some of the work of copywriters, software engineers and even fashion models (an occupation that the economist Carl Frey and I estimated in 2013 as having a 98% probability of automatability).

A paper from OpenAI estimated that almost one in five US workers may see half of their tasks become automatable by large language models. Of course, AI is also likely to create jobs, but many workers may still see sustained precarity and wage cuts – for instance, taxi drivers in London experienced wage cuts of about 10% after the introduction of Uber.

AI also offers worrying new tools for propaganda. According to Amnesty International, Meta’s algorithms, by promoting hate speech, substantially contributed to the atrocities perpetrated by the Myanmar military against the Rohingya people in 2017. Can our democracies resist torrents of targeted disinformation?

Currently, AI is inscrutable, untrustworthy and difficult to steer – flaws that have led, and will lead, to harm. AI has already led to wrongful arrests (like that of Michael Williams, falsely implicated by an AI policing program, ShotSpotter), sexist hiring algorithms (as Amazon was forced to concede in 2018), and the ruining of many thousands of lives (the Dutch tax authority falsely accused thousands of people, often from ethnic minorities, of benefits fraud).

Perhaps most concerning, AI might threaten our survival as a species. In a 2022 survey (albeit one with likely selection bias), 48% of AI researchers thought AI has a significant (greater than 10%) chance of making humans extinct. For a start, the rapid, uncertain progress of AI might threaten the balance of global peace. For instance, AI-powered underwater drones that prove capable of locating nuclear submarines might lead a military power to think it could launch a successful nuclear first strike.

If you think that AI could never be smart enough to take over the world, please note that the world was just taken over by a simple coronavirus. That is, sufficiently many people had their interests aligned just enough (eg “I need to go to work with this cough or else I won’t be able to feed my family”) with those of an obviously harmful pathogen that we have let Sars-CoV-2 kill 20 million people and disable many tens of millions more. Likewise, viewed as an invasive species, AI might immiserate or even eliminate humanity by initially working within existing institutions.

For instance, an AI takeover might begin with a multinational using its data and its AI to find loopholes in rules, exploit workers and cheat consumers, gaining political influence until the entire world seems to be under the sway of its bureaucratic, machine-like power.

What can we do about all these risks? Well, we need new, bold governance strategies both to address the risks and to maximise AI’s potential benefits – for example, we want to ensure that it is not only the largest firms that can bear a complex regulatory burden. Current efforts towards AI governance are either too lightweight (like the UK’s regulatory approach) or too slow (like the EU’s AI Act, already two years in the making – twelve times as long as it took ChatGPT to reach 100 million users).

We need mechanisms for international cooperation, to develop shared principles and standards and prevent a “race to the bottom”. We need to recognise that AI encompasses many different technologies and hence demands many different rules. Above all, while we may not know exactly what is going to happen next in AI, we must begin to take appropriate precautionary action now.

Michael Osborne is a professor of machine learning at the University of Oxford, and a co-founder of Mind Foundry

‘OpenAI’s ChatGPT smashed records in January to become the fastest-growing consumer application of all time, achieving 100 million users in two months.’ Photograph: Dmitrii Melnikov/Alamy
