The Guardian (USA)

The Guardian view on the AI conundrum: what it means to be human is elusive

- Editorial

Intelligent machines have been serving and enslaving people in the realm of the imagination for decades. The all-knowing computer – sometimes benign, usually malevolent – was a staple of the science fiction genre long before any such entity was feasible in the real world. That moment may now be approaching faster than societies can draft appropriate rules. In 2023, the capabilities of artificial intelligence (AI) came to the attention of a wide audience well beyond tech circles, thanks largely to ChatGPT (which launched in November 2022) and similar products.

Given how rapidly the field is advancing, that fascination is sure to intensify in 2024, coupled with alarm at some of the more apocalyptic scenarios possible if the technology is not adequately regulated. The nearest historical parallel is humankind’s acquisition of nuclear power. The challenge posed by AI is arguably greater: to get from a theoretical understanding of how to split the atom to the assembly of a reactor or bomb is hard and expensive, whereas malicious applications of code online can be transmitted and replicated with viral efficiency.

The worst-case outcome – human civilisation accidentally programming itself into obsolescence and collapse – is still the stuff of science fiction, but even the low probability of a catastrophe has to be taken seriously. Meanwhile, harms on a more mundane scale are not only feasible, but present. The use of AI in automated systems in the administration of public and private services risks embedding and amplifying racial and gender bias. An “intelligent” system trained on data skewed by centuries in which white men dominated culture and science will produce medical diagnoses or evaluate job applications by criteria that have prejudice built in.

This is the less glamorous end of concern about AI, which perhaps explains why it receives less political attention than lurid fantasies of robot insurrection, but it is also the most urgent task for regulators. While in the medium and long term there is a risk of underestimating what AI can do, in the shorter term the opposite tendency – being needlessly overawed by the technology – impedes prompt action. The systems currently being rolled out in all kinds of spheres, making useful scientific discoveries as well as sinister deepfake political propaganda, rely on techniques that are fiercely complex at the level of code, but not conceptually unfathomable.

Organic nature

Large language model technology works by absorbing and processing vast data sets (much of the material scraped from the internet without permission from the original content producers) and generating solutions to problems at astonishing speed. The end result resembles human intelligence but is, in reality, a brilliantly plausible synthetic product. It has almost nothing in common with the subjective human experience of cognition and consciousness.

Some neuroscientists argue plausibly that the organic nature of a human mind – the way we have evolved to navigate in the universe through biochemical mediation of sensory perception – is so qualitatively different to the modelling of an external world by machines that the two experiences will never converge.

That doesn’t preclude robots outgunning humans in the performance of increasingly sophisticated tasks, which is plainly happening. But it does mean the essence of what it means to be human is not as soluble in the rising tide of AI as some gloomy prognostications imply. This is not just an abstruse philosophical distinction. To manage the social and regulatory implications of increasingly intelligent machines, it is vital to retain a clear sense of human agency: where the balance of power lies and how it might shift.

It is easy to be impressed by the capabilities of an AI program while forgetting that the machine was executing an instruction devised by a human mind. Data-processing speed is the muscle, but the animating force behind the marvels of computational power is the imagination. Answers that ChatGPT gives to tricky questions are impressive because the question itself impresses the human mind with its infinite possibilities. The actual text is usually banal, even relatively stupid compared with what a qualified human might produce. The quality will improve, but we must not lose sight of the fact that the sophistication on display is our human intelligence reflected back at us.

Ethical impulses

That reflection is also our greatest vulnerability. We will anthropomorphise robots in our own minds, projecting emotion and conscious thoughts on to them that do not really exist. This is also how they can be used for deception and manipulation. The better machines get at replicating and surpassing technical human accomplishments, the more important it becomes to study and understand the nature of the creative impulse and the way societies are defined and held together by shared experiences of the imagination.

The further that robotic capability spreads into our everyday lives, the more imperative it becomes to understand and teach future generations about culture, art, philosophy, history – fields that are called the humanities for a reason. While 2024 will not be the year that robots take over the world, it will be a year of growing awareness of the ways that AI has already embedded itself in society, and of demands for political action.

The two most powerful motors currently accelerating the development of the technology are a commercial race for profit and the competition between states for strategic and military advantage. History teaches that those impulses are not easily restrained by ethical considerations, even when there is an explicit declaration of intent to proceed responsibly. In the case of AI, there is a particular danger that public understanding of the science cannot keep pace with the questions with which policymakers grapple. That can lead to apathy and unaccountability, or to moral panic and bad law. This is why it is vital to distinguish between the science fiction of omnipotent robots and the reality of brilliantly sophisticated tools that ultimately take instruction from people.

Most non-experts struggle to get their heads around the inner workings of super-powerful computers, but that is not the qualification needed to understand how to regulate technology. We do not need to wait to find out what robots can do when we already know what it is to be human, and that the power for good and evil resides in the choices we make, not the machines we build.

‘It is easy to be impressed by the capabilities of an AI program while forgetting that the machine was executing an instruction devised by a human mind.’ Photograph: John Walton/PA
