Sunday Times (Sri Lanka)

Can AI learn to obey the law?

(Antara Haldar, Associate Professor of Empirical Legal Studies at the University of Cambridge, is a visiting faculty member at Harvard University and the principal investigator on a European Research Council grant on law and cognition.) Copyright: Project Syndicate

CAMBRIDGE – If the British computer scientist Alan Turing’s work on “thinking machines” was the prequel to what we now call artificial intelligence, the late psychologist Daniel Kahneman’s bestselling Thinking, Fast and Slow might be the sequel, given its insights into how we ourselves think. Understanding “us” will be crucial for regulating “them.”

That effort has rapidly moved to the top of policymakers’ agenda. On March 21, the United Nations unanimously adopted a landmark resolution (led by the United States) calling on the international community “to govern this technology rather than let it govern us.” And that came on the heels of the European Union’s AI Act and the Bletchley Declaration on AI safety, which more than 20 countries (most of them advanced economies) signed last November. Moreover, country-level efforts are ongoing, including in the US, where President Joe Biden has issued an executive order on the “safe, secure, and trustworthy development and use” of AI.

These efforts are a response to the AI arms race that started with OpenAI’s public release of ChatGPT in late 2022. The fundamental concern is the increasingly well-known “alignment problem”: the fact that an AI’s objectives and chosen means of pursuing them may not be deferential to, or even compatible with, those of humans. The new AI tools also have the potential to be misused by bad actors (from scam artists to propagandists), to deepen and amplify pre-existing forms of discrimination and bias, to violate privacy, and to displace workers.

The most extreme form of the alignment problem is AI-generated existential risk. Constantly evolving AIs that can teach themselves could go rogue and decide to engineer a financial crisis, sway an election, or even create a bioweapon.

But an unanswered question underlies AI’s status as a potential existential threat: Which human values should the technology align with? Should it be philosophically utilitarian (in the tradition of John Stuart Mill and Jeremy Bentham), or deontological (in the tradition of Immanuel Kant and John Rawls)? Should it be culturally WEIRD (Western, educated, industrialised, rich, democratic) or non-WEIRD? Should it be politically conservative or liberal? Should it be like us, or be better than us?

These questions are not merely hypothetical. They have already been at the centre of real-life debates, including those following Microsoft’s release of a racist, misogynist, hyper-sexual chatbot in 2016; Bing’s oddly manipulative, seductive Sydney (which tried to convince one tech reporter to leave his wife); and, most recently, Google’s Gemini, whose “woke” character led it to generate historically absurd results like images of black Nazi soldiers.

Fortunately, modern societies have devised a mechanism that allows different moral tribes to co-exist: the rule of law. As I have noted in previous commentaries, law, as an institution, represents the apotheosis of cooperation. Its emergence was a profound breakthrough after centuries of humanity struggling to solve its own alignment problem: how to organise collective action.

Cognitively, law represented a radical new technology. Once it was internalised, it aligned individual action with community consensus. Law was obeyed as law, irrespective of an individual’s subjective judgment about any given rule. Several prominent philosophers have homed in on this unique feature. The twentieth-century legal theorist H.L.A. Hart described law as a mechanism that allows norms to be shaped by changing underlying behavioural meta-norms.

More recently, Ronald Dworkin characterised law in terms of “integrity,” because it embodies the norms of the whole community, rather than resembling a “checkerboard.” If law was a patchwork, it might better represent individual constituencies of belief and opinion, but at the expense of coherence. Law thus serves as an override button vis-à-vis individual human behaviour. It absorbs complex debates over morals and values and mills them into binding rules.

Most of the current debate about AI and the law is focused on how the technology may challenge prevailing regulatory paradigms. One concern is the “red queen effect” (an allusion to Lewis Carroll’s Through the Looking-Glass), which describes the inherent difficulty of keeping regulation current with a fast-moving technology. Another issue is the challenge of regulating a truly global technology nationally. And then there is the Frankenstein’s monster problem of a novel technology being developed largely by a handful of private-sector firms whose priorities (profits) differ from those of the public.

It is always difficult to strike the right balance between fostering innovation and mitigating the potentially massive risks associated with a new technology. With AI increasingly expected to alter the practice of law itself, can law still alter the trajectory of AI? More to the point, if “thinking machines” are capable of learning, can they learn to obey the law?

As the tech giants rush ahead in pursuit of artificial general intelligence – models that can outperform humans in any cognitive task – the AI “black box” problem persists. Not even the creators of the technology know exactly how it works. Since efforts to assign AI an “objective function” could produce unintended consequences (for example, an AI tasked with making paper clips could decide that eliminating humanity is necessary to maximise its production), we will need a more sophisticated approach.

To that end, we should study the cognitive evolution that has allowed human societies to endure for as long as they have. Whether human laws can be imposed as a design constraint (perhaps with AI guardians playing the role of circuit-breakers, the equivalent of law-enforcement officers in human societies) is a question for the engineers. But if it can be done, it may represent our salvation.

Through law, we can require that AI pay the price of admission into our society: obedience to our collective code of conduct. If AI neural networks mimic our brains, and the law is, as widely believed, a largely cognitive phenomenon, this should be possible. If not, the experiment will at least shed light on the role of affective, emotional, and social factors in sustaining human law. Though we may need to rethink and improve some elements of existing law, this perspective at least forces us to examine the critical differences between “us” and “them.” That is where our efforts to regulate AI should start.