The National - News

Socrates and the moral dilemma of AI in warfare

- OLIVIER OULLIER Professor Olivier Oullier is the president of Emotiv, a neuroscientist and a DJ

In Plato’s Republic, Socrates challenges the idea that justice should boil down to telling the truth and returning things that were taken. He raises an interesting dilemma: what if the goods in question would harm other individuals if returned?

In other words, giving a weapon back to its owner poses a moral dilemma, as it could lead to people being harmed by the weapon’s use.

For Socrates, there was no doubt that protecting people was what should always prevail.

Centuries later, Socrates’s view is still very relevant in the light of recent developments regarding the use of artificial intelligence in warfare. From autonomous weapons and robot soldiers to terabytes of video captured by drones, many countries including China, Russia and the US are leveraging AI to improve their military strategies and operations.

Despite the tremendous budgets at stake and the need for countries to be able to defend themselves, should researchers and engineers specialising in artificial neural networks and machine and deep learning allow the products of their labour to be used to kill people, even if they could also be used to save lives?

With the advent of the Fourth Industrial Revolution and AI, more leaders in the public and private sectors are working with philosophers, psychologists and neuroscientists to better understand how people deal with such dilemmas and make moral judgments and decisions.

Seventeen years ago, Joshua Greene and his colleagues at Princeton University published a seminal article in Science Magazine, in which neurotechnology was employed to better understand how the brains of individuals function when making decisions while facing moral dilemmas. The team of neuroscientists found that depending on how emotionally engaged people are, their judgment about what to do will vary.

For instance, in a well-known example in which someone is forced to choose whether to sacrifice one life to save five, having to press a button to achieve it remotely or having to physically push someone significantly changed the reactions in the brains of participants in the study, despite the outcome being the same.

These findings are currently being used to help autonomous vehicles make decisions. But the neuroscience of ethical decision-making can resonate far beyond the automotive industry.

Thousands of Google employees expressed concern earlier this year when they found out that their company was involved in Project Maven, the nickname for the US Department of Defence’s Algorithmic Warfare Cross-Functional Team, established in 2017.

This project is part of the department’s massive effort to leverage AI to improve the efficacy of US military operations, starting with the analysis of information and footage recorded by drones. According to The Wall Street Journal, in 2017 alone the department spent $7.4 billion on AI-related projects.

Thousands of Google employees signed an open letter to the company’s chief executive Sundar Pichai, stating: “Building this technology to assist the US government in military surveillance – and potentially lethal outcomes – is not acceptable”.

A dozen employees even resigned from their jobs. Soon after, Google announced it would not renew its partnership with the defence department on Project Maven once the current contract comes to an end in March 2019.

But this move will not stop the AI efforts and investments of the department, which in July announced an $885 million, five-year contract with consultancy Booz Allen Hamilton to be able to use large-scale AI systems. Many similar partnerships are to be expected in the near future, all over the world.

In light of the tremendous investments in the public and private sectors and the increasing number of projects globally to leverage AI in warfare, some scientists and business executives have decided to voice their dissent.

More than 2,000 of them signed a pledge launched last month, supported by the Future of Life Institute, demanding that governments introduce strong international norms and laws against lethal autonomous weapons. In the absence of laws, they wrote, “we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons”.

Elon Musk of SpaceX and Neuralink and Demis Hassabis of Google DeepMind, as well as several Nobel laureates, are among the signatories to the pledge. In addition, 26 countries at the United Nations have “explicitly endorsed the call for a ban on lethal autonomous weapons systems”.

In a report entitled Values, Ethics and Innovation: Rethinking Technological Development in the Fourth Industrial Revolution, published earlier this month, the World Economic Forum offers operational solutions to put values and ethics at the heart of technological and societal development.

Very wisely, its authors urge all the stakeholders involved, including governments and citizens, not to lose sight of what technological development should be about: social progress and the wellbeing of humanity. It is a message that Socrates himself might have endorsed.
