Sunday Times

THE DARK SIDE OF AI

Mutambara is the director and full professor of the Institute for the Future of Knowledge at the University of Johannesburg

AI presents enormous opportunities to improve the quality of life of people across the world. There are vast potential applications in all sectors, particularly education, health care, agriculture, infrastructure, mining, trade facilitation, banking/finance, creative industries, and governance. However, there are also potential dangers and risks — the dark side of AI.

Characterising this space are risky applications of AI by folks who mean well, and of course, AI tools in the hands of bad actors with evil intentions. The use of AI in military operations creates fertile ground for both good and bad actors to partake in the dark side of AI. Autonomous weapons systems (AWS) consist of combat equipment or technology that can identify, target, and engage an enemy without human intervention. These systems use AI, sensors, and other technologies to perform tasks that traditionally require human decision-making.

AWS have also been referred to as lethal autonomous weapons systems or killer robots. They range from armed drones and unmanned aerial vehicles (UAVs) to ground-based robots and naval vessels. Such systems are designed to carry out surveillance, reconnaissance, and combat operations without direct human control. The concern with autonomous weapons systems lies in their potential to make life-and-death decisions without meaningful human oversight.

There are ethical, moral, legal, and humanitarian concerns regarding their use, including issues related to accountability, unintended harm to civilians, and the potential for escalating conflicts. Of particular interest is the moral and ethical dilemma of whether AI (a machine) should make the call to kill a human. It is instructive to note that both good actors (national governments and armies) and bad actors (terrorists, thieves, and fraudsters) have the potential to access AWS. Both groups have the propensity to deploy AWS irresponsibly, with devastating effects.

Three nations are leading the development of AWS: China, Russia and the US. China and the US take different approaches, though both are actively developing AWS. China has been investing extensively in modernising its military, including developing advanced AI and robotics technologies for combat operations. The People’s Liberation Army has been exploring the integration of AI and autonomy into various weapons systems, including drones, unmanned vehicles and other platforms.

Similarly, the US has a long history of investing in military technology and has been a leader in developing and deploying unmanned systems and AI-enabled weapons. The US military has been researching and testing autonomous systems for various purposes.

There is also autonomous cognitive warfare, which entails using autonomous AI systems to take out, disable or disorient opponents in military operations. The primary objective of AWS is to reduce human loss while increasing combat power. Given these new battlefield advantages, there is a danger that political and military leaders will find armed and confrontational options less costly and less prohibitive. Thus, it becomes easier for countries to go to war, as the decision to fight carries less weight. Once AWS are commonplace, there is also the challenge of how to end wars: how can humans end a war in which they do not control the military operations? What if the AI system makes a mistake and identifies a wrong target? What of other harmful and egregious technology errors? What about autonomous AI-based military cyberattacks?

Indeed, humanity confronts an existential challenge — an unprecedented crossroads — that demands collective and binding global rules and regulations for these weapons. Widely deployed autonomous weapons integrated with other aspects of military digital technologies could result in a new era of AI-driven warfare. There has to be worldwide ownership and buy-in for any meaningful AWS regulatory framework.

In 2023, a fully autonomous weapon was developed in Ukraine. The drone carried out autonomous attacks on a small scale. While this was a baby step technologically, it is a consequential moral, legal and ethical development. The next stage is the production of fully autonomous weapons capable of searching out, selecting and assailing targets without human involvement. Clearly, a wholesale ban on AWS is neither realistic nor practical. Once the genie is out of the bottle, it cannot be put back. AWS cannot be uninvented.

The war in Ukraine has led to the accelerated adoption of commercial AI innovations such as drones into weapons systems by both Moscow and Kyiv. They have used drones extensively for reconnaissance and attacks on ground forces. Counter-drone measures have been achieved through AI systems that detect and destroy drones’ communications links or identify and eliminate the operators on the ground. This strategy works because most drones are remotely controlled. Without human operators, remotely controlled drones lose their utility. This creates the rationale for autonomous drones, which do not depend on vulnerable communication links to human operators.

With further advances in AI technologies, all these drones, which are currently remotely controlled, could be upgraded to become autonomous, allowing them to remain useful even if communications links are destroyed or operators are eliminated.

Consequently, such autonomous drones can be used to target air defences or mobile missile launchers without the involvement of humans. The development of ground autonomous weapons has lagged behind that of air and sea AWS, but future possibilities include autonomous weapons deployed on battlefield robots or gun systems. Military AI applications can accelerate information gathering, data processing and scenario selection. This will shorten decision cycles. Thus, the adoption of AI reduces the time it takes to find, identify and strike enemy targets. Theoretically, this could allow humans more time to make thoughtful, deliberate and precise decisions. However, adversaries will feel pressured to respond in kind, using AI to speed up execution. This will inevitably lead to the escalation of automation away from human control. Hence, autonomous warfare becomes unavoidable.

Swarms of drones could autonomously coordinate their behaviour, reacting to changes on the battlefield at a speed beyond human capabilities, with accuracy and efficacy far superior to those of the most talented military commander. When this happens, we have what is called battlefield singularity. This is a stage where the AI’s decision-making speed, capacity and effectiveness far surpass those of the most intelligent human — a point wherein the pace of machine-driven warfare outstrips the speed of human decision-making. When this occurs, an unassailable rationale exists for removing humans from battlefield decision loops. Thus, autonomous, AI-driven warfare becomes a reality. Battlefield singularity can be restated as a condition in the combat zone where humans must be removed from the loop to achieve better speed, efficiency and efficacy.

It is a tipping point that forces rational humans to surrender control to machines for tactical decisions and operational-level war strategies. At that stage, an army that does not remove humans from decision loops will lose. Hence, with the attainment of battlefield singularity, using autonomous weapons systems becomes an existential matter. It is no longer a “nice to have” or some intellectual curiosity. AWS have to be deployed for survival!

With AWS, machines would select individual targets, plan the battlefield strategy and execute entire military campaigns. Humans’ role would be reduced to switching on the AI systems and passively monitoring the battlefield. They would have a reduced capacity to control wars. Even the decisions to end conflicts might inevitably be ceded to machines.

What a brave new world.

Indeed, these weapons could conceivably reduce civilian casualties by precisely targeting combatants. However, this is not always the case. In the hands of bad actors or rogue armies that are not concerned about non-combatant casualties — or whose objective is to punish civilians — autonomous weapons could be used to commit widespread atrocities, including genocide. Swarms of communicating and cooperating autonomous weapons could be deployed to target and eliminate both combatants and civilians.

The most dangerous type of AWS is the autonomous nuclear weapons system. These systems are obtained by integrating AI and autonomy into nuclear weapons, leading to partial or total machine autonomy in the deployment of nuclear warheads. In the extreme case, the decision to fire or not to fire a nuclear weapon is left to the AI system without a human in the decision loop.

Now, this is uncharted territory, fraught with unimaginable dangers, including the destruction of the entirety of civilisation. However, it is an unavoidable and inevitable scenario in future military conflicts. Why? Well, to avoid this devastatingly risky possibility, binding global collaboration would be necessary among all nuclear powers, particularly Russia, China and the US. Given their unbridled competition and rivalry, there is absolutely no chance of such a binding agreement.

The unrestrained race for AI supremacy among Chinese, Russian and US researchers does not augur well for co-operation. This is compounded by the bitter geopolitical contestations among these superpowers, as exemplified by the cases of Ukraine, Taiwan and Gaza. Furthermore, there is ruthless distrust and non-co-operation among the nuclear powers on basic technologies, as illustrated by the unintelligent, primitive and incompetent bipartisan decision (352 to 65) by the US House of Representatives to outlaw TikTok in the US in March. Also instructive is the 2019 Huawei ban, which means that the company cannot do business with any organisation operating in the US. There is also restricted use of Google, Facebook, Instagram and Twitter in China and Russia. Clearly, the major nuclear powers are bitter rivals in everything technological.

Given this state of play, why would the Chinese and Russians agree with the US on how and when to deploy AI in their weapons systems, be they nuclear or non-nuclear? As it turns out, evidence of this lack of co-operation is emerging. In 2022, the US declared that it would always retain a “human in the loop” for all decisions to use nuclear weapons. In the same year, the UK adopted a similar posture. Guess what? Russia and China have not pronounced themselves on the matter. With the prevailing state of play described above, why should the Russians and Chinese play ball? In fact, the Russians and Chinese have started to develop nuclear-armed autonomous airborne and underwater drones.

Of course, the danger is that such autonomous nuclear-armed drones operating at sea or in the air can malfunction or be involved in accidents, leading to the loss of control of nuclear warheads, with unimaginably devastating consequences.

The utility and appeal of weaponised AI must not be underestimated. Autonomous weapons have not yet been fully developed; hence, their potential harm and military value remain open questions. Therefore, political and military leaders are somewhat circumspect and noncommittal about forgoing potentially efficacious weapons because of speculative and unsubstantiated fears. Understanding autonomous weapons is critical for addressing their potential dangers while laying the foundation for collaboration on their regulation. Moreover, this is preparatory work for future, even more consequential AI dangers occasioned by cyber, chemical and biological weapons. Autonomous weapons systems are likely to become more sophisticated and capable due to advances in AI, robotics and sensor technologies.

This could lead to systems with greater autonomy, decision-making capabilities and adaptability on the battlefield. Society will continue to grapple with the profound legal and ethical challenges surrounding the use of AWS — accountability, discrimination, proportionality and adherence to international humanitarian law. Efforts to establish regulations, treaties or guidelines to govern the development and use of such systems must be redoubled. There is scope for the development of human-machine collaborative systems — human augmentation in military operations.

Humans and autonomous weapons can work together synergistically on the battlefield. This approach could leverage the strengths of both humans (judgment, creativity, empathy) and machines (speed, precision, efficiency) while mitigating some ethical concerns. Welcome to the brave new world of AI. Indeed, there are great opportunities and potential dangers/risks in equal measure. Of course, the bulk of our efforts must be to develop and deploy AI systems to solve social, economic and environmental challenges worldwide.

AI must not leave anyone behind. However, it will be remiss of us, an unconscionable dereliction of duty, if we do not seek to understand, anticipate and mitigate the dark side of AI.


Pictures: 123RF and Maiara Folly. Unmanned aerial vehicles, commonly known as drones, are being used to devastating effect in the conflicts in Ukraine and Gaza, and even tanks may not need to be manned in the near future.
