The Zimbabwe Independent

AWS: The dark side of AI (II)

- Arthur Mutambara is the director and a full professor of the Institute for the Future of Knowledge at the University of Johannesburg in South Africa. He is also an independent technology and strategy consultant and a former deputy prime minister of Zimbabwe.

THERE are also potential dangers and risks associated with the technology: the dark side of Artificial Intelligence.

Battlefield singularity is a tipping point that forces rational humans to surrender control to machines for tactical decisions and operational-level war strategies.

When that condition is achieved, an army that does not remove humans from decision loops will lose a competitive advantage to the enemy.

Hence, with the attainment of battlefield singularity, using autonomous weapons systems becomes an existential matter. It is no longer a “nice to have” or some intellectual curiosity.

AWS have to be deployed for survival! With AWS, machines would select individual targets, plan the battlefield strategy and execute entire military campaigns.

Furthermore, autonomous reactions at AI-determined speeds and efficiency could drive faster execution of battle operations, accelerating the pace of military campaigns towards defeat or victory. Humans’ role would be reduced to switching on the AI systems and passively monitoring the battlefield. They will have a reduced capacity to control wars.

Even the decisions to end conflicts might inevitably be ceded to machines. What a brave new world! What are the implications of autonomous battles and wars?

There is a concern that autonomous weapons could increase civilian casualties in conflict situations. Admittedly, these weapons could conceivably reduce civilian casualties by precisely targeting combatants.

However, this is not always the case. In the hands of bad actors or rogue armies that are not concerned about non-combatant casualties, or whose objective is to punish civilians, autonomous weapons could be used to commit widespread atrocities, including genocide.

Swarms of communicating and cooperating autonomous weapons could be deployed to target and eliminate both combatants and civilians.

Autonomous nuclear weapons

The most dangerous type of autonomous weapons systems (AWS) are autonomous nuclear weapons systems (ANWS). These are obtained by integrating AI and autonomy into nuclear weapons, leading to partial or total machine autonomy in the deployment of nuclear warheads.

In the extreme case, the decision to fire or not fire a nuclear weapon is left to the AI system without a human in the decision loop. Now, this is uncharted territory, fraught with unimaginable dangers, including the destruction of all civilisation.

However, it is an inevitable scenario in future military conflicts.

Why?

Well, to avoid this devastatingly risky possibility, binding global collaboration is necessary among all nuclear powers, particularly Russia, China, and the United States.

Given their unbridled competition and rivalry regarding weapon development and technology innovations, particularly AI, there is absolutely no chance of such a binding agreement.

The unrestrained race for AI supremacy among Chinese, Russian and United States researchers does not augur well for cooperation.

This is compounded by the bitter geopolitical contestations among these superpowers, as exemplified by the cases of Ukraine, Taiwan, and Gaza.

Furthermore, there is ruthless distrust and non-cooperation among the nuclear powers on basic technologies, as illustrated by the unintelligent, primitive and incompetent bipartisan decision (352 to 65) by the US House of Representatives on March 13 2024 to force the divestiture of TikTok or ban it in the United States.

Also instructive is the 2019 Huawei ban, which largely bars organisations operating in the United States from doing business with the company.

There is also restricted use of Google, Facebook, Instagram, and X in China and Russia. Clearly, the major nuclear powers are bitter rivals in everything technological!

Given this state of play, why would the Chinese and Russians agree with the United States on how and when to deploy AI in their weapons systems, be they nuclear or non-nuclear?

As it turns out, evidence of this lack of appetite for cooperation is emerging.

In 2022, the United States posited that it would always retain a “human in the loop” for all decisions to use nuclear weapons. In the same year, the United Kingdom adopted a similar posture.

Guess what?

Russia and China have not pronounced themselves on the matter. With the obtaining state of play described above, namely conflict, competition, geopolitical contestation, rivalry and outright disdain, why should the Russians and Chinese play ball?

In fact, the Russians and Chinese have started to develop nuclear-armed autonomous airborne and underwater drones.

Of course, the danger is that such autonomous nuclear-armed drones operating at sea or in the air can malfunction or be involved in accidents, leading to the loss of control of nuclear warheads, with unimaginably devastating consequences.

Future of AWS

Autonomous weapons systems will be a crucial part of warfare in the not-so-distant future. More significantly, autonomous nuclear weapons are on the horizon.

As explained earlier, although well-meaning, attempts to ban them entirely will likely prove futile. Indeed, without effective regulations, rules and restrictions, autonomous weapons will reduce human control over warfare, thus presenting increased danger to civilians and combatants.

Unchecked AWS will threaten and undermine peace and stability. Global cooperation is urgently needed to govern their development, limit their proliferation, and guard against their potential misuse. However, the utility and appeal of the technology must not be underestimated. Autonomous weapons have not yet been fully developed; hence, their potential harm and military value remain open questions.

Therefore, political and military leaders are somewhat circumspect and non-committal about forgoing potentially efficacious weapons because of speculative and unsubstantiated fears.

The military tactical and strategic value of AWS is simply too immense to go unexplored. Beyond autonomous weapons, sophisticated and advanced AI systems have demonstrated efficacy in the development of cyber, chemical, and biological weapons.

Understanding autonomous weapons is critical for addressing their potential dangers while laying the foundation for collaboration on their regulation.

Moreover, this is preparatory work for future, even more consequential AI dangers occasioned by cyber, chemical and biological weapons.

Concluding remarks

Autonomous weapons systems are likely to become more sophisticated and capable due to advancements in AI, robotics, and sensor technologies.

This could lead to systems with greater autonomy, decision-making capabilities, and adaptability on the battlefield.

Society will continue to grapple with the profound legal and ethical challenges surrounding the use of AWS: accountability, discrimination, proportionality, and adherence to international humanitarian law.

Efforts to establish regulations, treaties, or guidelines to govern the development and use of such systems must be redoubled.

The proliferation of autonomous weapons could significantly affect international relations and security dynamics.

As more countries develop and deploy these technologies, there will be dangers of an arms race, conflict escalation, and global security destabilisation.

There is also scope for the development of human-machine collaborative systems, that is, human augmentation in military operations. Humans and autonomous weapons can work together synergistically on the battlefield.

This approach could leverage the strengths of both humans (e.g., judgment, creativity, empathy) and machines (e.g., speed, precision, efficiency) while mitigating some ethical concerns.

Public perception and acceptance of autonomous weapons will be key determinants of their future. Debates, protests, and advocacy efforts regarding these technologies’ ethical implications and risks will occur.

These could influence policy decisions and research priorities. Indeed, the future of autonomous weapons systems will hinge on a complex interplay of advances in AI systems, ethical considerations, international norms, and policy decisions.

Policymakers, researchers, and society must continuously evaluate and assess the potential impacts and implications of AWS.

Welcome to the brave new world of AI. Indeed, there are great opportunities and potential risks, in equal measure.

Of course, the bulk of our efforts must be to develop and deploy AI systems to solve social, economic, and environmental challenges worldwide.

AI must not leave anyone behind. However, it will be remiss of us, an unconscionable dereliction of duty, if we do not seek to understand, anticipate and mitigate the dark side of Artificial Intelligence.

Autonomous weapons systems will be a crucial part of warfare in the not-so-distant future.
