THE DARK SIDE OF AI
AI presents enormous opportunities to improve the quality of life of people across the world. There are vast potential applications in all sectors, particularly education, health care, agriculture, infrastructure, mining, trade facilitation, banking/finance, creative industries, and governance. However, there are also potential dangers and risks — the dark side of AI.
This space is characterised by risky applications of AI by well-meaning actors and, of course, by AI tools in the hands of bad actors with evil intentions. The use of AI in military operations creates fertile ground for both good and bad actors to partake in the dark side of AI. Autonomous weapons systems (AWS) consist of combat equipment or technology that can identify, target, and engage an enemy without human intervention. These systems use AI, sensors, and other technologies to perform tasks that traditionally require human decision-making.
AWS have also been referred to as lethal autonomous weapons systems or killer robots. They range from armed drones and unmanned aerial vehicles (UAVs) to ground-based robots and naval vessels. Such systems are designed to carry out surveillance, reconnaissance, and combat operations without direct human control. The concern with autonomous weapons systems lies in their potential to make life-and-death decisions without meaningful human oversight.
There are ethical, moral, legal, and humanitarian concerns regarding their use, including issues related to accountability, unintended harm to civilians, and the potential for escalating conflicts. Of particular interest is the moral and ethical dilemma of whether AI (a machine) should make the call to kill a human. It is instructive to note that both good actors (national governments and armies) and bad (terrorists, thieves, and fraudsters) have the potential to access AWS. Both groups have the propensity to irresponsibly deploy AWS with devastating effects.
Three nations are leading the development of AWS: China, Russia and the US. While both China and the US are actively developing AWS, each takes a different approach. China has been investing extensively in modernising its military, including developing advanced AI and robotics technologies for combat operations. The People’s Liberation Army has been exploring the integration of AI and autonomy into various weapons systems, including drones, unmanned vehicles and other platforms.
For its part, the US has a long history of investing in military technology and has been a leader in developing and deploying unmanned systems and AI-enabled weapons. The US military has been researching and testing autonomous systems for various purposes.
There is also autonomous cognitive warfare, which entails using autonomous AI systems to take out, disable or disorient opponents in military operations. The primary objective of AWS is to reduce human loss while increasing combat power. Given these new battlefield advantages, there is a danger that political and military leaders will find armed and confrontational options less costly and less prohibitive. Thus, it becomes easier for countries to go to war, as the threshold for deciding to fight is lowered. Once AWS are commonplace, there is also the challenge of how to end wars. How can humans end a war in which they do not control the military operations? What if the AI system makes a mistake and identifies the wrong target? What of other harmful and egregious technology errors? What about autonomous AI-based military cyberattacks?
Indeed, humanity confronts an existential challenge — an unprecedented crossroads — that demands collective and binding global rules and regulations for these weapons. Widely deployed autonomous weapons integrated with other aspects of military digital technologies could result in a new era of AI-driven warfare. There has to be worldwide ownership and buy-in for any meaningful AWS regulatory framework.
In 2023, a fully autonomous weapon was developed in Ukraine. The drone carried out autonomous attacks on a small scale. While this was a baby step technologically, it is a consequential moral, legal and ethical development. The next stage is the production of fully autonomous weapons capable of searching out, selecting and assailing targets without human involvement. Clearly, a wholesale ban on AWS is neither realistic nor practical. You cannot put it back once the genie is out of the bottle. AWS cannot be uninvented.
The war in Ukraine has led to the accelerated adoption of commercial AI innovations such as drones into weapon systems by both Moscow and Kyiv. They have used drones extensively for reconnaissance and attacks on ground forces. Counter-drone measures have been achieved through AI systems that detect and destroy drones’ communications links or identify and eliminate the operators on the ground. This strategy works because most drones are remotely controlled. Without human operators, remotely controlled drones lose their utility. This creates the rationale for autonomous drones, which are not dependent on vulnerable communication links to human operators.
With further advances in AI technologies, all these drones, which are currently remotely controlled, can be upgraded to become autonomous, allowing continued utility in the event of the destruction of communications links or operators.
Consequently, such autonomous drones can be used to target air defences or mobile missile launchers without the involvement of humans. The development of ground autonomous weapons has lagged behind that of air and sea AWS, but future possibilities include autonomous weapons deployed on battlefield robots or gun systems. Military AI applications can accelerate information gathering, data processing and scenario selection. This will shorten decision cycles. Thus, the adoption of AI reduces the time it takes to find, identify and strike enemy targets. Theoretically, this could allow humans more time to make thoughtful, deliberate and precise decisions. However, adversaries will feel pressured to respond in kind, using AI to speed up execution. This will inevitably lead to the escalation of automation away from human control. Hence, autonomous warfare becomes unavoidable.
Swarms of drones could autonomously coordinate their behaviour, reacting to changes on the battlefield at a speed beyond human capabilities, with accuracy and efficacy far superior to that of the most talented military commander. When this happens, we have what is called battlefield singularity. This entails a stage where the AI’s decision-making speed/capacity and effectiveness far surpass those of the most intelligent human — a point wherein the pace of machine-driven warfare outstrips the speed of human decision-making. When this occurs, an unassailable rationale exists for removing humans from the battlefield decision loops. Thus, autonomous, AI-driven warfare becomes a reality. Battlefield singularity can be restated as a condition in the combat zone where humans must be removed from the loop to achieve better speed, efficiency and efficacy.
It is a tipping point that forces rational humans to surrender control to machines for tactical decisions and operational-level war strategies. At that stage, an army that does not remove humans from decision loops will lose. Hence, with the attainment of battlefield singularity, using autonomous weapons systems becomes an existential matter. It is no longer a “nice to have” or some intellectual curiosity. AWS have to be deployed for survival!
With AWS, machines would select individual targets, plan the battlefield strategy and execute entire military campaigns. Humans’ role would be reduced to switching on the AI systems and passively monitoring the battlefield. They would have a reduced capacity to control wars. Even the decision to end a conflict might inevitably be ceded to machines.
What a brave new world.
Indeed, these weapons could conceivably reduce civilian casualties by precisely targeting combatants. However, this is not always the case. In the hands of bad actors or rogue armies that are not concerned about non-combatant casualties — or whose objective is to punish civilians — autonomous weapons could be used to commit widespread atrocities, including genocide. Swarms of communicating and cooperating autonomous weapons could be deployed to target and eliminate both combatants and civilians.
The most dangerous AWS are autonomous nuclear weapons systems. These are obtained by integrating AI and autonomy into nuclear weapons, leading to partial or total machine autonomy in the deployment of nuclear warheads. In the extreme case, the decision to fire or not fire a nuclear weapon is left to the AI system without a human in the decision loop.
Now, this is uncharted territory, fraught with unimaginable dangers, including the destruction of the entirety of civilisation. However, it is an inevitable scenario in future military conflicts. Why? Well, to avoid this devastatingly risky possibility, binding global collaboration would be necessary among all nuclear powers, particularly Russia, China and the US. Given their unbridled competition and rivalry, there is absolutely no chance of such a binding agreement.
The unrestrained race for AI supremacy among Chinese, Russian and US researchers does not augur well for co-operation. This is compounded by the bitter geopolitical contestations among these superpowers, as exemplified by the cases of Ukraine, Taiwan and Gaza. Furthermore, there is deep distrust and non-co-operation among the nuclear powers on basic technologies, as illustrated by the unintelligent, primitive and incompetent bipartisan decision (352 to 65) by the US House of Representatives to outlaw TikTok in the US in March 2024. Also instructive is the 2019 Huawei ban, which effectively bars the company from doing business with US organisations. There is also restricted use of Google, Facebook, Instagram and Twitter in China and Russia. Clearly, the major nuclear powers are bitter rivals in everything technological.
Given this state of play, why would the Chinese and Russians agree with the US on how and when to deploy AI in their weapons systems, be they nuclear or non-nuclear? As it turns out, evidence of this lack of co-operation is emerging. In 2022, the US declared that it would always retain a “human in the loop” for all decisions to use nuclear weapons. In the same year, the UK adopted a similar posture. Guess what? Russia and China have not pronounced themselves on the matter. Under the prevailing state of play, why should they play ball? In fact, the Russians and Chinese have started to develop nuclear-armed autonomous airborne and underwater drones.
Of course, the danger is that such autonomous nuclear-armed drones operating at sea or in the air can malfunction or be involved in accidents, leading to the loss of control of nuclear warheads, with unimaginably devastating consequences.
The utility and appeal of weaponised AI must not be underestimated. Autonomous weapons have not yet been fully developed; hence, their potential harm and military value remain open questions. Therefore, political and military leaders are somewhat circumspect and noncommittal about forgoing potentially efficacious weapons because of speculative and unsubstantiated fears. Understanding autonomous weapons is critical for addressing their potential dangers while laying the foundation for collaboration on their regulation. Moreover, this is preparatory work for future, even more consequential AI dangers occasioned by cyber, chemical and biological weapons. Autonomous weapons systems are likely to become more sophisticated and capable due to advances in AI, robotics and sensor technologies.
This could lead to systems with greater autonomy, decision-making capabilities and adaptability on the battlefield. Society will continue to grapple with the profound legal and ethical challenges surrounding the use of AWS — accountability, discrimination, proportionality and adherence to international humanitarian law. Efforts to establish regulations, treaties, or guidelines to govern the development and use of such systems must be redoubled. There is scope for the development of human-machine collaborative systems — human augmentation in military operations.
Humans and autonomous weapons can work together synergistically on the battlefield. This approach could leverage the strengths of both humans (judgment, creativity, empathy) and machines (speed, precision, efficiency) while mitigating some ethical concerns. Welcome to the brave new world of AI. Indeed, there are great opportunities and potential dangers/risks, in equal measure. Of course, the bulk of our efforts must be to develop and deploy AI systems to solve social, economic and environmental challenges worldwide.
AI must not leave anyone behind. However, it will be remiss of us, an unconscionable dereliction of duty, if we do not seek to understand, anticipate and mitigate the dark side of AI.