AI-Influenced Weapons Need Better Regulation

The weapons are error-prone and could hit the wrong targets

Flames and smoke rise from a fire following an artillery strike on the 30th day of the invasion of Ukraine by Russian forces in the northeastern city of Kharkiv on March 25, 2022.

With Russia’s invasion of Ukraine as the backdrop, the United Nations recently held a meeting to discuss the use of autonomous weapons systems, commonly referred to as killer robots. These are essentially weapons that are programmed to find a class of targets, then select and attack a specific person or object within that class, with little human control over the decisions that are made.

Russia took center stage in this discussion, in part because of its potential capabilities in this space, but also because its diplomats thwarted the effort to discuss these weapons, saying sanctions made it impossible to properly participate. For a discussion that had already been far too slow, Russia's spoiling tactics slowed it down even further.

I have been tracking the development of autonomous weapons and attending the UN discussions on the issue for over seven years, and Russia’s aggression is becoming an unfortunate test case for how artificial intelligence (AI)–fueled warfare can and likely will proceed.


The technology behind some of these weapons systems is immature and error-prone, and there is little clarity on how the systems function and make decisions. Some of these weapons will invariably hit the wrong targets, and competitive pressures might result in the deployment of more systems that are not ready for the battlefield.

To avoid the loss of innocent lives and the destruction of critical infrastructure in Ukraine and beyond, we need nothing less than the strongest diplomatic effort to prohibit, in some cases, and regulate, in others, the use of these weapons and the technologies behind them, including AI and machine learning. This is critical because when military operations are proceeding poorly, countries might be tempted to use new technologies to gain an advantage. An example of this is Russia's KUB-BLA loitering munition, which has the ability to identify targets using AI.

Data fed into AI-based systems can teach remote weapons what a target looks like, and what to do upon reaching that target. While similar to facial recognition tools, AI technologies for military use have different implications, particularly when they are meant to destroy and kill, and as such, experts have raised concerns about their introduction into dynamic war contexts. And while Russia may have been successful in thwarting real-time discussion of these weapons, it isn’t alone. The U.S., India and Israel are all fighting regulation of these dangerous systems.

AI might be more mature and well-known in its use in cyberwarfare, including to supercharge malware attacks or to better impersonate trusted users in order to gain access to critical infrastructure, such as the electric grid. But major powers are using it to develop physically destructive weapons. Russia has already made important advances in autonomous tanks, machines that can run without human operators who could theoretically override mistakes, while the United States has demonstrated a number of capabilities, including munitions that can destroy a surface vessel using a swarm of drones. AI is employed in the development of swarming technologies and loitering munitions, also called kamikaze drones. Rather than the futuristic robots seen in science-fiction movies, these systems use previously existing military platforms that leverage AI technologies. Simply put, a few lines of code and new sensors can make the difference between a military system functioning autonomously or under human control. Crucially, introducing AI into military decision-making could lead to overreliance on the technology, shaping military decision-making and potentially escalating conflicts.

AI-based warfare might seem like a video game, but last September, according to Secretary of the Air Force Frank Kendall, the U.S. Air Force, for the first time, used AI to help identify a target or targets in "a live operational kill chain." Presumably, this means AI was used to identify and kill human targets.

Little information was provided about the mission, including whether any casualties that occurred were the intended targets. What inputs were used to identify such individuals and could there have been possible errors in identification? AI technologies have been shown to be biased, particularly against women and people in minority communities. False identifications disproportionately impact already marginalized and racialized groups.

If recent social media discussions among the AI community are any indication, the developers, largely from the private sector, who are creating the new technologies that some militaries are already deploying are largely unaware of their impact. Tech journalist Jeremy Kahn argues in Fortune that a dangerous disconnect exists between developers and leading militaries, including those of the U.S. and Russia, which are using AI in decision-making and data analysis. The developers seem to be unaware of the general-purpose nature of some of the tools they are building and how militaries could use them in warfare, including to target civilians.

Undoubtedly, lessons from the current invasion will also shape the technology projects the militaries pursue. At the moment, the United States is at the head of the pack, but a joint statement by Russia and China in early February notes that they aim to “jointly build international relations of a new type,” and specifically points to their aim to shape governance of new technologies, including what I believe will be military uses of AI.

Independently, the U.S. and its allies are developing norms on responsible military uses of AI, but generally are not talking with potential adversaries. In general, states with more technologically advanced militaries have been unwilling to accept any constraints on the development of AI technology. This is where international diplomacy is critical: there must be constraints on these types of weapons, and everyone has to agree to shared standards and transparency in the use of these technologies.

The war in Ukraine should be a wake-up call about the use of technology in warfare and the need to regulate AI technologies to ensure civilian protection. Unchecked and potentially hasty development of military applications of artificial intelligence will continue to undermine international humanitarian law and norms regarding civilian protection. Though the international order is in disarray, the solutions to current and future crises are diplomatic, not military, and the next gathering of the U.N. or another group needs to rapidly address this new era of warfare.