
Autonomous killer robots may have already killed on the battlefield

A brief passage from a recent UN report describes what could be the first-known case of an autonomous weapon, powered by artificial intelligence, killing on the battlefield.

Kargu-2 drone. (Credit: STM)

Key Takeaways
  • Autonomous weapons have been used in war for decades, but artificial intelligence is ushering in a new category of autonomous weapons.
  • These weapons are not only capable of moving autonomously but also identifying and attacking targets on their own without oversight from a human.
  • There are currently no clear international restrictions on the use of these new autonomous weapons, but some nations are calling for preemptive bans.

Nothing transforms warfare more violently than new weapons technology. In prehistoric times, it was the club, the spear, the bow and arrow, the sword. The 16th century brought rifles. The World Wars of the 20th century introduced machine guns, planes, and atomic bombs.

Now we might be seeing the first stages of the next battlefield revolution: autonomous weapons powered by artificial intelligence.

In March, the United Nations Security Council published an extensive report on the Second Libyan Civil War that describes what could be the first-known case of an AI-powered autonomous weapon killing people on the battlefield.

The incident took place in March 2020, when soldiers with the Government of National Accord (GNA) were battling troops supporting the Libyan National Army of Khalifa Haftar (called Haftar Affiliated Forces, or HAF, in the report). One passage describes how GNA troops may have used an autonomous drone to kill retreating HAF soldiers:

“Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2… and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

Still, because the GNA forces were also firing surface-to-air missiles at the HAF troops, it’s currently difficult to know how many, if any, troops were killed by autonomous drones. It’s also unclear whether this incident represents anything new. After all, autonomous weapons have been used in war for decades.

Lethal autonomous weapons

Lethal autonomous weapon systems (LAWS) are weapon systems that can search for and fire upon targets on their own. It’s a broad category whose definition is debatable. For example, you could argue that land mines and naval mines, used in battle for centuries, are LAWS, albeit relatively passive and “dumb” ones. Since the 1970s, navies have used active protection systems that can identify, track, and shoot down enemy projectiles fired toward ships, provided the human controller chooses to pull the trigger.

Then there are drones, an umbrella term that commonly refers to unmanned weapons systems. Introduced in 1991 with unmanned (yet human-controlled) aerial vehicles, drones now represent a broad suite of weapons systems, including unmanned combat aerial vehicles (UCAVs), loitering munitions (commonly called “kamikaze drones”), and unmanned ground vehicles (UGVs), to name a few.

Some unmanned weapons are largely autonomous. The key question to understanding the potential significance of the March 2020 incident is: what exactly was the weapon’s level of autonomy? In other words, who made the ultimate decision to kill: human or robot?

The Kargu-2 system

One of the weapons described in the UN report was the Kargu-2, a type of loitering munition. This kind of unmanned aerial vehicle loiters above potential targets (usually anti-air weapons) and, when it detects radar signals from enemy systems, swoops down and explodes in a kamikaze-style attack.

Kargu-2 is produced by the Turkish defense contractor STM, which says the system can be operated both manually and autonomously using “real-time image processing capabilities and machine learning algorithms” to identify and attack targets on the battlefield.

Video: STM | KARGU – Rotary Wing Attack Drone Loitering Munition System (youtu.be)

In other words, STM says its robot can detect targets and autonomously attack them without a human “pulling the trigger.” If that’s what happened in Libya in March 2020, it’d be the first-known attack of its kind. But the UN report isn’t conclusive.

It states that HAF troops suffered “continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems,” which were “programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

What does that last bit mean? Basically, that a human operator might have programmed the drone to conduct the attack and then sent it a few miles away, where it had no data connectivity to the operator. Without that connection, the robot itself would have had the final call on whether to attack.

To be sure, it’s unclear if anyone died from such an autonomous attack in Libya. In any case, LAWS technology has evolved to the point where such attacks are possible. What’s more, STM is developing swarms of drones that could work together to execute autonomous attacks.

Noah Smith, an economics writer, described what these attacks might look like on his Substack:

“Combined with A.I., tiny cheap little battery-powered drones could be a huge game-changer. Imagine releasing a networked swarm of autonomous quadcopters into an urban area held by enemy infantry, each armed with little rocket-propelled fragmentation grenades and equipped with computer vision technology that allowed it to recognize friend from foe.”


But could drones accurately discern friend from foe? After all, computer-vision systems like facial recognition don’t identify objects and people with perfect accuracy; one study found that very slightly tweaking an image can lead an AI to miscategorize it. Can LAWS be trusted to differentiate between a soldier with a rifle slung over his back and, say, a kid wearing a backpack?
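
In the machine-learning literature, this fragility goes by the name “adversarial examples.” The sketch below is a hypothetical illustration of the classic fast gradient sign method (FGSM), not the specific study mentioned above; the pretrained ResNet-18 model and the epsilon value are assumptions chosen only to show how a tiny, nearly invisible perturbation can flip a classifier’s decision.

```python
# Hedged sketch: the fast gradient sign method (FGSM) for building an
# "adversarial example" that fools an image classifier with a tiny perturbation.
# The pretrained ResNet-18 and epsilon are illustrative choices, not details
# from the study the article cites.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Nudge every pixel of `image` in the direction that most increases the loss.

    Assumes `image` is a preprocessed 1x3x224x224 tensor and `label` is a
    shape-(1,) tensor holding the correct ImageNet class index.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One small step along the sign of the gradient is often enough to change the prediction.
    return (image + epsilon * image.grad.sign()).detach()

# Usage sketch (img and label come from any correctly classified ImageNet-style photo):
# adv = fgsm_perturb(img, label)
# print(model(img).argmax().item(), model(adv).argmax().item())  # frequently differ
```

The point of the sketch is not the particular attack but the failure mode it exposes: a change far too small for a human to notice can still push a vision model across a decision boundary, which is exactly the kind of error that matters when the decision is lethal.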

Opposition to LAWS

Unsurprisingly, many humanitarian groups are concerned about introducing a new generation of autonomous weapons to the battlefield. One such group is the Campaign to Stop Killer Robots, whose 2018 survey of roughly 19,000 people across 26 countries found that 61 percent of respondents said they oppose the use of LAWS.

In 2018, the United Nations Convention on Certain Conventional Weapons issued a rather vague set of guidelines aiming to restrict the use of LAWS. One guideline states that “human responsibility must be retained when it comes to decisions on the use of weapons systems.” Meanwhile, at least a couple dozen nations have called for preemptive bans on LAWS.

The U.S. and Russia oppose such bans, while China’s position is a bit ambiguous. It’s impossible to predict how the international community will regulate AI-powered autonomous weapons in the future, but among the world’s superpowers, one assumption seems safe: If these weapons provide a clear tactical advantage, they will be used on the battlefield.

