
The Case Against Killer Robots

Why the United States Should Ban Them

In the Terminator movies, fully autonomous robots wage war against humanity. Although cyborg assassins won’t be arriving from the future anytime soon, offensive “Terminator-style” autonomous robots that are programmed to kill could soon escape Hollywood science fiction and become reality. This actual rise of the machines raises important strategic, moral, and legal questions about whether the international community should empower robots to kill.

This debate goes well beyond drones, which are yesterday’s news. Existing armed unmanned aerial vehicles are precursors to lethal autonomous robotics -- that is, killer robots -- which could choose targets without further human intervention once they are programmed and activated. The Pentagon is already planning for them, envisioning a gradual reduction by 2036 of the degree of human control over such unmanned weapons and systems, until humans are completely out of the loop. But just because the Department of Defense wants it doesn’t mean the United States should allow it. Instead, Washington should take the lead in drafting a new international agreement to ban killer robots and regulate other kinds of autonomous systems. There is no better time to push for such a prohibition than next week, on May 13, when 117 countries will meet in Geneva for the first multilateral talks on killer robots at the United Nations. There, the United States should stand up and tell the world that people must remain in complete control when it comes to war and peace.

SKYNET TAKES OVER

Wars fought by killer robots are no longer hypothetical. The technology is nearly here for all kinds of machines, from unmanned aerial vehicles to nanobots to humanoid Terminator-style robots. According to the U.S. Government Accountability Office, in 2012, 76 countries had some form of drones, and 16 countries possessed armed ones. In other words, existing drone technology is already proliferating, driven mostly by the commercial interests of defense contractors and governments, rather than by strategic calculations of potential risks. And innovation is picking up. Indeed, China, Israel, Russia, the United Kingdom, the United States, and 50 other states have plans to further develop their robotic arsenals, including killer robots. In the race to build such fully autonomous unmanned systems, China is moving faster than anyone; it exhibited 27 different armed drone models in 2012. One of these was an autonomous air-to-air supersonic combat aircraft.

Several countries have already deployed forerunners of killer robots. The Samsung Techwin security surveillance guard robots, which South Korea uses in the demilitarized zone it shares with North Korea, can detect targets through infrared sensors. Although they are currently operated by humans, the robots have an automatic mode that allows them to fire an onboard machine gun at any source of body heat they detect, without the need for human operators. The U.S. firm Northrop Grumman has developed an autonomous drone, the X-47B, which can travel on a preprogrammed flight path while being monitored by a pilot on a ship; it is expected to enter active naval service by 2019. Israel, meanwhile, is developing an armed drone known as the Harop that could select targets on its own with a special sensor, after loitering in the skies for hours.

Militaries insist that such hardware protects human life by taking soldiers and pilots out of harm’s way. But malfunctions caused by faulty software or cyberattacks could create dangers of an entirely new kind. Different countries will field dissimilar computer programs that may behave erratically when they interact with one another. Further, signal jamming and hacking become all the more attractive -- and more dangerous -- as armies increasingly rely on drones and other robotic weaponry. Advocates of killer robots counter that removing the human operator could actually solve some of these problems, since, ideally, killer robots could operate without relying on communication networks or cyberspace at all. But that would be of little help if a killer robot were successfully hacked and turned against its home country.

The use of robots also raises an important moral question. As Noel Sharkey, a British robotics expert, has asked: “Are we losing our humanity by automating death?” Killer robots would make war easier to declare and to pursue, given the distance between combatants and, in some cases, their removal from the battlefield altogether. Automated warfare would lower the long-established thresholds for resorting to violence and the use of force that the UN has carefully built over decades. Those norms have been paramount in ensuring global security, but they would be easier to break with killer robots, which would allow countries to declare war without having to worry about causing casualties on their own side.

I, ROBOT

There are also other hard realities to consider. Although the United States might use killer robots to wage war without putting its soldiers in harm’s way, other nations might use them to terrorize their own citizens or those of neighboring countries. Put simply, such weapons could increase the risks to civilians. In the last few decades, international law has been able to rein in abuses that come with new military technologies, from chemical and biological weapons to landmines, blinding laser weapons, and cluster bombs.

Four branches of international law have historically been used to constrain violence in war: the law of state responsibility, the law on the use of force, international humanitarian law, and human rights law. As they are currently conducted, U.S. drone strikes violate all of them. Killer robots would likely only continue the trend. International humanitarian law mandates that the use of violence must be proportional and avoid indiscriminate damage and killings. But killer robots will be unable to satisfactorily evaluate proportionality and precision: according to scientists at the International Committee for Robot Arms Control, a nongovernmental organization, the hard decisions of proportionality have to be weighed in dynamic environments that require highly qualitative and subjective knowledge -- just the things that robots could lack. According to the International Court of Justice, even if a means of war does not violate international law, it may still breach the dictates of public conscience through what is known as the Martens Clause, a provision in the preamble to the Hague Convention that its drafters inserted to cover new and unexpected contingencies. The clause recommends that states evaluate the moral and ethical repercussions of any new technologies. Organizations such as Human Rights Watch and Amnesty International have invoked the Martens Clause in advocating for a preemptive ban on killer robots.

How would a robot decide if it is proportional to strike a target if the attack would also kill children in a school next door? Terrorists and insurgents often use human shields, or coerce civilians and noncombatants into situations in which they could appear to be combatants from afar. Would robots be able to detect such subtleties and act -- or not act -- accordingly? Although the human record is hardly perfect, current computer technologies are still very limited. Automatic target recognition can detect a tank only in an uncluttered environment, such as a desert. Vision systems cannot distinguish between a combatant and a child. Thus subtleties are out of the question. Sensory processing systems will improve with time, but it is unlikely that the type of reasoning to determine details or even the legitimacy of targets will be available in the foreseeable future.

For all of their own faults, therefore, humans must be kept in the loop to oversee targets, authorize attacks, and override potential judgment calls as the operation evolves. Battles are too unpredictable to let robots take over. They might be effective killing machines, but therein lies the danger. Of course, we might be able to build robots that are capable of making such judgments in the future. But that possibility is all the more reason to prevent killer robots from being developed at all. A robot that can make complex decisions about whom it wants to kill is just as threatening as one that cannot discern civilians from combatants -- if not more so.

JUDGMENT DAY

The Campaign to Stop Killer Robots, an international coalition of nongovernmental organizations, has gathered supporters more quickly than any other disarmament movement. It took the campaign just six months to get on the UN agenda; other campaigns took at least five years.

To win that support, it has been able to capitalize on two issues. First, the U.S. government has failed to be transparent and inform the American public about its decisions to invest billions of dollars in lethal new technologies. Since the beginning of the armed drone program, in 2001, American taxpayers have invested $11.8 billion in it -- and the Department of Defense has spent $6 billion every year on the research and development of better drones. Second, autonomous killer robots would come with an inherent lack of accountability. If anything were to go wrong with such weapons, their inventors, manufacturers, software programmers, and the officials who released them could all be considered guilty parties. The absence of a clear chain of accountability for such a weapon system is ethically problematic and would inevitably lead to disagreements over culpability and responsibility.

Recognizing the problems with killer robots, the United States should work to rein in research and development programs. Rapid proliferation would put the United States at risk as much as everyone else, even if the country has an initial edge in the development of militarized robot technology. If other countries successfully developed and acquired such weapons, the United States could lose the battlefield advantages that it counts on now. And if non-state actors and terrorist organizations acquired them, the risks could be even greater. Out of self-interest, then, the United States should want other countries to agree to preventively ban such weapons now.

The United States also stands to gain a great deal of moral legitimacy if it leads a ban on killer robots, akin to its role in passing the Biological Weapons Convention in the 1970s (the first multilateral disarmament treaty to ban an entire class of weapons). Similarly, most countries have embraced efforts to prohibit landmines and cluster munitions, achieving the 1997 Mine Ban Treaty and the 2008 Convention on Cluster Munitions, as well as the 2013 Arms Trade Treaty, the first global legal agreement on the transfer of conventional arms. In other words, many of the world’s existing weapons bans were the result of individual states picking a cause to champion and aggressively campaigning on its behalf. In their work, they were supported by scientists and activists with common goals. They were also able to generate credible information to support their arguments.

The movement against autonomous weapons already has a robust advocacy group: the International Committee for Robot Arms Control, along with its affiliated Campaign to Stop Killer Robots. The United States is in the right position to champion its cause. It has all the diplomatic resources needed to advance a ban, given its standing on the UN Security Council and its global alliances. The United States has also helped create some of the cornerstones of existing international law, playing a leading role in founding the United Nations, drafting international humanitarian law, and advocating for human rights. Washington should remember this track record now and work to prohibit machines capable of killing on their own. Killer robots might seem like an unreasonable idea, but they could become an unacceptable reality. There is a small window of opportunity. Now is the time to use it. 
