More and more, machines are waging war. Unmanned craft conduct reconnaissance and serve as mechanical beasts of burden over inhospitable terrain, and other automated systems assist medics and defuse explosives. But when it comes to weaponized systems, governments have so far kept humans as the operators. Now militaries worldwide are taking steps to develop armed autonomous robots with the capacity to use lethal force on their own. According to a U.N. report, Israel, South Korea, the United Kingdom, and the United States have developed weapons systems with various degrees of autonomy. Since 2010, for example, armed sentry robots with autonomous sensors and targeting systems have patrolled South Korea’s side of the demilitarized zone. Although humans are still behind the controls, military doctrine leaves open the possibility of fully autonomous weapons in the future. The U.S. Department of Defense’s directive on autonomy in weapons systems permits under-secretaries of defense, in certain cases, to bend its rule that weapons remain under meaningful human control. And the machines are spreading: India is the latest nation to openly establish a program to develop fully autonomous robotic soldiers.
The prospect of gradually outsourcing kill decisions has made a growing number of robotics experts and ethicists -- and now a transnational network of human security campaigners and governments -- uneasy. Scientists alarmed by the trend formed the International Committee for Robot Arms Control in 2009. In 2012, NGOs took up the cause, and Human Rights Watch released a report on the perils of fully autonomous weapons. This past April, the Campaign to Stop Killer Robots launched on the steps of Parliament in London. Endorsed by the Nobel laureate and renowned anti-land-mine campaigner Jody Williams, it quickly grew into a coalition of more than 30 nongovernmental organizations. The United Nations has since issued a report calling for a moratorium on the development of lethal autonomous robots.
This new movement is unique among disarmament campaigns, since the organizations are lobbying against a class of weapons that has not yet been widely deployed or shown to cause massive humanitarian harm. Most successful weapons bans in the past have been justified on the grounds that the weapons violate the distinction or proportionality principles of just war theory. Distinction requires that weapons be capable of being aimed primarily at combatants rather than civilians; proportionality requires that the suffering weapons inflict on combatants not exceed what is deemed militarily necessary. Land mines, chemical weapons, and cluster munitions were successfully banned because their humanitarian toll showed they could not be used in a discriminate manner. But few weapons have been banned preemptively based on the harm they might cause, precisely because it is difficult to make such a case on empirical grounds. The only two types of weapons that have been banned preemptively -- expanding bullets and blinding lasers -- were outlawed solely because they failed the second test: they were intended to cause “superfluous injury.”
In the case of autonomous weapons systems, neither argument is so clear-cut. Campaigners contend that these weapons lack the situational awareness to comply with humanitarian law, could make armed conflict more frequent given their ease of use, and would undermine the existing laws of war by limiting accountability for mistakes. Proponents of autonomous systems counter that these concerns are hypothetical: until the weapons are deployed, no one can know for certain what harm they would actually cause. Proponents expect that safeguards will prove sufficient and that the weapons might even do some good, since robotic soldiers could protect human troops and might commit fewer war crimes against civilians. Furthermore, proponents see these trends as inevitable, arguing that prohibition treaties simply do not work.
Both camps have more speculation than facts on their side. There is little clear evidence that the growing distance of human combatants from battlefields contributes to an increase in international armed conflict, which has generally declined since the end of World War II. But nor is there data to support the idea that autonomous weapons would necessarily be more humane than humans. Such conjecture often relies on anecdotes of the human propensity for war crimes but neglects studies on soldiers’ overall compliance with the Geneva Conventions, which show that armed groups generally do follow rules, such as not targeting civilians, given appropriate small-unit dynamics and lawful orders from the top.
The bigger problem isn’t that some claims in this debate are open to question on empirical grounds but that so many of them simply cannot be evaluated empirically, since there is no data or precedent with which to weigh discrimination and proportionality against military necessity. Perhaps the use of armed robots could save some troops’ lives. But if using them undermined existing humanitarian law, it might increase the harm not only to troops but to civilians as well, by removing the rules that shield them from the worst of war. Will engineers someday create a machine that can distinguish a child holding an ice-cream cone from a child holding a Kalashnikov, much less a soldier attempting to surrender from one about to fire? If not, might machines at least be able to comply with the laws of war under rare, limited conditions approximating desert or sea battles where civilians are not present? Even if so, it is not clear how useful that would be given current combat realities, in which militaries typically fight in urban environments and engage in police operations. All of these factors -- the humanitarian cost, the military benefits, what wars will look like in the future -- make it hard for stakeholders to adequately weigh the risks and benefits of going down this road.
REIN IN THE ROBOTS
So, instead of resting on the discrimination and proportionality principles, as earlier weapons ban campaigns did, the debate over lethal machines is converging around two very different questions. First, in situations of uncertainty, does the burden of proof rest on governments, to show that emerging technologies meet humanitarian standards, or on global civil society, to show that they don’t? And second, even if autonomous systems could one day be shown to be useful and lawful in utilitarian terms, is a deeper moral principle at stake in outsourcing matters of life and death to machines?
The disarmament camp argues yes to both; techno-optimists argue no. To some extent these are questions of values, but each can also be empirically evaluated by the social realities of international normative precedent. In each case, those cautioning against the untrammeled development of unmanned military technology are on firmer ground.
The precautionary principle is well known in scientific circles and has a long history in international treaty law. It holds that when an action or policy carries a suspected but unproven risk of public harm, the burden of proving that it is not harmful falls on those undertaking the action. Originating in environmental policy, the principle is now statutory law in the European Union and has been applied to disarmament as well. In earlier weapons ban campaigns, the global consensus was to put the burden of proof on governments to demonstrate that weapons comply with humanitarian law. That principle is also enshrined in Article 36 of the First Additional Protocol to the Geneva Conventions, which requires governments to review new weapons for compliance with the rules of war. The United States is not a signatory to the First Additional Protocol. Nevertheless, the strength of this widespread precautionary norm is already shaping the diplomatic debate around autonomous weapons and government decision-making. At a recent United Nations Human Rights Council meeting attended by representatives of dozens of states, most expressed concern over autonomous weapons. Several voiced support for an outright ban; only one, the United Kingdom, argued that a ban was unnecessary.
In addition to successfully framing their arguments around such precedent, campaigners have used a legal argument that bypasses the unresolvable questions about whether these weapons could hypothetically meet discrimination or proportionality standards. Simply put, they have shifted the conversation to whether the whole notion of automated weapons is contrary to principles of humanity. International humanitarian law states that the so-called dictates of the public conscience should be used to gauge what is appropriate in military affairs in cases like this one, where existing legal frameworks provide inadequate guidance and the concerns being raised have not yet materialized.
Anti-autonomous-weapons campaigners are making much of the argument outlined in the Martens Clause, which was inserted into the Hague Convention as a sort of backstop for situations not foreseen by its drafters. Stressing the relevance of public opinion in regulating new weapons, the passage reads: “Until a more complete code of the laws of war is issued, the High Contracting Parties think it right to declare that in cases not included in the Regulations adopted by them, populations and belligerents remain under the protection and empire of the principles of international law, as they result from the usages established between civilized nations, from the laws of humanity and the requirements of the public conscience.” Human Rights Watch invoked the clause explicitly last November in its report Losing Humanity, which called on nations to support a preemptive ban on the development and deployment of fully autonomous weapons.
New survey data from the University of Massachusetts confirm a general level of concern among the U.S. public over outsourcing kill decisions to machines. Fifty-five percent of Americans surveyed opposed autonomous weapons (nearly 40 percent were “strongly opposed”), and a majority (53 percent) expressed support for the new ban campaign. Of those who were not directly opposed, nearly 20 percent were “unsure,” with many expressing reservations in open-ended comments. Those findings track the logic of the precautionary principle in the face of technological uncertainty. Only 10 percent of Americans firmly support the development of autonomous weapons, a finding that was consistent across the political spectrum. The strongest opposition came from the far right and the far left, the highly educated, the well-informed, and members of the military.
Open-ended answers on the survey also conveyed a sense that lethal machine decisions would violate basic moral principles, which respondents expressed as matters of conscience. The most frequent explanation for opposing autonomous weapons was a sense that machines cannot equal human beings in situational judgment. Many stressed the importance of accountability and human empathy, as well as the possibility of machine error. A significant number of responses described the idea as “terrifying,” “frightening,” or “repulsive.” The Campaign to Stop Killer Robots, then, may be tapping into a visceral public fear of outsourcing killing decisions to machines. That alone may be enough to strengthen its legal case and to sustain the pressure on stakeholders necessary to achieve a treaty against the use of autonomous weapons.
Supporters of the development of autonomous weapons criticize the campaign for stoking a climate of fear through charged language -- some have already attributed the campaign’s success to “scare-mongering.” But the University of Massachusetts poll shows that Americans are equally wary of autonomous weapons whether they are called “autonomous weapons” or “killer robots.” And whether or not engineers could hypothetically build a perfectly ethical robot soldier, there is no evidence that the American public believes robots would kill fewer civilians than humans would. In fact, many respondents on both the far right and the far left feared that in the wrong hands such machines could become tools of tyranny.
The argument for autonomous weapons systems that most appeals to the public is the possibility that they could protect soldiers. Protecting the troops, not protecting civilians, was the most frequent reason given by the 10 percent of survey respondents who strongly support the use of autonomous weapons. But the point carries little political weight, since many military personnel themselves do not buy it. In the same survey, military personnel, veterans, and those with family in the military were more strongly opposed to autonomous weapons than the general public, with the highest opposition among active-duty troops. Indeed, members of the armed forces have recently made a number of compelling arguments against autonomous weapons, including the notion that a so-called warrior ethic requires risk-taking on behalf of civilians as well as human moral judgment.
So if a convention to ban the use of killer robots becomes the next big multilateral arms control treaty, it will not be because of cheap scare tactics, as critics claim. Nor will it be because the civilian bodies have already piled up, or because the weapons are necessarily in violation of key humanitarian principles. It will be because global civil society tapped into a serious and pervasive moral conviction: that just because we can do something does not mean that we should.