Afghan residents look at a robot that is searching for IEDs (improvised explosive devices) during a road clearance patrol by the U.S. Army in Logar province, eastern Afghanistan, November 23, 2011.
Umit Bektas / Reuters

Robots in Isaac Asimov’s science fiction classic I, Robot are capable of independent thought, judgment, and action. His story begins when one overrides a direct verbal instruction and saves the life of an imperiled adult instead of an ostensibly doomed child. Some would find that choice morally reprehensible, and others would accept it because of the probabilistic outcomes; the robot determined that the child was far less likely to survive than the adult and made its own decision.

Autonomous systems are already in trusted control of many automated services in health, transportation, and digital communications. Although today their reactive duties are often narrowly defined—such as rerouting Internet traffic or forward-collision alert systems in automobiles—increasingly, they will be capable of robustly detecting anomalous or potentially threatening behavior with greater speed and accuracy than expert humans. Of course, Asimov’s cautionary tale is rooted in a kind of artificial intelligence and machine-based agency that is still many years away. However, his once phantasmagorical dilemma has become increasingly relevant to real-life national security.

From a moral perspective, the implications of developing and employing intelligent autonomous systems are far more nuanced and complex than Asimov’s fictional dilemma suggests. Similar to other military technologies that could have far-reaching impacts, autonomous systems will need to be developed and employed thoughtfully. However, autonomous systems can also cut through the fog of war, provide more accurate information, and reduce reaction time to help protect both innocent bystanders and sailors, soldiers, marines, and airmen.

Deputy Secretary of Defense Bob Work, in a speech to the CNAS Defense Forum, recently explained why autonomous systems are critical to the future of military operations. “We know that China is investing heavily in robotics and autonomy,” he said, “and the Russian Chief of the General Staff, Gerasimov, recently said that the Russian military is preparing to fight on a robotized battlefield.” Indeed, Gerasimov believes that “it is possible that a fully robotized unit will be created, capable of independently conducting military operations.”

By what criteria, then, and in what framework will policymakers decide when human intervention and moral judgment are required? In some cases, for example the deployment of strategic assets, there is broad agreement that we want the president in iron-fisted control. At the other end of the spectrum, some autonomous systems independently perceive and understand their environments, decide on the best course of action, and execute decisions without direct human oversight or management. Between these antipodes stretches a gradient of grey, shaded by military force, computational speed, human experience, and compliance with policy. How nations deploy autonomous systems, and how the international community will assign accountability for their actions when inevitable lethal mistakes occur, are still developing issues in policy and law.

Consider, for example, how computer-assisted personnel overrode their instrumentation and mistakenly shot down Iran Air Flight 655 in July 1988. The civilian Airbus A300, which departed from a joint military/civilian airfield 27 minutes behind schedule, was erroneously “tagged” as an F-14 by a watch assistant, and presented as a hostile target to the commanding officer of the USS Vincennes, who was simultaneously engaged in surface action with Iranian gunboats. An air tracker on a sister ship, who believed the tag was incorrect, was told to “shut up” during the most intense part of the air-sea engagement, when voices were raised and attention diverted.

According to a thesis by Kristen Ann Dotterway at the Naval Postgraduate School, “The Vincennes’ system data indicated . . . [IA 655] was climbing through an altitude of 12,000 feet.” The identification supervisor somehow believed it was descending at 7,800 feet. At that point, “the commanding officer turned the firing key, initiating the standard missile launch sequence,” based on erroneous information in a moment of combat stress and perceived lethal attack. The Vincennes’ Command and Decision system had not affirmatively identified IA 655 as hostile, and it clearly showed the flight ascending; the commanding officer was told the opposite.

Intelligent autonomous systems are emotionally inert and data intensive. They can highlight tagging inconsistencies, recall historical flight patterns and delays, and sensibly challenge hypotheses and, perhaps, instructions. In comparison to digital systems, people make relatively slow and erroneous decisions. Incomplete or faulty information, coupled with moral ambiguity, can mistakenly tilt the tactical balance between restraint and response. How do we expect autonomous systems, especially those capable of supporting or executing lethal action, to decide on their own? How do we imbue them with judgment consistent with our values?

Basically, the same way we train personnel: by leveraging our knowledge about past events and previous situations in order to predict future outcomes. Brute-force technologies are already capable of pattern-matching and behavior-mapping encounters to previous situations. Advances in machine learning have extended the boundaries of orientation and decision, and can accurately predict near-term outcomes based on nuanced deviations or change. Our data science also enables quantitative assessment of confidence—how certain policymakers are, or can be, about a particular future outcome—based on past experience, even for new scenarios. For example, we can now simulate event horizons about the behavior of groups of hostile people, or the position of a battalion or fleet.
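To make that idea concrete, consider a minimal sketch, in Python, of how a confidence estimate might gate autonomy. Everything in it is illustrative and assumed for the example—the TrackAssessment fields, the thresholds, and the decision labels do not describe any fielded system—but it shows the basic policy choice: when a contact resembles past, well-labeled encounters, the machine may recommend action; when confidence falls below a floor, the decision is routed to a human.

```python
# Hypothetical sketch of confidence-gated autonomy: a model scores a contact,
# and policy decides whether the system may recommend action on its own or
# must defer to a human operator. All names and thresholds are illustrative.

from dataclasses import dataclass


@dataclass
class TrackAssessment:
    track_id: str
    hostile_probability: float  # model's estimate that the contact is hostile
    confidence: float           # how closely the contact matches past, labeled encounters


def decide(assessment: TrackAssessment,
           act_threshold: float = 0.95,
           confidence_floor: float = 0.90) -> str:
    """Return 'engage-recommended', 'defer-to-human', or 'monitor'."""
    if assessment.confidence < confidence_floor:
        # Sparse or novel data: the system should not decide alone.
        return "defer-to-human"
    if assessment.hostile_probability >= act_threshold:
        # Even here, policy may still require human confirmation before action.
        return "engage-recommended"
    return "monitor"


# An ambiguous contact falls below the confidence floor and is routed
# to a human rather than acted upon.
print(decide(TrackAssessment("TN-0001", hostile_probability=0.62, confidence=0.40)))
```

The point of separating the hostile-probability score from the confidence estimate is that they answer different questions: what the model believes, and how well its past experience supports that belief. A tragedy like IA 655—a novel, ambiguous track under combat stress—is exactly the case such a policy would push back to human judgment.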

As observable data becomes even more granular and algorithms improve, autonomous systems will be able to support commanders’ decisions by anticipating adversarial intent, even at the tactical level. However, autonomous decisions put fast machines between human adversaries, and can inadvertently escalate situations, hastening the entrée into war. Consequently, connecting information to policy, and policy to values, is a great technological challenge of our time.

The scale and speed of machine-based capabilities already overwhelm human reaction times, and if Work and Gerasimov are correct, will soon inform (and if neglected, will compromise) national security. The United States cannot afford to dither or deny that reality. Moreover, it is imperative that the ethical, legal, and regulatory framework for the military use of intelligent autonomous systems evolve in parallel with their technological development.

Just as the United States and the global community have come to understand the moral implications of strategic deterrence, so too must we all come to terms with accepted use of autonomous systems. Teaming military forces with autonomous systems will fundamentally alter how peace is kept and wars fought, and because potential adversaries are already heavily investing in their use, it is imperative that the United States keep pace. Military superpowers in the next century will have superior autonomous capabilities, or they will not be superpowers.

  • CARA LAPOINTE is a naval officer and an engineer. The views expressed here are her own and do not reflect those of the U.S. Navy or the federal government.
  • PETER L. LEVIN is a Non-Resident Fellow at the Beeck Center for Social Innovation and Impact at Georgetown University and the CEO of Amida Technology Solutions.