SoftBank's human-like robot Pepper performs during a news conference in Chiba, Japan, June 18, 2015. (Yuya Shino / Reuters)

The Moral Code

How To Teach Robots Right and Wrong

At the most recent International Joint Conference on Artificial Intelligence, over 1,000 experts and researchers presented an open letter calling for a ban on offensive autonomous weapons. The letter, signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind CEO Demis Hassabis, and Professor Stephen Hawking, among others, warned of a “military artificial intelligence arms race.” Whether or not such campaigns succeed, however, robotic technology will become increasingly widespread in military and economic life.

Over the years, robots have become smarter and more autonomous, but they still lack an essential feature: the capacity for moral reasoning. This limits their ability to make good decisions in complex situations. A robot cannot currently distinguish between combatants and noncombatants, for example, or understand that enemies sometimes disguise themselves as civilians.

Auto workers feed aluminum panels to robots at Ford's Kansas City Assembly Plant in Claycomo, Missouri, May 2015.

To address this shortcoming, the U.S. Office of Naval Research in 2014 offered a $7.5 million grant to an interdisciplinary research team from Brown, Georgetown, Rensselaer Polytechnic Institute, Tufts, and Yale to build robots endowed with moral competence. The researchers intend to capture human moral reasoning as a set of algorithms, which would allow robots to distinguish between right and wrong and to override rigid instructions when confronted with new situations.

The idea of formalizing ethical guidelines is not new. More than seven decades ago, the science-fiction writer Isaac Asimov described his “three laws of robotics,” a moral compass for artificial intelligence. The laws required robots to protect humans, obey instructions, and preserve themselves, in that order. Their fundamental premise was to minimize conflict between humans and robots. In Asimov’s stories, however, even these simple moral guidelines often lead to disastrous unintended consequences: whether by receiving conflicting instructions or by exploiting loopholes and ambiguities in the laws, Asimov’s robots end up harming or even killing humans.
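
Asimov’s ordering amounts to a lexicographic preference: the First Law outranks the Second, which outranks the Third. The minimal sketch below is purely illustrative (the class, function, and flag names are invented for this example and come from neither Asimov nor any real robotics system), but it shows both how such an ordering can be encoded and where its brittleness lies.

```python
# Illustrative sketch only: an Asimov-style priority ordering expressed as a
# lexicographic preference over candidate actions. Names are invented for this
# example; they do not describe any real system or the research in this article.
from dataclasses import dataclass


@dataclass
class Action:
    description: str
    harms_human: bool      # would the action injure a human, as far as the robot can tell?
    obeys_order: bool      # does it carry out the human's instruction?
    preserves_self: bool   # does it keep the robot intact?


def asimov_rank(action: Action) -> tuple:
    """Rank an action by the three laws, applied strictly in order."""
    return (
        not action.harms_human,   # First Law dominates everything else
        action.obeys_order,       # Second Law breaks ties
        action.preserves_self,    # Third Law breaks any remaining ties
    )


candidates = [
    Action("Refuse the order and power down", harms_human=False, obeys_order=False, preserves_self=False),
    Action("Carry out the order as given", harms_human=False, obeys_order=True, preserves_self=True),
]

best = max(candidates, key=asimov_rank)
print(best.description)  # "Carry out the order as given"
```

The fragility sits in the flags themselves: the ordering can only weigh what the robot’s perception has already labeled. If a disguised combatant is labeled a civilian, harms_human is set wrong, and the rigid hierarchy confidently endorses a harmful action, precisely the kind of loophole Asimov’s plots exploit.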

Today, robotics requires a much more nuanced moral code than Asimov’s “three laws.” Robots will
