The Moral Code

How To Teach Robots Right and Wrong

SoftBank's human-like robot, Pepper, performs during a news conference in Chiba, Japan, June 18, 2015. Yuya Shino / Reuters

At the most recent International Joint Conference on Artificial Intelligence, over 1,000 experts and researchers presented an open letter calling for a ban on offensive autonomous weapons. The letter, signed by Tesla's Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind CEO Demis Hassabis, and Professor Stephen Hawking, among others, warned of a "military artificial intelligence arms race." Whether or not these campaigns to ban offensive autonomous weapons succeed, robotic technology will become increasingly widespread in many areas of military and economic life.

Over the years, robots have become smarter and more autonomous, but they still lack an essential feature: the capacity for moral reasoning. This limits their ability to make good decisions in complex situations. A robot cannot currently distinguish between combatants and noncombatants, for example, or understand that enemies sometimes disguise themselves as civilians.

Auto workers feed aluminum panels to robots at Ford's Kansas City Assembly Plant in Claycomo, Missouri, May 2015. Dave Kaup / Courtesy Reuters
To address this failing, in 2014, the U.
