The Fourth Industrial Revolution
What It Means and How to Respond
How to Make Almost Anything
The Digital Fabrication Revolution
As Objects Go Online
The Promise (and Pitfalls) of the Internet of Things
The Rise of Big Data
How It's Changing the Way We Think About the World
The Mobile-Finance Revolution
How Cell Phones Can Spur Development
Biology's Brave New World
The Promise and Perils of the Synbio Revolution
The Robots Are Coming
How Technological Breakthroughs Will Transform Everyday Life
New World Order
Labor, Capital, and Ideas in the Power Law Economy
Will Humans Go the Way of Horses?
Labor in the Second Machine Age
Same as It Ever Was
Why the Techno-optimists Are Wrong
The Future of Cities
The Internet of Everything Will Change How We Live
The Coming Robot Dystopia
All Too Inhuman
The Political Power of Social Media
Technology, the Public Sphere, and Political Change
From Innovation to Revolution
Do Social Media Make Protests Possible?
The Next Safety Net
Social Policy for a Digital Age
The Moral Code
How to Teach Robots Right and Wrong
Focus on Data Use, Not Data Collection
The Power of Market Creation
How Innovation Can Spur Development
The Innovative State
Governments Should Make Markets, Not Just Fix Them
Food and the Transformation of Africa
Getting Smallholders Connected
At the most recent International Joint Conference on Artificial Intelligence, over 1,000 experts and researchers presented an open letter calling for a ban on offensive autonomous weapons. The letter, signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind CEO Demis Hassabis, and Professor Stephen Hawking, among others, warned of a “military artificial intelligence arms race.” Whether or not such campaigns succeed, however, robotic technology will become increasingly widespread in military and economic life.
Over the years, robots have become smarter and more autonomous, but they still lack an essential feature: the capacity for moral reasoning. This limits their ability to make good decisions in complex situations. A robot cannot currently distinguish between combatants and noncombatants, for example, or understand that enemies sometimes disguise themselves as civilians.
To address this shortcoming, the U.S. Office of Naval Research in 2014 awarded a $7.5 million grant to an interdisciplinary research team from Brown, Georgetown, Rensselaer Polytechnic Institute, Tufts, and Yale to build robots endowed with moral competence. The team intends to capture human moral reasoning as a set of algorithms that will allow robots to distinguish between right and wrong and to override rigid instructions when confronted with new situations.
The idea of formalizing ethical guidelines is not new. More than seven decades ago, the science-fiction writer Isaac Asimov described the “three laws of robotics”—a moral compass for artificial intelligence. The laws required robots to protect humans, obey instructions, and preserve themselves, in that order. Their fundamental premise was to minimize conflicts between humans and robots. In Asimov’s stories, however, even these simple moral guidelines often lead to disastrous unintended consequences. Whether by following conflicting instructions or by exploiting loopholes and ambiguities in the laws, Asimov’s robots ultimately tend to harm humans, sometimes lethally.
Today, robotics requires a much more nuanced moral code than Asimov’s “three laws.” Robots will