The nation that leads in the development of artificial intelligence will, Russian President Vladimir Putin proclaimed in 2017, “become the ruler of the world.” That view has become commonplace in global capitals. Already, more than a dozen governments have announced national AI initiatives. In 2017, China set a goal of becoming the global leader in AI by 2030. Earlier this year, the White House released the American AI Initiative, and the U.S. Department of Defense rolled out an AI strategy.
But the emerging narrative of an “AI arms race” reflects a mistaken view of the risks from AI—and introduces significant new risks as a result. For each country, the real danger is not that it will fall behind its competitors in AI but that the perception of a race will prompt everyone to rush to deploy unsafe AI systems. In their desire to win, countries risk endangering themselves just as much as their opponents.
AI promises to bring both enormous benefits, in everything from health care to transportation, and huge risks. But those risks aren’t something out of science fiction; there’s no need to fear a robot uprising. The real threat will come from humans.
Right now, AI systems are powerful but unreliable. Many of them are vulnerable to sophisticated attacks or fail when used outside the environment in which they were trained. Governments want their systems to work properly, but competition brings pressure to cut corners. Even if other countries aren’t on the brink of major AI breakthroughs, the perception that they’re rushing ahead could push others to do the same. And if a government deployed an untested AI weapons system or relied on a faulty AI system to launch cyberattacks, the result could be disaster for everyone involved.
Policymakers should learn from the history of computer networks and make security a leading factor in AI design from the beginning. They should also ratchet down the rhetoric about an AI arms race and look for opportunities to cooperate with competitors on AI safety.