Second Lieutenant William Liggett works at the Air Force Space Command at Peterson Air Force Base, Colorado Springs, July 20, 2010
Rick Wilking / Courtesy Reuters

These days, most of Washington seems to believe that a major cyberattack on U.S. critical infrastructure is inevitable. In March, James Clapper, U.S. director of national intelligence, ranked cyberattacks as the greatest short-term threat to U.S. national security. General Keith Alexander, the head of the U.S. Cyber Command, recently characterized “cyber exploitation” of U.S. corporate computer systems as the “greatest transfer of wealth in world history.” And in January, a report by the Pentagon’s Defense Science Board argued that cyber risks should be managed with improved defenses and deterrence, including “a nuclear response in the most extreme case.”

Although the risk of a debilitating cyberattack is real, the perception of that risk is far greater than the risk itself. No one has ever died from a cyberattack, and only one alleged cyberattack has ever crippled a piece of critical infrastructure, causing a series of local power outages in Brazil. In fact, a major cyberattack of the kind intelligence officials fear has not taken place in the 21 years since the Internet became accessible to the public.

Thus, while a cyberattack could theoretically disable infrastructure or endanger civilian lives, its effects would be unlikely to reach the scale U.S. officials have warned of. The immediate and direct damage from a major cyberattack on the United States could range anywhere from zero to tens of billions of dollars, but the latter would require a broad outage of electric power or something comparably damaging. Direct casualties would most likely be limited, and indirect casualties would depend on a variety of factors, such as whether the attack disabled emergency 911 dispatch services. Even in that case, there would have to be no alternative means of reaching first responders for such an attack to cause casualties. The indirect effects might be greater if a cyberattack caused a large loss of confidence, particularly in the banking system. Yet scrambled records would probably prove insufficient to incite a run on the banks.

Officials also warn that the United States might not be able to identify the source of a cyberattack as it happens or in its immediate aftermath. Cyberattacks have neither fingerprints nor the smell of gunpowder, and hackers can make an intrusion appear legitimate or as if it came from somewhere else. Iran, for example, may not have known why its centrifuges were breaking down prematurely before its officials read about the covert cyber-sabotage campaign against the country’s nuclear program in The New York Times. Victims of advanced persistent threats -- extended intrusions into organizations’ networks for the purpose of espionage -- are often unaware for months, or even years, that their servers have been penetrated. Such attacks go undetected because copying information does not alter the information in the system, so nothing seems amiss. The exfiltration of information can also be easily hidden, for example in the daily flow of web traffic out of an organization.

But since everything is becoming increasingly dependent on computers, could levels of damage impossible today become inevitable tomorrow? As it happens, all of the trend lines -- good and bad -- in cyberspace are rising simultaneously: the sophistication of attackers, but also that of the defenders; the salience of cyberattacks as weapons, but also the awareness of the threat they pose; the bandwidth available for organizing larger attacks, but also the resources to ward them off. It is bad news that Iran is beginning to see cyberwar as a deniable means of exploiting easy targets. And it is good news that software companies are now rethinking the architectural features of their systems that permit such vulnerabilities to exist in the first place.


Among the world’s potential interstate confrontations, one between the United States and Iran has the greatest potential for a significant cyber component. Indeed, Iran has already started to flex its muscles in cyberspace. In late 2012, cyberattackers linked to Iran penetrated the network of Aramco, Saudi Arabia’s national oil and gas company, effectively trashing 30,000 computers. RasGas, a Qatari corporation, faced similar treatment. This spring, anonymous U.S. officials claimed that Iranian hackers were able to gain access to control-system software that could allow them to manipulate U.S. oil and gas pipelines.

And Iran has plenty of reasons to launch a cyberattack against the United States. For one, Tehran has not forgotten Stuxnet, a U.S. cyberattack on Iran’s uranium enrichment facility at Natanz in 2009. Through a cyberattack on the U.S. homeland, Iran could exact revenge and signal to those who would consider attacking its nuclear program -- whether by airstrike or cyberattack -- that they cannot move against Iran with impunity. Iran might also seek to undermine U.S. preparations for a preventive strike on its nuclear program. In this case, Iran could also hope to distract political leaders in Washington; give U.S. allies second thoughts about supporting U.S. military action; and divert a potential strike against Iranian nuclear sites, reasoning that the United States would respond by counterattacking Iran’s cyberinfrastructure instead.

It is here that the greatest risk of a cyberconflict comes in, one that has less to do with the initial damage than with how the United States would choose to respond. Determining that a cyberattack is an act of war would be more than just a conclusion; it would be a decision that could initiate a war of choice. Even if an attack occurred during a burgeoning U.S.-Iran crisis, the United States might still not be able to attribute the attack directly to the Iranian government; other states, nonstate actors, or rogue elements within Iran may have their own reasons for lighting matches. Retaliation would therefore be risky.

In addition, a retaliatory cyberattack could quickly push both sides up the escalation ladder and even draw in third parties. If the Iranians considered cyberattacks to be a form of terrorism, they could respond by ramping up their sponsorship of conventional terrorist attacks. The United States, too, might decide to abandon virtual war for real war, as current U.S. policy allows for conventional military responses to cyberattacks. That would further poison U.S.-Iranian relations to such an extent that the next crisis in the physical world could be increasingly difficult to manage. Bluntly put, it might not be worth risking all these consequences in order to reduce the odds that Iran could, from time to time, attack U.S. computers.


The United States can best mitigate the risks of cyberwar by adopting technical and political measures to discourage cyberattacks before they happen.

The U.S. government could invest resources to reduce vulnerabilities in commercial software, encourage better management of cyber systems, and develop security tools that can quickly detect and thwart attacks in progress.

Stronger regulations and incentives in the private sector would also be crucial: Nearly all of the critical systems in the United States remain in private hands, and nearly all critical software is developed privately. Sharing intelligence with potential victims would also be useful, but not nearly as much as sharing information on vulnerabilities with those who write and maintain the software that can be exploited in an attack.

Technical capabilities can also create political deterrents. If the United States is able to better identify the sources of cyberattacks, it can give a clearer impression that it might retaliate, making its adversaries more hesitant to attack in the first place. Rhetoric also has a role to play in keeping potential attackers at bay. Washington should make a clear distinction between cyber-espionage (which it has not retaliated against) and cyberattacks, since doing so would leave open the possibility that the latter crosses a red line and could be met with a disproportionately harsh response.

At the same time, the United States should leave room for operational flexibility. Leaders should avoid acting too hastily out of fear that hesitation will lead to disaster -- and, if anything, fear the opposite. Washington need not take possession of a crisis unnecessarily; otherwise, it risks backing itself into a corner where it has no choice but to respond, regardless of whether doing so is wise. In some cases, a well-crafted narrative -- for example, one that emphasizes the role of inadvertence or rogue actors -- might allow the attacker to cease attacks without losing face. Escalation, although necessary in some cases, can carry many unintended consequences.
Computers may work in nanoseconds, but the true target of any response is not cyberweapons -- it is the people who wield them. Even if a computer is destroyed, a substitute may be close at hand. Human beings, unlike computers, do not work in nanoseconds. Persuasion and dissuasion in cyberwar take as much time as in wars of any other form.

MARTIN C. LIBICKI is a Senior Management Scientist at the RAND Corporation and a Visiting Professor at the U.S. Naval Academy.