Of all of the revelations about the NSA that have come to light in recent months, two stand out as the most worrisome and surprising to cybersecurity experts. The first is that the NSA has worked to weaken the international cryptographic standards that define how computers secure communications and data. The second is that the NSA has deliberately introduced backdoors into security-critical software and hardware. If the NSA has indeed engaged in such activities, it has risked the computer security of the United States (and the world) as much as any malicious attacks have to date.
No one is surprised that the NSA breaks codes; the agency is famous for its cryptanalytic prowess. And, in general, the race between designers who try to build strong codes and cryptanalysts who try to break them ultimately benefits security. But surreptitiously implanting deliberate weaknesses or actively encouraging the public to use codes that have secretly been broken -- especially under the aegis of government authority -- is a dirty trick. It diminishes computer security for everyone and harms the United States’ national cyberdefense interests in a number of ways.
Few people realize the extent to which the cryptography that underpins Internet security relies on trust. One of the dirty secrets of the crypto world is that nobody knows how to prove mathematically that core crypto algorithms -- the foundations of online financial transactions and encrypted laptops -- are secure. Instead, we trust that they are secure because they were created by some of the world's most experienced cryptographers and because other specialists tried diligently to break them and failed.
Since the 1970s, the U.S. National Institute of Standards and Technology (NIST) has played a central role in coordinating this trust, and in deciding which algorithms are worthwhile, by setting the cryptographic standards used by governments and industries the world over. NIST has done an admirable job of organizing the efforts of cryptographic experts to design and evaluate ciphers. It has also been able to harness the clout of the U.S. government to get those designs -- including such state-of-the-art technology as the AES cipher, the SHA-2 hash functions, and public-key cryptography based on elliptic curves -- adopted by industry. In turn, American industry believed that it could trust that these technologies had been designed by a competent organization with its interests at heart.
There is now credible evidence that the NSA has pushed NIST, in at least one case, to canonize an inferior algorithm designed with a backdoor for NSA use. Dozens of companies implemented the standardized algorithm in their software, which means that the NSA could potentially get around security software on millions of computers worldwide. Many in the crypto community now fear that other NIST algorithms may have been subverted as well. Since no one knows which ones, though, some renowned cryptographers are questioning the trustworthiness of all NIST standards.
If the loss of trust in NIST is permanent, the world could return to a time of fragmented national and industry standards. That would cause more work for implementers and reduce security for everyone. Computer lore is filled with examples of poor design and flawed industry standards: for example, the encryption that is supposed to prevent copying of DVDs, the A5/1 cipher that is supposed to secure calls on GSM telephones, and the WEP encryption that is supposed to keep WiFi networks private. All of these standards contain flaws that allow the encryption to be broken, exposing the industries that developed them to embarrassment and financial loss and exposing end users to security vulnerabilities. Even worse than fragmented industry standards would be if some implementers, fearful of backdoors in government-vetted crypto, turned to snake-oil ciphers that offer no real protection. That situation is, of course, entirely avoidable. The NSA knows which standards it has weakened and which ones are unadulterated. It could restore trust by coming clean.
Beyond undermining NIST standards, NSA activities have also shaken confidence in emerging cryptographic technologies that are among our best hopes for improving cybersecurity in the near term. A central example is elliptic-curve cryptography, a next-generation technology that, as far as anyone in the unclassified world knows, is superior to what we use today.
For the past few years, NIST and the NSA have heavily promoted elliptic-curve cryptography, promising better performance and improved security. In light of recent revelations, though, many cryptographers are skeptical of the NSA's motives. Is the NSA backing elliptic-curve cryptography to improve security or because it has some secret way to compromise it? After reviewing the documents leaked by former NSA contractor Edward Snowden, the prominent security expert Bruce Schneier pointedly advised against adoption of elliptic-curve cryptography.
This was surprising. The common alternatives to elliptic curves are three decades old, and the methods of attacking them have improved to the point that today, the margin of safety for these algorithms is slim. In contrast, there are no publicly known major weaknesses in elliptic-curve cryptography. Still, relatively few experts outside the NSA understand it, and adopting the standardized algorithms involves placing trust both in the core technology and in opaque parameters published by NIST.
If practitioners are unable to trust new technologies like this because of the possibility of NSA tampering, the technologies are next to useless. Two decades of progress will be tossed aside.
The NSA's activities have also undermined trust in another otherwise promising new security technology: the hardware random number generator integrated into the latest generation of Intel processors. Random numbers are essential for cryptography -- for example, in selecting unpredictable secret keys for encryption. As our own research shows, poor randomness is a widespread security problem. Many computing devices simply do not have any good source of randomness with which to generate secure keys, and this leads to predictable encryption that is easy to break.
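This failure mode can be made concrete with a small sketch. The scenario below is hypothetical (the seed range and key size are illustrative, not drawn from any real device): a device that derives a "secret" key from a low-entropy source, such as a boot-time clock, leaves the key recoverable by simple enumeration.

```python
import random

# Hypothetical illustration: a device seeds its key generator from a
# low-entropy source (here, a value with only ~2^16 possibilities,
# standing in for a coarse boot-time clock).

def weak_key(seed):
    rng = random.Random(seed)      # fully deterministic given the seed
    return rng.getrandbits(128)    # 128-bit "secret" key

# The device generates what it believes is an unpredictable key.
device_key = weak_key(12345)

# An attacker who knows the seed range simply tries every seed
# and recovers the key in at most 2^16 attempts.
recovered = next(k for s in range(2**16)
                 if (k := weak_key(s)) == device_key)
assert recovered == device_key
```

The key is nominally 128 bits, but its effective strength is only 16 bits, because the attacker needs to search seeds, not keys.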
Intel's new random number generator could put an end to all that, since it introduces circuitry for producing random numbers into widely used processors. However, since random number generators are used to generate secret keys, they are an ideal place for an NSA backdoor. It is not possible to look inside the CPU and verify that it's actually picking numbers randomly, as opposed to following some predictable but random-looking method known only to the NSA. There is no evidence that the NSA actually did tamper with the Intel random number generator. However, it is not beyond the bounds of imagination: leaked documents reveal that one of the NSA's priorities was to “insert vulnerabilities into commercial encryption systems" just like this one. Many developers will simply not take the risk of using it unless the NSA comes clean about which systems it has tampered with.
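The core difficulty is that predictable output can be statistically indistinguishable from true randomness. A minimal sketch of that concern, using a hash function in place of hardware circuitry (the `BACKDOOR_KEY` constant is entirely hypothetical and implies nothing about Intel's actual design):

```python
import hashlib

# Hypothetical: a generator whose output passes any black-box
# statistical test, yet is fully predictable to whoever knows a
# hidden constant baked into the design.
BACKDOOR_KEY = b"known-only-to-the-designer"

def looks_random(counter):
    # SHA-256 output is, as far as anyone can measure from outside,
    # indistinguishable from random bytes -- but it is deterministic
    # given the hidden key and a counter.
    return hashlib.sha256(BACKDOOR_KEY + counter.to_bytes(8, "big")).digest()

# A user sampling the generator sees high-entropy 32-byte values...
outputs = [looks_random(i) for i in range(3)]

# ...while the designer can reproduce every "random" value exactly.
replay = hashlib.sha256(BACKDOOR_KEY + (2).to_bytes(8, "big")).digest()
assert replay == outputs[2]
```

No amount of testing the output bits can distinguish this from a genuine hardware noise source; only inspecting the design itself could.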
The scariest consequence of the NSA's activities is that they lead to a direct loss of security for everyone who uses technology that contains an NSA backdoor. There is no way to design technology that allows only bad users to be exploited; backdoors make everyone vulnerable.
The NSA has many technical options for building a backdoor. One way is to deliberately decrease the amount of effort required to mount an attack, by reducing the entropy or keyspace available to a cryptographic system, adding subtle side-channel attacks, or introducing deliberate software vulnerabilities. An adversary could detect any of those methods -- through skillful reverse engineering or simply by getting lucky -- and exploit them. The NSA could also have designed backdoors that can only be exploited using some secret information: the NIST standard that the NSA is accused of having backdoored is a pseudorandom number generator whose state (and thus all of the cryptographic secrets it might ever generate) is predictable if you know a secret master key. Theoretically, such a method is safe as long as the secret key remains secret. But the very fact of the Snowden leaks shows that even the NSA can't always ensure that.
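The secret-master-key construction described above can be sketched in miniature. The toy below uses modular exponentiation in place of elliptic-curve points (the real standard in question, Dual_EC_DRBG, works over an elliptic curve and truncates its output), and all parameters are illustrative. The essential structure survives: two public constants whose hidden algebraic relationship lets its holder recover the generator's internal state from a single output.

```python
# Toy multiplicative-group analogue of a trapdoor PRNG.
# All parameters are hypothetical and chosen for illustration only.
p = 2**61 - 1            # a Mersenne prime (far too small for real use)
Q = 3                    # first public constant
e = 123456789            # the designer's secret "master key"
P = pow(Q, e, p)         # second public constant; relationship to Q hidden

def step(state):
    output = pow(Q, state, p)      # value the PRNG emits to applications
    next_state = pow(P, state, p)  # internal state update
    return output, next_state

# A victim runs the generator for two steps...
out1, s2 = step(987654321)
out2, _ = step(s2)

# ...and anyone holding e recomputes the next internal state from the
# first output alone: out1^e = Q^(s*e) = P^s = next state.
predicted_s2 = pow(out1, e, p)
assert predicted_s2 == s2

# From there, every future "random" output is predictable.
predicted_out2, _ = step(predicted_s2)
assert predicted_out2 == out2
```

To everyone without `e`, predicting the state requires solving a discrete logarithm; to the designer, it is a single exponentiation. That asymmetry is exactly why such a backdoor is "safe" only for as long as the master key stays secret.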
For the researchers who work to defend cryptographic systems against attack, the possibility that the NSA deliberately planted backdoors in NIST cryptographic standards is deeply unsettling. The agency’s meddling adds a major new cybersecurity threat to a list that was already too long. Many of the nation's most skilled security researchers will waste untold hours ferreting out the backdoors. Some of these weaknesses will be found and others won't, with the net result that we will regain only some of the security that the NSA has cost us.
There are far too many threats to cybersecurity to be worrying about problems created by our own government. The one piece of good news is that, unlike most cybersecurity problems, this one can be legislated away. Congress could force the NSA to come clean about which technologies it has weakened or falsely promoted. Several members of Congress have proposed legislation reining in the NSA's mass surveillance tactics. The NSA generally operates within the bounds of the law, and strong legislation requiring the NSA to cease these underhanded practices and disclose the technical details of existing backdoors would go a long way toward rebuilding trust and repairing the damage that the NSA has caused to everyone's security.