Data Is Power
Washington Needs to Craft New Rules for the Digital Age
Since the early days of the Cold War, the United States has led the world in technology. Over the course of the so-called American century, the country conquered space, spearheaded the Internet, and brought the world the iPhone. In recent years, however, China has undertaken an impressive effort to claim the mantle of technological leadership, investing hundreds of billions of dollars in robotics, artificial intelligence, microelectronics, green energy, and much more. Washington has tended to view Beijing’s massive technology investments primarily in military terms, but defense capabilities are merely one aspect of great-power competition today—little more than table stakes. Beijing is playing a more sophisticated game, using technological innovation as a way of advancing its goals without having to resort to war. Chinese companies are selling 5G wireless infrastructure around the world, harnessing synthetic biology to bolster food supplies, and racing to build smaller and faster microchips, all in a bid to grow China’s power.
In the face of China’s technological drive, U.S. policymakers have called for greater government action to protect the United States’ lead. Much of the conventional wisdom is sensible: boost R & D spending, ease visa restrictions and develop more domestic talent, and build new partnerships with industry at home and with friends and allies abroad. But the real problem for the United States is much deeper: a flawed understanding of which technologies matter and of how to foster their development. As national security assumes new dimensions and great-power competition moves into different domains, the government’s thinking and policies have not kept pace. Nor is the private sector on its own likely to meet every technological need that bears on the country’s security.
In such an environment, Washington needs to broaden its horizons and support a wider range of technologies. It needs to back not only those technologies that have obvious military applications, such as hypersonic flight, quantum computing, and artificial intelligence, but also those traditionally thought of as civilian in nature, such as microelectronics and biotechnology. Washington also needs to help vital nonmilitary technologies make the transition to commercial success, stepping in with financing where the private sector will not.
In the early decades of the Cold War, the United States spent billions of dollars dramatically expanding its scientific infrastructure. The Atomic Energy Commission, formed in 1946, assumed responsibility for the wartime labs that had pioneered nuclear weapons, such as the Oak Ridge National Laboratory, the headquarters of the Manhattan Project, and went on to fund academic research centers, such as the Lawrence Livermore National Laboratory. The Department of Defense, founded in 1947, was given its own massive research budget, as was the National Science Foundation, established in 1950. After the Soviets launched the Sputnik satellite, in 1957, Washington created the National Aeronautics and Space Administration, or NASA, to win the space race, as well as what would become the Defense Advanced Research Projects Agency, which was tasked with preventing a future technological surprise. By 1964, research and development accounted for 17 percent of all discretionary federal spending.
Partnering closely with academia and companies, the government funded a large variety of basic research—that is, research without a specific end use in mind. The goal was to build a technological foundation, defined primarily as conventional and nuclear defense capabilities, to ensure the country’s security. The research proved astonishingly successful. Government investment spawned cutting-edge capabilities that undergirded the United States’ military superiority, from supersonic jets to nuclear-powered submarines to guided missiles. The private sector, for its part, got to capitalize on the underlying intellectual property, turning capabilities into products and products into companies. GPS-enabled technologies, airbags, lithium batteries, touchscreens, voice recognition—all got their start thanks to government investment.
Yet over time, the government lost its lead in innovation. In 1964, the U.S. government was spending 1.86 percent of GDP on R & D, but by 1994, that share had fallen to 0.83 percent. During that same period, U.S. corporate R & D investment as a percentage of GDP nearly doubled. The numbers tell only half the story. Whereas much of the government’s R & D investment was aimed at finding new, game-changing discoveries, corporate R & D was mostly devoted to incremental innovation. The formula for growing revenue, the private sector realized, was to expand on existing products, adding functionality or making something faster, smaller, or more energy efficient. Companies focused on nearer-term technologies with commercial promise, rather than broad areas of inquiry that might take decades to bear fruit.
Increasingly, the most innovative R & D was taking place not in the labs of large corporations but at nimbler, privately funded startups, where venture capital investors were willing to tolerate more risk. Modern venture capital firms—partnerships that invest in early-stage companies—first arose in the 1970s, leading to early successes such as Apple and Microsoft, but it wasn’t until the dot-com bubble of the 1990s that this style of investment really took off. If the first phase of R & D outsourcing was from government labs to corporate America, this was the second phase: away from big businesses and toward small startups. Large companies began to spend less on internal R & D and more on what they called “corporate development,” or acquiring smaller, venture-backed companies with promising technologies.
The rise of venture capital created a great deal of wealth, but it didn’t necessarily further U.S. interests. Venture capital firms were judged by their ability to generate outsize returns within a ten-year window. That made them less interested in things such as microelectronics, a capital-intensive sector where profitability arrives in decades rather than years, and more interested in software companies, which need less capital to get going. The problem is that the companies receiving the most venture capital funding have been less likely to pursue national security priorities. When the American venture capital firm Accel hit the jackpot by investing early in Rovio Entertainment, the Finnish video game company behind the mobile app Angry Birds, it may have been a triumph for the firm, but in no way did it further U.S. interests.
Meanwhile, government funding of research continued its decline relative both to GDP and to R & D spending in the private sector. The Department of Defense retained the single biggest pot of federal research funding, but there was less money overall, and it became more dispersed across various agencies and departments, each pursuing its own priorities in the absence of a national strategy. As the best researchers were lured to the private sector, the government’s in-house scientific expertise atrophied. Once close relationships between private companies and Washington also suffered, as the federal government was no longer a major customer for many of the most innovative firms. U.S. agencies were rarely the first to buy advanced technology, and smaller startups generally lacked the lobbyists and lawyers needed to sell it to them anyway.
Globalization also drove a wedge between corporations and the government. The American market came to look less dominant in an international context, with the huge Chinese consumer market exerting a particularly powerful pull. Corporations now had to think of how their actions might look to customers outside the United States. Apple, for example, famously refused to unlock iPhones for the FBI, a decision that probably enhanced its brand internationally.
Further complicating matters, innovation itself was upending the traditional understanding of national security technology. More and more, technology was becoming “dual use,” meaning that both the civilian and the military sectors relied on it. That created new vulnerabilities, such as concerns about the security of microelectronic supply chains and telecommunications networks. Yet even though civilian technologies were increasingly relevant for national security, the U.S. government wasn’t responsible for them. The private sector was, and it was innovating at a rapid clip with which the government could barely keep pace. Taken together, all these trends have led to a concerning state of affairs: the interests of the private sector and the government are further apart than ever.
The changes in American innovation would matter less if the world had remained unipolar. Instead, they occurred alongside the rise of a geopolitical rival. Over the past two decades, China has evolved from a country that largely steals and imitates technology to one that now also improves and even pioneers it. This is no accident; it is the result of the state’s deliberate, long-term focus. China has invested massively in R & D, with its share of global technology spending growing from under five percent in 2000 to over 23 percent in 2020. If current trends continue, China is expected to overtake the United States in such spending by 2025.
Central to China’s drive has been a strategy of “military-civil fusion,” a coordinated effort to ensure cooperation between the private sector and the defense industry. At the national, provincial, and local levels, the state backs the efforts of military organizations, state-owned enterprises, and private companies and entrepreneurs. Support might come in the form of research grants, shared data, government-backed loans, or training programs. It might even be as simple as the provision of land or office space; the government is creating whole new cities dedicated solely to innovation.
China’s investment in 5G technology shows how the process works in practice. Equipment for 5G makes up the backbone of a country’s cellular network infrastructure, and the Chinese company Huawei has emerged as a world leader in engineering and selling it—offering high-quality products at a lower price than its European and South Korean competitors. The company has been buoyed by massive state support—by The Wall Street Journal’s count, some $75 billion in tax breaks, grants, loans, and discounts on land. Huawei has also benefited from China’s Belt and Road Initiative, which provides generous loans to countries and Chinese companies to finance infrastructure construction.
Massive state investments in artificial intelligence have also paid off. Chinese researchers now publish more scientific papers in that field than American ones do. Part of this success is the result of funding, but something else plays a big role: access to enormous amounts of data. Beijing has fueled the rise of powerhouse companies that sweep up endless information about their users. These include Alibaba, an e-commerce giant; Tencent, which developed the all-purpose WeChat app; Baidu, which began as a search engine but now offers a range of online products; DJI, which dominates the consumer drone market; and SenseTime, which provides facial recognition technology for China’s video surveillance network and is said to be the world’s most valuable artificial intelligence company. As a matter of law, these companies are required to cooperate with the state for intelligence purposes, a broad mandate that is almost certainly used to force companies to share data for many other reasons.
That information increasingly involves people living outside China. Chinese companies have woven a global web of data-gathering apps that collect private information about foreigners’ finances, search histories, locations, and more. Those who make a mobile payment through a Chinese app, for example, could have their personal data routed through Shanghai and added to China’s growing trove of knowledge about foreign nationals. Such information no doubt makes it easier for the Chinese government to track, say, an indebted Western bureaucrat who could be convinced to spy for Beijing or a Tibetan activist who has taken refuge abroad.
China’s hunger for data extends to some of the most personal information imaginable: our own DNA. Since the COVID-19 pandemic began, BGI—a Chinese genome-sequencing company that began as a government-funded research group—has broken ground on some 50 new laboratories abroad designed to help governments test for the virus. China has legitimate reasons to build these labs, but it also has an ugly record of forcibly collecting DNA data from Tibetans and Uighurs as part of its efforts to monitor these minorities. Given that BGI runs China’s national library of genomics data, it is conceivable that through BGI testing, foreigners’ biological data might end up in that repository.
Indeed, China has shown great interest in biotechnology, even if it has yet to catch up to the United States. Combined with massive computing power and artificial intelligence, innovations in biotechnology could help solve some of humanity’s most vexing challenges, from disease and famine to energy production and climate change. Researchers have mastered the gene-editing tool CRISPR, allowing them to grow wheat that resists disease, and have managed to encode video in the DNA of bacteria, raising the possibility of a new, cost-effective method of data storage. Specialists in synthetic biology have invented a new way of producing nylon—with genetically engineered microorganisms instead of petrochemicals. The economic implications of the coming biotechnology revolution are staggering: the McKinsey Global Institute has estimated the value of biotechnology’s many potential applications at up to $4 trillion over the next ten to twenty years.
Like all powerful technologies, however, biotechnology has a dark side. It is not inconceivable, for example, that some malicious actor could create a biological weapon that targeted a specific ethnic group. On controversial questions—such as how much manipulation of the human genome is acceptable—countries will accept different degrees of risk in the name of progress and take different ethical positions. The country that leads biotechnology’s development will be the one that most profoundly shapes the norms and standards around its use. And there is reason to worry if that country is China. In 2018, the Chinese scientist He Jiankui genetically engineered the DNA of twin babies, prompting an international uproar. Beijing portrayed him as a rogue researcher and punished him. Yet the Chinese government’s disdain for human rights, coupled with its quest for technological supremacy, suggests that it could embrace a lax, even dangerous approach to bioethics.
Washington has monitored China’s technological progress through a military lens, worrying about how it contributes to Chinese defense capabilities. But the challenge is much broader. China’s push for technological supremacy is not simply aimed at gaining a battlefield advantage; Beijing is changing the battlefield itself. Although commercial technologies such as 5G, artificial intelligence, quantum computing, and biotechnology will undoubtedly have military applications, China envisions a world of great-power competition in which no shots need to be fired. Technological supremacy promises the ability to dominate the civilian infrastructure on which others depend, providing enormous influence. That is a major motivation behind Beijing’s support for high-tech civilian infrastructure exports. The countries buying Chinese systems may think they are merely receiving electric grids, health-care technology, or online payment systems, but in reality, they may also be placing critical national infrastructure and citizens’ data in Beijing’s hands. Such exports are China’s Trojan horse.
Despite the changing nature of geopolitical competition, the United States still tends to equate security with traditional defense capabilities. Consider microelectronics. They are critical components not only for a range of commercial products but also for virtually every major defense system, from aircraft to warships. Because they will power advances in artificial intelligence, they will also shape the United States’ future economic competitiveness. Yet investment in microelectronics has fallen through the cracks. Neither the private sector nor the government is adequately funding innovation—the former due to the large capital requirements and long time horizons involved and the latter because it has focused more on securing current supplies than on innovating. Although China has had a hard time catching up to the United States in this area, it is only a matter of time before it moves up the microelectronics value chain.
Another casualty of the United States’ overly narrow conception of security and innovation is 5G technology. By dominating this market, China has built a global telecommunications network that can serve geopolitical purposes. One fear is that Beijing could help itself to data running on 5G networks. Another is the possibility that China might sabotage or disrupt adversaries’ communications networks in a crisis. Most U.S. policymakers failed to predict the threat posed by Chinese 5G infrastructure. It wasn’t until 2019 that Washington sounded the alarm about Huawei, but by then, there was little it could do. U.S. companies had never offered an end-to-end wireless network, instead focusing on manufacturing individual components, such as handsets and routers. Nor had any developed its own radio access network, a system for sending signals across network devices that is needed to build an end-to-end 5G system like that offered by Huawei and a few other companies. As a result, the United States found itself in an absurd situation: threatening to end intelligence cooperation if close allies adopted Huawei’s 5G technology without having an attractive alternative to offer.
Digital infrastructure may be today’s battle, but biotechnology will likely be the next. Unfortunately, it, too, is not considered a priority within the U.S. government. The Department of Defense has understandably shown little interest in it. Part of the explanation for that lies in the fact that the United States, like many other countries, has signed a treaty renouncing biological weapons. Still, biotechnology has other implications for the Pentagon, from changing manufacturing to improving the health of service personnel. More important, any comprehensive assessment of the national interest must recognize biotechnology’s implications for ethics, the economy, health, and planetary survival.
Because so many of the gaps in U.S. innovation can be traced back to a narrow view of the national interest and which technologies are needed to support it, the Biden administration’s first step should be to expand that understanding. Officials need to appreciate both the threats and the opportunities of the latest technologies: the havoc that could be wreaked by a paralyzed 5G network or unscrupulous genetic engineering, as well as the benefits that could come from sustainable energy sources and better and more efficient health care.
The Biden administration’s second step should be to create a process for aligning government investments with national priorities. Today, federal funding is skewed toward military capabilities. This reflects a political reality: the Pentagon is the rare part of the government that reliably receives bipartisan budgetary support. Fighter jets and missile defense, for example, are well funded, whereas pandemic preparedness and clean energy get short shrift. But setting the right national technological priorities raises questions that can be answered only by making judgments about the full range of national needs. What are the most important problems that technology can help solve? Which technologies have the power to solve only one problem, and which might solve multiple problems? Getting the answers to such questions right requires taking a truly national perspective. The current method doesn’t do so.
A properly run process would begin with what national security professionals call a “net assessment”—in this case, an analysis of the state of global technological progress and market trends to give policymakers the information necessary to work from a shared baseline. To be actionable, the process would establish a handful of near- and long-term priorities. A compelling candidate for long-term investment, for instance, might be microelectronics, which are foundational to both military and civilian innovation but have difficulty attracting private investment. Another long-term priority might be biotechnology, given its importance for the economy and the future of humanity. As for short-term priorities, the U.S. government might consider launching an international effort to combat disinformation operations or to promote 5G innovation. Whatever the specific priorities chosen, the important thing is that they be deliberate and clear, guiding the United States’ decisions and signaling its aspirations.
Supporting those priorities is another matter altogether. The current approach—with the government funding only limited research and the private sector taking care of commercializing the results—isn’t working. Too much government-funded research remains locked in the lab, unable to make the leap to commercial viability. Worse, when it manages to leave U.S. government labs, it often ends up in foreign hands, depriving the United States of taxpayer-financed intellectual property.
The U.S. government will need to take a more active role in helping research make it to the market. Many universities have created offices that focus on commercializing academic research, but most federal research institutions have not. That must change. In the same spirit, the U.S. government should develop so-called sandboxes—public-private research facilities where industry, the academy, and the government can work together. In 2014, Congress did just that when it established Manufacturing USA, a network of facilities that conduct research into advanced manufacturing technologies. A similar initiative for microelectronics has been proposed, and there is no reason not to create additional sandboxes in other areas, too.
The U.S. government could also help with commercialization by building national data sets for research purposes, along with improved privacy protections to reassure the people whose information ends up in them. Such data sets would be particularly useful in accelerating progress in the field of artificial intelligence, which feeds off massive quantities of data—something that only the government and a handful of big technology companies currently possess. Success in synthetic biology, along with wider medical research, will also depend on data. Thus, the U.S. government should increase the quantity and diversity of the data in the National Institutes of Health’s genome library and curate and label that information so that it can be used more easily.
All this help with commercialization will be for naught, however, if the startups with the most promising technologies for national security cannot attract enough capital. Some of them run into difficulties at the early and late stages of growth: in the beginning, they have a hard time courting investors willing to make high-risk bets, and later on, when they are ready to expand, they find it difficult to attract investors willing to write large checks. To fill the gaps at both stages, the U.S. government needs its own investment vehicles.
We work at the parent company of In-Q-Tel, which offers a promising model for early-stage investment. Created in 1999 by the CIA, In-Q-Tel is an independent, not-for-profit firm that invests in technology startups that serve the national interest. (One early recipient of In-Q-Tel’s investment was Keyhole, which became the platform for Google Earth.) Now also funded by the Department of Homeland Security, the Department of Defense, and other U.S. agencies, In-Q-Tel identifies and adapts innovative technologies for its government customers. Compared with a federal agency, a private, not-for-profit firm can more easily attract the investment and technology talent required to make informed investments. There is every reason to take this model and apply it to broader priorities. Even just $100 million to $500 million of early-stage funding per year—a drop in the bucket of the federal budget—could help fill the gap between what the private sector is providing and what the nation needs.
For the later stage, policymakers could draw inspiration from the U.S. International Development Finance Corporation, the federal agency responsible for investing in development projects abroad, which in 2018 was first authorized to make equity investments. A late-stage investment fund could be structured as an arm of that agency or as a fully independent, not-for-profit private entity funded by the government. Either way, it would provide badly needed capital to companies ready to scale up their operations. Compared with early-stage government support, late-stage government support would have to be greater, in the range of $1 billion to $5 billion annually. To expand the impact of this government investment, both the early- and the late-stage funds should encourage “sidecar” investments, which would allow profit-seeking firms and individuals to join the government in making, and potentially profiting from, technology bets.
Government-sponsored investment funds like these would not only fill critical gaps in private-sector investment; they would also allow taxpayers to share in the success of research their money has funded. Currently, most government funding for technology comes in the form of grants, such as the Small Business Innovation Research grants administered by the Small Business Administration; this is true even of some programs that are billed as investment funds. This means that taxpayers foot the bill for failures but cannot share in the success if a company makes it big. As the economist Mariana Mazzucato has pointed out in these pages, “governments have socialized risks but privatized rewards.”
Not-for-profit investment vehicles working on behalf of the government would have another benefit: they would allow the United States to play offense when it comes to technological competition. For too long, it has played defense. For example, it has banned the export of sensitive technology and restricted foreign investment that might pose a national security risk—even though these actions can harm U.S. businesses and do nothing to promote innovation. Supporting commercialization with government-sponsored equity investment will not be cheap, but some of the upfront costs would likely be regained and could be reinvested. There are also nonmonetary returns: investing in national priorities, including infrastructure that could be exported to U.S. allies, would enhance the United States’ soft power.
President Joe Biden has pledged to “build back better” and restore the United States’ global leadership. On the campaign trail, he laid out promising proposals to promote American innovation. He called for dramatically boosting federal R & D spending, including some $300 billion to be focused on breakthrough technologies to enhance U.S. competitiveness. That is a good start, but he could make this drive far more effective if he first created a rigorous process for identifying top technological priorities. Biden said he supports “a scaled-up version” of the Small Business Innovation Research grants and has backed “infrastructure for educational institutions and partners to expand research.” Even greater opportunity lies in filling the gaps in private-sector investment and undertaking a long-overdue expansion of government support for commercialization.
If the United States opts for just more of the same on innovation, its economy, its security, and its citizens’ well-being will all suffer, hastening the end of its global leadership and abetting the unfettered rise of China. Biden has the right instincts. Yet in order to sustain its technological dominance, the country will have to fundamentally reenvision the why and how of innovation. Biden will no doubt be consumed with addressing domestic challenges, but he has spent much of his career promoting the United States’ global leadership. By revamping American technological innovation, he could do both.