When the stingray-shaped object took off and landed lightly on the deck of the USS George H. W. Bush in July 2013, some hailed it as a moment in aviation history to rank with the first heavier-than-air powered flight, at Kitty Hawk, in 1903. The X-47B drone flew itself, decided its own flight path, and completed on its own a mission given to it by humans. The dawn of autonomous weapons systems seemed undeniable. Yet the drone was hardly independent: humans had programmed all its possible decisions, leaving it to choose from a menu of options. Half a decade later, experts are making new claims that the future of warfare is about to change. Today, artificial intelligence (AI) is the new frontier of military competition, and with China and Russia making headway in the field, the Pentagon is starting to rush, some say belatedly, into the new era.
In a move that reflects growing confidence in current machine-learning capabilities, the U.S. Department of Defense recently awarded Booz Allen Hamilton a contract worth $885 million over five years to introduce the first large-scale use of AI systems to analyze the flood of data provided by drones, as well as to diagnose diseases from medical data. The Defense Intelligence Agency (DIA), meanwhile, is building the Machine-Assisted Analysis Rapid-Repository System (MARS), an information database to make the interaction between human analysts, the cloud, and automated data processing systems more efficient.
In September, the Defense Advanced Research Projects Agency (DARPA), which helped kick-start the AI revolution back in the 1960s, announced an even more ambitious initiative: a $2 billion program to foster the next era of AI technologies, or “third wave” of AI. Unlike the first two waves of AI, which made possible first narrowly defined machine-conducted tasks and later statistical pattern recognition based on large data sets, the new initiative, which DARPA is dubbing AI Next, will focus on making it “possible for machines to adapt to changing circumstances.” The goal, according to the agency, is to enable better decision-making in “complex, time-critical battlefield environments.” That could mean quicker identification of threats, faster and more precise targeting, or creating flexible options for commanders based on changing conditions on the battlefield.
Although half a century old as a concept, artificial intelligence is still at a relatively immature state of development. The Booz Allen contract focuses on the most proven level, what is known as “narrow AI,” where a computer program focuses on a particular task, often automating what humans previously did (think of a spam filter for e-mail). This type of AI has been transforming the health field, allowing for quicker diagnosis of disease, and the world of surveillance, through facial and voice recognition. The Defense Department is looking for cost- and time-saving measures that will free up humans for more complex tasks. Yet this level of AI is still dependent on substantial human input, what is known as “supervised learning,” in which machines use preprogrammed algorithms designed to carry out a particular task. An example would be distinguishing, in video footage, a machine-gun-carrying motorcycle rider from an unarmed civilian.
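The supervised-learning pattern described above, where labeled examples teach a program a narrow task, can be sketched with a toy spam filter. Everything here (the training phrases, labels, and scoring rule) is invented for illustration; real systems use far larger data sets and more sophisticated models:

```python
from collections import Counter

def train(examples):
    """Count word frequencies per label from human-labeled (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Assign the label whose training vocabulary best matches the new text
    (with add-one smoothing so unseen words don't zero out a score)."""
    def score(label):
        total = sum(counts[label].values()) + 1
        return sum((counts[label][w] + 1) / total for w in text.lower().split())
    return "spam" if score("spam") > score("ham") else "ham"

# The "supervision": a human provides the correct label for each example.
training = [
    ("win a free prize now", "spam"),
    ("claim your free offer", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("project status report attached", "ham"),
]
model = train(training)
print(classify(model, "free prize offer"))       # → spam
print(classify(model, "agenda for the meeting")) # → ham
```

The key point is that the machine never decides what counts as spam; it only generalizes from categories a human has already defined, which is why narrow AI remains dependent on human input.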
The DARPA project is far more ambitious. According to DARPA Director Steven Walker, AI Next seeks “to explore how machines can acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.” The goal, as explained on the AI Next website, is to achieve the “far greater levels of intelligence” that machines will need in order to allow for more autonomous weapons systems, going far beyond the types of human-controlled drones that have been part of the military arsenal for years.
The next generation of AI that DARPA proposes to develop—“contextual reasoning”—relates to what AI scientists call “unsupervised learning,” where algorithms themselves try to identify patterns in data. Neural networks (also known as “deep learning”) can carry out classification and prediction tasks by linking thousands of processing nodes into layered networks. Rather than following pre-programmed rules, these networks learn from observation: by comparing thousands of pictures of buildings against known examples, a neural network can learn to tell a castle from a hut. Neuroevolution goes beyond unsupervised learning, enabling AI to develop more effective AI, as though Frankenstein’s monster were put in charge of creating his own bride. The holy grail for AI programmers is to move from correlative outcomes, which is essentially what all AI today produces, to causative outcomes that in essence include intuition and cognitive insight. Indeed, DARPA’s objective is for the machines that spring from the project to become reliable colleagues to humans.
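The contrast with supervised learning can be made concrete with a toy clustering routine: no labels are supplied, and the algorithm itself finds structure in the data. This is a minimal one-dimensional k-means sketch, with invented data points chosen so that two natural groups emerge:

```python
def kmeans_1d(points, k=2, iterations=20):
    """Group 1-D points into k clusters by repeatedly assigning each point
    to its nearest centroid and recomputing the centroids."""
    # Spread the initial centroids across the sorted data.
    centroids = sorted(points)[::max(1, len(points) // k)][:k]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Unlabeled measurements: no human has told the program there are two groups.
data = [1.0, 1.2, 0.8, 9.7, 10.1, 10.3]
centroids, clusters = kmeans_1d(data, k=2)
print(sorted(round(c, 1) for c in centroids))  # → [1.0, 10.0]
```

Here the program discovers the two groupings on its own, which is the sense in which unsupervised methods "identify patterns in data" without preprogrammed categories; DARPA's "contextual reasoning" ambitions go well beyond such simple statistics.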
Washington is just now beginning to explore the policy implications of the next wave of AI, ranging from technological feasibility to human impact to ethical questions. Few in U.S. policymaking circles have any but the most rudimentary understanding of what AI is or how the field might develop. They are, however, keenly attuned to the threat of adversarial nations leapfrogging American AI capabilities. At the top of the list of concerns is China, followed closely by Russia. China is already considered a world leader in AI, has committed over $2 billion to building an AI industrial park, and hopes to foster a $150 billion AI industry in less than a generation; Russia, while seen as farther behind in the AI race, is beginning a comprehensive plan to increase automation throughout society and the armed forces, establish a national center for AI development, and begin a series of AI war games to understand the technology’s potential on the battlefield. Because of developments such as these, many worry that the United States is already playing a catch-up game on AI, especially with China.
As the U.S. government struggles to come up with a comprehensive national AI plan, the Pentagon is moving forward on its own accord. This summer, the Defense Department announced the establishment of the Joint Artificial Intelligence Center (JAIC), under the direction of the Department of Defense’s chief information officer. With support from the advisory Defense Innovation Board, chaired by former Google head Eric Schmidt, the center will study the role of AI and machine learning in military systems. More specifically, it will coordinate work on high-priority AI initiatives, increase collaboration with the private sector and academia, and try to develop the next generation of AI talent. The center will also likely support the work of the Defense Innovation Unit—Experimental (DIUx), which was established near Silicon Valley in 2015 to partner with civilian companies to introduce high-tech, nontraditional approaches for DOD programs, many of which employ AI processes. Examples of DIUx partnerships include one with a company to identify, track, and autonomously remove rogue drones from the sky, and another to use algorithms to predict mechanical breakdowns in Army armored fighting vehicles for preventive maintenance.
Given the rapid pace of development in the field, the key to U.S. success in AI may well lie with public-private partnerships of the kind fostered by DIUx. Yet opposition at Google and other tech firms to working with Washington could leave the U.S. government searching for willing partners. Google CEO Sundar Pichai, for example, promised this summer that Google would never work on militarized applications of AI. This pledge arose in response to the backlash to the company’s cooperation in the U.S. Air Force’s Project Maven, an initiative to automate pattern recognition from the massive amount of moving and still imagery captured by drones and satellites. There is little question that the Pentagon’s ultimate interest in AI is to be able to operate more effectively and efficiently, and that means more destructively. Although the counterculture that gave rise to Silicon Valley’s tech leaders may have mellowed into self-interested middle age, working with the U.S. military may remain a bridge too far.
Such reticence could become a devastating weakness for the United States. Silicon Valley above all knows that technology never sleeps, and the current lack of cutting-edge AI investment in the defense industry could leave Washington at a decided disadvantage in the next generation’s arms race. To avoid falling behind, the first priority for the Pentagon is to find or fund AI startups that are willing to work with the military and are doing cutting-edge research.
China may have access to the massive amounts of data needed to refine algorithms for faster targeting, pattern recognition, and decision-making, but it continues to lag on the basic technologies that power AI, including hardware development. This means that for now the United States retains a slim edge over China in the AI arms race, a period in which it can decisively integrate AI into emerging weapons systems. Thus, the second priority for the Pentagon should be to push ahead as quickly as possible on integrating well-proven AI technologies, such as in pattern recognition, into operational capabilities. A third priority, as DARPA proposes, is sponsoring basic research into the third generation of AI, so as to position the military for a potentially AI-centric future in certain areas in 20 or 30 years’ time.
Absent these steps, Washington’s edge will diminish over time, and perhaps more quickly than most anticipate. As China’s armed forces operate more widely within the Indo-Pacific region and beyond, their future AI prowess may make them an even more formidable force than they already are thanks to decades of conventional modernization. An AI-dominant Chinese military would in turn change the calculations of nations large and small, potentially leading them to embrace accommodation, so as to avoid confrontations with Beijing.
Arms races are ugly things, but throughout history, no one has successfully contained technological advances that make for more lethal militaries. Nations that fail to adapt find themselves unable to protect their interests. The age of artificial intelligence is upon us, and in a world of ever more assertive authoritarian powers, the U.S. military will have to embrace and incorporate the new technologies into its arsenal as quickly and thoroughly as it can.