“What,” I sometimes ask students in a class I teach on the history of terrorism, “was the name of the Islamic State’s branch in Europe?” It is a trick question: the Islamic State (also known as ISIS) never set up a full-fledged European branch. The group’s self-proclaimed caliph, Abu Bakr al-Baghdadi, knew better than to try. By 2014, when ISIS formalized its split from al Qaeda and established itself as the dominant player in the global Salafi-jihadi movement, Western security services had figured out how to make it effectively impossible for the group to establish a base of operations in Europe or North America. Like al Qaeda before it, ISIS was only ever present in the West in the form of disparate cells and sympathizers. A traditional terrorist organization—with a functioning bureaucracy, regular meeting places, and in-house propaganda production—would, Baghdadi and his henchmen understood, have had as little chance of surviving in a contemporary Western country as the proverbial snowball in hell.
In fact, it has been decades since it was possible to run a major terrorist organization, capable of mounting a sustained campaign of large-scale attacks, in Europe or North America. Even the most notorious of the separatist movements and far-right militias that have originated in Western countries, and whose rhetoric can seem menacing, are comparatively small-scale operations; they survive because they kill relatively few people and never manage to attract the authorities’ full attention. The last high-impact terrorist organizations based in the West—the Basque separatists of ETA in France and Spain and the loyalist and republican paramilitaries in Northern Ireland—effectively collapsed in the 1990s under the weight of state countermeasures.
In the wake of the 9/11 attacks, it seemed that was all going to change. And of course, the past two decades have witnessed some horrific attacks on Western soft targets: the bombing of a train station in Madrid in 2004, the attack on a concert venue in Paris in 2015, the assault on a nightclub in Orlando, Florida, in 2016, among others. But such crimes were not the work of locally based organizations, and none of the perpetrators was able to strike more than once. Although for a time such swarms of weakly connected attackers periodically outmaneuvered Western security and intelligence services, the latter have adapted and, quite definitively, prevailed.
Spectacular though the 9/11 attacks were, they did not, as many feared, indicate that large and powerful terrorist organizations had laid down roots in the West and threatened the foundations of its social order. Meanwhile, the persistent fear of that outcome—which was never likely—has blinded many to an opposing trend: the steadily growing coercive power of the technocratic state. With artificial intelligence already entrenching this advantage, the threat of a major armed rebellion, in developed countries at least, is becoming virtually nonexistent.
At the dawn of this century, the outlook was quite different. The 9/11 attacks were widely believed to portend the rise of ultra-lethal nonstate actors who, many were convinced, had well-equipped sleeper cells in scores of Western cities, with militants who blended into communities unnoticed while awaiting orders to strike. During the weeks and months immediately after 9/11, the evidence that these cells existed seemed to be everywhere: in late September and early October 2001, a series of anthrax-laced letters were mailed to U.S. Senate offices and news outlets, and on December 22, 2001, a British convert to Islam on a flight to Miami was subdued by fellow passengers after trying to ignite his shoes, which were packed with plastic explosives. A steady stream of media reports suggested that jihadis had access to weapons of mass destruction. In late 2002, policymakers were jolted by intelligence reports warning that al Qaeda planned to use a two-chambered device called “the mubtakkar” (from the Arabic word for “invention”) to release cyanide gas on New York City subways. Nobody was safe anymore, news anchors insinuated, pointing to the official U.S. threat barometer, which periodically blinked red for “severe.”
The prevailing anxiety was reflected, in a somewhat muted form, in academic and strategic thinking. Following the deadly sarin gas attacks on the Tokyo subway carried out by the extremist cult Aum Shinrikyo in 1995, scholars such as Walter Laqueur had begun speaking of “the new terrorism,” a form of political violence characterized by religious zeal, decentralized organization, and a willingness to maximize casualties. The 9/11 attacks helped popularize such ideas, as well as the notion that Western societies were particularly vulnerable to the new threat.
Militant Islamism did indeed grow in the 1990s, and al Qaeda raised the bar considerably in terms of demonstrating how much damage nonstate actors could inflict on a powerful country. At the time, national security services in most Western countries were smaller than they are today, and because those services understood less about the actors they were up against, worst-case scenarios were less easily debunked. Still, it is clear in retrospect that the horrors of 9/11 frightened many into excessive pessimism.
The bigger analytic mistake, however, was not to overestimate the enemy but to underestimate the ability of rich, developed states to adapt and muster resources against the new threats. In the wake of the 9/11 attacks, commentators often portrayed the governments of such states as lethargic bureaucracies outwitted by light-footed rebels. As the years went on, however, what emerged instead were dynamic technocracies blessed with deep pockets and highly trained investigators and operatives. For every $1 in ISIS’s coffers, there are at least $10,000 in the U.S. central bank. For every al Qaeda bomb-maker, there are a thousand MIT-trained engineers.
Western governments have also proved to be less scrupulous about preserving civil rights than many expected in the early years of the war on terrorism. When faced with security threats on their own soil, most Western states bent or broke their own rules and neglected to live up to their self-professed liberal ideals.
One of the most widespread cognitive biases in strategic analysis is to view one’s opponent’s behavior as governed by exogenous factors, such as a cunning strategy or material resources. But terrorism is a strategic game between states and nonstate actors, and what rebels are able to do depends heavily on a state’s countermeasures. In short, it did not matter that the new terrorists were good, because the people chasing them were even better.
To understand why, one must consider the fundamentals of the contest. Terrorist groups in Western states—or in any peaceful, relatively stable country, for that matter—are usually tiny factions that control no territory. Dwarfed by the combined forces of the state, they enjoy one key advantage: anonymity. They can operate as long as law enforcement does not know who they are or where they are based. Counterterrorism is therefore fundamentally about information: security services work to identify and locate suspects, while the latter try to stay hidden. A campaign of terrorism is a race against time, in which the terrorists are betting that they can draw new recruits or defeat the state faster than the police can hunt them down.
Through investigation, intelligence analysis, and research, the state’s knowledge about the terrorists gradually increases. Unless they can attract new recruits fast enough to render such knowledge constantly out of date, the terrorists will lose the race. Most terrorist campaigns therefore follow an activity curve that starts high and then gradually decreases, sometimes with a bump at the end as the militants make a desperate last attempt to turn the tide.
Terrorist campaigns are also shaped by communications technologies. New encryption techniques, for example, can help terrorists evade detection, and new social media platforms can help them distribute propaganda and recruit new members. But terrorist groups usually have only a brief window to enjoy the fruits of each new technology before states develop countermeasures such as decryption or surveillance. For example, in 2003, al Qaeda operatives in Saudi Arabia used mobile phones to great effect, but within a year, government surveillance had made the same devices a liability.
Broadly speaking, Western states have conducted two so-called wars on terror: one against al Qaeda in the first decade of this century and another against ISIS in the 2010s. In each case, a new organization grew, largely unnoticed, in a conflict zone, before surprising the international community with a transnational offensive, only to be beaten back through a messy counterterrorism effort. In each case, the militants initially benefited from having operatives and sympathizers unknown to Western governments but lost that advantage as the latter mapped their networks. Similarly, technological innovations benefited the terrorists to begin with but became a vulnerability as time wore on.
Al Qaeda began as a small group of Arab veterans of the 1980s Afghan jihad who, in the mid-1990s, decided to wage asymmetric war against the United States to end what they saw as Western imperialism in the Muslim world. The group grew strong in the late 1990s owing in part to access to territory in Afghanistan, where it trained fighters and planned attacks in relative peace. Hundreds of volunteers from the Muslim world, Europe, and North America attended these camps between 1996 and 2001. Western governments paid little attention to them because they were not deemed a major threat to the U.S. or European homelands. On 9/11, the group benefited from the element of surprise and from the relative anonymity of its operatives.
Al Qaeda’s momentum lasted for another half decade as Western states scrambled to map the group’s networks. The Guantánamo Bay facility, which was set up in early 2002 to hold significant al Qaeda figures but ended up holding mostly low-level ones (and some people who had no connection to the group at all), stands as a monument to that early information problem. In 2002, U.S. Secretary of Defense Donald Rumsfeld referred to detainees at Guantánamo as “the worst of the worst.” In reality, the United States had little idea what role, if any, these detainees had played in al Qaeda, since authorities in Washington knew relatively little about the group’s operations or personnel.
Meanwhile, al Qaeda itself was growing and transforming from an organization into an ideological movement. It drew thousands of new sympathizers worldwide, partly from the publicity generated by the 9/11 attacks, partly from the growth in online jihadi propaganda, and partly from the outrage among Muslims generated by the U.S.-led invasion of Iraq in 2003. Between 2001 and 2006, cells trained or inspired by al Qaeda carried out multiple attacks in Europe, most famously the Madrid attacks of 2004 and the London transit bombings in 2005. There were also dozens of foiled plots, such as a 2006 plot in which a cell based in the United Kingdom planned to blow up several commercial airplanes by bringing bomb ingredients onboard in small containers and assembling the bombs after takeoff. (This plot is the reason passengers are not allowed to bring water bottles through airport security even today.)
But the capabilities of Western intelligence services were also growing. Across western Europe and North America, the number of analysts working on jihadism skyrocketed in the aftermath of 9/11. These state security services designed new systems for collecting signals intelligence and exchanged more information with one another. Many countries passed laws that effectively lowered the bar for investigating and prosecuting suspects, often by expanding the definition of terrorist activity to include providing logistical support to terrorist groups. Hard drives began filling up with data, printers churned out network graphs, and investigators studied the finer points of Islamist ideology.
The tide finally turned around 2007. By then, the networks that al Qaeda had developed in Europe prior to 9/11 had all been rounded up, and the authorities had found ways to detain a number of extremist clerics based in Western countries. The number of jihadi plots in Europe decreased, as did the amount of al Qaeda propaganda online. On jihadi online discussion forums, where users had previously felt safe enough to share phone numbers, the fear of infiltration and surveillance became palpable. Al Qaeda branches in the Middle East were also losing steam, notably in Iraq and Saudi Arabia. The United States experienced a brief upsurge in attacks in 2009 and 2010—linked in part to the influence of the Yemeni American Salafi-jihadi preacher Anwar al-Awlaki—but it was not enough to change the overall picture. By 2011, the mood in Western counterterrorism circles had become cautiously optimistic. The wave of popular uprisings in the Arab world that began in late 2010, and came to be known as the Arab Spring, promised to end the authoritarianism many considered to be the root cause of jihadism. When U.S. Navy SEALs killed Osama bin Laden in Abbottabad, Pakistan, on May 2, 2011, it was possible to entertain the notion that the war on terrorism was coming to an end.
In a sense, that was both true and false. In retrospect, 2011 did mark the end of al Qaeda’s war on the West. The group lives on as a set of regional militias with local agendas in places such as Somalia, but it has not successfully conducted a serious attack on the West for almost a decade. Meanwhile, another organization has taken up the mantle with arguably greater success.
ISIS was a child of the U.S.-led invasion of Iraq in 2003. In the broad Sunni insurgency that followed, a highly active al Qaeda affiliate emerged, one that would take the name the Islamic State of Iraq in 2006. In the ensuing years, U.S. and Iraqi counterinsurgency efforts weakened the group, and it likely would have remained a midsize regional al Qaeda branch were it not for two unexpected developments.
The first was the eruption of civil war in Syria in 2011, which provided the Islamic State of Iraq with a safe haven in which to expand. The group initially operated in Syria under a different name, but things went so well there that in 2013 it began breaking away from al Qaeda and presenting itself as an independent, Iraqi-Syrian group named the Islamic State in Iraq and Syria, or ISIS. In mid-2014, it burst onto the world stage by capturing the western third of Iraq and casting itself as a caliphate to which all the world’s Muslims must pledge allegiance. Meanwhile, in the preceding years, the horrors of the Syrian war had captured the attention of Sunni Muslims worldwide and led thousands of the more religious and adventurous among them to go to Syria as volunteers for the rebel side. Syria emerged as the global epicenter of militant Islamism, and ISIS, being the most visible of the Syria-based groups, attracted the lion’s share of the foreign fighters.
The second development was the social media revolution. Around 2010, platforms such as Facebook, Twitter, and YouTube went mainstream and changed the online media landscape in ways that greatly empowered radical ideological actors. For one, propaganda now spread further. Until this time, jihadis had been confined to shadowy websites that people visited only if they were already at least partly radicalized. The new platforms, by contrast, had millions of users, and their algorithms could push a jihadi video onto the timeline of someone who was not searching for it.
Paradoxically, jihadis were also safer on the new platforms than on the old websites, because the National Security Agency could not hack Facebook the way it could easily penetrate an obscure jihadi website housed in, say, Malaysia. Moreover, social media offered better integration with smartphones, allowing militants to view and upload propaganda from any location. Radicals seized the opportunity. The first half of the 2010s saw a colossal increase in jihadi propaganda, as ISIS produced material on a scale and with a level of sophistication previously unseen in the history of nonstate armed groups.
Finally, the new online ecosystem offered rich opportunities for secret communication. Encrypted messaging apps proliferated, and jihadi communications spread over a wide range of platforms. It was a signals intelligence nightmare. Militants began using messaging apps extensively for bilateral and small-group communication, seemingly uninhibited by the surveillance fears of the past. An important factor behind the rapid increase in foreign fighters in Syria in 2013–14 was the ability of early recruits to message their friends back home and persuade them to follow suit.
Western states did little to stem these developments for a simple reason: ISIS had not yet launched attacks outside the Middle East. It was only in the autumn of 2014, after an international military coalition formed to combat ISIS, that the group set its operational sights on Western cities. In September of that year, it called on followers worldwide to kill Westerners by any means and began training attack teams for high-profile operations in Europe. ISIS was now at the peak of its power, and like al Qaeda in 2001, it enjoyed a key advantage: member and sympathizer networks poorly known to Western intelligence services. The group had cast itself as a more youthful and dynamic alternative to al Qaeda and had attracted a new generation of European radicals. Its propaganda spread so fast that state security services could not keep track of all its new sympathizers.
This translated into one of the most serious waves of terrorist violence in Europe’s modern history. In three years, from 2015 to 2017, jihadis in Europe killed nearly 350 people, more than the number killed in jihadi attacks in Europe during the preceding 20 years and more than the total number of people killed by right-wing extremists in Europe between 1990 and 2020. ISIS’s offensive also featured the first European terrorist cell able to strike hard twice: the group that carried out the attacks in Paris in November 2015 and Brussels the following April. Its success suggested the extent to which the intelligence community was back on its heels.
But the violence triggered a state counteroffensive that was equally unprecedented. “We are at war,” French President François Hollande declared after the November 2015 Paris attacks, before announcing an official state of emergency. The pattern from the immediate post-9/11 era repeated itself: expanded intelligence budgets, more aggressive surveillance, and new laws lowering the bar for police intervention in cases related to jihadism. Europe found itself taking measures so strict they would have been politically impossible just a few years earlier: closing mosques, deporting preachers, stripping people of their citizenship. Some European countries sent special forces to Iraq to hunt down citizens who had joined ISIS. Beginning in 2016, governments and social media giants also began an unprecedented effort to remove the group’s propaganda from the Internet. Censorship, previously considered politically unpalatable or technically impossible, was now being implemented with the full force of Silicon Valley’s artificial intelligence machinery.
Once again, the state won. By 2018, the number of jihadi plots and attacks in Europe had been cut in half compared to 2016, and the flow of foreign fighters had dried up entirely. What is more remarkable, every jihadi assault in Europe since 2017 has been carried out by a lone individual, suggesting that it has become very difficult to plan group attacks. Similarly, no terrorist strike since 2017 has involved explosives: instead, the attackers have used simpler weapons, such as guns, knives, and vehicles. There have been some complex and ambitious plots, but they all have been foiled by the police. This is not to dismiss the current threat, which remains serious. But ISIS’s offensive of the mid-2010s was firmly rolled back.
It may not be obvious to the ordinary citizen just how powerful modern intelligence services have become. Imagine that you wanted, for whatever reason, to start a violent rebellion in a Western country. You want to launch an organization, and not just carry out a one-off attack. How would you go about it? All your Internet searches, emails, and cell phone calls are in principle accessible to the state. You can start taking precautions now, but your digital history and those of your collaborators are still available for profiling. In an economy dominated by credit card transactions, your ability to get things done without leaving a trace will be limited. Venture into a city, and you will be caught on surveillance cameras, perhaps ones armed with facial recognition software. And how will you know whom to trust, when any of your new recruits might be a police infiltrator? What will you do when some of your best people—including ones who know your organization’s secrets—are arrested?
The reason information technology empowers the state over time is that rebellion is a battle for information, and states can exploit new technology on a scale that small groups cannot. The computer allowed states to accumulate more information about their citizens, and the Internet enabled faster sharing of that information across institutions and countries. Gadgets such as the credit card terminal and the smartphone allowed authorities to peer deeper and deeper into people’s lives. I sometimes serve as an expert witness in terrorism trials and get to see what the police have collected on suspects. What I have learned is that once the surveillance state targets someone, that person no longer retains even a sliver of genuine privacy.
Given the overwhelming advantages that wealthy developed countries enjoy, it is remarkable that jihadi terrorism has managed to persist in such places even at low levels. One reason is that states’ capabilities diminish past their borders, and jihadism is an unusually transnational movement. For decades, jihadis in the West have been able to travel to conflict zones in the Muslim world for training, thereby enjoying a kind of strategic depth that other radicals in the West, such as those of the far right, do not have. Another reason is that jihadi ideology fosters a culture of self-sacrifice. Anyone contemplating terrorism in the West knows that he will not be present to enjoy the hypothetical political fruits of his efforts, because he will either die or get captured in the process. Still, with the promise of rewards in the hereafter, the jihadi movement has been able to produce hundreds of volunteers for such one-off attacks, allowing it to swarm the enemy with disposable operatives. The rate of production of such volunteers is so much higher among jihadis than in other rebel movements that ideology must be part of the explanation. Finally, the high number of armed conflicts in the Muslim world has fed grievances and offered operational space for jihadi groups to grow. The role of the U.S. invasion of Iraq and the Syrian civil war, in particular, cannot be overestimated.
For all these reasons, a third wave of Islamist terrorism in the West is conceivable, but it is nonetheless unlikely. Would-be terrorists face a far tougher operating environment than did al Qaeda and ISIS at their height. And the opportunity for state security services to hone their skills on jihadis will also make it harder for other radical movements—ones with less access to conflict zones and less of a culture of self-sacrifice—to mount major campaigns in the future.
Developed countries will also become ever more digital, and it will become harder and harder to conceal one’s identity and go off the grid. The rebels of the future will have lived their entire lives on the Internet, leaving digital traces along the way, and that information will be accessible to states. New technologies may provide digital stealth for nonstate actors, but the effect will likely be temporary. Meanwhile, the rise of artificial intelligence may speed up states’ march toward technological dominance. Until now, states have not been able to exploit all the data that are available to them. Machine learning may change that.
These technological developments will probably also make political violence more unevenly distributed around the world. Well-resourced states will be able to buy their way to order, whereas weaker states will not. Things are already very uneven, with the Muslim world having suffered vastly more than the West during the war on terrorism. The future stability divide may cut through the global South, as well-resourced autocracies leverage the power of surveillance technology.
The rise of states immune to rebellion is not a good thing. It is naive to think that states’ new powers will be used only against people plotting bomb attacks. Such powers can—and do—creep into the policing of less lethal forms of political activism. In autocracies, the same tools are being deployed in an unfettered way to silence peaceful regime opponents. They allow countries such as China and Saudi Arabia to identify activists and nip mobilizations in the bud in a way that was not possible a couple of decades ago.
The rich nations of Europe and North America are liberal democracies, but their governments are also ferociously efficient repression machines. The surveillance tools at their disposal have never been more powerful. So those countries should choose their leaders wisely.