For several months, millions of people in the United States have been living in an alternate reality—one in which President Donald Trump has been fighting off a coordinated effort to steal the presidency from him and give it to Joe Biden. The idea first took root in the weeks prior to the election, when Trump looked likely to lose. States were reworking their voting processes in response to the coronavirus pandemic, and purveyors of the alternate narrative portrayed those efforts as a Democratic attempt to steal the vote. Doing so preemptively delegitimized voting procedures, such that when Trump lost, suspicions quickly morphed into conspiracy theories of outright theft.
The story of a stolen election found no shortage of amplifiers, as we saw during our work with the Election Integrity Partnership. Media sources popular with Trump supporters repeated the president’s claims and flooded their audiences with news of a vast electoral heist. Some political leaders bluntly called the election stolen; others, such as Senators Ted Cruz (R-Tex.) and Josh Hawley (R-Mo.), repeated allegations of widespread voter fraud and pledged to officially challenge Biden’s Electoral College win from the floor of the Senate. Popular right-wing media properties, such as Newsmax and OAN, for months ran incendiary reports darkly hinting at compromised voting machines. Vocal influencers, such as Glenn Beck, Tucker Carlson, and assorted Twitter and YouTube firebrands, either directly alleged that an illegitimate “deep state” coup had fixed the election or hosted fringe guests who made the claim. Users on major social media networks, such as Facebook, Twitter, and YouTube, propagated the idea within their communities. When these platforms began to crack down on falsehoods, wild conspiracies, and incendiary rhetoric about the purportedly stolen vote, the president’s supporters gravitated instead to smaller, alt-platform echo chambers that took up the story unimpeded.
On Wednesday, January 6, the United States’ alternate reality came into violent conflict with its actual reality. A crowd of the president’s supporters stormed the U.S. Capitol in a destructive confrontation that left five people dead—and that removed any remaining fig leaf for those Republicans who had insisted that the president’s rhetoric was merely figurative.
Social media platforms quickly clamped down on users who stoked the baseless claims of election fraud to incite violence—including the president himself. And in the days since, researchers who study online dynamics and radicalization have been piecing together the chain of events and working to understand how the attack might have been prevented. Echo chambers, networked activism, hyperpartisan media, and peer-to-peer misinformation are not going anywhere. U.S. policymakers and social media companies must therefore seek to comprehend and address the forces at work, particularly those that appear likely to become persistent threats.
Last week’s events did not come out of the blue. The tech platforms anticipated trouble: in the months before the election, they ran exercises to game out responses to various scenarios, including some that were familiar from the last election, such as foreign interference or leaks of hacked materials. Other scenarios, however, were specific to the incumbent president—a narcissist facing a loss and in command of an online army: If Trump baselessly claimed victory, what then? If he incited his supporters to violence, how should the tech platforms handle it? The companies set extensive new moderation policies to combat false claims of victory and prepared to stanch floods of misinformation by using labels, reducing distribution, and taking down accounts.
But despite this preparation, the tech platforms were indeed the place where, in the days after the election, millions of the president’s supporters joined public and private online groups with names like “Stop the Steal 2020.” Within these online communities and message boards, three distinct (but overlapping) subcommunities planned real-world demonstrations and marches. Some consisted of ordinary people, such as Women for Trump, who were upset by the loss. Some were composed of true believers in the cult of QAnon, trying to make sense of how the loss fit into the constantly delayed “plan” in which a cabal of Democrats would be exposed as pedophiles and locked up. And some were extremists—avowed white supremacists and members of antigovernment groups or militias—who had long used violent rhetoric and whose public visibility had risen in tandem with COVID-19. Members of each of these three groups had previously demonstrated against pandemic lockdowns outside state capitol buildings. Some of those demonstrations—particularly in Michigan—had turned violent.
Within this online echo chamber, January 6 was perceived as something of a Rubicon: media and influencer messages and indeed the tweets of the president himself had baselessly suggested that Vice President Mike Pence could use his position to stop the certification of the vote in a last-ditch effort to “Stop the Steal.” The president’s supporters gathered to hear him speak near the White House that day, many with the sincere and deeply held belief that he would be vindicated and even reinaugurated. But Pence did not deliver, and the speakers began thundering. Rudy Giuliani urged, “Let’s have trial by combat.” Donald Trump, Jr., warned that “these guys had better fight for Trump,” and he threatened to “be in their backyard” if they did not. Eric Trump exhorted his father’s followers to “march on the Capitol today” in defiance of the permit they had obtained. The president tweeted. And as their trusted leaders called on them to take action, some of those gathered marched toward the Capitol to breach its defenses for the first time since 1814.
How foreseeable was this action? Anybody paying attention to the general media and social media environment could have assessed, with high confidence, that large, pro-Trump protests in Washington, D.C., were very likely to take place on January 6. Through Twitter, YouTube, and even official statements from White House Press Secretary Kayleigh McEnany, Trump had repeatedly spread election disinformation and called for his supporters to come to Washington. Fox News and other conservative media repeatedly previewed and promoted the official, permitted rally at the White House, which was openly funded by a pro-Trump dark money group. In the first days of the year, people traveling to Washington, D.C., posted on social media about being trapped on planes filled with disruptive, unmasked, and vocally pro-Trump passengers. Any cursory analysis would have predicted the likelihood of a large and boisterous crowd forming that day.
But how predictable was it that the protesters were planning for violence and would become rioters? Again, a basic study of social media and the prior protests of affiliated groups, such as the Proud Boys, would have made the likelihood clear, although the percentage of attendees who would be prone to either lead or follow violent acts might have been hard to quantify. Posters on platforms as diverse as Facebook Groups, Gab, Instagram Stories, Parler, and 8kun openly and widely discussed preparations to “take the Capitol.” The brazenness of such speech may have served to hide these plans in plain sight, and bias in favor of the conservative and mostly white instigators may have led law enforcement to dismiss the clear calls for violence as online puffery. But an unbiased analysis by a team familiar with the history of these extremist groups—and with their public social media postings in the 48 hours before the event—should have made the strong possibility of violence clear.
That the violence would be directed at the Capitol building itself instead of generalized around Washington, D.C., was also reasonably well telegraphed. Several online platforms teemed with explicit discussions of storming the Capitol, some of them including analyses of floor plans and security precautions. Trump had made clear on social media that he sought to disrupt the counting of the Electoral College votes and that he was displeased with the vice president, who was to preside over that count. Many Republican senators and members of Congress offered encouragement in plain view. The likelihood of the crowd being redirected down Pennsylvania Avenue toward the Capitol—and the possibility that it would turn violent once there—was again obvious from open sources. That the crowd would successfully breach the citadel of American democracy, despite the hundreds of billions of dollars that the government has spent on its security over the last two decades, was unfortunately made likely by the apparent failure to predict every step up to that point.
The evidence of a gathering threat was apparently sufficient for the FBI field office in Norfolk, Virginia, to send an intelligence report to its counterpart in Washington, D.C. An effective plan for January 6 would have required a coordinated group of law enforcement agencies to have acted upon that evidence. In the past, the FBI penetrated groups affiliated with the Islamic State (or ISIS) by deeply infiltrating secret communication channels. In this case, no such action was necessary: the seditious mob reveled in the privilege of being able to plan an attack on the Capitol out in the open, with little concern about operational security. That open approach is now backfiring, as federal prosecutors around the country use public postings to round up participants and charge them with crimes. The result may well be a shift to more covert methods. But the incoming administration will still need to consider the possibility that an angry and dangerous subculture will continue to operate semi-openly, with the Newsmax media ecosystem and elected QAnon Republicans continuously feeding its grievances.
The sites where the operation was planned—the tech platforms—have turned years of policy precedent upside down since January 6 in an attempt to moderate content. After the insurrection at the Capitol, Facebook and Twitter banned the president, the former at least through Biden’s inauguration, the latter permanently. The platforms simultaneously cracked down on users trafficking in election disinformation and the QAnon conspiracy theory: tens of thousands of such accounts disappeared within hours. Suppliers and consumers of pro-Trump disinformation responded by fleeing to alternative platforms, most notably Parler, the rise of which suggests that even as mainstream platforms moderate the supply of radicalizing content, they will not reduce—and may even inadvertently increase—the demand. Apple and Google have both removed Parler’s mobile app from their stores, and Amazon Web Services discontinued its hosting services. But there are tradeoffs inherent in these moves: deplatforming the insurrection’s supporters may impede the coordination of violent actions ahead of the inauguration, but it will also generate significant anger toward Big Tech, reinforcing the perception that Silicon Valley companies are biased against conservatives. The most entrenched supporters will move to less visible social channels, several of which will be hosted overseas and protected from lawful requests for data.
The decision to deplatform the president of the United States has reverberated around the world. Americans make up only around five percent of the user base of Facebook and Twitter. Those companies have now set the precedent of stripping an elected leader of protection against moderation on the grounds that he stepped over a poorly defined red line. So far, the companies have not explained how their new standard might be applied to other elected leaders, whether in countries where the companies have extensive operations or in those that lack a single employee.
Many populist leaders with authoritarian tendencies share Trump’s affinity for speaking directly to voters through social media. Narendra Modi of India, Rodrigo Duterte of the Philippines, and Jair Bolsonaro of Brazil have used American Internet platforms to exploit ethnic tensions, attack independent journalists, and manipulate nationalistic sentiments. All three have run into content moderation controversies and have benefited from the deference that has traditionally been granted elected leaders, even when those leaders are not themselves respectful of democratic norms. Even German Chancellor Angela Merkel, who has in the past called on U.S. tech companies to moderate content more proactively, raised questions about the power dynamics reflected by the social media platforms’ latest steps. The precedent suggests that American tech CEOs could impose their will on foreign leaders the same way they have on the U.S. president. Any such action would not come without consequences, however, as many ruling parties have powers that are not constitutionally available to the U.S. president and that would allow them to punish American companies and their employees.
In an ironic parallel, as leaders overseas consider the deplatforming precedent, right-wing activists in the United States seem poised to adopt the communications strategy of their counterparts in Brazil and India—namely, using encrypted messengers instead of public social media platforms. Signal, a popular secure messaging product, is currently the top free app in the Apple App Store, and Telegram, which was also used to plan the January 6 uprising, is the second. Public battles over end-to-end encryption, which renders the platforms unable to see the content of messages and therefore unable to moderate them, have been fought in free and nonfree countries alike. In the United States, the debate has largely centered on the conflict between security and privacy advocates and those who wish to breach encrypted apps in order to prevent child sexual abuse and Islamist terrorism. The political battle lines over online privacy may now shift, with more Democrats seeking to grant law enforcement access to communications among domestic extremists.
The alternate reality that political leaders, media properties, and influencers have helped construct still envelops millions of Americans, and it is unlikely to dissipate any time soon. The deplatforming of hundreds of thousands of people—and the arrests of Trump supporters who openly flouted the law—may serve to deter the milder supporters of the soon-to-be ex-president from violence. But such actions are also likely to drive a radicalized faction underground, into the sorts of digital spaces that harbor terrorist groups around the world.
The Biden administration will soon take over a country that is not only fractured politically but also home to a sizable movement of antigovernment insurrectionists who are armed not only with weapons of war but with propaganda and communication tools that would have been inconceivable to Nathan Bedford Forrest, the Confederate general and first grand wizard of the Ku Klux Klan, or even to the Oklahoma City bomber Timothy McVeigh. Dealing with this movement, in a manner that respects the Constitution in a way that Trump’s mob never did, is one of the most difficult tasks President-elect Biden—and the tech companies—now face.