The Day the Internet Came for Them
How Online Disinformation and Extremism Stormed the U.S. Capitol
Two images from the January 6 riots at the U.S. Capitol will remain seared in my mind. In one, a man strides across the building’s ornate tile floors, past oil paintings and marble busts, holding aloft a billowing Confederate flag. In the other, a man wearing a T-shirt emblazoned with a red, white, and blue “Q” leads a pack of his comrades up a stairwell in the Capitol, chasing the solitary officer guarding that entryway.
The first image, while gut-wrenching, is easy to explain: racism and white supremacy course through U.S. history and remain a powerful force today. But the second image might be more perplexing to those who have not spent the last four years in the dank, dark basement of the Internet. The “Q” on the man’s shirt refers to the now infamous QAnon conspiracy, whose adherents believe that U.S. President Donald Trump is locked in a shadowy war with a cabal of Satanist pedophiles who secretly run the world. That so many of the January 6 rioters subscribed to QAnon and election-rigging conspiracy theories points to how disinformation, indoctrination, and extremism now flourish online. The government and social media platforms have for too long refused to take seriously—or worse, embraced—this underbelly of the Internet. It took violence and destruction at the seat of American power for some officials and technology companies to finally acknowledge the threat.
Facebook, Twitter, and a host of other social media companies have now banned the president and some of his closest allies—in some cases permanently—from using their platforms. Twitter’s policy team has had to follow the president as he tried to send messages from other accounts, playing a game of Whack-a-Troll previously reserved for fake foreign users seeking to sow disinformation. How did the United States come to this point? Here, as with the country’s tardy response to foreign disinformation, the culprit is hubris.
The promise of the Internet blinded policymakers in both government and the technology sector. They were unwilling to acknowledge the societal fissures, such as abiding racism, that fuel disinformation and extremism and tended to avoid making tough decisions about speech by blandly invoking the First Amendment. As a result, authorities in both public and private sectors ignored the spread of dangerous ideas on the Internet and the growing networks of radicalized Americans willing to subscribe to notions as far-fetched as QAnon. After the storming of the Capitol, these alarming online trends are now impossible to ignore.
The day after insurrectionists attempted to stop the democratic process, Robert Contee, the chief of the Washington, D.C., Metropolitan Police Department, asserted, “There was no intelligence that suggested that there would be a breach of the U.S. Capitol.” According to The Wall Street Journal, the Federal Bureau of Investigation and the Department of Homeland Security declined to issue a joint threat assessment of the event because they determined that it did not pose a significant risk. But they should have seen the chaos coming. Through December, dark messages encouraging “patriots” to “take back our country” through “Operation Occupy the Capitol” appeared across pro-Trump pages on Facebook and Instagram. Users of “alternative” social media platforms such as Parler and Gab made their intentions more explicit. One post on thedonald.win—a spinoff of a now disabled Reddit message board—read: “The Capitol is our goal. Everything else is a distraction. Every corrupt member of Congress locked in one room and surrounded by real Americans is an opportunity that will never present itself again.” Below it, a chilling comment: “The final solution is the only solution.”
This kind of online chatter somehow didn’t provoke the authorities into more concerted action. Perhaps law enforcement officials imagined that the Internet’s public square is a messy democratic free-for-all that caters to the fringes of society but whose online screeds don’t metastasize into offline violence. If so, they were guilty of underestimating the threat. A darker possibility is that some government analysts ignored the brewing chaos because they were more sympathetic to the mobilization in support of Trump than they were to, for example, the Black Lives Matter protests last summer, which received notably sterner treatment from police forces around the country.
To be fair, the U.S. government has grown more attentive in recent years to the violence that online movements such as QAnon can generate. The FBI named QAnon a domestic terrorism threat in 2019. That threat nevertheless grew in 2020. The COVID-19 pandemic stirred a climate of uncertainty, distrust, and fear that helped QAnon and adjacent theories reach new adherents. Social media platforms aided and abetted their growth by driving vulnerable audiences to their content. And Republican officials, including Representative Paul Gosar of Arizona and Senator Ted Cruz of Texas, legitimized the theories rather than condemning them.
A welter of conspiracies converged across the U.S. information ecosystem by the end of 2020, with QAnon interwoven with other theories about the alleged danger of 5G wireless technology, the perils of the COVID-19 vaccine and the wearing of masks, the dealings in Ukraine of President-elect Joe Biden’s son, and, tellingly, the conviction that Biden had stolen the presidential election from Trump.
Social media created this perfect storm. Recommendation algorithms on platforms such as Facebook and YouTube prioritize engagement over truth, meaning that a search for natural health remedies, for instance, could lead users in only a few clicks to far more dangerous content. While researching the reopen movement last year—the wave of protests in the United States that demanded an end to COVID-19 lockdowns—I monitored an “alternative health” group on Facebook. Soon after I browsed through the group, Facebook suggested I join groups related to white supremacy, the “Plandemic” film (a documentary that argues that the COVID-19 pandemic is a manufactured crisis), and, yes, QAnon. YouTube’s recommendation algorithm works similarly, setting users on a spiral of increasingly radicalizing content. Both platforms have since banned QAnon material, but the conspiracy theory is resilient: its adherents often change the language and code words they use to avoid detection, and they have set up shop on alternative platforms.
No matter what platforms decided to do during the past four years to moderate or prohibit false, dangerous narratives, they still had an enormous, Trump-shaped problem. Trump frequently tweeted messages that violated Twitter’s terms of service to his nearly 90 million followers, including misleading statements about the safety and security of mail-in balloting in the lead-up to the November election. His many online lieutenants and allies magnified his messages even further. Trump’s refusal throughout his term to disavow white supremacists and to repudiate the QAnon theory further strengthened those movements, whose members, unsurprisingly, helped push the president’s false allegations of a “rigged election.” Activists from across Trump’s base, all of whom bought into that disinformation narrative, arrived en masse at the U.S. Capitol on January 6 with the express goal of overturning the democratic process, causing mayhem, and shaking the country to its core.
Trump’s role in instigating the mob finally pushed social media platforms to take action against the president and some of his allies. In a blog post surprising for its candor and detail—two qualities normally foreign to social media companies—Twitter laid out its reasoning for permanently suspending Trump from the platform on January 8. The company determined that two tweets the president sent after the Capitol insurrection violated its “glorification of violence” policy. Twitter wrote that the content was “likely to inspire others to replicate the violent acts that took place on January 6, 2021,” and asserted there were “multiple indicators that [it was] being received and understood as encouragement to do so.”
Twitter’s ban on Trump does not in any way conclude this era; indeed, it may mark a period of intensifying discord as social media platforms struggle to find a consistent and clear line on what speech is acceptable and what isn’t. For years, the same platforms attempted to rationalize their refusal to tackle influential accounts, such as that of the president, that were very clearly in violation of their policies. Using the First Amendment as a shield, platforms argued that leaders such as Trump constituted “public interest exceptions.” Even if Trump posted content that transgressed their rules, social media companies insisted that it wasn’t their job to censor; rather, it was the job of voters to judge Trump harshly for his statements. In the process, these companies ignored the darker possibility that such content might lead to deadly violence. The platforms have now reversed course in an attempt to rewrite the narrative of their complicity. But the decisive, principled action of the past week may nevertheless discredit them in the eyes of many Americans who see the social media bans as censorship. The resulting muddle will only allow extremist movements based on disinformation to persist and grow, with the help of officials such as Representative Matt Gaetz, Republican of Florida, who asserted without evidence, mere hours after the attack, that “antifa” was responsible for the violence on January 6.
But the events at the Capitol are still a loud wake-up call. The United States has finally been shocked into understanding that the information people consume online has real-world consequences for public health, public safety, and democracy. Americans should remember that activists from countries as disparate as Myanmar and Ukraine have made this argument for years. By virtue of the United States’ size and the power of its example, its response to this crisis will have consequences beyond its borders. This is a moment that calls for nuance and introspection, not the grinding of personal political axes in the service of party over country. And should lawmakers ever again be tempted to argue that social media platforms ought to be no-holds-barred free-speech zones, they would do well to recall the fear and the heartbreak of January 6: the day the Internet came for them.