Evidence of War Crimes Must Be Preserved, Not Destroyed
The video starts innocuously. A soldier in camouflage pants and a black shirt speaks to men who are mostly out of frame, punctuating his words with waves of his right hand. A pistol dangles from his left hand, and another man kneels behind him, hands behind his head. But a minute into the video, the soldier in the black shirt suddenly pivots and shoots. The kneeling figure slumps forward as the soldier strides toward him, shooting the prisoner twice more in the head.
More than three years later, this footage is a central piece of evidence in a novel case pending before the International Criminal Court (ICC). Prosecutors in The Hague issued an arrest warrant for the man in the black shirt, Libyan militia commander Mahmoud al-Werfalli, after videos documenting his role in the killing of 33 people surfaced online. The evidence against Werfalli, unlike that presented in any other case in the court’s history, is based primarily on social media documentation. Without the videos, prosecutors would have no case.
Although the Werfalli videos have grabbed headlines, they are hardly unique. Evidence of the world’s most egregious crimes, including genocide, torture, and the destruction of cultural heritage property, circulates in real time on platforms such as Facebook, Twitter, and YouTube. Understandably, these companies often remove the most graphic content from public view. But human rights advocates and reporters have long argued that destroying such evidence undermines future prosecutions and denies victims the justice they deserve.
As a result, social media companies face a dilemma: they need to prevent their platforms from becoming superspreaders of harmful content, while simultaneously preserving access to evidence of mass atrocities. A viable solution would be so-called evidence lockers—freestanding repositories for potential evidence found on social media, operated by external nonprofits or by social media companies themselves. Such archives already exist in other legal fields, but tech firms, in conjunction with human rights organizations, need to build on this tested model to create specific vaults for war crimes documentation.
Evidence lockers aren’t a new idea. Archives exist to preserve evidence of a range of criminal activities, including terrorism, child pornography, and antiquities trafficking—cataloging material that has evidentiary, historic, or research value but that social media companies or the uploader might remove from public view. Established vaults vary according to who holds the content and who submits material (social media companies or external parties), whether firms are legally compelled to preserve the content, and who can access the archive.
When designing vaults for war crimes evidence, the first priority should be allowing content to be stored for long periods of time, given the often significant lag between the commission of atrocities and the start of legal investigations. Since evidence is often highly sensitive, vaults also need to include clear and consistent safeguards around who can access and view the stored information. Those designing systems for archiving content should also consider how to prevent repositories from reflecting old colonial paradigms, where institutions echo the interests and preferences of Western backers to the detriment of other countries. Critics of the ICC, for instance, often cite its almost exclusive focus on crimes committed in Africa. To avoid that kind of bias, potential evidence lockers need to preserve content regardless of politics or geography. Ideally, laws would protect the underlying data by standardizing what content is safeguarded and also resolve tensions between access and privacy, intellectual property concerns, and national security considerations.
In a departure from previous practices, war crimes evidence lockers should give human rights groups and international organizations a say in what gets preserved. Typically, governments and companies determine these matters. For instance, neither the ICC nor the existing UN mechanisms documenting atrocities in Syria and Myanmar currently have clear authority to require that companies preserve evidence.
This needs to change. War crimes are unique in that governments are often implicated in the violence and are therefore unable or unwilling to hold themselves accountable. By giving human rights and humanitarian institutions a way to request that evidence be preserved until courts or other legal actors have an opportunity to intervene, social media companies can ensure that crucial material is protected from destruction.
Absent an act of Congress requiring social media firms to establish evidence repositories, why would companies such as Facebook and Twitter do so? It is likely easier for them to simply delete content and avoid the complex bureaucracy of law enforcement, human rights organizations, and international tribunals.
International norms provide one answer. According to the United Nations Guiding Principles on Business and Human Rights, companies have a responsibility to respect human rights: they are expected to avoid causing or contributing to abuses and to prevent or mitigate violations linked to their operations. Preserving social media content and sharing that data with relevant authorities arguably falls within these guidelines. Indeed, several major tech companies—including Microsoft and Facebook—have expressed support for the principles and announced steps to better align their practices with them.
But firms also have material incentives to participate. Legislators, advertisers, company employees, and users have lobbied companies to do a better job of supporting human and civil rights. For example, civil rights groups recently launched a campaign under the hashtag #StopHateForProfit to pressure Facebook to address hate speech and misinformation on its platform. Google employees also protested the company’s involvement in Project Maven, a Pentagon-based initiative that would refine the use of artificial intelligence for a range of military purposes, including drone strikes. Google ultimately withdrew from the project. The threat of regulation, loss of ad revenue, damage to employee morale, and negative publicity are strong incentives for companies to improve their records on holding war criminals to account.
To their credit, several companies have already begun engaging with human rights organizations to explore the possibility of establishing war crimes evidence lockers. Social media firms, encouraged by human rights researchers, have floated the idea of contributing data to an independent repository or simply retaining the information themselves. At a minimum, many of these companies recognize that they have a moral, and in some cases a legal, obligation to protect the public from harmful content while ensuring that important information remains available for international justice efforts.
Still, challenges remain. One is identifying what content should qualify for preservation. Like “terrorism” and “hate speech,” “war crime” can be an uncomfortably slippery term. This makes it hard to automatically detect relevant content; algorithms don’t play well with ambiguity. Still, advocates and companies can solve that problem by looking at past efforts to combat illegal activities. Firms could, for example, rely on definitions used by official bodies such as the United Nations Office of the High Commissioner for Human Rights or focus primarily on clear cases of abuse, such as mass killings or the use of chemical weapons, that lend themselves well to detection by both human moderators and machines.
Another challenge is figuring out how to make content available to war crimes investigators while protecting the privacy of users who posted the data as well as those depicted in their posts—a simultaneous legal and ethical concern. One way to address these privacy issues is to err on the side of caution, preserving the data but then limiting who can access it and under what terms.
On December 1, the United Nations Office of the High Commissioner for Human Rights, in partnership with UC Berkeley School of Law’s Human Rights Center, will launch the Berkeley Protocol on Digital Open Source Investigations.
The protocol, part of a global effort to set standards for the use of online content in international criminal cases, reflects a growing recognition that online information has the power to strengthen accountability for human rights violations worldwide. The Werfalli case is just one example. On October 5, human rights groups filed a historic case in Germany accusing the Syrian regime of Bashar al-Assad of war crimes. Their argument relied on both traditional evidence and information sourced from social media. Several other cases currently before European courts rely on a similar combination of evidence.
As evidence gathered from social media becomes more widely used in war crimes cases, companies and human rights organizations must devise a thoughtful system for preserving irreplaceable content. Without such a system, even mass atrocities that are documented in real time on social media will risk going unpunished.