Several weeks ago, shelter-in-place orders upended the daily lives of tens of millions of people and sent the U.S. economy into a tailspin. Now, amid signs that the curve of infection with the novel coronavirus, which causes the disease called COVID-19, is indeed flattening, the logic of urgent necessity has gradually been replaced by a different logic—one that holds that the ends of public health, economic recovery, and personal privacy are incommensurable.
Privacy “could be the next victim” of the novel coronavirus, warns Fortune magazine, while a headline in The New York Times reads “As Coronavirus Surveillance Escalates, Personal Privacy Plummets.” (The content of the Times article is not nearly as reductionist as the headline—I recommend reading it.)
The choice appears to be binary: either we heed the advice of medical experts about the benefits of aggressive contact tracing, which has worked well in countries such as Taiwan and South Korea, and agree to an erosion of the right to privacy—or we double down on a defense of privacy, knowing full well that doing so is likely to undermine public health.
But privacy and public health do not have to be incommensurable goals. Assertive containment strategies do not require Americans to treat privacy as the unfortunate holdover of a pre-pandemic age, nor does a defense of privacy require epidemiological disarmament. The big threats to personal privacy will come not primarily from the measures taken over the next one to five weeks but from the institutionalization of biosurveillance technologies over the coming one to five years. Those are the repercussions to plan for.
The logic of incommensurability is not only false. It is also dangerous. Such framing allows concerns about privacy to be too easily dismissed as unjustifiable liabilities, or distractions, during an acute public health emergency.
Containing infectious diseases can require governments to collect and share personal and population data on a vast scale. This pandemic is no exception. It has intensified trends that were already gathering momentum in the United States and globally. Technology that tracks people’s movements and assesses their health was commonplace before the novel coronavirus appeared on the global radar. Governments and corporations were already intrigued by the potential value of medical data. And the public was increasingly poised to accept measures that were not long ago seen as invasive.
Surveillance capabilities that were previously used to fight crime and terrorism are now repurposed to fight disease. This repurposing is neither a big organizational nor an especially demanding technological shift. In many countries, biosurveillance has long been included in the portfolio of intelligence agencies. In the United States, the Pentagon and organizations such as DARPA and the CIA run programs focused on natural and manmade pathogens. In Israel, the government recently disclosed the existence of a large cache of cellphone data that was originally intended for counterterrorism operations but could be adapted to track the movements and social contacts of infected persons. In South Korea, authorities have used information about credit card transactions that is usually collected to combat financial fraud to implement a similar contact-tracing strategy. And in China, the government’s ability to access a vast trove of CCTV footage has allowed authorities to retrace interactions among disease carriers at a microsocial level.
At the same time, new biosurveillance capabilities are in rapid development. They are meant to identify infectious disease carriers and track their social interactions. In Singapore and Taiwan, cellphone apps and scannable QR codes in public places have allowed authorities to monitor the movement of newly arrived visitors in the wake of the COVID-19 outbreak. Dozens of cities in China now use the Alipay Health Code system (developed by a subsidiary of the e-commerce giant Alibaba), which automatically assigns each user a color-coded infectious threat level, on the basis of which local authorities can make decisions about mandatory quarantine. In Italy, cellphone providers have made location data accessible to state agencies, which now use the information to determine whether local residents are obeying strict quarantine orders in the Lombardy region. And in the United States, the White House has begun talks with the private sector about tracking people’s locations and linking that information to medical data. The proliferation of smartphones and wearable technology—which often records basic biomedical information in real time—has only increased the potential scope of these efforts. Analyzing location data from cellphone towers is the low-hanging fruit rather than the feasible limit.
But the pandemic is not only fueling the deployment and development of surveillance technology. It is also changing how biomedical data are factored into routine government screening and monitoring. Many travelers are already familiar with thermal scanning at international airports, which allows agencies such as the Transportation Security Administration to identify passengers with fevers before they cross the border. Studies and epidemiological simulations have shown that thermal scanning is unlikely to contain the spread of COVID-19, yet demand for such scanners has increased significantly. In the wake of the pandemic, travelers, immigrants, and asylum seekers at U.S. borders will likely submit to more such screening, along with medical history questionnaires. So might homeless people and welfare recipients within the United States.
Government agencies aren’t alone in recognizing the value of personal biomedical data. Corporations see profit in such information, and private organizations may increasingly factor it into their decision-making. In the United Kingdom, several landlords, including the McDonald’s charitable foundation, have started to evict tenants who recently had contact with COVID-19 patients. In the United States, Google’s Project Nightingale was recently disclosed to have gathered and analyzed health data from millions of patients in 21 states. Other tech companies, too, are chasing this largely untapped resource. While the collection of medical data remains highly regulated, the COVID-19 pandemic is likely to result in policy changes that allow private companies to analyze health-related information more easily. Already, the Global Privacy Assembly has identified regulatory changes related to data privacy in 31 countries in response to the pandemic. That trend will likely accelerate.
Public attitudes toward privacy will partly determine the scope of these developments. At the moment, those attitudes are in flux. Such is often the case in a crisis, as the historian Sarah Igo has documented in her book The Known Citizen. The U.S. government first introduced Social Security numbers in the wake of the Great Depression, and at that time, the numbers were widely circulated and even broadcast on local radio stations. The numbers evolved into closely guarded personal identifiers only when concerns about government overreach eclipsed the preoccupation with access to welfare services and economic relief. The privacy of medical data has changed in similar fashion. When U.S. Boards of Health first mandated the reporting of infectious disease outbreaks in the late nineteenth century, local physicians and newspapers rebelled against a perceived affront to patients’ privacy and doctors’ professional autonomy. Only later did the routine reporting of vital statistics and infectious diseases become socially accepted and widely practiced. In these cases, the broad public reassessed its standard of privacy following a crisis or a change in state practice. The same dynamic is likely to hold today.
If Americans are indeed entering a brave new world of biosurveillance, better to step into it with clear conviction than to stumble unwittingly over the threshold. For this reason, discussing privacy, even in the middle of the pandemic, is important. But as worrisome as all of the above trends might seem, none suggests the need to pit privacy against public health, as if the pursuit of one should render the other infeasible. Both ends are immensely valuable, and to preserve them requires nothing more or less than long-term vigilance against the erosion of the rule of law.
As the United States and other countries develop new means of tracking and controlling the spread of disease, the key concern should be for the bureaucratic half-life of the emergency measures. Will these new technological capabilities and legal frameworks outlive the extraordinary circumstances that produced them? Without sunset clauses, such interventions can readily evolve into a new normal—in which state-sponsored biomedical monitoring becomes routine at the national or even global level.
Some legislatures have already foreseen this problem and limited the emergency measures accordingly. In the United Kingdom, the government must review its COVID-19 policies at least every 21 days. In the state of New York, a review must occur after 30 days. Similar provisions can also cover the repurposing of existing surveillance capabilities and ensure that they are used transparently and appropriately—especially since they are now applied to a very different context from the one for which they were designed. The United States developed many of its surveillance technologies and databases in the context of the “global war on terror.” These systems gather information about foreigners and are often required to screen out citizens, a distinction that is far less relevant in fighting a disease that knows no border or citizenship status.
There remains a danger that the technology and logic of biosurveillance will over time be applied in contexts where they are inappropriate, unconstitutional, unnecessary, or counterproductive. Consider the militarization of police departments in the United States after 2001: technologies that were intended to increase the responsiveness of law enforcement to terror attacks migrated downstream into the routine operations of everyday policing. Similarly, aggressively monitoring people’s health data might have short-term value during a pandemic but could become inappropriate and problematic if it simply became habitual under normal conditions. Biomedical data must be protected by a regulatory framework that safeguards against leaks, minimizes the potential for abuse, and imposes limits on its commodification.
To articulate the value of privacy, and to spell out the specific exceptions that can be justified on the grounds of public health, is in fact no novel task. Since its emergence in the late nineteenth century, the right to privacy has been defined only in part by what it encompasses and equally by what it excludes. Exceptions are a remarkably unexceptional feature of privacy jurisprudence and privacy discourse in the United States. But they must still be specified, in order to give form and substance to an often vague idea.
Americans need not dismiss expert medical advice in order to defend the right to privacy, nor should they surrender the insistence on basic civil rights in a time of crisis. Nonetheless, the imperative today is to guard against misuse, mission creep, and negative downstream effects—especially given that the luxury of time is not one the country can afford.
So which steps can be taken now in order to prevent missteps in the future? Emergency legislation should clearly list the powers it unlocks and the conditions that void those powers. Both national security and public health rely on surveillance. But while national security powers are often veiled in secrecy, the same logic doesn’t apply to most biosurveillance campaigns. These must be backed by legislation that clearly articulates and explicitly authorizes them, such that political leaders cannot assume that any practices not forbidden by the Constitution or judicial precedent are likely to pass muster. The choice, then, is not between assertive emergency responses and personal privacy but between legislative blank checks and robust checks and balances. By the same token, legislators should promote the least invasive solutions available. Scaling up testing for COVID-19, for example, would allow local authorities to target their contact-tracing and quarantine measures—rather than resorting to a dragnet approach that collects everyone’s movement data.
Americans should press for technological safeguards as well as legislative ones. For example, those designing technology to use for public health tracking can introduce artificial noise into data sets, such that aggregate patterns are still discernible but the identity of individuals cannot easily be inferred from previously anonymized data. Wherever this so-called differential privacy approach is possible, it should be considered. Likewise, personalized data can often be made available case by case, only when needed, rather than by providing the government with permanent backdoor access to data sets. Italy and Germany have taken this approach, allowing cellphone providers to make location data available only for specific regions and narrow time frames. Those designing the systems to meet today’s crisis must guard above all against developing capabilities and databases that cannot be controlled or scaled back when the moment recedes but would instead persist as technological faits accomplis.
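The noise-injection idea described above has a standard construction in the differential privacy literature: the Laplace mechanism, which perturbs the answer to an aggregate query by random noise calibrated to how much any one person can change the answer. The sketch below is purely illustrative; the epsilon value, the zone names, and the count query are assumptions for the example, not parameters of any real contact-tracing system.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float = 0.5) -> float:
    """Answer a counting query with noise calibrated to sensitivity 1:
    adding or removing any single person changes the true count by at most 1,
    so Laplace noise with scale 1/epsilon yields epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(scale=1.0 / epsilon)

# Hypothetical location records for visitors to public zones.
visits = ["zone_a", "zone_b", "zone_a", "zone_c", "zone_a"]

# Release a noisy count: aggregate foot traffic in zone_a stays roughly
# usable, but no individual's presence can be confirmed from the figure.
noisy = private_count(visits, lambda z: z == "zone_a", epsilon=0.5)
```

The design trade-off is the one the paragraph implies: a smaller epsilon adds more noise and protects individuals more strongly, at the cost of less precise aggregate statistics for health authorities.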
German Chancellor Angela Merkel argued in a recent televised address that the severity of the current restrictions demands that the government make its decisions with particular transparency. In the United States, federal and state governments should address this need by immediately identifying and empowering agencies that can oversee the practice of infectious disease management. These regulatory agencies should be charged with making sure that health authorities remain within legal bounds, respond to citizens’ complaints, and communicate clearly to the public.
To lament the “death of privacy,” and to dance on its grave, has become fashionable, as if the expansion of state surveillance and the commodification of personal data have rendered involuntary disclosure an inevitable fact of modern life. But this view is misguided, as recent laws such as the European General Data Protection Regulation and the California Consumer Privacy Act demonstrate. Those laws resulted from years of coalition building and advocacy, dedicated to rearticulating the meaning of privacy in the twenty-first century and rebalancing the scales after a period of passivity. Such coalitions are vital again today.
Amid the COVID-19 pandemic and in the months and years to come, U.S. citizens must again define privacy—and spell out its justified exceptions—but this time, in relation to biomedical and geo-located data. Americans have a duty to guard liberty, in the words of Supreme Court Justice Louis Brandeis, even “when the government’s purposes are beneficent.” Doing so requires neither the surrender of privacy nor the rejection of medical science.