Since late March, Hong Kong has walled itself off from the outside world. Only residents returning to the city are still allowed in. Upon arrival, each is handed an electronic wristband that connects to the wearer’s smartphone. Once home, people are told to walk the perimeter of their apartments, establishing a virtual boundary that they must not cross for two weeks. If they step outside the perimeter, the wristband will send an alert to government officials. Violators face a hefty fine and up to six months in jail.
Hong Kong’s use of digital surveillance in the fight against the novel coronavirus—along with that of many other governments, including several U.S. states—has stirred a fierce debate about balancing public health and privacy. Yet the underlying problem precedes the current crisis. Corporations and governments have been amassing intimate personal data about ordinary citizens for years. Our research shows that even before the coronavirus pandemic, political campaigns, in particular, were engaged in a digital arms race to gather as much information about citizens’ whereabouts, habits, and beliefs as possible. That competition continues apace as the U.S. presidential election nears. Now, the pandemic risks normalizing and legitimizing these invasive practices.
As the COVID-19 crisis rages on, contact-tracing apps are being used in at least 28 countries with the intention of tracking infected individuals and alerting those who come into contact with them. At face value, the apps seem beneficial. But although contact tracing is an established practice, the efficacy and ethics of using digital technologies to facilitate it are heavily debated. Governments have ignored such quandaries, expanding their location-surveillance capacities in step with the global creep toward authoritarianism.
In Israel, embattled Prime Minister Benjamin Netanyahu has issued emergency regulations to bypass parliament and allow the police and Shin Bet, the country’s internal security agency, to track and contact carriers of COVID-19 and those they may have infected. The new regulations permit the authorities to access ill-defined “technological data” from people’s phones without their consent. In Russia, some regions are experimenting with location-tracking and facial recognition systems to monitor citizens’ movements during the pandemic. The Chinese government is also expanding its system of “automated social control,” drawing on tools first developed to round up and intern an estimated one million Uighurs and other Muslim minorities in so-called reeducation camps.
What unites these efforts is their reliance on data harvested from people’s smartphones and other devices—veritable troves of personal information ripe for collection and exploitation. Today’s electronic devices store almost everything they can about themselves and their users: location, device type, time of use, even health-care data, religious affiliation, and dating preferences, depending on what apps users download. Many apps collect this information, especially location data, for later sale to third parties. In fact, the developers of location-tracking software often pay apps to integrate their technology regardless of necessity. As a result, even flashlights, calculators, games, and photo-editing apps on our smartphones can log our whereabouts. Cell phone companies, too, have been known to sell their clients’ location data—the U.S. Federal Communications Commission recently proposed $208 million in fines for the four largest U.S. carriers for improperly doing so.
Even more precise location data can be gathered when smartphones, tablets, or laptops connect to WiFi or Bluetooth access points. This information is bound to the device’s unique identifying number, which in turn can be traced to the individual who owns it, allowing for deanonymization and the collation of diverse, rich data points.
Access to location data enables governments and private companies to conduct their outreach with invasive precision. One approach, demonstrated by Hong Kong’s strategy of coronavirus containment, is to draw a virtual boundary around a physical space and record the devices of individuals who cross the digital fence. Those who enter at a certain time of day can be sent a tailored message or advertisement, possibly nudging them toward action at a specific time and place. Whoever controls the fence can also track their movements. Realtors can use such geofencing to market properties to individuals walking past an open house, and restaurants can advertise happy-hour deals to people nearby. Restaurants can even combine the geofencing data with other demographic information in order to specifically target foodies or pizza lovers. Martin Sorrell, the founder of the world’s largest advertising group, WPP, has called app-based location targeting the “holy grail” for advertising.
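At its core, a geofence of this kind reduces to a distance check: flag any device whose reported coordinates fall within some radius of a target point. A minimal sketch in Python illustrates the idea; the fence location, radius, and device pings below are invented for illustration, not drawn from any real campaign or vendor.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    earth_radius_m = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

# Hypothetical fence: 100 meters around an open house.
FENCE_LAT, FENCE_LON, RADIUS_M = 30.2850, -97.7335, 100

# Hypothetical location pings: (device_id, lat, lon).
pings = [
    ("device-a", 30.2851, -97.7334),  # a few meters from the fence center
    ("device-b", 30.3000, -97.7000),  # kilometers away
]

# Devices inside the fence become the targets of the tailored ad.
inside = [
    device for device, lat, lon in pings
    if haversine_m(lat, lon, FENCE_LAT, FENCE_LON) <= RADIUS_M
]
print(inside)
```

Real geofencing platforms work with irregular polygons, dwell times, and time-of-day filters, but the underlying logic is the same filtering operation.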
The political use of geofencing is more controversial. During the 2016 U.S. presidential election, an unnamed Republican primary candidate paid the company Beaconstac to attach Bluetooth beacons—lightweight, wireless transmitters—to campaign yard signs. At the time, these beacons could send unprompted notifications to any Android device and some (though not all) Apple devices in the vicinity. Beaconstac said that beacon messaging has been used during elections in India and Nigeria, too.
Our team at the University of Texas at Austin calls such methods “geopropaganda”: the use of location data by campaigns, super PACs, lobbyists, and other political groups to influence political discussions and decisions. The tactic is a subset of computational propaganda—the use of algorithms, automation, and human curation (for example, state-sponsored trolls or hired social media influencers)—to manipulate public opinion and political communication. A beacon-based alert might urge you to attend a nearby political rally. A YouTube advertisement might encourage you to give your vote to a particular candidate because you have been to a gun range or an abortion clinic. On election day, location data could also reveal who has already voted at the polls and whom campaigns should encourage to do so. Some might consider this type of alert an innocuous catalyst for authentic democratic engagement. In truth, it is an underhanded violation of privacy that distorts political discourse and exacerbates political polarization. Geopropaganda enables selective, intimately targeted messaging with almost no oversight or regulation—offering ample space for conspiracy theories, smear campaigns, and even disinformation.
The 2016 Republican candidate who used electronic yard signs was not successful, but other politicians have taken note. The reelection campaign of U.S. President Donald Trump recently amended a section of legalese on its website to state: “we may also collect other information based on your location and your Device’s proximity to ‘beacons.’”
Even in the absence of digital fences, location data can provide a staggering amount of information about people’s behavior. From an anonymized data set of 50 billion location pings gathered over a few months from 12 million phones, journalists were able to identify people who visited motels at odd hours or met with bail bondsmen. They used this data to name individuals at a clash between antifascist and far-right protesters in Berkeley and to single out a man who picketed outside the Trump hotel in Washington, D.C., in 2016. They discovered that a senior official in Trump’s Defense Department had attended the 2017 Washington, D.C., Women’s March with his wife and easily identified a Secret Service agent, following the agent’s every move from work to home and in between. The potential national security implications of such revelations, particularly given the risk of blackmail, are deeply concerning.
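The deanonymization behind such reporting follows a simple pattern: even without a name attached, the place a device pings most often overnight is usually its owner’s home, and a home address narrows identity dramatically. A toy illustration of that inference follows; the device ID, coordinates, and address lookup are all invented, standing in for the commercial data sets and property records an analyst would actually use.

```python
from collections import Counter

# Hypothetical "anonymized" pings: (device_id, hour_of_day, lat, lon).
pings = [
    ("ad-id-123", 2, 38.9072, -77.0369),
    ("ad-id-123", 3, 38.9072, -77.0369),
    ("ad-id-123", 14, 38.8977, -77.0365),  # daytime ping: likely workplace
    ("ad-id-123", 23, 38.9072, -77.0369),
]

# Invented reverse lookup from coordinates to a street address.
address_book = {(38.9072, -77.0369): "123 Example St NW"}

def likely_home(device_pings):
    """Most frequent location pinged during nighttime hours (10 p.m. to 6 a.m.)."""
    night = [(lat, lon) for _, hour, lat, lon in device_pings
             if hour >= 22 or hour < 6]
    return Counter(night).most_common(1)[0][0]

home = likely_home([p for p in pings if p[0] == "ad-id-123"])
print(address_book.get(home))
```

Once a device resolves to an address, public records can usually supply a name, which is why “anonymized” location data offers so little real anonymity.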
The wealth of behavioral insights that location data offers has not been lost on political operatives. In 2018, the advocacy group CatholicVote.org used location-tracking software to identify several hundred thousand people who had recently visited a Catholic church. It then used this data to zero in on an estimated 600,000 people during five Senate races. Allegedly, the group gave each person in the data set a “religious intensity score” based on how many times he or she had gone to church in a 60-day period and then shepherded them accordingly. In one case, Catholic voters in Missouri received ads claiming that the incumbent Democratic senator, Claire McCaskill, was “anti-Catholic.” Although the impact of such messaging is difficult to quantify, the group has redoubled its efforts for 2020, hoping to use equally granular location data to target unregistered Catholic voters in battleground states.
The Trump campaign stores the names of rally attendees and matches them to voter profiles, uncovering their voting histories and party affiliations. According to Brad Parscale, Trump’s 2020 campaign manager, 15 percent of the people identified at a rally in Battle Creek, Michigan, had not voted in the last four elections and 20 percent of Trump’s audience in Hershey, Pennsylvania, were Democrats. Why not use that information to get them registered and encourage them to vote Republican? Wisconsin’s governor, Tony Evers, also gathered the location data and unique identifier numbers of people’s phones at Democratic Party meetings in Wisconsin during his run for office. Former Democratic presidential candidate Beto O’Rourke did the same at a concert he held with Willie Nelson. Based on interviews we have conducted, this practice appears to be common, but research and reporting on it are scant.
Once the unique identifier numbers of people’s phones have been logged, users can be tracked back to their homes, where political advertisements are broadcast across multiple digital devices, from desktop computers to smart TVs. The data gathered at political rallies and meetups can also lay the groundwork for nuanced relational messaging, which maps relationships between people in order to encourage seemingly organic friend-to-friend political outreach.
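Relational messaging of this kind rests on a blunt inference: devices that repeatedly show up at the same events probably belong to people who know one another. A minimal sketch of building such an edge list follows; the event names and device IDs are invented, and real operations would draw on far richer signals than simple co-attendance.

```python
from collections import Counter
from itertools import combinations

# Invented attendance logs: event name -> device IDs seen inside its geofence.
events = {
    "rally-1": {"dev-a", "dev-b", "dev-c"},
    "rally-2": {"dev-a", "dev-b"},
    "meetup-1": {"dev-b", "dev-c"},
}

# Count how often each pair of devices attends the same event.
pair_counts = Counter()
for attendees in events.values():
    for pair in combinations(sorted(attendees), 2):
        pair_counts[pair] += 1

# Pairs seen together at least twice become inferred social ties,
# candidates for "friend-to-friend" political outreach.
edges = [pair for pair, count in pair_counts.items() if count >= 2]
print(edges)
```

From an edge list like this, a campaign can ask known supporters to relay messages to the specific acquaintances the graph links them to, which is what makes the outreach feel organic.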
The use of geolocation in political campaigns is troubling because it enables “dark advertising”—the ability to send out tailored ads that are seen only by their intended recipients and no one else. The addressees are not always ordinary voters: officials could be singled out for subtle nudges, too. One interest group, the Visit U.S. Coalition, has used location data to deliver messages promoting tourism-friendly policies to Trump and to those close to him. The efficacy of those efforts is hard to assess conclusively, but their breadth is astounding. The coalition hired a lobbying firm to send tailored messages to IP addresses at the White House, the Trump hotel in Washington, D.C., Trump’s New Jersey golf course, and Mar-a-Lago. Other lobbying firms in D.C. have endeavored to influence Trump by geofencing his friends and advisers. They have digitally cordoned off and evaluated movement around the home of Jared Kushner and Ivanka Trump and targeted the billionaire Harold Hamm and the British politician Nigel Farage in an attempt to get within earshot of the president. They even geofenced the school of a child of a close adviser of Trump’s, sending ads specifically crafted to interest young people.
CatholicVote.org’s use of location data came to light mostly thanks to an offhand statement made by Steve Bannon in a deleted scene of The Brink, a recent documentary about the former Trump adviser. “If your phone’s ever been in a Catholic church,” Bannon can be heard saying, “it’s amazing, they got this data.” But the public cannot rely on fortuitous (or deliberate) asides like Bannon’s to be alerted to such practices. Journalists and researchers have a responsibility to illuminate who exactly is engaging in geopropaganda and how (and to separate the branding and hype from reality). So far, these efforts are lacking.
That oversight is especially glaring amid the pandemic. Prior to COVID-19, the collection of mass location data by governments, authoritarian and democratic, was not openly accepted on a large scale. Now, the frame of reference is changing, with proponents of surveillance presenting a false dichotomy: give away your location or people will die. Citizens must not give away their locations, digitally or otherwise, without carefully defined constraints, a clearly stated timeline, and strong policies to prevent misuse.
In the current state of quarantine, much of public life has come to a grinding halt. But when the time comes for people to take off their facemasks and resume their regular day-to-day activities, they may find themselves inhabiting a world overtaken by surveillance. Tools that capitalize on geolocation data could be used, invisibly, to control people around the world in ways far different from, and far more difficult to remove than, masks and stay-at-home orders.