At last count, the earthquake that struck Nepal on April 25 and the large aftershock that followed three weeks later had claimed more than 8,500 lives, making it the largest disaster in the country’s history. It was also a watershed in another way: it was the first time artificial intelligence was used so extensively in relief efforts, tackling the overwhelming amount of information generated by mobile phones, satellites, and social media, among other sources, to help aid workers locate victims, identify relief needs, and navigate dangerous terrain.
One of the most crucial first steps in disaster relief is getting a picture of what the new terrain looks like—what roads are blocked and what buildings have crumbled—and the quickest and safest way an aid worker can get from point A to point B. Digital maps that provide real-time information register change as the disaster unfolds, allowing humanitarian workers to operate safely and accurately.
In Nepal, the UN Office for the Coordination of Humanitarian Affairs (OCHA) asked the Digital Humanitarian Network to identify and map all tweets related to urgent needs, infrastructure damage, and response efforts. OCHA also wanted the Network to identify and map all pictures of disaster damage posted to Twitter and in mainstream media articles. Lastly, it asked for up-to-date street maps based on the latest available satellite imagery.
But finding relevant, usable, and credible pieces of text, imagery, and video during major disasters is like searching for a needle in a meadow. (Haystacks are ridiculously small data sets by comparison.) For starters, there is now not only more data but also a greater variety of data types. Text-based data include mainstream news articles, tweets, text messages, and WhatsApp messages. Images can include Instagram posts, professional photographs that accompany news articles, satellite imagery, and, increasingly, aerial imagery captured by unmanned aerial vehicles, or drones. And video is broadcast on television channels, Periscope, and YouTube.
That is why artificial intelligence, used alongside crowdsourcing, is needed to provide a near-instant and complete picture of a disaster. Within 72 hours of the first quake, some 3,000 volunteers across 90 countries had mobilized through the Standby Task Force, a member organization of the Digital Humanitarian Network. These volunteers—we like to call them “digital jedis”—tagged crisis-related tweets that had been automatically pushed, via keywords and hashtags, to MicroMappers, an experimental, free, and open source software platform that I developed with my team at the Qatar Computing Research Institute, in partnership with OCHA.

While the digital jedis labeled tweets, a parallel artificial intelligence platform picked up the patterns of the human taggers and learned to apply them automatically. AIDR (Artificial Intelligence for Disaster Response), also an experimental, free, and open source prototype, is an AI engine that learns in real time: it uses the tweets tagged by digital volunteers on MicroMappers to recognize which ones belong in which categories: urgent needs, infrastructure damage, or response efforts. Its accuracy increases as digital humanitarians interact with it, and compared to human taggers, AIDR is far more efficient, doing the same amount of work in much less time. As a result, the platform automatically filtered through half a million earthquake-related tweets to identify those of interest to the UN. These were then added to the live crisis map hosted at MicroMappers.org, allowing OCHA to quickly assess the damage in Nepal.
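The learning loop behind a system like AIDR can be sketched in a few lines. The snippet below is an illustration, not AIDR’s actual code: it trains a text classifier incrementally on small batches of crowd-labeled tweets, roughly the way AIDR updates its model as volunteers tag messages. The tweets, labels, and batch structure here are all invented for the example.

```python
# Illustrative sketch of real-time learning from crowd-labeled tweets.
# Hypothetical data; a real deployment would stream labels in from volunteers.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

CATEGORIES = ["urgent_needs", "infrastructure_damage", "response_efforts"]

# Invented crowd-labeled tweets, arriving in small batches over time.
labeled_batches = [
    [("need water and food in Gorkha", "urgent_needs"),
     ("bridge collapsed on the highway", "infrastructure_damage"),
     ("rescue team deployed to Sindhupalchok", "response_efforts")],
    [("trapped families need medical help", "urgent_needs"),
     ("hospital building badly cracked", "infrastructure_damage"),
     ("relief supplies arriving at the airport", "response_efforts")],
]

# A hashing vectorizer needs no fitted vocabulary, so it can handle
# never-before-seen words as new tweets stream in.
vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
clf = SGDClassifier()  # linear model trained by stochastic gradient descent

for batch in labeled_batches:
    texts, labels = zip(*batch)
    X = vectorizer.transform(texts)
    # partial_fit updates the model incrementally instead of retraining
    # from scratch, which is what makes near-real-time learning feasible.
    clf.partial_fit(X, labels, classes=CATEGORIES)

# Classify a new, unlabeled tweet into one of the three categories.
prediction = clf.predict(vectorizer.transform(["road blocked by landslide"]))[0]
```

The key design point is incrementality: each batch of volunteer labels nudges the model, so its accuracy grows as the digital humanitarians keep tagging.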
The digital jedis also used MicroMappers to quickly tag and map hundreds of photographs of disaster damage. We’ve been working on an AI solution to this challenge as well. Thanks to recent breakthroughs in computer vision out of Stanford University, we can start teaching algorithms to recognize certain features in photographs—like disaster damage and displaced populations. A startup called MetaMind, also at Stanford, is working on automated feature detection in pictures. The same is true of videos: WireWax, a U.K.-based startup, uses artificial intelligence to detect countless features in videos, automatically finding everything from guns to Justin Bieber across millions of clips.
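To make the idea of teaching an algorithm to recognize features in photographs concrete, here is a minimal sketch. Everything in it is invented for illustration: synthetic 16×16 grayscale patches stand in for real imagery (dark, noisy patches playing the role of "damage"), and plain logistic regression stands in for the deep convolutional networks such systems actually rely on.

```python
# Illustrative sketch: supervised "damage" detection on synthetic image patches.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patches(n, damaged):
    """Hypothetical stand-in imagery. Real training data would be
    crowd-labeled photo crops of disaster damage."""
    if damaged:
        # "Damaged" patches: dark pixels, rubble-like.
        return rng.uniform(0.0, 0.5, size=(n, 16, 16))
    # "Intact" patches: uniformly brighter pixels.
    return rng.uniform(0.5, 1.0, size=(n, 16, 16))

# Flatten each patch into a feature vector, as a linear model requires.
X_train = np.concatenate([make_patches(50, True),
                          make_patches(50, False)]).reshape(100, -1)
y_train = np.array([1] * 50 + [0] * 50)  # 1 = damage, 0 = intact

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score the model on fresh patches it has never seen.
X_test = np.concatenate([make_patches(10, True),
                         make_patches(10, False)]).reshape(20, -1)
y_test = np.array([1] * 10 + [0] * 10)
accuracy = clf.score(X_test, y_test)
```

Swap the synthetic patches for labeled photo crops and the logistic regression for a convolutional network, and you have the skeleton of the feature-detection pipelines described above.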
My team and I have also applied artificial intelligence to identify features of interest in aerial imagery captured by drones, and we’ve already extended the MicroMappers platform to crowdsource the analysis of that imagery. In Nepal, MicroMappers could be used to automatically look for survivors among the rubble. To that end, we’re experimenting with 3D models of disaster-affected areas derived from high-resolution aerial imagery, like the one pictured above. A multi-dimensional view is crucial to understanding disaster damage, and a flat picture necessarily leaves one of those dimensions out. The challenge now is to extend MicroMappers to crowdsource the analysis of 3D models, and then to explore how artificial intelligence might use that initial legwork to automatically detect points of interest in the models themselves.
We need to rapidly extend this combination of crowdsourcing and artificial intelligence to satellite imagery as well. The Humanitarian OpenStreetMap Team, another member of the Digital Humanitarian Network, pulled off a herculean effort in the wake of the Nepal earthquakes to crowdsource the manual tracing of satellite images. This process is tedious: it involves using the OpenStreetMap software to digitally outline the shapes of buildings and roads (and then label them as such), which generates the most up-to-date maps for the humanitarian community. But the organization is facing a large backlog of tracing tasks, because human labor alone cannot keep up. This is not a criticism of the organization; it simply needs, and deserves, the extra help artificial intelligence can provide.
Today, half of the world owns a smartphone. By 2020, according to The Economist, an astounding 80 percent of adults across the globe will. That will be a lot of data. And we know full well that the volume, velocity, and variety of digital data generated globally will continue to skyrocket—indefinitely. This means that future humanitarian crises will soon generate more data than all previous disasters in human history combined. And in this new digital landscape, crowdsourcing and artificial intelligence need to become the new pièce de résistance against big data.