“We study natural stupidity instead of artificial intelligence.” That was how Amos Tversky described his collaboration with Daniel Kahneman, a partnership between two Israeli psychologists that produced some of the twentieth century’s most important findings about how the mind works. Through a series of ingenious experiments, Kahneman and Tversky discovered systematic biases in the way humans estimate probabilities and, in so doing, revolutionized the study and practice of economics, medicine, law, and public policy. If Tversky had not died in 1996, at the age of 59, he would most likely have shared the Nobel Prize in Economics awarded to Kahneman in 2002.
Michael Lewis has written an original and absorbing account of the 20-year partnership and the ideas it generated. The author of such bestsellers as Liar’s Poker and Moneyball, Lewis discovered Kahneman and Tversky belatedly. Unbeknownst to him, they had provided the scientific basis for the phenomenon he chronicled in Moneyball—namely, how baseball scouts tended to eschew statistical indicators of a player’s past performance, relying instead on their subjective impressions of whether his look and build matched what they thought made a baseball player great. Kahneman and Tversky called this “the representativeness heuristic,” a cognitive shortcut used to assess events and individuals in terms of their fit with a preconceived notion. The problem, they found, was that this shortcut often led to errors. Moneyball told the story of how Billy Beane, the general manager of the Oakland A’s, built a winning team by doing away with intuition in favor of cold, hard statistics.
Lewis devotes a healthy chunk of The Undoing Project to detailing Kahneman and Tversky’s experiments and explaining their significance in an accessible way. His summaries of their key papers are competent, although he shies away from raising critical questions about their work, perhaps feeling that it is not his place to do so. His discussion of some of their theories can also come across as truncated. Fortunately for readers, however, it is now possible to learn about these experiments and the thinking behind them directly from the source: from Kahneman’s own bestseller, Thinking, Fast and Slow, published in 2011.
The truly novel aspect of Lewis’ book is the light it sheds on the circumstances of the Kahneman-Tversky partnership. A big part of the story concerns the role of praxis—real-world experience—in germinating great ideas. Kahneman and Tversky were deeply influenced by their experiences as Israelis; indeed, at times Lewis’ account reads like a narrative of their ideas told through war, beginning with their childhoods in World War II and stretching through their involvement in four Arab-Israeli wars. But Lewis also delves into the fascinating psychological dynamics that made their partnership work. Drawing on extensive interviews with Kahneman himself and excellent access to Tversky’s papers and his wife, Barbara, Lewis was able to construct an account of the friendship that lays bare, warts and all, the emotions, intellectual intensity, and tensions behind their creativity.
A recurrent theme of The Undoing Project concerns how Kahneman’s and Tversky’s lives as Israelis shaped the questions they asked, many of which had real security implications. “Israel took its professors more seriously than America did,” Lewis writes. “Israeli intellectuals were presumed to have some possible relevance to the survival of the Jewish state, and the intellectuals responded by at least pretending to be relevant.” Kahneman and Tversky didn’t need to pretend, and their curiosity about how the mind works was directly relevant to important questions facing Israeli society. Their interest in the way people assess probabilities and their skepticism about human intuition, for instance, stemmed from their time in the Israeli military. Assigned to the army’s psychology unit fresh out of Hebrew University, Kahneman invented a personality test, still in use today, that successfully predicted who would make good officers. The key was to ignore the interviewers’ intuition and focus on the actual past behavior of the young recruits—just as Beane would do years later with baseball.
Similarly, Tversky’s interest in how people assess probabilities was informed by his concerns about the Israeli government’s estimates of the probability of war in the run-ups to the 1956 Sinai campaign, the 1967 Six-Day War, and the 1973 Yom Kippur War, all of which took the Israelis by some degree of surprise. While on reserve duty in the Golan Heights after the 1967 war, Lewis writes, Tversky would “gaze down upon Syrian soldiers, and judge from their movements if they were planning to attack.” After the Yom Kippur War, Kahneman and Tversky wondered why it had been so difficult for their government to return the Sinai, which Israel had seized in 1967, to Egypt—a gesture that might have removed Egypt’s motivation to launch the surprise attack that began the war. Their answer was that the psychological pain of losing something one had acquired exceeded the pain of not having it in the first place. That thesis would become a major component of their seminal paper on what they called “prospect theory.”
A second theme of Lewis’ involves the intellectual and emotional intensity of the Kahneman-Tversky partnership. They completed each other’s sentences, told each other’s jokes, and critiqued each other’s ideas. “What they were like, in every way but sexually, was lovers,” Lewis writes. Tversky’s wife agreed: “Their relationship was more intense than in a marriage.” Their brilliance, combined with their stupendous work ethic, made them academic superstars in both Israel and the United States. But the two were accorded uneven recognition. Tversky was the initial recipient of the academic accolades, a snub that hurt Kahneman, who felt, correctly, that they were equal partners in generating their ideas.
Ultimately, like many of the most creative partnerships—John Lennon and Paul McCartney, Steve Jobs and Steve Wozniak—their collaboration could not survive the envy and rivalry, and it ended in the late 1980s. Although they remained friends right to the end of Tversky’s days, Lewis reveals that as their collaboration neared its conclusion, Tversky never afforded Kahneman the respect Kahneman thought he was owed. “Danny needed something from Amos,” Lewis writes in one touching passage. “He needed him to correct the perception that they were not equal partners. And he needed it because he suspected Amos shared that perception.”
For those of us who have consumed or applied Kahneman and Tversky’s findings, myself included, this is a startling revelation. Outsiders have always assumed that the two were equal partners, but what really mattered, Lewis is saying, were the subjective perceptions of the collaborators themselves, especially that of Kahneman. Kahneman comes across as incredibly human, open, and vulnerable. One cannot help but root for him when the ultimate recognition comes in the form of a Nobel Prize.
Before it collapsed, this fruitful relationship managed to overturn many existing assumptions about how the mind works. The article they published on prospect theory in Econometrica in 1979—the most cited in the journal’s history—launched a frontal assault on assumptions that had, until then, informed all economic analysis and much of political science. Kahneman and Tversky’s experiments showed that contrary to the thinking at the time, decisions made in the face of uncertainty are based less on calculations of the net expected value of an outcome and more on perceptions of gains and losses relative to a reference point. Furthermore, and again contradicting the prevailing theories, they proved that losses matter more than gains. If people perceive themselves to be in the domain of gains, they tend to avoid taking risks, fearing that they will start losing. But when they find themselves in the domain of losses, they become more willing to take them, desperate to somehow reverse their fortunes.
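That asymmetry between losses and gains can be made concrete with the value function Kahneman and Tversky proposed, sketched below in Python using the parameter estimates they published in their 1992 follow-up work (curvature of about 0.88 and a loss-aversion coefficient of about 2.25); the dollar amounts are invented for illustration.

```python
# A minimal sketch of the prospect-theory value function, using the
# parameter estimates Tversky and Kahneman reported in 1992
# (alpha = beta = 0.88, loss-aversion coefficient lambda = 2.25).
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x, measured from the reference point."""
    if x >= 0:
        return x ** alpha           # gains: concave, so people avoid risk
    return -lam * (-x) ** beta      # losses: steeper and convex, so people seek risk

# A $100 loss stings more than twice as much as a $100 gain pleases:
print(round(value(100), 1))    # 57.5
print(round(value(-100), 1))   # -129.5
```

Plugging in the same magnitude on each side of the reference point shows why, in their framing, "losses matter more than gains": the curve is more than twice as steep below zero as above it.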
The practical implication of this finding is that when trying to understand a given choice, one cannot focus exclusively on the decision-maker’s calculations of which alternative would maximize utility; it’s also crucial to figure out his point of reference, in order to determine whether he sees himself as operating in the domain of gains or the domain of losses. International relations scholars have applied prospect theory to explain Mao Zedong’s decision to bring a militarily weaker China into the Korean War in 1950, U.S. President Jimmy Carter’s approval of the risky operation to rescue American hostages from Iran in 1980, and U.S. President George W. Bush’s ill-fated invasion of Iraq in 2003. In all these cases, the argument goes, the leaders saw themselves as facing loss: Mao feared that a Western victory in North Korea would damage China’s national security, Carter was desperate to end the hostage crisis, and Bush felt especially vulnerable in the wake of the 9/11 attacks. Each leader was thus more willing to take the risk of using military force, even though the probability of success was far from clear.
These examples also show that applying prospect theory to foreign policy is not straightforward. For each decision, one can make the argument that the decision-maker acted rationally: Mao correctly judged that he could beat back the U.S.-UN attack on North Korea, Carter had reason to believe that the rescue operation might work, and Bush had received intelligence that made an invasion of Iraq look less risky than tolerating the slightest chance of an Iraq armed with weapons of mass destruction. Scholars must therefore take care to properly specify the reference points that decision-makers are working from, the value they place on the alternative options, and their estimates of the probability of various outcomes.
Although prospect theory is widely seen as Kahneman and Tversky’s most original contribution to social science, their earlier work on heuristics is just as noteworthy. Beginning with the assumption that cognitive processing powers are limited, Kahneman and Tversky contrived experiments showing that people resort to shortcuts to help estimate probabilities and make sense of the world. And these shortcuts, they found, tend to lead one astray.
Consider one classic experiment on the representativeness heuristic, in which Kahneman and Tversky provided subjects with a description of a person named Linda:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
Then they asked their subjects to rank the probability that various statements about Linda were true. What is more likely, they asked: that “Linda is a bank teller” or that “Linda is a bank teller and is active in the feminist movement”?
If you answered the latter, you made the same mistake that 85 percent of Kahneman and Tversky’s respondents did. Simple statistics tells us that the number of female bank tellers who happen to be feminists cannot be bigger than the number of female bank tellers of all ideological persuasions. Yet because the description of Linda seems representative of an activist feminist, that assessment of fit overrides a basic mathematical fact.
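The arithmetic behind the Linda problem is the conjunction rule: the probability of two conditions holding together can never exceed the probability of either one alone. A few lines of Python, using an invented population drawn purely for illustration, show that the rule holds whatever the underlying counts are.

```python
# Illustrative (invented) population: however the traits are distributed,
# the count of bank tellers who are ALSO feminists can never exceed the
# count of bank tellers overall -- the conjunction rule.
import random

random.seed(0)
# Each person is a pair (is_bank_teller, is_feminist); the probabilities
# here are arbitrary and chosen only for the demonstration.
people = [(random.random() < 0.05, random.random() < 0.5) for _ in range(10_000)]

tellers = sum(1 for t, f in people if t)
feminist_tellers = sum(1 for t, f in people if t and f)

assert feminist_tellers <= tellers  # P(teller and feminist) <= P(teller)
print(feminist_tellers, "out of", tellers, "tellers are feminists")
```

The representativeness heuristic overrides exactly this inequality: Linda's description fits the stereotype of a feminist so well that most respondents rank the conjunction as more probable than the single condition it is contained in.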
This insight is also relevant to foreign policy. During the Vietnam War, for example, U.S. officials regularly resorted to historical analogies to make sense of the challenges they were facing. President John F. Kennedy was especially taken by an analogy to the 1948–60 communist insurgency against the British in Malaya, and he pestered his generals to study the episode. President Lyndon Johnson and his secretary of state, Dean Rusk, preferred analogies to the Munich Agreement (where appeasement abetted aggression) and the Korean War (where initial U.S. setbacks were followed by victory). Rusk’s deputy, George Ball, wrote long memos contesting the relevance of the Korean analogy and proposing his own comparison to France’s 1954 defeat in the Battle of Dien Bien Phu. In Ball’s view, the United States would lose the war and be kicked out of Vietnam, just as France was.
My own analysis of the Johnson administration’s decision-making suggests that the Korean analogy trumped all others because it was deemed most representative of the challenge in Vietnam. There, as in Korea, the United States found itself fighting in an Asian conflict against a communist north that, aided by China and the Soviet Union, was bent on taking over the South. Once chosen, this analogy shaped U.S. decision-making: it predisposed policymakers toward military intervention on the theory that it would save the South (just as it had in Korea), but with the caveat that the United States must not apply excessive force against the North (since it was U.S. forces’ crossing of the 38th parallel in Korea that precipitated Chinese military intervention).
In hindsight, it’s clear that U.S. policymakers chose the wrong historical lens; had they studied the situation more carefully, and with less hubris, they might have gone with Ball’s Dien Bien Phu analogy. That would have helped them realize that defeat was almost inevitable: because the Vietnamese were fighting to rid themselves of foreign domination, they had far more willpower than foreigners facing domestic and international opposition. France, however, hardly seemed representative of the United States. As one U.S. four-star general put it, “The French haven’t won a war since Napoleon. What can we learn from them?”
There is no doubt that Kahneman and Tversky’s work, as Lewis’ subtitle puts it, “changed our minds”: it has forced us to toss out the flattering portrait of our cognitive abilities once popular among economists and political scientists. Kahneman and Tversky performed a reality check on human thought processes and found them wanting. The value of this contribution can hardly be overstated; their studies are worthy of the Nobel Prize because they challenged a fundamental tenet of economics—the notion of the rational actor—and replaced it with a more realistic description of how humans actually think.
Kahneman and Tversky’s work was instrumental in launching the field of behavioral economics and has seen wide application in business, especially in finance and insurance. In public policy, it enabled Cass Sunstein, who led the Office of Information and Regulatory Affairs in the Obama administration, to increase the number of poor children taking advantage of public schools’ free-lunch programs. He did so by reframing the “choice architecture” their parents faced. Instead of requiring parents to submit paperwork to enroll their children in their school’s program, Sunstein automatically enrolled them. That simple change—based on the underlying idea that people usually find it easier to go along with whatever is presented as the default option—increased the number of poor children receiving free lunches by some 40 percent.
For all of Kahneman and Tversky’s achievements, however, their ideas raise a couple of follow-up questions. One is how transferable the findings of experiments performed on bright undergraduates are to the real world, where the stakes are higher and where decision-makers are more experienced. Kahneman and Tversky dealt with this objection directly: they subjected statisticians, doctors, and other professionals to their experiments and found that they succumbed to the same cognitive foibles the undergraduates had.
The second issue is more daunting: Are the heuristics that people routinely resort to really all that harmful? Or, as the psychologists Richard Nisbett and Lee Ross once put it, quoting a colleague, “If we’re so dumb, how come we made it to the moon?” Given the many errors of human thinking that Kahneman and Tversky cataloged, one might think that shortcuts tend to hurt more than they help.
Not so. In his latest work, Kahneman puts these heuristics in perspective, slotting human thinking into two different categories: what he and other psychologists call System 1 and System 2. The heuristics that he and Tversky identified are manifestations of System 1, “fast thinking”—intuitive, largely unconscious, and error-prone. System 2, or “slow thinking,” by contrast, is more deliberate and conscious. As Kahneman writes, “System 1 is indeed the origin of much that we do wrong, but it is also the origin of most of what we do right—which is most of what we do. Our thoughts and actions are routinely guided by System 1 and generally are on the mark.” System 1 serves people well because they learn from their mistakes and develop skills that are inscribed in their memory and “automatically produce adequate solutions to challenges as they arise.” Moreover, people often call on System 2 to correct the excesses of System 1.
That’s what the historians Ernest May and Richard Neustadt taught generations of students at Harvard’s Kennedy School of Government to do, before the System 1 and System 2 terminology had been invented. Conscious of how decision-makers routinely picked the wrong historical precedent when facing an unfamiliar challenge, the two professors warned against latching on to the first historical analogy that comes to mind (a System 1 attribute) and instead urged students to switch mental gears (to System 2’s territory) by expanding their repertoire of historical parallels and assessing the degree of fit of each in a systematic manner.
This picture of decision-making is more nuanced than Tversky’s quip about “natural stupidity.” Recognizing their shortcomings, humans are capable of self-correction. Perhaps that is why, for all our cognitive limitations, we still made it to the moon.
CORRECTION (April 13, 2017): This article misstated the publication date and length of The Undoing Project. It was published in 2016 and is 368 pages long.