Chances are that you will never hear a crowd at a protest rally chant, “What do we need? Regulation! When do we need it? Now!”
People want safe food, clean air, and clean water. But in the abstract, regulation is never a popular idea. In a tough economic environment, it might seem like a recipe for disaster. In the United States, businesses large and small have long argued that they are subject to excessive red tape and government oversight, and in the context of a serious recession, that concern has become acute. In light of the country’s general enthusiasm for freedom of choice, regulation is particularly vulnerable to political attack.
I learned just how intensely many Americans oppose government regulation in 2009, when President Barack Obama nominated me to become the administrator of the White House Office of Information and Regulatory Affairs. The OIRA administrator is often described as the nation’s “regulatory czar.” That is a wild overstatement. But the term does give a clue to the influence and range of the office. OIRA is the cockpit of the regulatory state. The office oversees federal regulations involving clean air and water, food safety, financial stability, national security, health care, energy, agriculture, workplace safety, sex and race discrimination, highway safety, immigration, education, crime, disability rights, and much more. In general, the nation’s cabinet-level departments -- such as the Treasury Department, the Energy Department, and the Environmental Protection Agency -- cannot issue a significant rule unless OIRA approves.
For me, heading OIRA was a dream job. I have spent much of my career writing about the proper role of regulation. Before my nomination, I had written in favor of “nudges”: simple strategies to affect behavior that do not force anyone to do anything and that maintain freedom of choice but that have the potential to make people healthier, wealthier, and happier. Examples include a requirement that automobile companies disclose the fuel economy of new cars, an educational campaign to discourage texting while driving, and an effort to encourage employers to enroll employees automatically in savings plans.
Those who favor nudges recognize the importance of freedom of choice. They respect free markets and private liberty. They allow people to go their own way. At the same time, they emphasize that people may err and that, in some cases, most of us can use a little help. They insist that choices are made against a background created by private and public institutions. Nudges are everywhere, whether we see them or not.
I was eventually confirmed by the Senate, but only after a protracted fight. Progressives expressed disappointment that the president had chosen someone who favored cost-benefit analysis and who promoted modest, low-cost approaches to regulation. Conservatives warned that I was a radical leftist who would try to ban hunting, eliminate free speech, and steal human organs. (The Fox News Channel’s Glenn Beck called me “the most evil, most dangerous man in America.”)
Notwithstanding the charges, one of my main goals was to ensure that data and evidence, rather than intuition and interest groups, would be the foundation of regulatory policy. On the rare occasions when members of my staff pointed out the views of interest groups, my response was, “That’s sewer talk. Get your mind out of the gutter.” I was joking, of course, but the joke had a point. I was less interested in the positions of interest groups than in figuring out the answers to basic questions about regulation: What do we actually know about the likely effects of proposed rules? What would be their human consequences? What are the costs and benefits? How can the government avoid reliance on guesses and hunches? What do we know about what existing rules are actually doing for -- or to -- the American people? How can we make things simpler?
BILLY BEANE GOES TO WASHINGTON
Relying on empirical evidence might seem obvious, a little like relying on sense rather than nonsense. But the temptation to favor intuition over information is strong. Think of Moneyball, Michael Lewis' best-selling 2003 book, which was the basis for the hit film of the same title, starring Brad Pitt. Lewis tells the story of Billy Beane, the general manager of the Oakland Athletics baseball team. With the help of his statistics-obsessed assistant, Paul DePodesta, Beane brought the once-lowly Athletics into the top tier of baseball teams -- and wound up transforming professional baseball -- by substituting empirical data for long-standing dogmas, intuition, and anecdote-driven judgments. Lewis makes the difference between the two approaches clear in this exchange about a particular player between Beane, DePodesta, and a veteran baseball scout:
“The guy’s an athlete, Billy,” the old scout says. “There’s a lot of upside there.”
“He can’t hit,” says Billy.
“He’s not that bad a hitter,” says the old scout....
Paul reads the player’s college batting statistics. They contain a conspicuous lack of extra base hits and walks.
“My only question is,” says Billy, “if he’s that good a hitter why doesn’t he hit better?”
In the past, too many regulators have been tempted to listen a bit too much when they were told that “the public is very worried,” or that “polls show that the majority of people strongly favor protection against air pollution,” or that “the industry has strong views,” or that “the environmental groups will go nuts,” or that “a powerful senator is very upset,” or that “if an accident occurs, there will be hell to pay.” None of those observations addresses the real question, which is what policies and regulations would achieve. All over the world, regulatory systems need their own Billy Beanes and Paul DePodestas, carefully assessing what rules will do before the fact and testing them after the fact.
Some might object that debates about regulation are really about values, not facts. According to this view, when people disagree about a rule that would protect clean air or increase highway safety, it is because of what they most value, not because of disagreements about the evidence. On some of the largest issues, values and predispositions do play a critical role. At the same time, it is easy to overstate the point. For example, most people’s values do not lead to a clear judgment about whether to require rearview cameras in cars. Values alone cannot guide the decision about whether to reduce levels of ozone in the ambient air from 75 parts per billion to 70 parts per billion or, for that matter, to 20 parts per billion. To evaluate such proposals, factual evidence is indispensable.
When the evidence is clear, it will often lead people with different values to the same conclusion. If a regulation would save many lives and cost very little, people are likely to support it regardless of their party identification, and if a regulation would produce little benefit but impose heavy costs, citizens are unlikely to favor it regardless of their ideas about government. Call it “Regulatory Moneyball”: making choices about rules without relying too heavily on intuition, anecdotes, dogmas, and impressions.
NOT AS EASY AS ONE, TWO, THREE
Cost-benefit analysis does run into many challenges. Regulations might involve both costs and benefits that are difficult to quantify and monetize. How, for example, should the government value a human life? In answering that question, agencies do not, in fact, assign monetary values to lives. Instead, they ask this question: How much should we pay to avoid statistical risks? That question, while not exactly easy, is far more tractable.
Suppose that the Department of Transportation is considering a rule that would require all cars to adopt a new air-bag technology that would eliminate a one-in-100,000 chance of death in a collision. How much is it worth to eliminate that risk? One way to answer that question is simply to ask people, “How much would you pay to eliminate such a risk?” Research shows that a common answer is $50.
Of course, there are problems with posing abstract, hypothetical questions of this sort; maybe the answers are not very meaningful. So another way to answer the question is to look at the evidence from the market. Suppose that across large populations, workers who are subject to a mortality risk of one in 100,000 generally receive a wage bonus, or premium, of $90. That would suggest a “value of a statistical life” of $9 million. Relying on evidence from the labor market, then, the federal government would spend no more and no less than $90 per individual to eliminate that risk.
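The inference behind that $9 million figure is simple division: the wage premium workers accept for bearing a small mortality risk, divided by the size of that risk, yields the implied value of a statistical life. A minimal sketch of the calculation, using the illustrative figures from the example above (the $90 premium and one-in-100,000 risk; the function name is my own):

```python
# Value of a statistical life (VSL) inferred from labor-market data.
# Workers who bear a small mortality risk receive a wage premium;
# dividing the premium by the risk gives the implied VSL.

def implied_vsl(wage_premium: float, mortality_risk: float) -> float:
    """Return the VSL implied by a wage premium for a given risk."""
    return wage_premium / mortality_risk

# Figures from the example: a $90 premium for a 1-in-100,000 risk.
vsl = implied_vsl(90.0, 1 / 100_000)
print(f"${vsl:,.0f}")  # $9,000,000
```

The same division explains why agencies speak of valuing statistical risks rather than lives: no one is asked what a life is worth, only what people demand to accept, or pay to avoid, a small probability of death.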
But these simple approaches would leave many questions unanswered. For example, should children’s lives be valued more, less, or the same as the lives of adults? More precisely, how should the government treat statistical risks faced by children? Parents would be willing to spend a lot to reduce risks to their children; shouldn’t their wishes count? The other end of the age spectrum raises similar questions. Suppose that a rule would mostly extend the lives of the elderly by a short time: for example, an air-pollution rule whose main effect would be to add a few months to the lives of people over the age of 80. Should agencies give a lower value to the lives of old people because the effect might prolong lives by only a matter of months? Would those few extra months justify a high cost if the same amount could extend a younger person’s life by decades? Or is that difference irrelevant?
KEEP IT SIMPLE
Similar problems arise when a rule or regulation would have different effects on different groups of people. Suppose that a workplace safety rule would cost $400 million and produce benefits of $350 million. At first glance, the rule fails a cost-benefit test. But suppose that the costs would be incurred by those who sell and use a luxury good (say, expensive cars) and that the benefits would be enjoyed by those who are near the bottom of the economic ladder (say, those who do manual work in producing such cars). Or suppose that a rule reducing air pollution would cost $900 million but produce benefits of only $800 million -- but with the costs imposed on polluters (and those who work for them and purchase their products) and the benefits enjoyed mostly by people who are poor or struggling economically.
Because the real world does not lack examples like these, Regulatory Moneyball needs to concern itself with more than just total costs and total benefits. Obama recognized this point in the sweeping executive order that he issued on regulation in 2011, which directed agencies to quantify costs and benefits “as accurately as possible” but also allowed them to “consider (and discuss qualitatively) values that are difficult or impossible to quantify, including equity, human dignity, fairness, and distributive impacts.”
Factors that are hard to quantify played a role in some important decisions. For example, in 2010, the Centers for Disease Control and Prevention eliminated the long-standing ban on the entry of HIV-positive people into the United States. This was an extremely important decision, fulfilling a promise that Obama had made during his 2008 campaign. In supporting its decision, the CDC presented a lot of numbers in a detailed quantitative analysis of the expected costs and benefits. These numbers included the economic gains that would result if more people could enter the country and engage in economic activity and also the health effects, including some risk that the decision would lead to a modest increase in HIV infections as a result of the new entrants.
At the same time, the CDC emphasized that some of the most important benefits of the rule could not be turned into monetary equivalents. “Although we are unable to quantify all of the benefits of this change in policy,” the CDC wrote in issuing the rule, “we believe it will help reduce stigmatization of HIV-infected people [and] bring family members together who had been barred from entry (thus strengthening families). . . . There are also ethical, humanitarian, distributional, and international benefits that are important to consider but difficult to quantify.” With explicit reference to these unquantifiable benefits, the CDC concluded that the benefits of the rule justified its costs.
It is perfectly appropriate for agencies to take unquantifiable factors into account. But excessive regulation is a genuine concern, and agencies should not use their authority to consider qualitative factors as a license to do whatever they like. While I was at OIRA, the Obama administration took a number of steps to ensure a disciplined approach. The first step was to promote accountability by recommending that all significant regulations be accompanied by a simple table that offered three things: first, a clear statement of both the quantitative and the qualitative costs and benefits of the proposed or final action; second, a presentation of any uncertainties; and third, similar information for reasonable alternatives to the action. In a related step, OIRA required agencies to include a clear, simple executive summary of any new rules, explaining what they were doing and why and offering a crisp account of the costs and benefits, both quantitative and qualitative. Many federal rules are extremely long and complex, and it is hard for people to know what they are trying to do and why. A clear summary can help a great deal.
TRUST THE NUMBERS, BUT VERIFY