Economic numbers have come to define our world. Individuals, organizations, and governments assess how they are doing based on what these numbers tell them. Economists and analysts loosely refer to statistics measuring GDP, unemployment, inflation, and trade deficits as “leading indicators” and subscribe to the belief that these figures accurately reflect reality and provide unique insights into the health of an economy. Taken together, leading indicators create a data map that people use to navigate their lives. That map, however, is showing signs of age. Understanding where the map came from should help explain why it has become less reliable than ever before.
None of today’s leading indicators existed a century ago. They were invented to measure the economies of the industrial nation-states of the mid-twentieth century. In their time, they did so brilliantly. The twenty-first century, however, is proving more challenging to measure. Industrial nation-states have given way to developed economies rich in services and to emerging industrial economies exporting goods made by multinational companies. The statistics of the twentieth century were not designed for such a reality, and despite the assiduous efforts of statisticians, they cannot keep up.
These shifts have created a temptation to find new formulas, better indicators, and new statistics. And that search, like the quest for new technologies, is certainly worthwhile. But the belief that a few simple numbers or basic averages can capture today’s multifaceted national and global economic systems is a myth that should be abandoned. Rather than seeking new simple numbers to replace old simple numbers, economists need to tap into the power of the information age to figure out which questions need to be answered and to embrace new ways of answering them.
The king of contemporary economic indicators is, of course, gross domestic product (GDP). Given how central that statistic has become to economics, it is striking to discover just how recently it was invented. It also turns out that its creators understood what it covered (and what it did not) far better than most people do today.
GDP measures the goods and services produced by a single country. Governments adopt policies designed to maximize GDP by boosting their countries’ output. Indeed, GDP has effectively become a proxy for national success or failure. It has the power to decide elections, overthrow governments, and launch popular movements. A GDP that is growing in sync with expectations can enhance a country’s reputation and thus its strength and power. A GDP that is contracting or failing to meet expectations, on the other hand, can lead to disaster. Yet a hundred years ago, the concept of GDP did not exist; history unfolded without it. The United States, for example, managed to win its independence, fight a civil war, and conquer a continent without any measure of national income.
GDP’s origins lie in the 1930s, when economists and policymakers in the United States and the United Kingdom struggled to understand and respond to the Great Depression. The onset of World War II solidified the metric’s standing, as the Allies tried to keep tabs on the war’s effect on their economies. It is not terribly surprising that economists and policymakers came to favor a statistical technique that helped the United States survive a depression and win a war. But not even the economists who invented this metric imagined that GDP would become so central to every state in the world within a few short decades.
In the United States, much of the credit for developing the concept of GDP goes to the Russian American economist Simon Kuznets, who would later win the Nobel Prize for his work in crafting national accounts, comprehensive records of a country’s income, spending, finances, and assets. Kuznets’ work provided the foundation on which economists and statisticians later built gross national product (GNP) and its successor, GDP, which by the end of the twentieth century had become the more widely cited figure. (The differences between the two metrics are not huge. GDP includes all production within a country regardless of the national origins of the individuals or companies generating it. GNP, on the other hand, includes the production of any citizen or domestic company regardless of where it is located.)
Kuznets was an early proponent of economics as a science grounded in formulas and rigorous testing. He was joined and supported in that effort on the other side of the Atlantic by the British economist John Maynard Keynes. Although there had been sporadic efforts to measure national income since the seventeenth century, nobody had used rigorous methods to formalize its measurement until Kuznets and his colleagues at the National Bureau of Economic Research, a nonprofit organization in Cambridge, Massachusetts, began to do so in the late 1920s and 1930s. They were prompted by policymakers who needed to figure out what was actually happening during the economic crisis: without a baseline sense of what the country was producing, it was impossible to know whether the government’s innovative and controversial New Deal measures were actually helping boost output or employment.
In trying to establish such a baseline, Kuznets and others made several fateful decisions. The most crucial was to leave out domestic work -- cooking, cleaning, child rearing, and so on -- because it was hard to assign market values to it. As a consequence, GNP and GDP ended up ignoring a huge realm of economic activity. But what they did measure conveniently supported the theories promulgated by Keynes and others: namely, that governments should spend more in times of duress in order to stimulate demand.
World War II gave proponents of the new metrics another opportunity to demonstrate their value. Both U.S. and British officials needed to know how much domestic production could be given over to the war effort without imperiling the availability of basic goods. GNP provided a way to calculate precisely how much the government could spend and how much it could increase taxes to pay for defense without triggering dangerous inflation or eroding the domestic economy. Understandably, the Allies’ ultimate victory in the war overshadowed Kuznets’ nearly simultaneous conquest of the economy. But in terms of how people came to view the present and the future, and how they defined power and success, the invention of these key economic indicators was almost as important.
In the years after the war -- as Washington’s ideological battle with communism heated up and as the Cold War pushed direct military conflict to the sidelines -- economists and policymakers wove indicators such as GDP into every nook and cranny of economic life and popular culture. This process occurred not just in the United States and the United Kingdom but also in the world at large, thanks to the globalizing impulses of the United Nations and the proselytizing nature of U.S. capitalism.
Yet from the outset, national accounts, GNP, and GDP were limited in what they measured. They were designed to assess prosperity, but with the understanding that multiple aspects of life were being left out or not fully valued. These metrics not only omitted domestic work and hobbies; GDP and its predecessors were also overly reductive because they counted all production and consumption as a net positive, regardless of its nature.
Thus, as Alan Greenspan, an early champion of the new indicators in the postwar period, observed in the 1990s, when he was chair of the U.S. Federal Reserve, if residents of the southern United States buy lots of air conditioners to offset the crushing heat of summer, that will show up as a positive for GDP (assuming those air conditioners are made in the United States, which was the case until late in the twentieth century). So, too, will the money that people spend on electricity bills. Vermont, with less arduous heat, likely sees fewer such purchases and therefore will show a lower GDP than Alabama, at least insofar as air conditioners are concerned. But such numbers say nothing about the relative prosperity of the two states or about the overall quality of life in either place.
GDP distorts in other ways, as well. If a steel mill produces pollution that then requires a cleanup, both the initial output (the steel) and the cost of addressing its byproduct (the cleanup) add to GDP. So, too, would the cost of health care for any workers or residents injured or sickened by the pollution. Conversely, if a company replaces its conventional light bulbs with long-lasting LED bulbs and, as a result, spends less on lighting and electricity, the efficiency gains would detract from GDP. Yet few would argue that the pollution example represents a positive development or that the lighting example constitutes a negative one.
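The accounting logic behind these distortions can be made concrete with a toy expenditure-side tally. The sketch below is purely illustrative: the categories and dollar figures are invented, and it simplifies GDP to a sum of final spending, which is enough to show why pollution cleanup raises the number while an efficiency gain lowers it.

```python
# Toy expenditure-side GDP tally illustrating the distortions described
# above. All categories and figures are hypothetical.

def gdp(transactions):
    """Here GDP is simply the sum of all final spending, good or bad."""
    return sum(transactions.values())

year_1 = {
    "steel output": 500.0,
    "pollution cleanup": 80.0,             # counts as a positive
    "pollution-related health care": 40.0,  # also counts as a positive
    "lighting & electricity": 30.0,
}

# Year 2: same steel output, but LED bulbs cut the lighting bill.
# Measured GDP falls even though the economy is arguably better off.
year_2 = dict(year_1, **{"lighting & electricity": 12.0})

print(f"year 1 GDP: {gdp(year_1):.0f}")
print(f"year 2 GDP: {gdp(year_2):.0f}")
```

The point of the toy model is that the metric is sign-blind: it registers how much was spent, never whether the spending reflects a gain or a loss in well-being.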
Kuznets and his cohort, for their part, understood these limitations well. As Kuznets wrote in 1934, “The valuable capacity of the human mind to simplify a complex situation … becomes dangerous when not controlled in terms of definitely stated criteria.” He warned that numbers and statistics were particularly susceptible to the illusion of “precision and simplicity” and that officials and others could easily misuse them. But as GDP became a touchstone of public policy, such subtleties were lost on subsequent generations of policymakers.
Something similar happened with the inflation statistic. The U.S. Bureau of Labor Statistics first set out to devise a measure of prices in 1917 in order to learn what it cost an American family to meet its basic needs. In the 1920s, that effort morphed into a larger pursuit to measure how much those prices increased over time. In these years, the bureau drew on the research of two men in particular: the Yale economist Irving Fisher and the head of the National Bureau of Economic Research, Wesley Mitchell. Both men were fascinated by price movements and worked on methodologies to systematically measure changes in prices. That meant more than just sending surveyors across the country to record the cost of a specified basket of goods, as the government had done in 1917: it meant figuring out how prices shaped consumption and how new goods pushed out old ones. Without that, the consumer price index (CPI) used to measure inflation today might still include horsewhips and the IBM Selectric typewriter.
Until the 1970s, ordinary people were generally not particularly interested in or aware of inflationary measurements -- with the exception of union members, whose leaders demanded that wage increases be pegged to inflation. But the so-called Great Inflation of the 1970s, when official inflation levels exceeded ten percent, saw the index propelled to the center of public debate. Although no one questioned that inflation was high in those years -- everyone could see prices going up -- many wondered about its true extent and causes. And the Bureau of Labor Statistics muddied the waters further by publicly wondering whether the CPI was actually overstating inflation. That directly contradicted the experiences of ordinary Americans, who were feeling the pinch and were certain that official statistics were understating prices. Nevertheless, in 1977, insisting that the traditional methods of measurement were making things seem worse than they really were, government statisticians introduced the “core CPI,” which measures inflation without taking into account goods such as gasoline and food, whose prices change frequently. Of course, for most people, those are the goods that matter most. Yet the core CPI became the preferred gauge for policymakers precisely because it removed goods with volatile prices, which could easily skew perceptions.
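The mechanics of the headline-versus-core distinction can be sketched with a toy fixed-basket price index. Everything below is hypothetical: the categories, weights, and price changes are invented for illustration and are far cruder than the Bureau of Labor Statistics' actual methodology, but they show how stripping out volatile food and energy prices can produce a much tamer number.

```python
# Toy fixed-basket (Laspeyres-style) price index, base period = 100.
# All categories, weights, and prices are hypothetical.

base_prices = {"food": 100.0, "energy": 100.0, "shelter": 100.0, "other": 100.0}
new_prices  = {"food": 112.0, "energy": 130.0, "shelter": 104.0, "other": 103.0}
weights     = {"food": 0.15,  "energy": 0.10,  "shelter": 0.35,  "other": 0.40}

def price_index(categories):
    """Weighted average of price relatives over the given categories,
    with the weights renormalized to sum to one."""
    total_weight = sum(weights[c] for c in categories)
    return 100.0 * sum(
        (weights[c] / total_weight) * (new_prices[c] / base_prices[c])
        for c in categories
    )

headline = price_index(base_prices)  # all categories
core = price_index([c for c in base_prices if c not in ("food", "energy")])

print(f"headline inflation: {headline - 100:.1f}%")
print(f"core inflation:     {core - 100:.1f}%")
```

With these invented numbers, the headline rate comes out more than twice the core rate, which is exactly the gap that made consumers in the 1970s feel that official statistics understated what they saw at the gas pump and the grocery store.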
In the 1990s, the question of whether official estimates overstated the inflation rate emerged once again. Greenspan suggested that if the true rate were calculated, it would be as much as 1.5 percent lower than the official figure, which would lead to a lowering of government spending by tens of billions of dollars, since much of it, especially cost-of-living increases for Social Security payments, was pegged to inflation. In response, Congress authorized a commission to investigate the problem. The commission concluded that, indeed, official inflation numbers were overstating the real rate.
But rather than settle the controversy, the constant tinkering and rethinking only stoked it. Official keepers of economic numbers have always turned a critical eye on their own methods and looked for ways to improve them, but by inventing a new way to assess inflation, they created a credibility gap. Partly as a result, few Americans trust official inflation figures because they believe the numbers purposely understate the rise in prices. Their skepticism is shared by many experts: in the early years of the last decade, economists such as Austan Goolsbee, who would later become a top White House economic adviser, and influential investors such as William Gross of the multitrillion-dollar investment firm PIMCO cast doubt on the accuracy of official inflation statistics. In 2004, Gross alleged that such figures were essentially a government “con job.”
MAKERS OR TAKERS?
And then there is trade. As divided as Americans have been on almost all issues in recent years, most can agree on at least one thing: China represents a threat to the United States. Americans are deeply concerned about the huge amount of U.S. debt (more than $1 trillion) held by the Chinese government and about the U.S. trade deficit with China, which grows almost every year and currently stands at around $300 billion. Companies such as Apple have added dramatically to that deficit by outsourcing their production overseas.
The trade deficit with China began widening after 2001, when Beijing joined the World Trade Organization (WTO). At first, the deficit was seen as a byproduct of China’s rapid emergence as a low-cost manufacturer and a burgeoning economic power. In short order, however, the deficit became a symbol of U.S. economic decline and a symptom of dangerous global imbalances. Some pundits began warning that deepening trade deficits could lead to the eventual collapse of the U.S. economy.
The truth, however, is much less ominous. If trade numbers more accurately accounted for how products are made, it is possible that the United States would not have any trade deficit at all with China. The problem, in short, is that trade figures are currently calculated based on the assumption that each product has a single country of origin and that the declared value of that product goes to that country. Thus, every time an iPhone or an iPad rolls off the factory floors of Foxconn (Apple’s main contractor in China) and travels to the port of Long Beach, California, it is counted as an import from China, since that is where it undergoes its final “substantial transformation,” which is the criterion the WTO uses to determine which goods to assign to which countries. Every iPhone that Apple sells in the United States adds roughly $200 to the U.S.-Chinese trade deficit, according to the calculations of three economists who looked at the issue in 2010. That means that by 2013, Apple’s U.S. iPhone sales alone were adding $6–$8 billion to the trade deficit with China every year, if not more.
A more reasonable standard, of course, would recognize that iPhones and iPads do not have a single country of origin. More than a dozen companies from at least five countries supply parts for them. Infineon Technologies, in Germany, makes the wireless chip; Toshiba, in Japan, manufactures the touchscreen; and Broadcom, in the United States, makes the Bluetooth chips that let the devices connect to wireless headsets or keyboards.
Analysts differ over how much of the final price of an iPhone or an iPad should be assigned to what country, but no one disputes that the largest slice should go not to China but to the United States. That is where the design and marketing of such devices take place -- at Apple’s headquarters in Cupertino, California. And the real value of an iPhone, of course, along with thousands of other high-tech products, lies not in its physical hardware but in its invention and the work of the individuals who conceived, designed, patented, packaged, and branded the device. That intellectual property, along with the marketing, is the largest source of the iPhone’s value.
Taking these facts into account would leave China, the supposed country of origin, with a paltry piece of the pie. Analysts estimate that as little as $10 of the value of every iPhone or iPad actually ends up in the Chinese economy, in the form of income paid directly to Foxconn or other contractors.
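The gap between the two accounting conventions is simple arithmetic. The sketch below uses the per-unit figures cited above (roughly $200 of gross factory-gate value booked to China, of which only about $10 stays in the Chinese economy); the annual unit count is a hypothetical round number chosen to land in the $6–$8 billion range, not Apple's actual sales figure.

```python
# Gross vs. value-added trade accounting for a single product line.
# The $200 gross and ~$10 Chinese value-added figures come from the
# estimates cited above; the unit count is a hypothetical round number.

units_sold_in_us = 35_000_000   # hypothetical annual U.S. iPhone sales
gross_per_unit = 200.0          # full factory-gate value, booked to China
china_value_added = 10.0        # income actually retained in China

# Current convention: the entire $200 counts as a Chinese export.
gross_deficit = units_sold_in_us * gross_per_unit

# Value-added convention: only China's slice of the value counts.
value_added_deficit = units_sold_in_us * china_value_added

print(f"gross accounting:       ${gross_deficit / 1e9:.2f} billion")
print(f"value-added accounting: ${value_added_deficit / 1e9:.2f} billion")
```

Under these assumptions the same flow of phones produces a bilateral deficit twenty times smaller when measured by value added, which is why the choice of convention matters so much to the political debate.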
These issues are no secret to economists immersed in the world of trade and statistics. There is, however, a big difference between identifying this problem and doing something about it. The Organization for Economic Cooperation and Development and the WTO have begun to develop a database to measure what they call “trade in value added.” Using an early version of the new database, economists have found that the real trade deficit between the United States and China may be as much as 25 percent smaller than current calculations. Although such estimates do a better job of capturing the supply chain and including services as part of the mix, they are still very rough, for the simple reason that no one has the resources, people, or systems in place to accurately attach the value of every component of every single manufactured product in the world -- let alone the relevant services -- to one country or another.
Shifting to a more accurate set of indicators would be no small task. It was complicated enough to get the 159 member countries of the WTO to agree to the current measures of the value of exports and imports. Therefore, a wide gap remains between what goes on in the real world and the picture that trade figures present. In the meantime, Americans continue to fret that China’s rise as a low-cost manufacturing power has undermined the U.S. economy, lowered wages, and otherwise worsened the struggles of the U.S. working class. Such fears are not unfounded: there is no question that U.S. workers, especially in manufacturing, have seen their wages fall and unemployment rise. But the fact that trade numbers miscalculate the size of the imbalance between China and the United States suggests that the causes of the negative changes to the U.S. economy have also been wrongly identified. There is thus no reason to believe that if Beijing simply revalued its currency or Washington took a harder stance against Chinese imports and against China’s filching of intellectual property, the domestic U.S. economy would improve. If China is not the primary cause of U.S. economic decline, then punishing China will not help matters.
ONE SIZE DOES NOT FIT ALL
Not one of today’s leading economic indicators was designed to carry the weight it now does. These measurements were not invented to serve as absolute markers of national success or failure or to indicate whether some governments were visionary and others destructive. But the transformation of these numbers from statistics used by bureaucrats and managers into markers of national success happened so quickly over the course of a few decades that no one quite noticed what was happening. These numbers were invented to give policymakers tools to derive the best policies to remedy the most egregious economic problems of their time. In the 1930s, the results appeared creative and innovative by default, since there was no existing legacy of governments attempting to ameliorate systemic economic ills using data and statistics. Indicators such as GDP helped policymakers navigate the many policy experiments called for by desperate times. But today, leading indicators are not used that way. Instead, national statistics often deter policy innovations in the United States rather than facilitate them.
It would be rhetorically satisfying to unveil a new framework and a new set of statistics that would better serve present needs. All indicators, however, are simple numbers -- which is precisely the problem. Any one number will have shortcomings, even if those shortcomings are different for different numbers. GDP does not account for happiness, contentment, or domestic work. It also does not -- and cannot -- account for nonmarket leisure activities. It cannot encompass activities that exist beyond the reach of the state, such as the so-called invisible economy of cash transactions, cash remittances from immigrant workers delivered by wire, and the informal trade of services, all of which certainly add up to many trillions of dollars globally. But if economists simply replaced GDP with another number, it, too, would leave something out. No one statistic will suffice. All indicators suffer from the same flaw: they try in vain to distill complicated, ever-changing economic systems into a single, simple figure.
To be useful, a new generation of indicators would have to answer particular, well-defined questions. But they cannot look like new versions of the old numbers. They cannot be one-size-fits-all generalizations. Instead of a few big averages, officials and ordinary people need a multiplicity of numbers that seek to answer a multitude of questions. In the era of “big data,” such an ambition is well within reach, thanks to powerful computing tools that can quickly process quantities of information that would have been unimaginable decades ago. In short, we do not need better leading indicators. We need bespoke indicators, tailored to the specific needs of governments, businesses, communities, and individuals -- and we have the technology to provide them.
“Bespoke” is a word rarely used today. It comes from a time when people of means would go to a tailor and have clothes made to fit them -- and them alone. Unlike for a custom suit, however, the cost of bespoke indicators would be minimal. Anyone with a computer can be his or her own tailor and create bespoke data maps. And in a world that is ill served by one-size-fits-all economic statistics, crafting bespoke indicators is not a luxury; it is a necessity.
The search for the right numbers should begin with one question: What do you need to know in order to do whatever you need to do? GDP figures in the United States, Europe, and China should matter much less to companies such as Caterpillar or General Electric or Google than the specific dynamics of the markets in which they operate. Government spending on infrastructure in Brazil and China should matter more to Caterpillar than GDP. And global spending on online advertising should be a more crucial metric for Google; after all, even if inflation and GDP growth rates were flat and employment numbers weak, companies might still spend more money advertising online this year than they did last year.
Because there are as yet no global indicators of inflation, employment, wages, or anything else, any company with a global reach needs to develop its own metrics to answer its own questions. Otherwise, it will find itself increasingly at sea, making the wrong decisions and not even realizing why. Small businesses and individuals are even less well served by the leading indicators of the twentieth century. Using the national unemployment rate or national housing numbers to decide whether or not now is a good time to start a business or buy a home is a mistake. For someone thinking of opening a clothing boutique or a restaurant, the national CPI reveals little and could badly mislead. Such entrepreneurs should pay attention, instead, to the dynamics of the local market and the trends in their industries. Gleaning that information would have been difficult 30 years ago; today, accessing it takes mere hours on a computer.
As for governments, they invented the primary indicators, and they remain the only institutions that have good reason to continue using them. The major macrostatistics can still usefully measure economic systems, and economists should keep trying to refine them to catch changes in those systems. However, governments also need to recognize the limitations of their cherished leading indicators.
Global trends in labor and the cost of goods are more important than ever, but national indicators do not accurately capture them. So policymakers should be careful not to undertake initiatives that assume that a national economy is some sort of closed loop.
Governments need to do a better job addressing the specific trends that are sometimes obscured by indicators that rely on averages. For instance, treating unemployment as a national problem is almost always a mistake. Employment trends vary dramatically by race, geography, gender, and level of education. But none of that is reflected in the all-encompassing unemployment rate, and hence policies informed by only that number are bound to fall short.
Governments should make their own productive use of big data and tailor their policies more precisely. Economic policies should take into account whether output is weak in one part of the country but robust elsewhere and if prices are rising in one region but falling in another. The politics of such decision-making might be difficult, but now the data make it possible.
How societies solve certain problems; how governments determine their policies or multinationals decide on their strategies; how entrepreneurs run effective businesses; how individuals buy homes, pay for college, or retire -- none of those decisions should be based on the leading indicators of the last century. Old attachments to those indicators, and to the myth that there is something called “the economy” that affects all people equally, pose a major obstacle to progress.
The indicators invented in the twentieth century were among the most important innovations of their time. But in a world where anyone with a smartphone can access more data than a team of statisticians could in 1950, governments, businesses, and individuals must embrace the power to design their own bespoke indicators. The questions need to be specific, and the answers must take into account the limits of any data. But the result would be a welcome liberation from abstract and misleading notions about the economy.