The fight over who will succeed Ben Bernanke as chair of the U.S. Federal Reserve rolls on, getting bigger and more tangled as it goes. It is easy to get caught up in the debate about the merits of different candidates, but doing so misses a larger point. The real story here is the intensity of the fight itself, which is evidence of a shift of power toward central bankers that began under U.S. President Ronald Reagan and has accelerated since the financial crisis of 2008.
There was a time, years ago, when the appointment of a new Fed chair wasn't such a big deal. In the summer of 1969, there was nothing like today's fuss when President Richard Nixon announced the appointment of Arthur F. Burns, who then headed the Council of Economic Advisers, to the top job at the Federal Reserve. Likewise, President Jimmy Carter's 1977 decision to oust Burns in favor of G. William Miller was a sedate affair by current standards.
But that was a different era, when the Federal Reserve played a less prominent role. The principle of Fed autonomy was less firmly established, and many people still believed that the real authority over monetary policy resided in the White House. (A 1969 New York Times analysis actually called it "the myth of Federal Reserve independence.") International financial markets were less developed, and not as effective in punishing countries whose monetary policies seemed to go astray. And in any case, fiscal policy was still regarded as an equally important tool for economic management, which remained firmly in the hands of the U.S. Congress. Under these circumstances, it did not matter as much who the next Fed chair would be.
The world changed after 1980. Economists grew skeptical about the merits of using fiscal policy to manage the overall economy, partly because of their doubts about the competence of politicians to use that tool wisely. They were also convinced of the need to place monetary policy in the hands of technocrats who could ignore political pressures and fight inflation more aggressively. Accordingly, many countries, including the United Kingdom, gave their central banks formal autonomy. In the United States, the principle of Fed independence was also taken more seriously. That space allowed Paul Volcker, chair of the Federal Reserve from 1979 to 1987, to launch the modern era of the powerful central banker. Alan Greenspan, chair from 1987 to 2006, perfected that role, attaining, as the economist Mark Zandi said in 2005, "rock star status."
Some analysts expected the financial crisis of 2008 to bring an end to all that. The number of professional economists employed by the Federal Reserve and other major central banks had grown significantly in the years before the crisis, even as other parts of government dealt with cutbacks. But all that brainpower was not enough to see disaster coming. In the United Kingdom, a group of economists admitted to Queen Elizabeth II that the crisis "was principally a failure of the collective imagination of many bright people." In the United States, End the Fed, by the libertarian politician Ron Paul, hit the New York Times bestseller list.
Five years later, however, the power of the Federal Reserve is greater than ever before. Congressional dysfunction and partisan warfare have made the possibility of economic recovery through fiscal measures or other legislative initiatives remote; monetary policy and the Federal Reserve have become the last hope. The Fed has responded energetically with initiatives such as quantitative easing (buying bonds in large amounts to push down long-term interest rates), which was an unprecedented and massive exercise in policy innovation. As the financier Mohamed El-Erian observed in 2012, the Fed and other major central banks were "neck deep in extreme policy experimentation mode."
As a result, the two salient features of the economic crisis have been political gridlock and technocratic entrepreneurship. Compare this to the nation's response to the last major economic crisis, the Great Depression. In those days, it was the political class that took the initiative, while the Federal Reserve played a secondary role. President Franklin D. Roosevelt himself set the government's tone, working with Congress to pass a battery of legislative initiatives aimed at restoring confidence. "The country needs bold, persistent experimentation," Roosevelt said. "Take a method and try it. If it fails, admit it frankly and try another. But above all, try something." Today, the same mantra applies -- but it applies to central bankers, not to politicians.
In that sense, the last few years have upended our understanding of the role of central bankers and the reason for central bank independence. Before the crisis, during the years when countries were beginning to take the idea of central bank autonomy more seriously, many people asked how it could be justified in a democratic society. The response from some advocates of central bank independence was straightforward. Central banks had a simple goal -- price stability -- and well-established techniques for achieving that goal. They did not engage in much policy innovation and, above all, they were not in the business of picking winners and losers in the economy. In other words, the power that was being given to central banks was limited, so the threat to democratic principles was not substantial.
But the game has changed. The objectives of central bank policymaking are no longer so simple: for example, there is an active debate among central bankers about the relative importance of job creation and inflation control. At the same time, the techniques for achieving those goals are less certain. Finally, the Federal Reserve and other major central banks are now unambiguously in the business of picking economic winners and losers. Recent studies have highlighted the extent to which such central bank policies as quantitative easing have conferred big rewards on some groups while penalizing others. A 2012 study by the Bank of England conceded that the benefits of its quantitative easing program "have not been shared equally," with wealthy households benefiting disproportionately.
But the critical point is this: although the premises on which U.S. politicians and the public initially accepted the delegation of authority to independent central banks have been blown apart, that delegation persists. In practice, central bankers’ power has broadened, while legislative power has atrophied. And this is true in other countries as well. This is a troubling shift, and it has not gotten the attention it deserves. The people who advocated for central bank independence in the 1980s and 1990s had to make their case explicitly, since, in many countries, they were calling for legislative change. But the current shift has happened in an ad hoc way, under the pressure of the moment, without a compelling explanation of how it can be squared with democratic principles.
Perhaps this will be a transitional phenomenon -- one that will survive only so long as the country feels itself to be in the midst of an economic crisis, and Washington suffers from political gridlock. Or maybe not. A longer-term view suggests that influence has tended to flow toward the Federal Reserve and other central banks since the liberalization that began with Reagan, in the United States, and Margaret Thatcher, in the United Kingdom. By this view, developments since the financial crisis are just one step further along a familiar path. One result of this trend is more intense debate about who will head the Federal Reserve. Today, Americans argue intensely about this appointment in the same way they have always argued about nominees to the Supreme Court, and for the same reason: because that is where the power lies.