For cash-strapped countries these days, credit is king. And sovereign credit ratings, or independent assessments of a state’s risk of default, are often helpful in accessing it.
The potential advantages of a strong rating are widely known: the ability to borrow more money, on better terms. And the downsides of a poor one—less credit, higher costs—are equally so. Yet the path to a top rating is less clear. Economists and political scientists have spent decades trying to understand how governments can secure better sovereign credit ratings, principally by focusing on a handful of economic indicators, such as a country’s GDP per capita, real GDP growth, default history, and the like. Such indicators, however, are incomplete guides on their own. The “big three” credit rating agencies—Fitch Ratings, Standard & Poor’s, and Moody’s Investors Service—rely on more than quantitative factors, which is why their conclusions about the same numbers sometimes differ.
Indeed, that fact, combined with some recent damaging downgrades, has led some experts, such as Daniel Vernazza and Jonathan Portes, to conclude that the rating process is too subjective or ill thought out and that political leaders should dismiss credit rating agencies as a result. But adopting such an approach risks missing a valuable opportunity. Subjectivity, after all, is a two-way street, since it can work in a country’s favor as well as to its disadvantage. Governments that understand how ratings are made can take steps to hold or improve their position; those that don’t may end up more vulnerable. And with new rating agencies now emerging alongside the old guard, knowing the rules of the game matters more than ever.
HOT OR NOT
Rating agency critics often point to Standard & Poor’s decision to downgrade the United States’ credit rating from AAA (its highest) to AA+ (its second highest) in 2011. “It’s hard to think of anyone less qualified to pass judgment on America than the rating agencies,” the economist Paul Krugman argued in The