Silver’s analysis is impressive for an industry outsider, but it suffers from some deficiencies. For example, he applied a linear scale (AAA = 9, AA = 8, A = 7, etc.) when mapping ratings to numerical values. Because historical default rates rise roughly geometrically as one moves down the rating scale, some form of geometric scaling would have been more appropriate.
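To illustrate the point, here is a minimal Python sketch. The default rates below are purely hypothetical placeholders (not Silver’s data or any agency’s published figures), but they capture the roughly geometric pattern seen in historical default studies, which a linear 9, 8, 7, … scale cannot represent.

```python
import math

# Hypothetical cumulative default rates by rating grade -- illustrative only.
# Historical studies show default rates rising roughly geometrically as
# ratings decline, a pattern a linear 9, 8, 7, ... scale cannot capture.
ratings = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC"]
default_rates = [0.0005, 0.001, 0.003, 0.01, 0.05, 0.15, 0.35]

for i, (rating, dr) in enumerate(zip(ratings, default_rates)):
    linear_score = 9 - i       # AAA = 9, AA = 8, A = 7, ...
    log_score = -math.log(dr)  # spacing proportional to default risk
    print(f"{rating:>4}  linear={linear_score}  log-scale={log_score:5.2f}  "
          f"default_rate={dr:.2%}")
```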
Our criticism, however, should not detract from Silver’s core point: that statistical analysis can probably do a better job of telling us about sovereign default probabilities than the traditional rating agency approach.
An Argument for a Model-Based Approach
Certainly, an intensive statistical analysis avoids several of the pitfalls facing sovereign ratings as they are currently produced.
First, a rating methodology that relies heavily on qualitative techniques is vulnerable to bias. The serial correlation Silver found stems from a natural bias at rating agencies against extreme actions. Rather than imposing a large-scale, multiple-notch downgrade, rating committees may be predisposed to implementing a lesser, often single-notch change in the hope that subsequent events will obviate further action. While the biased rater might thus apply a rating inconsistent with the methodology, a computer model, lacking the capacity to “hope,” reverts directly to the honest, brutal truth.
There’s also a more fundamental objection to the rating agency model. A qualitative approach, requiring the human touch, is labor intensive. Yet for-profit rating agencies are notorious for understaffing their sovereign rating groups.
A model-based approach enables more frequent, more intensive analysis – as opposed to the infrequent reviews sovereigns now receive. New data can be loaded into the model at regular intervals and new results calculated. Analysts should still oversee the model parameters and check any results that may look suspicious.
The Ingredients of a Sovereign Debt Model
Model-based approaches to sovereign risk often involve credit default swap (CDS) spreads, used as either an independent or a dependent variable. A model can either extract default probabilities from CDS spreads, or attempt to predict those spreads on the grounds that they are a proxy for actual risk. Silver takes the latter approach in his piece.
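As a sketch of the first approach, the “credit triangle” rule of thumb treats the annualized default intensity as the CDS spread divided by one minus the assumed recovery rate. The 40% recovery below is a common market convention used here for illustration, not an input from Silver’s piece or from PF2.

```python
import math

def implied_default_probability(cds_spread_bps: float,
                                recovery_rate: float = 0.40,
                                horizon_years: float = 5.0) -> float:
    """Approximate cumulative default probability implied by a CDS spread.

    Uses the 'credit triangle' rule of thumb: hazard rate ~ spread / (1 - recovery).
    The 40% recovery assumption is a market convention, not a PF2 parameter.
    """
    spread = cds_spread_bps / 10_000.0              # basis points -> decimal
    hazard = spread / (1.0 - recovery_rate)         # annualized default intensity
    return 1.0 - math.exp(-hazard * horizon_years)  # cumulative probability over horizon

# Example: a 300 bps five-year sovereign CDS spread implies roughly a 22% probability
print(f"{implied_default_probability(300):.1%}")
```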
The use of market inputs in credit models is fairly common. Such a modeling choice often relies, implicitly or explicitly, on the Efficient Market Hypothesis (EMH) – the idea that market prices incorporate all relevant information and are thus the best available estimate of value.
Since the financial crisis, critics of the EMH have sharpened their attacks. But whether or not you subscribe to rational expectations, the use of sovereign CDS spreads as a model input is hard to defend. Most EMH advocates recognize that only liquid markets are efficient: because liquid markets have numerous participants, their equilibrium prices incorporate substantial amounts of information.
This is not the case with sovereign CDS markets. Kamakura Corporation examined sovereign trading volumes reported by DTCC for late 2009 and 2010, and found that the vast majority of sovereign CDS contracts were traded fewer than five times per day (excluding inter-dealer trades). Five transactions per day falls well short of a liquid market, and thus the information content of sovereign CDS spreads is doubtful at best.
Absent meaningful CDS spread data, what else can a government credit model rely upon?
While one might look at corruption perceptions indices, per capita GDP and/or terms of trade, it is not clear that these inputs can differentiate among advanced-economy sovereigns and sub-sovereigns. Fortunately, government issuers produce reams of actual and projected fiscal data. This information, combined with demographic inputs and economic forecasts, can take us a long way.
When we suggest that budget forecasts can be employed in government credit modeling, skeptics point to the poor accuracy record of government forecasters.
The most famous forecasting error is attributed to the US Congressional Budget Office (CBO), which predicted trillions of dollars in surpluses for the first decade of the 21st century, rather than the trillions in deficits that actually materialized.
CBO forecasts are usually published in the form of point estimates. To be reliable, they have to reflect accurate forecasts of interest rates, GDP, tax levels and a host of other macroeconomic and policy variables. Given the number of variables and our (collective) limited capacity to predict, the point estimate is bound to be wrong. That notwithstanding, we can be pretty certain that these variables will fall within a given range. For example, it is almost certain that US GDP growth will be somewhere between -3% and +6% next year (2013). If we run a large number of scenarios with different GDP growth rates within this range, it is likely that some of the trials will closely approximate the ultimate fiscal outcome.
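To make the range argument concrete, the sketch below sweeps GDP growth across the -3% to +6% band and records the resulting deficit under a stylized budget identity. The baseline figures and revenue elasticity are hypothetical, chosen only to show the mechanics of scanning a range rather than betting on a single point estimate.

```python
# Sweep GDP growth over a plausible range and compute a stylized deficit.
# Baseline figures and the revenue elasticity are hypothetical placeholders.
baseline_revenue = 2.45e12     # revenue at 0% growth, in dollars (hypothetical)
baseline_spending = 3.55e12    # spending, assumed insensitive to growth (hypothetical)
revenue_elasticity = 1.2       # revenues assumed to move 1.2x with GDP growth

for growth_pct in range(-3, 7):  # -3% through +6%
    growth = growth_pct / 100.0
    revenue = baseline_revenue * (1 + revenue_elasticity * growth)
    deficit = baseline_spending - revenue
    print(f"GDP growth {growth_pct:+d}%  ->  deficit ${deficit / 1e12:.2f} trillion")
```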
We can run a large number of budget scenarios by using a Monte Carlo simulation – in which scenarios are created by generating random numbers. Budget simulation forms the basis of PF2’s Public Sector Credit Framework, which we will release next week. The tool allows the user to enter a default threshold in the form of a fiscal ratio; create macroeconomic series that vary with each trial through linkages to random numbers; and design fiscal series that depend on one or more of these macroeconomic elements. If you would like to learn more about this technology, please contact us at info@pf2se.com, or call +1 212-797-0215.
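The toy simulation below is not the Public Sector Credit Framework itself; it is only a minimal illustration of the pattern just described, with every parameter a hypothetical assumption: random draws drive a macroeconomic series (GDP growth), a fiscal series (debt-to-GDP) depends on it, and the estimated default probability is the share of trials that breach a user-chosen fiscal ratio threshold.

```python
import random

def simulate_default_probability(trials: int = 10_000,
                                 years: int = 10,
                                 debt_to_gdp: float = 0.75,      # starting ratio (hypothetical)
                                 threshold: float = 1.2,         # default if debt/GDP exceeds 120%
                                 mean_growth: float = 0.025,     # assumed mean GDP growth
                                 growth_vol: float = 0.02,       # assumed growth volatility
                                 primary_deficit: float = 0.03,  # primary deficit as share of GDP
                                 interest_rate: float = 0.04) -> float:
    """Toy Monte Carlo budget simulation -- illustrative only, not PF2's actual model.

    Each trial draws a random GDP growth path; debt/GDP evolves via the standard
    debt-dynamics identity; a trial 'defaults' if the ratio ever crosses the
    user-supplied threshold.
    """
    defaults = 0
    for _ in range(trials):
        ratio = debt_to_gdp
        for _ in range(years):
            growth = random.gauss(mean_growth, growth_vol)  # macro series tied to a random draw
            ratio = ratio * (1 + interest_rate) / (1 + growth) + primary_deficit
            if ratio > threshold:
                defaults += 1
                break
    return defaults / trials

random.seed(42)
print(f"Estimated default probability: {simulate_default_probability():.1%}")
```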
------------------------
Contributed by PF2 consultant Marc Joffe. Marc previously researched and co-authored Kroll Bond Rating Agency’s Municipal Default Study. This is the last of four blog posts introducing PF2’s Public Sector Credit Framework. Previous posts on this topic may be found here, here and here.