Opponents of quantitative credit models and their use in bond ratings contend that the decision to default is a human choice that does not lend itself to computer modeling. These critics often fail to realize that the vast majority of ratings decisions are already model-driven.
Recently, Illinois’ legislature failed to pass a pension reform measure. Two of the three largest rating agencies downgraded the state’s bonds, citing leadership’s inability to shrink a large unfunded actuarial liability. The downgrades were a product of a mental model that connects political inaction on pensions to greater credit risk.
Indeed, most rating actions and credit decisions are supported by some statement of a cause-and-effect relationship. Positing a link between some independent driver – like greater debt, weaker revenue or political inaction – and the likelihood of default is an act of modeling. After all, a model is just a set of hypothesized relationships between independent and dependent variables. So ratings analysts use models – they just don’t always realize it.
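To make the point concrete, here is a minimal sketch of what such a model might look like once committed to code. The drivers, weights, and logistic form are illustrative assumptions, not any agency’s actual methodology:

```python
import math

# Hypothetical drivers and weights, chosen for illustration only.
# A real model would estimate these from historical default data.
WEIGHTS = {
    "debt_to_revenue": 2.0,     # more debt relative to revenue -> higher risk
    "revenue_growth": -1.5,     # faster revenue growth -> lower risk
    "pension_inaction": 0.8,    # failure to reform pensions -> higher risk
}
INTERCEPT = -6.0  # baseline log-odds of default (also hypothetical)

def default_probability(factors):
    """A model in the sense used here: hypothesized relationships between
    independent variables (fiscal and political drivers) and a dependent
    variable (the probability of default), expressed in logistic form."""
    score = INTERCEPT + sum(WEIGHTS[name] * value
                            for name, value in factors.items())
    return 1.0 / (1.0 + math.exp(-score))

# Illustrative inputs, not actual Illinois data:
p = default_probability({
    "debt_to_revenue": 1.2,
    "revenue_growth": 0.06,
    "pension_inaction": 1.0,  # indicator: reform bill failed to pass
})
print(f"Modeled default probability: {p:.2%}")
```

Once the relationships are written down like this, the question of how much pension inaction matters becomes a parameter to estimate and debate rather than a hunch.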
The failure to make models explicit and commit them to computer code leads to imprecision. Returning to the Illinois situation, we can certainly agree that the legislature’s failure to reform public employee pensions is a credit negative, but how do we know whether it is negative enough to merit a downgrade?
In the absence of an explicit model, we cannot. Indeed, the ratings status quo has two limitations that prevent analysts from properly evaluating the bad news.
First, ratings categories have no clear definition in terms of default probability or expected loss, so we don’t know what threshold needs to be surpassed to trigger a downgrade (a possible explicit mapping is sketched below).
Second, in the absence of an explicit model, it is not clear how to weigh the bad news against other factors. For example, most states, including Illinois, have been experiencing rapid revenue growth in the current fiscal year. Does this partially or fully offset the political shortcomings? And, all other things being equal, how much does the legislature’s inaction affect Illinois’ default risk?
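If rating categories were defined explicitly, both questions would reduce to arithmetic: re-run the model with the news reflected in the inputs and check whether the result crosses a category boundary. A sketch, with cutoffs invented purely for illustration (no agency publishes such a table):

```python
# Hypothetical mapping from modeled default probability to rating category.
# These cutoffs are invented for illustration; agencies define no such table.
RATING_BANDS = [
    (0.0005, "AAA"),
    (0.0020, "AA"),
    (0.0080, "A"),
    (0.0300, "BBB"),
    (0.1000, "BB"),
]

def rating_for(default_prob):
    """Return the rating band containing a modeled default probability."""
    for upper_bound, rating in RATING_BANDS:
        if default_prob <= upper_bound:
            return rating
    return "B or below"

# A downgrade is warranted only when news moves the modeled probability
# across an explicit boundary -- e.g. from 0.6% ("A") to 0.7% (still "A"),
# versus from 0.6% to 3.5% ("BB"):
print(rating_for(0.006), rating_for(0.007), rating_for(0.035))
```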
In the absence of an explicit model, analysts make these calculations in an intuitive manner, creating opportunities for error and bias. For example, a politically conservative analyst who dislikes public pensions may overestimate their impact on the state’s willingness and ability to service its bonds. Likewise, a liberal sympathetic to pensions may underestimate it.
A news-driven downgrade like the one we saw last week may also be the result of recency bias, in which human observers place too much emphasis on recent events when predicting the future. Worst of all, an implicit mental model can easily be contaminated by commercial considerations unrelated to credit risk: what will my boss think of this proposed downgrade, or how will the issuer react? In an ideal world, these considerations would not impact the rating decision, but it is very difficult for us mortals to compartmentalize such information. Developing and implementing a computer rating model forces an analyst to explicitly list and weight all the independent variables that affect default risk.
Computer models are also subject to bias, but they provide mechanisms for minimizing it. First, the process of listing and weighting variables can lead the analyst to identify and correct her prejudices. Second, to the extent that the model is shared with other analysts, more eyes are available to find, debate and potentially eliminate biases.
Once the model is in place, news developments can be analyzed as they arrive, without recency effects. In the case of Illinois pensions, the legislative development would have to be translated into an annuity of expected future state pension costs (or a distribution of those costs) so that it could be weighed against the state’s other fiscal characteristics.
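One way to perform that translation is the standard annuity (amortization) formula. The figures below are illustrative assumptions, not actual Illinois numbers:

```python
def liability_to_annuity(unfunded_liability, discount_rate, years):
    """Level annual payment that amortizes an unfunded pension liability
    over `years` at `discount_rate` -- the standard annuity formula."""
    r = discount_rate
    return unfunded_liability * r / (1.0 - (1.0 + r) ** -years)

# Illustrative: a $100 billion unfunded liability amortized over 30 years
# at a 5% discount rate. The resulting annual cost can then be compared
# with the state's revenues and other fixed obligations.
annual_cost = liability_to_annuity(100e9, 0.05, 30)
print(f"Expected annual pension cost: ${annual_cost / 1e9:.1f} billion")
```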
Computerized rating models – like any human construction – are subject to error. But the process of developing, debating and iterating explicit models tends to mitigate this error. We all apply models to credit anyway; why not just admit it and do it properly?