Thursday, June 20, 2013

AAAs still Junk -- in 2013!

Breaking news: Several securities, boasting ratings higher than France and Britain as of two weeks ago, are now thought quite likely to default.

Moody's changed its opinion on a number of residential mortgage-backed securities (RMBS), downgrading some 13 of them from Aaa all the way to Caa1.

The explanation provided: "Today's rating action concludes the review actions announced in March 2013 relating to the existence of errors in the Structured Finance Workstation (SFW) cash flow models used in rating these transactions. The rating action also reflects recent performance of the underlying pools and Moody's updated expected losses on the pools."

In short: the model was wrong - oops!  ($1.5bn - yes, billion - in securities were affected by the announced ratings changes, the vast majority of them downgrades.)

Okay, so everybody gets it wrong some time or other.  What's the big deal?  The answer is: there's no big deal.  You probably won't hear a peep about this - nothing in the papers.  Life will go on.  Annual monitoring fees on the deals will continue to be collected unabated, and no previously collected, undeserved fees will be returned.  Some investors may be a little annoyed at the sudden, sharp movement, but so what, right?  They should have modeled this themselves, they might be told, and should not have been relying on the rating.  (But then why are they paying, out of the deals' proceeds, for the rating agencies to monitor the deals' ratings?)

What is almost interesting (again, no big deal) is that these erroneously modeled deals were rated between 2004 and 2007 - roughly six or more years ago.  And for the most part, if not always, their ratings have been affirmed or revisited at several junctures since the initial "mis-modeled" rating was assigned.  How does that happen without the model being validated?

A little more interesting is that in many or most cases, Fitch and S&P had already downgraded these same securities to CCC or even C levels, years ago!  So the warning was out there.  One rating agency says triple A; the other(s) have it deep in "junk" territory.  Worth checking the model?  Sadly not - it's probably not "worth" checking.  This, finally, is our point: absent a reputational model to differentiate among the players in the ratings oligopoly, the existing raters have no incentive to check their work. There's no "payment" for checking, or for being accurate.

Rather, it pays to leave the skeletons buried for as long as possible.


For more on rating agencies disagreeing on credit ratings by wide differentials, click here.
For more on model risk or model error, click here.

Snapshot of certain affected securities (data from Bloomberg)

Monday, June 17, 2013

Outdated Ratings

Two Bloomberg reporters wrote a thought-provoking piece late last week (see Lost AAA Brings Falling Yields-to-Deficits on Downgrade) on the forthcoming ratings downgrade for the US.  In some ways they explore the age-old question of the (odd) relationship between a ratings change and the market price, or yield, of a bond.
"Yields on Treasuries are lower, the dollar is stronger and the S&P 500 Index (SPX) of stocks reached a record high since Aug. 5, 2011, when S&P said the U.S. was less creditworthy than Luxembourg and 17 other sovereigns."

But their article really probes whether a US downgrade can be implemented despite an improving economic picture.
"... the unemployment rate has fallen, household wealth has reached a record and the budget deficit is shrinking. More downgrades may be coming, anyway."

Stepping back, it's fair to debate whether the economy is in better shape.  There are likely economists on both sides of the table on this one.  And certainly the statement that "the budget deficit is shrinking," read on its own, can be misleading: this year's deficit is expected to be smaller than last year's, but it is still a deficit, so the overall federal debt is expected to keep growing.
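The deficit/debt distinction is just arithmetic: the debt grows by the deficit each year, so even a shrinking deficit adds to the debt as long as it stays positive. A toy illustration (all figures invented for the example, not actual US data):

```python
# Toy illustration: a shrinking deficit still grows the debt.
# All figures are invented for the example, not actual US data.
debt = 16.7                   # starting debt, $ trillions (illustrative)
deficits = [1.1, 0.7, 0.5]    # three years of shrinking deficits

for deficit in deficits:
    debt += deficit           # each year's deficit is added to the debt

# The deficit fell every year, yet the debt rose every year.
```

Headlines about a "shrinking deficit" and a "growing debt" are therefore perfectly compatible.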

According to the Bloomberg article, "Moody’s Investors Service said it’s awaiting lawmakers’ budget decisions this year as it weighs reducing America’s Aaa."

But given the US GDP is growing, while Europe is in a recession, can it make any sense to downgrade the US?  Remember, the rating agencies claim their ratings are RELATIVE measures of risk.  In other words, the rating agencies are ranking each country relative to other countries.  So for the US to be downgraded, it would need to be getting worse relative to its competition.

Our guess is that what's happening here reflects a fair share of awkwardness: many of the rating agencies' outstanding ratings may no longer reflect their current opinions.  Having delayed the downgrade when the US first failed to meet the criteria necessary to maintain the AAA rating, they would now look a little silly downgrading so long after the fact, just as the economy has stabilized, or turned the corner.

According to the article, "Fitch Ratings, which has a “negative” outlook on the U.S., said in February that the debt trajectory isn’t consistent with a AAA borrower."  Even if you agree with Fitch on this, okay, the US may still not be back at AAA if the rating agencies were to strictly adopt their criteria. But, as the Bloomberg article forces us to ask, how can a downgrade be appropriate now?

Tuesday, June 11, 2013

There's Always a Model

Opponents of quantitative credit models and their use in bond ratings contend that the decision to default is a human choice that does not lend itself to computer modeling. These critics often fail to realize that the vast majority of ratings decisions are already model-driven.

Recently, Illinois’ legislature failed to pass a pension reform measure. Two of the three largest rating agencies downgraded the state’s bonds, citing leadership’s inability to shrink a large unfunded actuarial liability. The downgrades were the product of a mental model that connects political inaction on pensions to greater credit risk.

Indeed, most rating actions and credit decisions are supported by some statement of a cause-and-effect relationship. Positing a link between some independent driver – like greater debt, less revenue or political inaction – and the likelihood of default is an act of modeling. After all, a model is just a set of hypothesized relationships between independent and dependent variables. So ratings analysts use models – they just don’t always realize it.
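To make the point concrete, here is what "a set of hypothesized relationships" looks like when written down explicitly. This is a toy logistic model with invented drivers and coefficients – not any agency's actual methodology – linking the kinds of factors mentioned above to a default probability:

```python
import math

def default_probability(debt_to_revenue, revenue_growth, pension_funded_ratio):
    """Toy explicit credit model. The drivers and coefficients are
    purely illustrative assumptions, chosen only to show the form of
    an explicit model: hypothesized relationships between independent
    variables and a dependent variable (default probability)."""
    score = (-6.0
             + 1.5 * debt_to_revenue        # more debt -> higher risk
             - 4.0 * revenue_growth         # revenue growth -> lower risk
             - 2.0 * pension_funded_ratio)  # better funding -> lower risk
    return 1.0 / (1.0 + math.exp(-score))   # logistic link: output in (0, 1)
```

Once the relationships are written down like this, the weight given to pension underfunding is no longer a matter of analyst intuition; it is a number that can be inspected, debated, and back-tested.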

The failure to make models explicit and commit them to computer code leads to imprecision. Going back to the Illinois situation, we can certainly agree that the legislature’s failure to reform public employee pensions is a (credit) negative, but how do we know that it is a negative sufficient to merit a downgrade?

In the absence of an explicit model, we cannot. Indeed, the ratings status quo has a couple of limitations that hinder analysts from properly evaluating the bad news.

First, ratings categories have no clear definition in terms of default probability or expected loss, so we don’t know what threshold needs to be surpassed to trigger a downgrade.
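If rating categories were defined in terms of default probability, the downgrade threshold would be explicit. The mapping below is hypothetical – no agency publishes exactly this scale – but it shows what such a definition could look like:

```python
# Hypothetical mapping from estimated default probability to a rating.
# The cutoffs are invented for illustration; they are not any agency's
# published scale.
RATING_CEILINGS = [
    (0.01, "AAA"),
    (0.03, "AA"),
    (0.07, "A"),
    (0.15, "BBB"),
    (0.30, "BB"),
    (0.50, "B"),
]

def rating_from_pd(pd):
    """Return the highest rating whose default-probability ceiling
    the estimate does not exceed."""
    for ceiling, rating in RATING_CEILINGS:
        if pd <= ceiling:
            return rating
    return "CCC and below"
```

Under a scheme like this, "does the pension news merit a downgrade?" reduces to "does the news push the estimated default probability across the next ceiling?" – a question with a checkable answer.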

Second, in the absence of an explicit model, it is not clear how to weigh the bad news against other factors. For example, most states, including Illinois, have seen rapid revenue growth in the current fiscal year. Does this partially or fully offset the political shortcomings? And, all other things equal, how much does the legislature’s inaction affect Illinois’ default risk?

In the absence of an explicit model, analysts make these calculations in an intuitive manner, creating opportunities for error and bias. For example, a politically conservative analyst who dislikes public pensions may overestimate their impact on the state’s willingness and ability to service its bonds. Likewise, a liberal sympathetic to pensions may underestimate it.

A news-driven downgrade like the one we saw last week may also be the result of recency bias, in which human observers place too much emphasis on recent events when predicting the future. Worst of all, an implicit mental model can easily be contaminated by commercial considerations unrelated to credit risk: what will my boss think of this proposed downgrade, or how will the issuer react? In an ideal world, these considerations would not impact the rating decision, but it is very difficult for us mortals to compartmentalize such information. Developing and implementing a computer rating model forces an analyst to explicitly list and weight all the independent variables that affect default risk.

Computer models are also subject to bias, but they provide mechanisms for minimizing it. First, the process of listing and weighting variables can lead the analyst to identify and correct her prejudices. Second, to the extent that the model is shared with other analysts, more eyes are available to find, debate and potentially eliminate biases.

Once the model is in place, news developments can be tackled and analyzed without recency effects. In the case of Illinois pensions, the legislative development would have to be translated into an annuity of expected future state pension costs (or a distribution of same) so that it can be analyzed with reference to the state’s other fiscal characteristics.
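The translation step described above – turning a legislative outcome into an annuity of expected future costs – can be sketched as a present-value calculation. The function and its inputs are illustrative; a real analysis would use actuarial projections and a distribution of outcomes rather than a level point estimate:

```python
def pension_cost_pv(annual_cost, years, discount_rate):
    """Present value of a level annuity of projected pension outlays.
    annual_cost is a per-year outlay; discount_rate is a decimal
    (e.g. 0.05 for 5%). Illustrative sketch only."""
    return sum(annual_cost / (1.0 + discount_rate) ** t
               for t in range(1, years + 1))
```

Failed reform means a larger annual cost stream; discounting it to a present value puts the setback on the same footing as the state's other fiscal numbers, so it can be weighed rather than merely reacted to.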

Computerized rating models – like any human construction – are subject to error. But the process of developing, debating and iterating explicit models tends to mitigate this error. We all apply models to credit anyway; why not just admit it and do it properly?