Thursday, May 19, 2011

A Telling Tale of Two Tables

Just how difficult is it to measure ratings performance, and how useful are the measures, really?

There are certain difficulties we all know about – having to rely on ratings data supplied by the very parties whose performance you’re measuring. So there is naturally the potential for the sample to be biased or slanted, and that’s very difficult to uncover. (We’ve discussed this hurdle at great length here.)

The next issue we’ve written about is the inability to separate the defaulting assets from those that didn’t default. In our regulatory submission, we called for transparency as to what happened to each security BEFORE its rating was withdrawn (see Transparency of Ratings Performance).

Let’s look at a stunning example today that brings together a few of these challenges, and leaves one with unanswered questions about the meaningfulness of ratings performance as currently displayed, and about the incentives rating agencies have to update their own ratings.

Here’s a Bloomberg screenshot for the rating actions on deal Stack 2006-1, tranche P.

Starting from the bottom up, it seems Moody’s first rated this bond in 2006 whereas Standard & Poor’s first rated it in 2007. If we try to verify this history on S&P’s website, we come to realize just how difficult verification can be:

So let’s suppose everything about the Bloomberg chart is accurate.

As verified on their website, Moody’s rated this bond Aaa in August ’06 and acted no further on this bond until it withdrew the rating in June 2010. (They don’t note whether the bond paid off in full, or whether it defaulted.)

S&P, meanwhile, shows its original AAA rating of 2007 being downgraded in 2008 (to BBB- and then to B-) and in 2009 to CC and again in 2010 to D, which means it defaulted (according to S&P).

For Moody’s, the first year for which the bond remains in the sample for a full year is 2007; thus, it wouldn’t be included in 2006 performance data. For S&P, the first complete year is 2008.

So if we consider how Moody’s would demonstrate its ratings performance on this bond, it would say:

Year 2007: Aaa remains Aaa
Year 2008: Aaa remains Aaa
Year 2009: Aaa remains Aaa
Year 2010: Aaa is withdrawn (WR)

No downgrades took place, according to Moody’s … while at the same time S&P shows it as having defaulted:

Year 2008: AAA downgraded to B-
Year 2009: B- downgraded to CC
Year 2010: CC downgraded to D

Here’s what their respective performance would look like, if one were to apply their procedures (at least as far as we understand them):
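The cohort logic above can be sketched in a few lines. This is a simplified illustration of annual rating transitions, not either agency’s actual methodology: the year-end ratings are taken from the histories shown above, and the “first full year” rule simply drops the year in which the bond was initially rated.

```python
# Year-end ratings per the histories above (simplified: one rating per year).
moodys = {2006: "Aaa", 2007: "Aaa", 2008: "Aaa", 2009: "Aaa", 2010: "WR"}
sp = {2007: "AAA", 2008: "B-", 2009: "CC", 2010: "D"}

def annual_transitions(history):
    """Pair each year's start rating with its end rating.

    The first cohort year is the first FULL calendar year after the
    initial rating, so the initial-rating year itself is excluded.
    """
    years = sorted(history)
    return [(y, history[y - 1], history[y]) for y in years[1:]]

for agency, history in [("Moody's", moodys), ("S&P", sp)]:
    for year, start, end in annual_transitions(history):
        print(f"{agency} {year}: {start} -> {end}")
```

Run on these two histories, Moody’s side shows Aaa carried through every cohort year and then withdrawn, while S&P’s side shows the path down to D – the divergence discussed above, produced mechanically from each agency’s own rating actions.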

Friday, May 13, 2011

Adverse Selection? No Problem!

A section from Moody’s rating methodology piece "Moody’s Approach To Rating U.S. Bank Trust Preferred Security CDOs" describes its procedures for ensuring the quality of bank preferreds being bought by CDO managers (S&P and Fitch have something similar). The section reads:

"In order to control for [adverse selection of banks by the arranger of CDOs], Moody's takes a four-step approach for banks that are not rated by Moody's.

First, each bank should satisfy the following prescreening attributes:
  • Financial Institution insured by the Bank Insurance Fund or Savings Association Insurance Fund.
  • Five years minimum of operating history.
  • Minimum asset size of $100 million.
  • Not under investigation by any regulatory body.
  • No restrictions placed on its operations by any regulatory body."
[additional steps omitted in the name of brevity]
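For what it’s worth, the quoted prescreening checklist is mechanical enough to express directly. The `Bank` record and its field names below are our own hypothetical illustration, not Moody’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Bank:
    """Hypothetical bank record; fields mirror the quoted checklist."""
    insured_by_bif_or_saif: bool     # BIF- or SAIF-insured institution
    years_operating: float           # operating history in years
    total_assets_usd: float          # total assets in dollars
    under_investigation: bool        # any regulatory investigation
    operating_restrictions: bool     # any regulatory restrictions

def passes_prescreen(b: Bank) -> bool:
    """Apply the five quoted prescreening attributes."""
    return (b.insured_by_bif_or_saif
            and b.years_operating >= 5
            and b.total_assets_usd >= 100e6
            and not b.under_investigation
            and not b.operating_restrictions)
```

The point of the original post stands regardless of implementation: these are coarse, easily satisfied screens, which is why the adverse-selection question remains interesting.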


Tuesday, May 10, 2011

Pricing Transparency

Professor Allan Meltzer argues, in yesterday’s WSJ article BlackRock's 'Geeky Guys' Business, that BlackRock Solutions’ pricing process “should all be open” and that “[they] may be doing things honestly and above board, but we won’t see that unless we see how they got the numbers.”

While legislators and supervisors scurry to plug the holes created by the absence of both balance sheet and asset transparency, the final piece of the puzzle – pricing transparency – remains largely unattended to.

It is this final element, the lack of pricing transparency, that concerns Prof. Meltzer. Right now, hedge funds, banks and insurance companies can all carry the same asset at a different price. In illiquid markets, the price differential between two price providers can be extraordinary, creating an opportunity for lesser-regulated financial institutions to profit handsomely from the regulatory arbitrage available, at the expense of their more heavily-regulated counterparts.

As with “ratings shopping,” where market participants seek the highest ratings on their securities, investors are financially incentivized to seek out the highest value they can find for each security. Funds’ performance (and often their managers' bonuses) is determined directly by the valuations of their assets. Stronger performance, whether real or artificial, can even help a fund or company raise new capital.

Thus, there remains significant potential for derivatives mispricing. One could even argue that the potential for mispricing is heightened when the price provider offers additional advisory services to the client. Given the substantial fees and margins that may be earned on the advisory side, a conflicted price provider may be more open to accommodating a client’s price haggling to win or maintain it as a client.

Prof. Meltzer’s goal of pricing transparency would home in on, and perhaps eliminate, numerous possible sources of deliberate mispricing (see list of contested pricings here). While we fear it may prove an insurmountable hurdle to require pricing providers to be transparent about their methods, we feel strongly that an opportunity exists now for market regulators to ensure the consistency of the prices used. (See Central Pricing Solution here.)

Absent complete pricing transparency, the usage of consistent prices would serve to increase investor confidence as to the adequacy of financial institutions’ balance sheets. A requirement for all constituents supervised by the same regulatory body to apply the same price can discourage price haggling, or price shopping.
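A consistency check of the kind described could be run quite simply: collect each supervised institution’s mark on the same security and flag wide dispersion. The data, field names, and the 5% threshold below are purely illustrative assumptions, a toy sketch of the idea rather than any regulator’s actual procedure:

```python
def price_dispersion(marks):
    """Return (max - min) / midpoint for a list of marks on one security.

    A large value means institutions are carrying the same asset at
    very different prices -- the 'price shopping' opportunity above.
    """
    lo, hi = min(marks), max(marks)
    mid = (lo + hi) / 2
    return (hi - lo) / mid

# Hypothetical marks on one illiquid security, by institution type.
marks = {"hedge_fund": 92.0, "bank": 88.5, "insurer": 74.0}
disp = price_dispersion(list(marks.values()))
flagged = disp > 0.05  # e.g. flag when marks differ by more than 5% of mid
```

Even this crude screen makes the asymmetry visible: the lesser-regulated holder carrying the asset at the top of the range shows stronger performance than its heavily regulated counterpart, on the identical position.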

Monday, May 2, 2011

The Data Reside in the Field

In the New York Times' “Needed: A Clearer Crystal Ball” (Sunday Business, pg. 4), Professor Robert Shiller claims that if we sharpen our risk measurement tools we will better understand the risk of another financial shock. He argues that improved data collection can substantially increase the predictive power of our financial models.

As mathematicians we must agree with his sentiment: more complete data are always useful. But as market participants we wonder if Professor Shiller has missed a valuable lesson to be learnt from this financial crisis.

One key lesson was that the failure was neither data-driven nor model-driven, but rather a direct result of the expected behavior of poorly incentivized parties.

In fact, one could argue that, for the most part, the data were comprehensive. The models were highly sophisticated, perhaps too sophisticated. But what caused the crisis was that originating parties were financially rewarded for structuring and selling low-quality mortgage loans. The incentive was clear, and by mid-2005 the FBI was already commenting on the pervasive and growing nature of mortgage fraud.

Misrepresented financial documentation skews the data and cannot be spotted simply by poring over ever more abundant reams of data: you have to go into the field itself to follow the incentives.

The financial downturn was made worse by financial institutions’ lack of confidence in the creditworthiness of their counterparts. Absent a level of certainty as to the true nature of others’ balance sheets, a lending freeze precipitated an illiquidity crisis. But a more thorough examination of data won’t tell you what resides off-balance sheet: you have to understand the prevailing accounting environment (that led to the mechanical reproduction of the negative basis trade) and the fundamental nature of the opaque shadow banking system.

It is a concern that intellectuals and academics risk being lulled into a false sense of security based on their access to copious amounts of statistical data.

Copious analysis of imperfect data is unlikely, alone, to help our regulators home in on an inevitable crisis – much less prevent it. Let's not strive to build complex economic models whose success hinges on sensitive data. Let us rather encourage a keener appreciation of the limitations of data, the intentional proliferation of informational asymmetries, and the incentives that can cause a meltdown.