
Wednesday, April 9, 2014

Are "Dark Pool" Probes Pending?

Banks' so-called "dark pool" trading venues are all the rage these days.

The media jumped when dark pools were cited in federal authorities' and investor class action complaints against SAC Capital and its executives. The focus was on how the anonymity associated with dark pools, and the levels of secrecy they provide, allowed SAC and/or its members to avoid detection, and potential losses, on the sale of stock. (See also Gazing into 'dark pools,' the tool that enables anonymous insider trading)

One quote from a complaint reads:
“We executed a sale of over 10.5 million ELN for [various portfolios at CR Intrinsic and SAC LP] at an avg price of 34.21. This was executed quietly and efficiently over a 4 day period through algos and darkpools and booked into two firm accounts that have very limited viewing access.”
Next Goldies brought its dark pool, Sigma X, to the fore.  Having discovered pricing errors within the opaque pool, Goldman reportedly decided to send refund checks to customers to compensate them for the mistakes.

Michael Lewis didn't make matters any easier for Goldman or dark pools, giving them a hard time in his new book, Flash Boys.  Among other things, he casts doubt on whether investors got "best execution" through the dark pools:
“A broker was expected to find the best possible price in the market for his customer. The Goldman Sachs dark pool—to take one example—was less than 2 percent of the entire market. So why did nearly 50 percent of the customer orders routed into Goldman’s dark pool end up being executed inside that pool—rather than out in the wider market.”
That quote, alone, might not be altogether convincing: it's not clear whether he's looking at scenarios in which Goldman's clients requested execution through Sigma X, or at scenarios in which clients requesting best execution were nonetheless quite regularly executed through Sigma X, despite the potential for sub-optimal execution on that platform.  It's probably fair to say that those requesting execution through the dark pool would agree that they were foregoing "best execution" in the wider market - something one typically foregoes even with "hidden" orders submitted to an (open) exchange.

But now Goldies is back in the spotlight.  According to today's WSJ, they're considering shutting down their dark pool.

But why?

According to the Journal article, Goldman executives are weighing the benefits of the revenues it produces against the burdens of dealing with trading glitches and negative press.  Some burdens those must be, given that Sigma X is purportedly one of the largest bank dark pools and is likely producing significant flow.

Perhaps there's another theory...

Consider these stories:

In January 2014 Barclays decided to shut down its retail / margin Foreign Exchange business, Barclays Margin FX; in February, NY regulator Lawsky opened a currency markets probe.

In January 2014 Deutsche Bank AG (DBK) announced that it will withdraw from participating in setting gold and silver benchmarks in London;  in March, the CFTC announced that it is looking at issues including whether the setting of prices for gold—and the smaller silver market—is transparent. 

In July 2013, the CFTC put metals warehouses on notice of a possible probe. By November 2013 Goldman was resuming talks to sell its metals warehouses and seeking a buyer for its uranium trading unit; and by March 2014 we had various notices of JP Morgan's intent to sell its physical commodities divisions.

Probe and Sale

We could go on and on, but ultimately these are all anecdotes, and we aren't looking to prove statistical significance at this stage.  We're also not too concerned about what comes first: the probe or the sale.  There's certainly no one-to-one mapping; not every regulatory probe is followed by a sale, or vice-versa.  We're only wondering if there's a pattern. And if there is one, could there be an explanation for it?

Here's one theory.  (We welcome yours.)  Might it be that, pending a likely or imminent (and embarrassing or expensive) enforcement action, banks may take preemptive action in selling "problematic" divisions ... to enable the negotiation of a more lenient settlement as they're (now) less likely to be repeat offenders of whatever activity was the subject of the probe?

In other words, is a dark pool probe pending?

Friday, April 4, 2014

High Frequency (Non) Trading

This week's release of Michael Lewis' new book, Flash Boys, has renewed focus on a little understood area of the market, an area that has garnered the recent attention of market regulators, New York's Attorney General, and more recently the FBI -- but never as much attention as it received from Michael Lewis' interview on 60 Minutes on Sunday, with his book pending release the following day.

Without going into too many specifics, one of the central themes that Lewis discusses is the potential for high frequency traders (or HFTs) to take advantage of certain market information -- like bids and offers -- that are unknown to many other market players.

Defenders of HFTs have come out aggressively, with claims that HFTs increase market activity and liquidity, and have lowered trading costs.  The WSJ published an extensive opinion editorial by hedge fund guru Cliff Asness and his colleague Michael Mendelson of AQR, which energetically claims that much of what HFTs do is "make markets" and that they do it best because "their computers are much cheaper than expensive Wall Street traders, and competition forces them to pass most of the savings on to us investors."

Of course this sounds altogether too convincing.  Unfortunately, Asness and Mendelson provide little or no evidence (although their business as long-term traders relies heavily on evidence, and they claim in the article to spend considerable energies looking into their trading costs), and they admit that they actually don't have much conviction in the premise of their exposition:
"We think it helps us. It seems to have reduced our costs and may enable us to manage more investment dollars. We can't be 100% sure. Maybe something other than HFT is responsible for the reduction in costs we've seen since HFT has risen to prominence, like maybe even our own efforts to improve." (emphasis ours)
But this aside, no doubt all forms of HFTs bring liquidity.  They're a good thing.  Let's focus our attention elsewhere.  

Or not?

Might there be another type of HFT, that doesn't always bring liquidity for the greater good of the market ...  perhaps a type that uses obscure mechanisms to change the look and feel of the market -- to make people think there is a bid, think there is an offer, without there being one?  

This is what Flash Boys, and the interest it has invigorated in HFTs, really concerns itself with -- understanding market maneuvers like spoofing or pinging: the submission of phantom orders, immediately cancellable, that have the potential to create a false impression of market levels.

Are we creating a whole lot of (potentially fictitious) orders, but not a whole lot of activity?  Are there high-frequency non-traders?  Are we mis-marking our portfolios as a result? We continue to investigate.  But we couldn't help but bring you back to a 2013 chart from Mother Jones, which highlights the growing contrast between actual trades (in orange) and quotes/orders (in red).
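For readers who want to quantify that contrast, a crude screen is the order-to-trade ratio. Below is a minimal sketch; the message counts are hypothetical, purely for illustration.

```python
# Minimal sketch: order-to-trade ratio as a crude screen for
# "high-frequency non-trading." All counts below are hypothetical.
def order_to_trade_ratio(orders_submitted, orders_cancelled, trades_executed):
    """Order messages per executed trade; higher values suggest
    more quoting (or spoofing/pinging) per unit of actual activity."""
    if trades_executed == 0:
        return float("inf")  # all quotes, no trades
    return (orders_submitted + orders_cancelled) / trades_executed

# E.g., one million messages producing 12,000 executions:
print(order_to_trade_ratio(600_000, 400_000, 12_000))  # ~83 messages per trade
```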


Thursday, June 20, 2013

AAAs still Junk -- in 2013!

Breaking news: Several securities, boasting ratings higher than France and Britain as of two weeks ago, are now thought quite likely to default.

Moody's changed its opinion on a number of residential mortgage-backed securities (RMBS), with some 13 of them downgraded from Aaa to Caa1.

The explanation provided: "Today's rating action concludes the review actions announced in March 2013 relating to the existence of errors in the Structured Finance Workstation (SFW) cash flow models used in rating these transactions. The rating action also reflects recent performance of the underlying pools and Moody's updated expected losses on the pools."

In short: the model was wrong - oops!  ($1.5bn, yes that's billion, in securities were affected by the announced ratings change, with the vast majority being downgraded.)

Okay, so everybody gets it wrong some time or other.  What's the big deal?  The answer is there's no big deal.  You probably won't hear so much as a murmur about this - nothing in the papers.  Life will go on.  The collection of annual monitoring fees on the deal will continue unabated, and no undeserved fees already collected will be returned. Some investors may be a little annoyed at the sudden, shock movement, but so what, right?  They should have modeled this anyway, they might be told, and should not be relying on the rating.  (But then why are they paying, out of the deal's proceeds, for rating agencies to monitor the deals' ratings?)

What is almost interesting (again, no big deal) is that these erroneously modeled deals were rated between 2004 and 2007 - roughly six or more years ago.  And for the most part, if not always, their ratings have been verified or revisited at several junctures since the initial "mis-modeled" rating was provided.  How does that happen without the model being validated?

A little more interesting is that in many or most cases, Fitch and S&P had already downgraded these same securities to CCC or even C levels, years ago!  So the warning was out there.  One rating agency says triple A; the other(s) have it deep in "junk" territory.  Worth checking the model?  Sadly not - it's probably not "worth" checking.  This, finally, is our point: absent a reputational model to differentiate among the players in the ratings oligopoly, the existing raters have no incentive to check their work. There's no "payment" for checking, or for being accurate.

Rather, it pays to leave the skeletons buried for as long as possible.

---------------

For more on rating agencies disagreeing on credit ratings by wide differentials, click here.
For more on model risk or model error, click here.

---------------
Snapshot of certain affected securities (data from Bloomberg)



Tuesday, June 11, 2013

There's Always a Model

Opponents of quantitative credit models and their use in bond ratings contend that the decision to default is a human choice that does not lend itself to computer modeling. These critics often fail to realize that the vast majority of ratings decisions are already model-driven.

Recently, Illinois’ legislature failed to pass a pension reform measure. Two of the three largest rating agencies downgraded the state’s bonds citing leadership’s inability to shrink a large unfunded actuarial liability. The downgrades were a product of a mental model that connects political inaction on pensions to greater credit risk.

Indeed most rating actions and credit decisions are generally supported by some statement of a cause and effect relationship. Drawing a correlation between some independent driver – like greater debt, less revenue or political inaction – and the likelihood of default is an act of modeling. After all, a model is just a set of hypothesized relationships between independent and dependent variables. So ratings analysts use models – they just don’t always realize it.

The failure to make models explicit and commit them to computer code leads to imprecision. Going back to the Illinois situation, we can certainly agree that the legislature’s failure to reform public employee pensions is a (credit) negative, but how do we know that it is a negative sufficient to merit a downgrade?

In the absence of an explicit model, we cannot. Indeed, the ratings status quo has a couple of limitations that hinder analysts from properly evaluating the bad news.

First, ratings categories have no clear definition in terms of default probability or expected loss, so we don’t know what threshold needs to be surpassed to trigger a downgrade.

Second, in the absence of an explicit model, it is not clear how to weigh the bad news against other factors. For example, most states, including Illinois, have been experiencing rapid revenue growth in the current fiscal year. Does this partially or fully offset the political shortcomings? And, all other things equal, how much does the legislature’s inaction affect Illinois’ default risk?

In the absence of an explicit model, analysts make these calculations in an intuitive manner, creating opportunities for error and bias. For example, a politically conservative analyst who dislikes public pensions may overestimate their impact on the state’s willingness and ability to service its bonds. Likewise, a liberal sympathetic to pensions may underestimate it.

A news-driven downgrade like the one we saw last week may also be the result of recency bias, in which human observers place too much emphasis on recent events when predicting the future. Worst of all, an implicit mental model can easily be contaminated by commercial considerations unrelated to credit risk: what will my boss think of this proposed downgrade, or how will the issuer react? In an ideal world, these considerations would not impact the rating decision, but it is very difficult for us mortals to compartmentalize such information. Developing and implementing a computer rating model forces an analyst to explicitly list and weight all the independent variables that affect default risk.
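To make that concrete, here is a minimal sketch of what an explicit model might look like: a logistic function mapping named, weighted credit drivers to a default probability. The drivers, weights and inputs are our own illustrative assumptions, not any agency's actual model.

```python
import math

# Hypothetical, illustrative weights -- the point is that they are
# written down and debatable, not hidden in an analyst's intuition.
WEIGHTS = {
    "unfunded_pension_to_revenue": 1.8,  # larger liability -> riskier
    "revenue_growth": -2.5,              # growth offsets the bad news
    "political_inaction": 0.7,           # 1 if reform failed, else 0
}
INTERCEPT = -9.0

def default_probability(issuer: dict) -> float:
    """Logistic transform of a weighted sum of explicit credit drivers."""
    z = INTERCEPT + sum(w * issuer[k] for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

illinois = {"unfunded_pension_to_revenue": 2.9,  # hypothetical inputs
            "revenue_growth": 0.06,
            "political_inaction": 1}
print(f"{default_probability(illinois):.2%}")  # ~3.8%
```

With the weights on the table, "does the pension news merit a downgrade?" becomes a question about a measurable change in default probability, rather than a matter of intuition.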

Computer models are also subject to bias, but they provide mechanisms for minimizing it. First, the process of listing and weighting variables can lead the analyst to identify and correct her prejudices. Second, to the extent that the model is shared with other analysts, more eyes are available to find, debate and potentially eliminate biases.

Once the model is in place, news developments can be tackled and analyzed without recency effects. In the case of Illinois pensions, the legislative development would have to be translated into an annuity of expected future state pension costs (or a distribution of same) so that it can be analyzed with reference to the state’s other fiscal characteristics.

Computerized rating models – like any human construction – are subject to error. But the process of developing, debating and iterating explicit models tends to mitigate this error. We all apply models to credit anyway; why not just admit it and do it properly?

Friday, March 15, 2013

Are Credit Ratings Opinions?

The Justice Department's lawsuit against S&P made us reconsider whether the ratings provided by S&P were "opinions."

Let's quickly look at the concept of an opinion: one often relates the word "opinion" to a judgment (which may be factually supported) that may not be "provable."  A fact, however, is closer to being provable.

Of course, the rating agencies have long argued that their ratings are opinions, but there's probably more to it than that.

To begin with, many rating agencies have the ability (and they exercise it) to assign a rating of "D" to defaulted issuers or assets.  Many (but perhaps not all) issuer or asset defaults, relative to their defined terms, are directly provable - in other words, at least certain ratings may be more fact, and less opinion.

Now let's dig deeper into the DoJ's argument.  One core argument alleged throughout the complaint was that S&P maneuvered its models/methodology to achieve the rating levels desired by the structuring bankers (the issuers).
"As set forth in detail … S&P's competition for ratings business, that is, its desire to maintain and increase market share and profits, and its resulting desire to maintain its relationships with issuers who drove its ratings business, improperly influenced S&P to favor issuers in its ratings of RMBS and CDOs. In particular, as alleged in detail … to maintain and increase its market share and profits, S&P limited, adjusted, and delayed updates to the ratings criteria and analytical models S&P used to assess the credit risks posed by RMBS and CDO tranches, thereby weakening those criteria and models from what S&P analysts believed was necessary to make them more accurate."
The argument could therefore be that the resulting rating wasn't the "opinion" formed as part of the ratings process: the required rating, known upfront, caused the determination as to which ratings process/model to apply.  In such a case, the rating would be the fact, and the process (to be used) is the judgment.

In other words, an argument may be that the resulting ratings were not the result of the ratings process.  The ratings processes used were the result of the rating required!  

(Think of this relative to the structuring process itself, one of the goals of which is to achieve a certain rating.  Thus it may be inferred -- if the DOJ's allegations hold true -- that both the banks and S&P were playing the role of engineering their analyses in such a way as to achieve the desired rating.)

Friday, July 27, 2012

Agency Shortcuts and Shortfalls

Investors in certain "AAA" resecuritizations won't be happy. Late last night, Moody's downgraded a bunch of securities, even though they are supported by Agency-guaranteed RMBS.

Many of these were downgraded from Aaa to junk (some at Ba1, others all the way to B1) in one fell swoop, while others went only to A1.  (It looks like S&P still carries most of these securities at AA+, one notch below Moody's Aaa, S&P having downgraded the United States to AA+.)

What's most interesting here is the reason.  It's not that Fannie or Freddie has failed to pay up on its guarantees; rather, it looks like the deals may not have been modeled (possibly ever!) - or at least may not have been modeled correctly.  According to Moody's press release, the resecuritization vehicles seem not to have the necessary protections in place to support the bonds issued, or the ratings provided.  Some of these deals were structured in 2007 and even late 2008.  Many of these deals are already suffering shortfalls.

From Moody's press release:
"The downgrade rating actions on the bonds are a result of continual interest shortfalls or lack of adequate structural mechanisms to prevent future interest shortfalls should the deals incur any extraordinary expenses."

... and ...

"Interest due on the resecuritization bonds is not subject to any net weighted average coupon (WAC) cap whereas interest due on some of the underlying bonds backing these deals is subject to a net WAC cap."

... and ...

"Since the coupon on the resecuritization bonds is currently higher than that of the underlying bonds, the resecuritization bonds are experiencing interest shortfalls which on a deal basis are accruing steadily."

Total issuance of $483mm was affected, according to Moody's. The deals are from the Structured Asset Securities Corp. and Structured Asset Mortgage Investments shelves.

Relevant CUSIPs Downgraded Last Night
86363TAA4
86363TAC0
86363TAD8
86363TAF3
86363TAG1
86363TAB2
86363TAH9
86363TAJ5
86365HAA8
86365HAB6
86365HAD2
86365HAG5
86365GAA0
863594AA5
86359LPA1
86359LPB9
86359LPC7

Wednesday, April 11, 2012

Credit Rating Agency Models and Open Source

When S&P downgraded the US from AAA to AA+, the US Treasury accused the rating agency of making a $2 trillion mathematical error. S&P initially denied this accusation, but adjusted some of its estimates in a subsequent press release. Economist John Taylor defended S&P, contending that its calculations were based on a defensible set of assumptions, and thus could not be categorized as a mistake. S&P’s model, which projected future debt-to-GDP ratios, has not been made public. As a result, it is difficult for outside observers to decide whom to believe: the rater or the rated.

There are at least three ways a model’s results can be wrong: if the model’s code itself doesn’t function as intended; if the known inputs are incorrectly entered; or if the assumptions are misapplied. In cases as important as the evaluation of US sovereign debt, we think rating agencies and the investing public would be better off if the relevant models were publicly available. Some may argue that the inputs to the models are proprietary or that they reflect qualitative assumptions valuable to the ratings agencies – i.e., that they are a “secret sauce.” But, even if rating agencies want to keep their assumptions proprietary, making the models themselves available would decrease the likelihood of rating errors arising from software defects.

Keeping one’s internal processes internal is the traditional way. Manufacturers assume that consumers don’t want to see how the sausages are made. In the internet era, it is now much easier to produce the intellectual equivalent of sausages in public – and, as it happens, many consumers are interested in the production process and even want to get involved. Wikipedia provides an excellent example of the open, collaborative production of intellectual content: articles are edited in public and the results are often subject to dispute. Writers get almost instantaneous peer review and the outcome is often rapid iteration moving toward the truth. In their books, Wikinomics and Macrowikinomics, Don Tapscott and Anthony Williams suggest that Wikipedia’s mass collaboration style is the wave of the future for many industries – including computer software.

Many rating methodologies, especially in the area of structured finance, rely upon computer software. At the height of the last cycle, tools that implemented rating methodologies, such as Moody’s CDOROM™, were popular with both issuers and investors wondering how agencies might look at a given transaction. While the algorithms used by these programs are often well documented, the computer source code is usually not released into the public domain.

Over the last two decades, the software industry has seen a growing trend toward open source technology, in which all of a system’s underlying program code is made public. The best-known example of an open source system is Linux, a computer operating system used by most servers on the internet. Other examples of popular open source programs include Mozilla’s Firefox web browser, the WordPress content management system and the MySQL database.

In financial services, the Quantlib project has created a comprehensive open source framework for quantitative finance. The library, which has been available for more than 11 years, includes a wide array of engines for pricing options and other derivatives.

Open source allows users to see how programs work and, with the help of developers, to fully customize software to meet their specific needs. Open source communities, such as those hosted on GitHub and SourceForge, enable users and programmers from all over the world to participate in the process of debugging and enhancing the software.

So how about credit rating methodologies? Open source seems especially appropriate for rating models: rating agencies realize relatively little revenue from selling the models themselves; the models are more likely used to facilitate revenue generation through issuer-paid ratings.

Open source enables a larger community to identify and fix bugs. If rating model source code were in the public domain, investors and issuers would have a greater chance to spot issues, and rating agencies would be prevented from covering up modeling errors by surreptitiously changing their methodologies. In 2008, The Financial Times reported that Moody’s errantly awarded Aaa credit ratings to a number of Constant Proportion Debt Obligations (CPDOs) due to a software glitch. The error was fixed, but the incorrectly rated securities were not immediately downgraded, according to the FT report. Had the rating software been open source, it would have been much more difficult to conceal this error, and there would have been the possibility of a positive feedback loop – an investor or other interested party could have found and fixed the bug on Moody’s behalf.

Not only do open source rating models promote quality, they may also reduce litigation. The SEC issued Moody’s a Wells Notice in respect of the above mentioned CPDO issue, and may well have brought suit. (A Wells Notice is a notification of intent to recommend that the US government pursue enforcement proceedings, and is sent by regulators to a company or a person.) Investors have brought suit against the rating agencies to the extent they felt the ratings were inappropriate, for model-related errors or otherwise. By unveiling the black box, the rating agencies would be taking an active approach in buffering against litigation, and enjoy the material defense that, “yes we may have erred, but you were afforded the opportunity to catch our error – and didn’t.”

Unlike the CPDO model employed by Moody’s, the S&P US sovereign "model" likely took the form of a simple spreadsheet containing adjusted forecasts from the Congressional Budget Office. In contrast to the structured and corporate sectors, there are relatively few computer models for estimating sovereign and municipal default probabilities. While little modeling software is available for this sector, accurate modeling of government credit can be seen as a public good. Bond investors, policy makers and citizens themselves could all benefit from more systematic analysis of government solvency.
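To illustrate what a public model for this sector could look like, here is a minimal debt-to-GDP projection, the kind of calculation at issue in the S&P/Treasury dispute. All inputs are illustrative assumptions, not CBO figures.

```python
# Sketch: roll government debt forward under assumed growth, interest
# and primary deficit paths. Small input changes compound into large
# gaps over a decade -- the anatomy of a "$2 trillion" disagreement.
def project_debt_to_gdp(debt, gdp, years, gdp_growth, avg_rate, primary_deficit_pct):
    path = []
    for _ in range(years):
        debt = debt + debt * avg_rate + gdp * primary_deficit_pct
        gdp = gdp * (1 + gdp_growth)
        path.append(round(debt / gdp, 3))
    return path

# Hypothetical inputs, in $tn:
print(project_debt_to_gdp(debt=10.0, gdp=15.0, years=10,
                          gdp_growth=0.04, avg_rate=0.03,
                          primary_deficit_pct=0.03))
```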

Open source communities are a private response to public goods problems: individuals collaborate to provide tools that might otherwise appear only in the realm of licensed software. Thus open source government default models, populated with crowd-sourced data, may be the best way to fill an apparent gap in the bond analytics market.

On May 2nd, PF2 will contribute an open source Public Sector Credit Framework, which is aimed at filling this analytical gap, while demonstrating how future rating models can be distributed and improved in an iterative, transparent manner. If you wish to participate in beta testing or learn more about this technology please contact us at info@pf2se.com, or call +1 212-797-0215.

--------------------------------------------
Contributed by PF2 consultant Marc Joffe. Marc previously researched and co-authored Kroll Bond Rating Agency’s Municipal Default Study. This posting is the second in a series of posts leading up to May 2nd. The prior piece can be accessed by clicking here.

Tuesday, June 14, 2011

An Aversion to Mean Reversion

Last Wednesday’s Financial Times hosted a scathing column by Luke Johnson, which questions the usefulness of economists, as a whole (see “The dismal science is bereft of good ideas.”)

The column’s title is misleading: Johnson focuses his frustrations only on economists – not economics. Importantly, it is the application of the science, not the science itself, which seems to have caused Johnson's concern.

Indeed the purity of any mathematical science can be spoiled by its application. Johnson comments that he “[fails] to see the point of professional economists,” and that economists “pronounce on capitalism for a living, yet do not participate in private enterprise, which is its underlying engine.” He ends his piece by prescribing that “[the] best move for the world’s economists would be to each start their own business. Then they would experience at first hand the challenges of capitalism on the front line.”

To be fair, the direct application of economic theory was never intended to satisfy the depths of the dynamic puzzle we put before economists - a puzzle for which the answer lay not in the data but in the incentives. [1]

We cannot pretend not to have known that economic models work best in reductionist environments, and that the introduction of complications (like off-balance sheet derivatives) tends to reduce their effectiveness. Conceptually, once models start to consider too many inter-related variables - degrees of freedom, as statisticians call them - they become so rich and sensitive that no empirical observation can either support or refute them. And so any failures of economists to spot the housing bubble or predict the credit crisis, as Johnson mentions, become our failures too. We would have done better to equip our economists (or academics) with the tools necessary to perform the “down and dirty” analyses that take into account the complex and changing nature of our economy. [2]

Seeing no reason why they ought to have succeeded, we’re perhaps a little more forgiving (than Mr. Johnson) of economists’ shortcomings. But we share his concerns that mathematical sciences are being too directly applied, and that the practices and the incentives are being largely ignored.

On Endlessly Assuming “Mean Reversion”

Rather, we ought to encourage our researchers to go into the proverbial field – and to learn to think, and study dynamics, differently.

We can no longer allow ourselves to be informed purely by static analyses of historical data and trends, without seeking a keener appreciation for the underlying dynamics at play. The lazy assumption of mean reversion is simply an assumption, not a rule. When the fundamentals are out of whack – and a direct analysis of data alone cannot tell you that – the market can and will act very differently from a mean-reverting economic model.
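A small simulation makes the danger concrete. The sketch below generates a mean-reverting (Ornstein-Uhlenbeck-style) path whose long-run mean shifts mid-sample: a model calibrated to the old mean will keep forecasting reversion to a level the market has abandoned. All parameters are illustrative.

```python
import random

def simulate(n=500, x0=100.0, kappa=0.05, sigma=1.0,
             old_mean=100.0, new_mean=70.0, break_at=250, seed=42):
    """Mean-reverting path with a structural break in the long-run mean."""
    random.seed(seed)
    x, path = x0, []
    for t in range(n):
        mean = old_mean if t < break_at else new_mean  # fundamentals change
        x += kappa * (mean - x) + random.gauss(0.0, sigma)
        path.append(x)
    return path

path = simulate()
print(round(path[249], 1), round(path[-1], 1))  # hovers near 100, then settles near 70
```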

Thinking Differently

Given that many market participants have emotions (one could argue that computer algorithms are, to an extent, emotionless), the tendency toward panic – or at least the capacity for it – ought to make the direct application of mean reversion models less appealing, and their results less informative, predictive or meaningful.

Ask not "is this a buying opportunity?" based on a simple historical trend, but rather: what are the underlying fundamentals? If the game changed based on underlying issues, have they been resolved? Or were they underestimated, or overestimated? If the latter is determined after sufficient exploration, one could recommend a "buy"; if the former, initiate a "sell." To do otherwise - to simply present a graph and suggest an idea - is folly: it's simply a guess.


Mean-reverting economic forecast models continue to be constructed to this day without the thought necessary to support their assumptions (despite the realization that we’re in a very different world).

The outputs, unfortunately, are never better than the inputs.


---------
[1] See our earlier commentary “The Data Reside in the Field.”

[2] In light of this fact, it is perhaps troublesome that when questioned by JP Morgan CEO Jamie Dimon as to the extent of the government's investigation of the effect of its banking regulations, Bernanke purportedly responded "has anybody done a comprehensive analysis of the impact on -- on credit? I can't pretend that anybody really has," ... "You know, it's -- it's just too complicated. We don't really have the quantitative tools to do that." Source

Wednesday, April 13, 2011

Split Ratings

Given the high correlation between security prices and their ratings, we wanted to follow up on some of our prior pieces that contemplated the wide discrepancies between ratings opinions provided on certain securities (see for example here and here). Split ratings, of course, present trading opportunities.

Our analysis considered securities that were acted upon by a single rating agency between June and August of 2009. We then had a look at the average ratings split as of March 28 this year: one and a half years later. The outcome was quite astonishing.

While at inception the rating agencies seem typically to achieve the same rating, down the line they tend to substantially disagree with one another. (We have broken the differential down depending on how many rating agencies rated each security. If all three of Moody's, Fitch and S&P rated the security, we'll show both a max split and a minimum split. If only two raters rated the security as of March 28, 2011, the max split equals the min split.) The average max differential: 4.23 rating subcategories (or "notches"). The median differential - 3 notches. One rating subcategory would be the difference between a AAA and a AA+.
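For the curious, the notching arithmetic can be sketched in a few lines: map letter ratings onto a common numeric scale and take the differentials. The scale and the example securities below are illustrative, not drawn from our database.

```python
# One notch = one rating subcategory (AAA to AA+ is one notch).
SCALE = {"AAA": 1, "AA+": 2, "AA": 3, "AA-": 4, "A+": 5, "A": 6, "A-": 7,
         "BBB+": 8, "BBB": 9, "BBB-": 10, "BB+": 11, "BB": 12, "BB-": 13,
         "B+": 14, "B": 15, "B-": 16, "CCC+": 17, "CCC": 18, "CCC-": 19,
         "CC": 20, "C": 21}

def splits(ratings):
    """Max and min pairwise differential, in notches, across raters."""
    nums = sorted(SCALE[r] for r in ratings)
    gaps = [b - a for a, b in zip(nums, nums[1:])]
    return max(nums) - min(nums), min(gaps)

print(splits(["AAA", "A", "BB+"]))  # (10, 5): three raters, wildly apart
print(splits(["AA", "A-"]))         # (4, 4): two raters, max equals min
```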

This table shows examples of the 748 structured finance securities considered in our database at each ratings split level, including one of the 20 securities on which there was a ratings differential of between 14 and 18 ratings subcategories.


For the purposes of this analysis, securities were only considered to the extent they had ratings outstanding from at least two of the Big Three credit rating agencies as of March 28, 2011.

Thursday, March 17, 2011

Looking for a Mis-Rated Subprime Bond?

As we combed through the latest list of subprime RMBS ratings downgrades, we found some astonishing actions, like AA ratings being reduced to CCC in one fell swoop. (Source: S&P's March 16 press release, entitled: S&P Lowers 172 Ratings On 39 '03-'04 US Subprime RMBS Deals, see for example Fremont Home Loan Trust 2004-3 Class M4.)

But one in particular caught our eye:

Welcome to Morgan Stanley ABS Capital I Inc. 2004-NC5 Class B4

Rated C by Moody's and CC by S&P (the lowest and second lowest ratings respectively), the bond paid off in full.

[Chart: factor history of the Class B4 bond]


As the chart shows, this bond didn't suddenly pay off in full, catching the raters by surprise. Rather, it paid down ever so slowly, but consistently, especially over the period from February through November 2010. (The factor describes the bond's outstanding par value at any time, as a percentage of its original face value. In this case, it went from 100%, or 1, down to approximately 20% in mid '07, and then, ultimately, to 0% in November 2010.)

But wait, this gets better: while S&P simply denotes the CC rating as withdrawn (as opposed to noting that the bond paid off in full), as per its procedure, Moody's skips over the B4 class altogether in its March 15, 2011 press release, entitled: Moody's downgrades $2.75 billion of Subprime RMBS issued by Morgan Stanley in 2000 through 2004.
 
We did a little checking. First we wanted to verify from the deal's trustee report that class B4 really did get all its payments. Turns out that not only did the B4 pay off in full (four months ago), but the C-rated B3 notes, still outstanding, are paying off handsomely too. Since Moody's downgraded the B3 notes (CUSIP 61746RGA3) to C in Feb. 2009, they've paid down more than 56% of their balance!

The bond continues to pay (as of Feb. distribution), and now has a factor of just under 11% remaining. It's currently rated C, CC and C by Moody's, S&P and Fitch, respectively.

[Chart: factor history of the Class B3 bond (CUSIP 61746RGA3)]

Update: Feb. 2012: tranches B, C, D and E of Parkridge Lane Structured Finance Special Opportunities CDO I Ltd. were withdrawn after paying down in full. They paid down slowly over time. At the time of final payments, the B notes were rated Caa1 by Moody's and CCC+ by S&P. The E notes were rated Ca by Moody's and CCC- by S&P.
----------------------------

For similar funky findings, click here or here or here. For our coverage of weird (usually structured finance) model errors, click here.

Wednesday, January 5, 2011

Deferred 4 Ever

One of the problems we come across when we examine the models our clients rely on is the incorrect modeling of the payment of deferred (or capitalized) interest.

What do I mean?

If you model enough CDO deals, you'll notice that it's not always clear when a deal should be paying the deferred interest on a PIK-able bond. Obviously, in good times, this isn't a big worry; but in bad times it could make the difference between a noteholder receiving some return vs. no return on his investment.

This is especially meaningful in TruPS CDO world where a good portion of mezzanine bonds are currently in deferral.

I pulled the following steps from a TruPS CDO's Priority of Payments (that section is found in the deal’s Offering Memorandum and defines how the liabilities are paid on each distribution date):

The B1s are entitled to receive interest here:
“SIXTH: to pay Periodic Interest on the Class B-1 Notes at the Applicable Periodic Rate and the Class B-2 Notes at the Applicable Periodic Rate, pro rata based on the amounts of Periodic Interest due;”
And principal here:
“EIGHTH: to pay an aggregate amount equal to the Optimal Principal Distribution Amount, in the following order, (a) principal of the Class A-1 Notes until the Aggregate Principal Amount of such Notes has been reduced to zero, and then (b) principal of the Class A-2 Notes until the Aggregate Principal Amount of such Notes has been reduced to zero, and then (c) principal of the Class B-1 Notes and Class B-2 Notes, pro rata, until the Aggregate Principal Amount of such [B-1 and B-2] Notes has been reduced to zero;”
Where does deferred interest fit in, in the above steps?

Well not under the definition of Periodic Interest:
“With respect to the Class A-1 Notes, the Class A-2 Notes, the Class B-1 Notes and the Class B-2 Notes, in each case interest payable on each Payment Date on such Notes and accruing during each Periodic Interest Accrual Period at the Applicable Periodic Rate.”
Nor the definition of Aggregate Principal Amount:
“With respect to any date of determination, (a) when used with respect to any Pledged Securities, the aggregate Principal Balance of such Pledged Securities on such date of determination; (b) when used with respect to any class of Notes, as of such date of determination, the original principal amount of such class reduced by all prior payments, if any, made with respect to principal of such class; and (c) when used with respect to the Notes, the sum of the Aggregate Principal Amount of the Senior Notes, the Aggregate Principal Amount of the Senior Subordinate Notes and the Aggregate Principal Amount of the Income Notes.”
Why does any of this matter? Can’t deferred interest be paid in either step?

Sure, but the problem here is that choosing to pay deferred interest in one step over the other could have a huge impact on the cash flow to various notes.

Imagine the B-1s have $30 million in deferred interest. If that amount is paid under step SIXTH, then the B-1s’ deferred interest is prioritized over the senior notes’ principal. If you go with step EIGHTH, the opposite occurs. It’s a zero-sum game, but either way, someone loses a good chunk of change based on the adopted interpretation of this vague language.
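A stylized waterfall makes the zero-sum point concrete. The sketch below runs the same collections through both readings; the balances are hypothetical.

```python
# Two readings of where the B-1s' deferred interest gets paid.
def run_waterfall(available, deferred_b1, senior_principal_due, deferred_at_sixth):
    """Returns (deferred interest paid to B-1s, principal paid to seniors)."""
    paid_deferred = 0.0
    if deferred_at_sixth:                        # reading 1: step SIXTH
        paid_deferred = min(available, deferred_b1)
        available -= paid_deferred
    paid_senior = min(available, senior_principal_due)  # step EIGHTH (a)/(b)
    available -= paid_senior
    if not deferred_at_sixth:                    # reading 2: step EIGHTH (c)
        paid_deferred = min(available, deferred_b1)
    return paid_deferred, paid_senior

# $40mm available, $30mm of B-1 deferred interest, $50mm senior principal due:
print(run_waterfall(40e6, 30e6, 50e6, True))   # (30mm to B-1s, 10mm to seniors)
print(run_waterfall(40e6, 30e6, 50e6, False))  # (0 to B-1s, 40mm to seniors)
```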

P.S. This is a common issue that you’ll find in CDOs backed by all types of assets (not just TruPS), so make sure your forecasting models are tuned to handle it properly.

Tuesday, November 10, 2009

Marathon, or Just a Quickie?

Friday’s Asset-Backed Alert describes the (mildly fascinating) behind-the-scenes activities of Marathon CLO I, a 2005-vintage CLO managed by Marathon Asset Management.

According to the article, Marathon itself recently purchased most or all of its deal’s senior-most tranche from Bank of America, at prices purported to be in the 85-cents-on-the-dollar range. To turn a quick profit on its senior note investment, Marathon swiftly sold off roughly two-thirds of the collateral underlying the CDO, with the proceeds being diverted towards substantially paying down the senior tranche, at par. By our back-of-the-envelope calculations, if Marathon purchased the entire tranche, it would have made a profit on this trade of roughly $26.75mm already, with potentially more to follow.
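For those checking the envelope: the purchase price and profit figure are per the article; the implied paydown amount is our own inference, not reported.

```python
purchase_price = 0.85                      # cents on the dollar paid to BofA
profit_per_dollar = 1.00 - purchase_price  # paydowns come back at par
reported_profit = 26.75e6
implied_paydown = reported_profit / profit_per_dollar
print(f"~${implied_paydown / 1e6:.0f}mm of par repaid")  # ~$178mm
```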

As far as we’re aware all of the deal’s par coverage tests (“OC tests”) declined between mid-September and mid-October, despite continued improvement in the market for leveraged loans, which support these CLOs.

Why is this Interesting?

(1) While Marathon may have benefited greatly from its extensive trading activity, all other noteholders are, at least in our opinion, worse-positioned for it: in a month in which most CLOs’ OC test ratios improved, all OC ratios of Marathon CLO I suffered, arguably purely as a result of their aggressive trading during this period.

(2) The substantial paydown of the Class A1 notes (CUSIP 565763AA7) might encourage Moody’s to upgrade the tranche from its current rating of A1, with a possible Aaa rating in sight.

(3) Managers are typically disincentivized from any earlier-than-necessary unwinding of their deals: the longer their deals run, the longer they continue to collect management fees for managing the collateral. In this situation, however, the upfront profit of, say, $26mm would vastly outweigh the potential additional revenue stream of less than $2mm per year that a manager might hope to earn in fees from managing a deal such as this. (Managers earn fees based on the size of the portfolio, so a paydown decreases future fee generation.)

(4) The spirit of the deal is that managers are supposed to manage “across the capital structure.” In other words, though very difficult, they’re supposed to make managerial decisions that are in the best interests of all investors of the deal, certainly not only the senior-most tranche holders. At the same time, the dynamic is that rating agencies are trying to protect their rated noteholders, but not only the senior-most holders. Though it’s an imperfect system, the collateral manager is often required to purchase some of the equity of its own deal, to ensure that it manages across the structure, thereby sending proceeds down the waterfall as far as the equity notes (see here for examples of how this structural nuance can be manipulated).

(5) In this scenario, largely as a result of the early liquidation of assets, most if not all of the other rated noteholders will suffer, which could bring the rating agencies’ ratings on these notes into question, and the equity holders likely lose any potential upside they might otherwise have hoped to gain on their investment.

(6) Aside from allowing “Credit Risk” sales, rating agencies try to protect against aggressive management by limiting the amount of trading activity permissible by a manager (see example language below). With Marathon characterizing such a large proportion of these sales as “Credit Risk Sales,” it really calls into question the definition of a “Credit Risk Sale” - and a manager’s ability to arbitrarily designate a sale as such simply to allow for its effectuation. Given that Marathon was able to sell these assets at, on average, over 90 cents on the dollar, can they really have been Credit Risky? Does Marathon know something about all of these loans that the rest of the market does not? Did they all just suddenly become Credit Risks, encouraging Marathon to liquidate them in the best interests of all holders, or is Marathon acting in its own capitalistic interests?

Example Indenture Language

ARTICLE XII

SALE OF UNDERLYING ASSETS; SUBSTITUTION

Section 12.1. Sale of Underlying Assets and Eligible Investments.

(a) Except as otherwise expressly permitted or required by this Indenture, the Issuer shall not sell or otherwise dispose of any Underlying Asset. Subject to satisfaction of all applicable conditions in Section 10.8, and so long as (A) no Event of Default has occurred and is continuing and (B) each of the conditions applicable to such sale set forth in this Article XII has been satisfied, the Asset Manager (acting pursuant to the Asset Management Agreement) may direct the Trustee in writing to sell, and the Trustee shall sell in the manner directed by the Asset Manager (acting as agent on behalf of the Issuer) in writing:


(i) any Defaulted Obligation, Credit Improved Obligation or Credit Risk Obligation at any time; provided that during the Reinvestment Period and, with respect to Defaulted Obligations and Credit Risk Obligations, at any time, the Asset Manager (acting as agent on behalf of the Issuer) shall use its commercially reasonable efforts to purchase, before the end of the next Due Period, one or more additional Underlying Assets having an Aggregate Principal Amount (A) with respect to Defaulted Obligations and Credit Risk Obligations, at least equal to the Disposition Proceeds received from the sale of such Underlying Asset (excluding Disposition Proceeds allocable to accrued and unpaid interest thereon), and (B) with respect to Credit Improved Obligations, at least equal to the Aggregate Principal Amount of the Underlying Asset that was sold; and provided further, that the Downgrade Condition is satisfied;

(ii) an Equity Security at any time (unless earlier required herein); provided that during the Reinvestment Period, the Asset Manager (acting as agent on behalf of the Issuer) will use its commercially reasonable efforts to purchase, before the end of the next Due Period, one or more additional Underlying Assets with a purchase price at least equal to the Disposition Proceeds of such Underlying Asset (excluding Disposition Proceeds allocable to accrued and unpaid interest thereon) received from such sale;

(iii) any Underlying Asset which becomes subject to withholding or any other tax at any time; and

(iv) in addition, during the Reinvestment Period, any Underlying Asset not described in clauses (i), (ii) or (iii) above, if (x) no Downgrade Event has occurred and (y) with respect to any sale after the Payment Date occurring in September 2012, the Aggregate Principal Amount of all such sales for any calendar year does not exceed 25% of the Portfolio Investment Amount; provided that the Asset Manager (acting as agent on behalf of the Issuer) will use its commercially reasonable efforts to purchase, before the end of the next Due Period, one or more additional Underlying Assets having an Aggregate Principal Amount at least equal to the Aggregate Principal Amount of the Underlying Asset sold (excluding Disposition Proceeds allocable to accrued and unpaid interest thereon).

UPDATE, November 20, 2009: This morning's Asset-Backed Alert edition suggests, quite disturbingly, that Fortress and TCW may be considering similar moves to that of Marathon, in their Fortress Credit Funding CLO and Pro Rata Funding Ltd. deals, respectively.

Tuesday, April 14, 2009

From Lemmings to Lemons

"The market-sensitive risk models used by thousands of market participants work on the assumption that each user is the only person using them." - Avinash Persaud, April 2008.

This quote came to my attention via Felix Salmon's Market Movers blog at Reuters, and it encouraged me to develop the thought process from an earlier piece we put out, entitled Static Measures for a Dynamic Environment.
The point: in a changing environment, one has to proactively adapt modeling assumptions (such as recovery rates and correlations) to reflect those changes.

As Operation Securitization got underway, escalated, and then came to an abrupt halt, each input into the model needed to have been updated due to the gargantuan size of the market -- and its subsequent influence and impact on trading levels -- and the systemic risk it brings with it. For example, the growth of the collateralized loan obligation (CLO) market from 2001 through 2006 continued hand-in-hand with the growth of the leveraged loan market. With CLOs constituting the majority of demand for these (typically broadly-syndicated) bank loans (roughly 60-65%), the demand base grew in tandem with the supply source. But we saw no adjustment in either recovery rate assumptions (for loans or CLO-issued notes) or in correlation (between loans and CLOs, between loans, or between CLO tranches) on the basis of, or necessitated by, this dual, dependent growth.

Surely if the CLOs stop buying, with the demand source halted, loan recovery rates must plunge. And that's what's happened. Indeed, performing leveraged loans have recently oscillated between trading levels of 50% and 65%, well below historically realized recovery rates for defaulted corporate loans (70-80%)!

We've described this phenomenon in more detail in The Corporate Loan Conundrum. Also, The Elephant in the Room describes our astonishment that certain recovery rate estimates to this day remain unchanged.

The system-wide (systemic) mass-production of securitized tranches helped undermine the value of each in the crisis. The greater the supply, the lower the recovery when things don't work out, and the more correlated they become. And so the banks -- the lemmings -- acting in unison for the most part, created lemons (there are notable exceptions who are still around).

Separately, while my "lemons" are securitized tranches, Brad Setser took the initiative back in 2007 of Turning lemons into lemonade. His lemons are different: they are mortgages; his lemonade being securitized notes.

His article is thought-provoking for many reasons. Here are two: (1) it brings to the fore the economic principle of lemons (think second-hand cars), a principle which relates nicely to the government's purchasing of "toxic assets," and (2) it reminds us of the correlation question: increased correlation improves the quality of lower tranches. Why, then, in this market of increased correlation, are the lower tranches of securitized notes not being upgraded? Well, it's a lose-lose scenario for them: correlation, like volatility, increases precisely in the tough times, during which defaults are high. During these times the lower tranches die a quick or slow death in any event, depending on the deal. Superfluous then?
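For readers who want to see the correlation effect for themselves, here is a sketch using a one-factor Gaussian copula on a hypothetical pool (100 names, 10% default probability, 40% recovery) with a junior tranche absorbing the first 5% of losses. The parameters are illustrative, not calibrated to any deal.

```python
import math, random

def junior_expected_loss(rho, names=100, lgd=0.6, attach=0.05,
                         trials=10_000, seed=1):
    """Expected loss fraction of the 0-5% tranche, one-factor Gaussian copula."""
    random.seed(seed)
    threshold = -1.2816  # standard normal 10th percentile -> 10% default prob.
    total = 0.0
    for _ in range(trials):
        m = random.gauss(0, 1)  # the common (systemic) factor
        pool_loss = 0.0
        for _ in range(names):
            z = math.sqrt(rho) * m + math.sqrt(1 - rho) * random.gauss(0, 1)
            if z < threshold:
                pool_loss += lgd / names
        total += min(pool_loss, attach) / attach
    return total / trials

for rho in (0.0, 0.3, 0.6):  # junior expected loss falls as correlation rises
    print(rho, round(junior_expected_loss(rho), 3))
```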

Wednesday, February 18, 2009

Issuer vs. Investor-pay Model

In CDO land, it’s long been considered a conflict of interest for issuers to pay rating agencies to have their own bonds rated (i.e., the “issuer-pay” model).

Indeed, both parties have the same initial goal: to close the deal and get paid. If the deal doesn’t close, neither party gets paid. (If the deal closes, but you’re not the rating agency used due to too stringent criteria, you similarly won’t get paid.)

This “mis-alignment” of interest is thought to encourage rating agencies towards leniency on certain rating considerations that may otherwise hinder deals from being done.

Excerpt from (now Chief Credit Officer at S&P) Mark Adelson’s Sept. 2007 speech before the House’s Committee on Financial Services, on the role of credit rating agencies in the structured finance market:

…rating agencies that had tougher standards become invisible, and, once more, they don’t make any money, because the way you make money rating a deal is you rate the deal and charge the issuer. So it puts pressure on the rating agencies to loosen their standards…

The “quick fix” solution - an “investor-pay” model - proposes that rating agencies be compensated by the investor, and hence align their compensation structure with the party whose interests they’re trying to serve: after all, they are an “investor’s service.”

First, we’re not convinced that an investor-pay model removes this conflict of interest. Second, and importantly, we show below that investors may already be paying these rating agency fees.

Example:
A CDO’s flow-of-funds details how funded note issuance proceeds are used to purchase collateral and to cover the deal’s closing expenses. When issuance proceeds prove insufficient, one solution is to issue a loan that covers the remaining expenses. Payments on this loan will be made through the deal’s priority of payments “waterfall,” senior to any payments to noteholders.

Under the issuer-pay model, some investors pay rating agency fees over time.

[Table: issuer-pay - the rating agency fee loan amortizes through the waterfall]

In the above table, the loan covers the $800K owed to rating agencies. On each distribution date, this loan will be partially amortized through a payment of both interest and principal. This payment represents proceeds that would otherwise have made their way to investors. In low default scenarios, this affects equity noteholders’ excess spread. In higher default scenarios, this affects rated noteholders’ ultimate receipt of interest and/or principal.

Under the investor-pay model, all investors pay rating agency fees upfront.

[Table: investor-pay - fees funded upfront via a premium on the notes' purchase price]

In the above table, investors purchase the deal’s notes at an additional 20 bps. This additional cost covers the rating agency fees. The loan will be smaller and will therefore represent less of a burden to investors on each future distribution date.
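A quick sketch of the difference, on hypothetical terms: an $800K fee funded by a five-year loan at 6% under issuer-pay, versus the same fee paid upfront on a $400mm deal under investor-pay.

```python
fee, rate, years = 800_000, 0.06, 5

# Issuer-pay: a level annual payment amortizes the fee loan through the
# waterfall, diverting proceeds from noteholders on each distribution date.
annual_payment = fee * rate / (1 - (1 + rate) ** -years)
print(f"issuer-pay: {years} payments of ${annual_payment:,.0f} "
      f"(${annual_payment * years:,.0f} in total)")

# Investor-pay: all investors fund the fee at closing; on a $400mm deal,
# $800K is the "additional 20 bps" on the purchase price.
deal_size = 400_000_000
print(f"investor-pay: upfront premium of {fee / deal_size * 1e4:.0f} bps")
```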

At the end of the day, issuer-pay vs. investor-pay may just be the difference between having some investors pay fees over a few years and having all investors pay agency fees right now.

Keep watching this page for updates on where we’re going with this…

Tuesday, January 13, 2009

Static Measures for a Dynamic Environment

It is with a touch of disappointment that we read S&P's announcement yesterday that they're downgrading certain leveraged super senior (LSS) notes based on a change to their model.

What's particularly disappointing is that they're applying a (new) static measure for volatility -- a variable which we all know is certainly not static -- to long-term transactions. We call this a model "plug." Essentially, each time volatility changes, S&P could legitimately recalibrate its model and either upgrade or downgrade tranches (typically downgrade) on the basis of this change. From their release...


Today's rating actions reflect a recalibration of the model we use to rate these transactions. Specifically, we have increased the volatility parameter we use when simulating spread paths.

In future, we will use CDO Evaluator v4.1 and a volatility parameter of 60% to model all LSS transactions that have spread triggers. Before this recalibration we used a volatility parameter between 35% and 40%.


Given that the rating agencies typically hide behind the mantra that their ratings are long-term "opinions" (expected loss or default probability measures, as the case may be), it's odd that they would adapt their model to incorporate short-term "plugs."  Each time volatility changes going forward, can we expect a reciprocal adjustment to be made to rating levels?

The ratings are being given based on a static volatility measure, and updated at S&P's whim to recognize the inherent flaw in the model... and that's not to mention the elephant in the room: the static correlation assumption, applied across all environments.

Let's look at this from another viewpoint: when trading stock options using Black-Scholes, one could legitimately argue that you're trading volatility. The price is out there in the market - it's a given. The Black-Scholes formula allows you to solve for volatility, and you could then buy or sell the option based on your assessment of the current and future volatility of the stock's price. Volatility, in other words, is assumed to change over time; if it were assumed to be constant (as S&P is assuming), options wouldn't trade. Black-Scholes suffers from having a static volatility measure (sigma), but if you're going to have a model in which price is not known (from the market) and volatility is a huge unknown, it can't suffice to apply a static measure that then gets updated with time, at the agency's discretion, potentially adversely affecting your supposedly long-term ratings.
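To make the "trading volatility" point concrete, here is a minimal sketch of backing implied volatility out of a market price with Black-Scholes and bisection; the inputs are illustrative.

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=3.0):
    """Bisection works because the call price is monotone in sigma."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return mid

# The market hands you the price; the model hands back the volatility:
print(f"{implied_vol(10.45, S=100, K=100, T=1.0, r=0.05):.1%}")  # ~20%
```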

What do we recommend?
If rating consistency is important (and we believe it is, especially when pension funds and other endowments are substantial investors, and when institutions place regulatory capital -- and hedge funds post margin -- against assets based on their ratings), then it's suboptimal to apply static, long-term measurements for both correlation and volatility, as these are key inputs to the model. We're not suggesting it's easy to model each variable as path-dependent, or time-dependent; but if you can't accurately estimate the key parameters in your model, the only solution is to stop rating the instrument until you feel you can accurately estimate those variables.

Summary
Volatility, like correlation, is best understood as a measure that's dependent on time. Having said that, if S&P legitimately believes that the new (60%) measure is forever good, we would like to see them admit that the original measure was flawed (and show us how and why they reached the incorrect level), and reimburse all note issuers who paid and continue to pay for these flawed ratings. Ah, the responsibilities that come with collecting fees for your assumptions and modeling capabilities!