––– a weblog focusing on fixed income financial markets, and disconnects within them
Saturday, February 8, 2020
Leveraged Loan CLOs and Rating Agencies - Policy Solutions
Wednesday, March 15, 2017
Can Deregulation and Open Data Solve the Credit Ratings Problem?
Tuesday, February 3, 2015
S&P Settles: Now How About that US Bond Rating?
Thursday, June 20, 2013
AAAs still Junk -- in 2013!
Moody's changed its opinion on a number of residential mortgage-backed securities (RMBS), with some 13 of them downgraded from Aaa all the way to Caa1.
The explanation provided: "Today's rating action concludes the review actions announced in March 2013 relating to the existence of errors in the Structured Finance Workstation (SFW) cash flow models used in rating these transactions. The rating action also reflects recent performance of the underlying pools and Moody's updated expected losses on the pools."
In short: the model was wrong - oops! ($1.5bn - yes, that's billion - of securities was affected by the announced ratings change, with the vast majority being downgraded.)
Okay, so everybody gets it wrong some time or other. What's the big deal? The answer is there's no big deal. You probably won't hear a peep about this - nothing in the papers. Life will go on. The collection of annual monitoring fees on the deal will continue unabated, and no previously collected, undeserved fees will be returned. Some investors may be a little annoyed at the sudden, shock movement, but so what, right? They should have modeled this anyway, they might be told, and should not be relying on the rating. (But why, then, are they paying, out of the deal's proceeds, for rating agencies to monitor the deals' ratings?)
What is almost interesting (again, no big deal) is that these erroneously modeled deals were rated between 2004 and 2007 - roughly six or more years ago. And for the most part, if not always, their ratings have been affirmed or revisited at several junctures since the initial "mis-modeled" rating was provided. How does that happen without the model being validated?
A little more interesting is that in many or most cases, Fitch and S&P had already downgraded these same securities to CCC or even C levels, years ago! So the warning was out there. One rating agency says triple A; the other(s) have it deep in "junk" territory. Worth checking the model? Sadly not - it's probably not "worth" checking. This, finally, is our point: absent a reputational model to differentiate among the players in the ratings oligopoly, the existing raters have no incentive to check their work. There's no "payment" for checking, or for being accurate.
Rather, it pays to leave the skeletons buried for as long as possible.
---------------
For more on rating agencies disagreeing on credit ratings by wide differentials, click here.
For more on model risk or model error, click here.
---------------
Snapshot of certain affected securities (data from Bloomberg)
Monday, May 13, 2013
An Open Source Alternative to No Bid Contracts
So federal regulators, who can no longer use credit ratings for evaluations of the municipal bond holdings of the commercial banks that they regulate, just gave a no bid contract to MMA, a relatively small firm with four principals. In essence the OCC will be substituting the opinions of MMA for those of the credit ratings agencies. Federally chartered banks held $363 billion of municipal securities as of 4th quarter 2012 according to the Federal Reserve ... The federal bank regulator will essentially be substituting the work of credit rating agencies, which issue over 1 million individual municipal ratings, with “research” from a small private shop. Is this wise? I think restricting themselves to such limited information is short-sighted given that muniland has over 80,000 issuers with $3.7 trillion of municipal debt outstanding. High quality credit analysis for even the debt of 50 states requires a shop bigger than MMA. Let alone all the other issuers. … Of course all the folks at MMA are nice, informed market professionals. But this process of hiring independent municipal research is ridiculous. No bid contracts have no place in our new, more transparent, post Dodd-Frank regulatory framework. The municipal bond market is facing its toughest challenges since the Great Depression and this BPD/OCC process needs more public input and openness.
Saturday, January 26, 2013
The Weak Underbelly of Capitalism
An earlier version of this post appeared on The Progress Report.
Thursday, December 27, 2012
Sovereign Debt Ratings - How High is High?
An Example
In placing the US rating on negative watch, Dagong cited five factors:
1. The US is at an impasse in budget negotiations
2. With no plan for maintaining solvency, the US is monetising its debt
3. US government debt is growing much faster than fiscal revenue
4. The fiscal cliff could lead to a US recession in 2013
5. Frequent emergencies such as the fiscal cliff and debt ceiling deadlines add to the risks
Friday, December 14, 2012
Monitoring Ratings Monitoring
An excerpt from the Japanese FSA's Administrative Release reads as follows (emphasis added):
"[S&P failed] to properly confirm important information that affects Synthetic CDO ("SCDO") credit ratings
...
[S&P] did not properly take stock of the cumulative loss amount pertaining to the reference obligations that affects the credit rating of SCDOs. The Company did not take measures such as confirming with arrangers of SCDOs whether there had been any credit events relating to the reference obligations.
Therefore, some cases were identified where incorrect credit ratings had been assigned to certain SCDO products for a significant period of time until just before the withdrawal of the credit ratings due to the redemption of those SCDO products.
...
The Company has continued publishing the credit ratings of the relevant SCDOs without confirming whether there were any credit events.
...
The Company incorrectly maintained until October 2010 a credit rating of a SCDO product that should have been downgraded in January and further in February of that year. This is due to input to the Company’s system of incorrect notional amount data in relation to the reference obligations in the monitoring process of the SCDO credit rating.
The Company has not implemented a verification process whereby a second person checks the accuracy of the data input."
Wednesday, April 11, 2012
Credit Rating Agency Models and Open Source
There are at least three ways a model's results can be wrong: if the model's code itself doesn't function as intended; if the known inputs are incorrectly entered; and if the assumptions are misapplied. In cases as important as the evaluation of US sovereign debt, we think rating agencies and the investing public would be better off if the relevant models were publicly available. Some may argue that the inputs to the models are proprietary or that they reflect qualitative assumptions valuable to the ratings agencies – i.e., that they are a "secret sauce." But, even if rating agencies want to keep their assumptions proprietary, making the models themselves available would decrease the likelihood of rating errors arising from software defects.
Keeping one's internal processes internal is the traditional way. Manufacturers assume that consumers don't want to see how the sausages are made. In the internet era, it is now much easier to produce the intellectual equivalent of sausages in public – and, as it happens, many consumers are interested in the production process and even want to get involved. Wikipedia provides an excellent example of the open, collaborative production of intellectual content: articles are edited in public and the results are often subject to dispute. Writers get almost instantaneous peer review and the outcome is often rapid iteration moving toward the truth. In their books, Wikinomics and Macrowikinomics, Don Tapscott and Anthony Williams suggest that Wikipedia's mass collaboration style is the wave of the future for many industries – including computer software.
Many rating methodologies, especially in the area of structured finance, rely upon computer software. At the height of the last cycle, tools that implemented rating methodologies, such as Moody's CDOROM™, were popular with both issuers and investors wondering how agencies might look at a given transaction. While the algorithms used by these programs are often well documented, the computer source code is usually not released into the public domain.
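For readers curious what such tools actually compute, here is a minimal sketch – our own illustration, not Moody's code – of the one-factor Gaussian copula Monte Carlo that underlies many structured finance rating methodologies. Every parameter (pool size, default probability, correlation, tranche attachment points) is invented for the example:

```python
import numpy as np
from scipy.stats import norm

def tranche_expected_loss(n_assets=100, pd=0.02, rho=0.30,
                          attach=0.03, detach=0.07, recovery=0.40,
                          n_trials=100_000, seed=0):
    """One-factor Gaussian copula Monte Carlo for an equally weighted pool.
    Estimates the expected loss on a tranche; all parameters illustrative."""
    rng = np.random.default_rng(seed)
    barrier = norm.ppf(pd)                    # latent-variable default threshold
    z = rng.standard_normal(n_trials)         # common (systematic) factor
    eps = rng.standard_normal((n_trials, n_assets))  # idiosyncratic factors
    latent = np.sqrt(rho) * z[:, None] + np.sqrt(1 - rho) * eps
    pool_loss = (latent < barrier).mean(axis=1) * (1 - recovery)
    # The tranche absorbs pool losses between its attachment and detachment points
    tranche_loss = np.clip(pool_loss - attach, 0.0, detach - attach) / (detach - attach)
    return tranche_loss.mean()

print(f"Expected tranche loss: {tranche_expected_loss():.4%}")
```

A bug in, say, the barrier calculation would be visible to every reader of code like this; buried inside a proprietary tool, it can survive for years.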
Over the last two decades, the software industry has seen a growing trend toward open source technology, in which all of a system's underlying program code is made public. The best-known example of an open source system is Linux, a computer operating system used by most servers on the internet. Other examples of popular open source programs include Mozilla's Firefox web browser, the WordPress content management system and the MySQL database.
In financial services, the QuantLib project has created a comprehensive open source framework for quantitative finance. The library, which has been available for more than 11 years, includes a wide array of engines for pricing options and other derivatives.
Open source allows users to see how programs work and, with the help of developers, to fully customize software to meet their specific needs. Open source communities, such as those hosted on GitHub and SourceForge, enable users and programmers from all over the world to participate in the process of debugging and enhancing the software.
So how about credit rating methodologies? Open source seems especially appropriate for rating models. Rating agencies realize relatively little revenue from selling rating models; the models are more likely used to facilitate revenue generation through issuer-paid ratings.
Open source enables a larger community to identify and fix bugs. If rating model source code were in the public domain, investors and issuers would have a greater chance to spot issues. Rating agencies would be prevented from covering up modeling errors by surreptitiously changing their methodologies. In 2008, The Financial Times reported that Moody's errantly awarded Aaa credit ratings to a number of Constant Proportion Debt Obligations (CPDOs) due to a software glitch. The error was fixed, but the incorrectly rated securities were not immediately downgraded, according to the FT report. Had the rating software been open source, it would have been much more difficult to conceal this error – and it would have offered the possibility of a positive feedback loop: an investor or other interested party could have found and fixed the bug on Moody's behalf.
Not only do open source rating models promote quality, they may also reduce litigation. The SEC issued Moody's a Wells Notice in respect of the above-mentioned CPDO issue, and may well have brought suit. (A Wells Notice is a notification of intent to recommend that the US government pursue enforcement proceedings, and is sent by regulators to a company or a person.) Investors have brought suit against the rating agencies to the extent they felt the ratings were inappropriate, for model-related errors or otherwise. By unveiling the black box, the rating agencies would be taking an active approach to buffering against litigation, and would enjoy the material defense that, "yes, we may have erred, but you were afforded the opportunity to catch our error – and didn't."
Unlike the CPDO model employed by Moody’s, the S&P US sovereign "model" likely took the form of a simple spreadsheet containing adjusted forecasts from the Congressional Budget Office. In contrast to the structured and corporate sectors, there are relatively few computer models for estimating sovereign and municipal default probabilities. While little modeling software is available for this sector, accurate modeling of government credit can be seen as a public good. Bond investors, policy makers and citizens themselves could all benefit from more systematic analysis of government solvency.
Open source communities are a private response to public goods problems: individuals collaborate to provide tools that might otherwise appear only in the realm of licensed software. Thus open source government default models, populated with crowd-sourced data, may be the best way to fill an apparent gap in the bond analytics market.
On May 2nd, PF2 will contribute an open source Public Sector Credit Framework, which is aimed at filling this analytical gap, while demonstrating how future rating models can be distributed and improved in an iterative, transparent manner. If you wish to participate in beta testing or learn more about this technology please contact us at info@pf2se.com, or call +1 212-797-0215.
--------------------------------------------
Contributed by PF2 consultant Marc Joffe. Marc previously researched and co-authored Kroll Bond Rating Agency’s Municipal Default Study. This posting is the second in a series of posts leading up to May 2nd. The prior piece can be accessed by clicking here.
Wednesday, April 4, 2012
Multiple Rating Scales: When A Isn’t A
Philosophers from Aristotle to Ayn Rand have contended that "A is A." Apparently none of these thinkers worked at a credit rating agency - where an "A" in one department may actually mean AA or even BBB in another. While the uninitiated might naively assume that various types of bonds carrying the same rating have the same level of credit risk, history shows otherwise.
During the credit crisis, AAA RMBS and ABS CDO tranches experienced far higher default rates than similarly rated corporate and government securities. Less well known is the fact that municipal bonds have for decades experienced substantially lower default rates than identically rated corporate securities – and that the rating agencies never assumed that a single A-rated issuer ought to carry the same credit risk in both sectors. This discrepancy was noted in Fitch’s 1999 municipal bond study and confirmed by Moody’s executive Laura Levenstein in 2008 Congressional testimony on the topic. Later in 2008, the Connecticut attorney general sued the three major rating agencies for under-rating municipal bond issues relative to other asset categories. (The suit was recently settled for $900,000 in credits for future rating services, but without any admission of responsibility). Last year, three economists – Cornaggia, Cornaggia and Hund – reported that government credit ratings were harsher than those assigned to corporates, which, in turn, were more severe than those assigned to structured finance issues.
One might ask: why is it important for ratings in different credit classes to carry the same expectation in terms of either default probability or expected loss? Perhaps we should accept the argument that ratings are intended simply to provide a relative measure of risk among bonds within a given asset class.
There are at least two problems with this approach. First, it is unnecessarily confusing to the majority of the population that is unaware of technical distinctions in the ratings world. Second, it creates counterproductive arbitrage opportunities.
If an insurer is rated AAA on a more lenient scale than insurable entities in another asset class, the insurer can profitably "sell" its AAA rating to those entities without creating any real value in the process.
Municipal bond insurance is a great example. Monoline bond insurers like AAA-rated Ambac, FGIC and MBIA insured bonds issued by states, cities, counties and other municipal issuers for three decades prior to the 2008 financial crisis. In some cases, the entities paying for insurance were of a stronger credit quality than the insurers. As it happened, the insurers often failed while the issuers survived, leaving one to wonder why the insurance was necessary.
During this period, general obligation bonds had very low overall default rates. According to Kroll Bond Rating Agency’s Municipal Default Study, estimated annual municipal bond default rates by issuer count have been consistently below 0.4% since 1941. Similar findings for the period 1970-2010 are reported in The Bloomberg Visual Guide to Municipal Bonds by Robert Doty. This 0.4% annual rate applies to all municipal debt issues, including unrated issues and revenue bonds. The annual default rate for rated, general obligation bonds is less than 0.1%.
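For scale, a quick back-of-the-envelope conversion of those annual rates into cumulative default probabilities (assuming, purely for illustration, a constant annual rate):

```python
def cumulative_default_prob(annual_rate, years):
    """Cumulative default probability implied by a constant annual rate
    (an illustrative simplification, ignoring year-to-year dependence)."""
    return 1 - (1 - annual_rate) ** years

for label, rate in [("all munis", 0.004), ("rated GO bonds", 0.001)]:
    print(f"{label}: {rate:.1%}/yr -> {cumulative_default_prob(rate, 10):.2%} over 10 years")
```

Even at the 0.4% upper bound, the implied 10-year cumulative default probability is under 4%; for rated general obligation bonds it is roughly 1%.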
Given this long period of excellent performance, one might reasonably expect most states and other large municipal issuers with diversified revenue bases to be rated AAA. No state has defaulted on its general obligation issues since 1933, and most have relatively low debt burdens when compared to their tax base. Despite these facts, the modal rating for states is typically AA/Aa, with several in the A range. (This remains the case despite certain rating agencies' claims that they have recently scaled up their municipal bond ratings to place them on a par with corporate ratings.)
The depressed ratings created an opportunity for municipal bond insurers to sell policies to states that did not really need them. For example, the State of California paid $102 million for municipal bond insurance between 2003 and 2007. Negative publicity notwithstanding, the facts are that single A rated California has a Debt to Gross State Product ratio of 5% (in contrast to a 70% Debt/GDP ratio for the federal government) and that interest costs represent less than 5% of the state’s overall expenditures. While pension costs are a concern, they are unlikely to consume more than 12.5% of the state’s budget over the long term – not nearly enough to crowd out debt service.
California provides but one example. The Connecticut lawsuit mentioned above also cited unnecessary bond insurance payments on the part of cities, towns, school districts, and sewer and water districts.
Meanwhile, AAA-rated municipal bond insurers carried substantial risks, evident to many not working at rating agencies. For example, Bill Ackman found in 2002 that MBIA was 139 times leveraged. As reported in Christine Richard’s book Confidence Game, Ackman repeatedly shared his research with rating agencies – to no avail.
This imbalance between the ratings of risky bond insurers and those of relatively safe municipal issuers essentially created the monoline insurance business – a business that largely disappeared with the mass bankruptcy and downgrading of insurers during the 2008 crisis.
Inconsistent ratings across asset classes thus do have real world costs. In the US, taxpayers across the country paid billions of dollars over three decades for unneeded bond insurance. Individual municipal bond investors, often directed by their advisors to focus on AAA securities only, missed opportunities to invest in tens of thousands of bonds that could credibly have carried AAA ratings, but whose ratings were depressed by the raters' inopportune choice of scale.
We believe that one reason for the persistent imbalance between municipal, corporate and structured ratings is the dearth of analytics directed at government securities. Rating agencies and analytic firms offer models (and attendant data sets) that estimate default probabilities and expected losses for corporate and structured bonds. Such tools are relatively rare for government bonds. Consequently, the market lacks independent, quantitatively-based analytics that compute credit risks for these instruments. This lack of alternative, rigorously researched opinions allows the mis-rating of US municipal bonds to persist, with no corrective feedback loop to alleviate it.
Next month, PF2 will do its part to address this gap in the marketplace with the release of a free, open source Public Sector Credit Framework, designed to enable users to estimate government default probabilities through the use of a multi-period budget simulation. The framework allows a wide range of parameterizations, so you may find it useful even if you disagree with the characterization of municipal bond risk offered above. If you wish to participate in beta testing or learn more about this technology please contact us at info@pf2se.com, or call +1 212-797-0215.
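Ahead of that release, here is a toy sketch – our invention for this post, not the framework's actual code – of what a multi-period budget simulation can look like: simulate revenue and spending paths, finance deficits with new debt, and count the trials in which the interest burden breaches a default threshold.

```python
import numpy as np

def simulated_default_prob(years=30, n_trials=20_000,
                           revenue=100.0, spending=95.0, debt=80.0,
                           rev_growth=0.04, rev_vol=0.06, spend_growth=0.045,
                           rate=0.05, default_ratio=0.30, seed=1):
    """Toy multi-period budget simulation. A trial 'defaults' when interest
    expense exceeds default_ratio of revenue. All parameters are invented."""
    rng = np.random.default_rng(seed)
    defaults = 0
    for _ in range(n_trials):
        r, s, d = revenue, spending, debt
        for _ in range(years):
            r *= 1 + rng.normal(rev_growth, rev_vol)  # stochastic revenue growth
            s *= 1 + spend_growth                     # deterministic spending growth
            d += max(s + rate * d - r, 0.0)           # deficits financed with new debt
            if rate * d > default_ratio * r:          # interest/revenue threshold breached
                defaults += 1
                break
    return defaults / n_trials

print(f"Simulated default probability: {simulated_default_prob():.2%}")
```

The point is less the particular thresholds than the transparency: every assumption sits in plain view, where it can be disputed and re-parameterized.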
--------------------------------------------
Contributed by PF2 consultant Marc Joffe. Marc previously researched and co-authored Kroll Bond Rating Agency’s Municipal Default Study.
Thursday, January 26, 2012
Illinois Attorney General Sues S&P – Initial Thoughts
Yesterday's filing of the complaint against S&P (MHP) centers, in essence, on the allegation of false advertising. Stepping into this issue for a moment, one of the key defenses offered by the raters is that their ratings are protected under the First Amendment rights to express an opinion (their "speech"). But as law professor Eugene Volokh opines in his letter to the House Committee (May 2009), commercial advertising – "speech aimed at proposing a commercial transaction" – is much less constitutionally protected than other kinds of speech.
In this case, the AG is not really focusing on whether the ratings were wrong so much as saying that S&P advertised that it was following a certain code to ensure the appropriate levels of independence and integrity were being brought to the ratings process.
A former SEC enforcement official, Pat Huddleston, once explained that "[when] I say the [financial] industry is dirty, I don't mean to imply everyone in the industry is dirty," … "[only] that the industry typically promises something it has no intention of delivering, which is a client-first way of operating." This is essentially what the complaint argues: that S&P “misrepresented its objectivity” while offering a service that was “materially different from what it purported to provide to the marketplace.”
This goes back, really, to the key reform measure Mark proposed before the Senate in 2009 – that rating agencies would do well to separate themselves from commercial interests, by building a formidable barrier around the ratings process.
First, put a “fire wall” around ratings analysis. The agencies have already separated their rating and non-rating businesses. This is fine but not enough. The agencies must also separate the rating business from rating analysis. Investors need to believe that rating analysis generates a pure opinion about credit quality, not one even potentially influenced by business goals (like building market share). Even if business goals have never corrupted a single rating, the potential for corruption demands a complete separation of rating analysis from bottom-line analysis. Investors should see that rating analysis is virtually barricaded into an “ivory tower,” and kept safe from interference by any agenda other than getting the answer right. The best reform proposal must exclude business managers from involvement in any aspect of rating analysis and, critically also, from any role in decisions about analyst pay, performance and promotions.

Two other elements jump out immediately from the complaint:
First, the complaint specifically argues that the rating agency “misrepresented the factors it considered when evaluating structured finance securities.” Next, the complaint tries to tie S&P’s actions to its publicly-advertised code of conduct, arguing that its actions were inconsistent with the advertised code.
In respect of actions being inconsistent with the code, certain of these arguments are commonplace, such as the contention that the rating agencies did not allocate adequate personnel, contrary to what's advertised in the code. This of course becomes a contentious issue – you can see S&P coming back with copious evidence of situations in which they did “allocate adequate personnel and financial resources.” But the complaint homes in on the factors considered in producing a rating, and it focuses on two parts of the code:
Section 2.1 of S&P’s Code states: “[S&P] shall not forbear or refrain from taking a Rating Action, if appropriate, based on the potential effect (economic, political, or otherwise) of the Rating Action on [S&P], an issuer, an investor, or other market participant.”

and…

Section 2.1 of S&P’s Code states: “The determination of a rating by a rating committee shall be based only on factors known to the rating committee that are believed by it to be relevant to the credit analysis.”

This brings back to mind, disturbingly, a recent New York Times article (Ratings Firms Misread Signs of Greek Woes) which focuses on the deliberations within Moody’s (MCO) and their concerns about the deeper repercussions of downgrading Greece – rather than the specifics of credit analysis:

“The timing and size of subsequent downgrades depended on which position would dominate in rating committees — those that thought the situation had gotten out of control, and that sharp downgrades were necessary, versus those that thought that not helping Greece or assisting it in a way that would damage confidence would be suicidal for a financially interconnected area such as the euro zone,” Mr. Cailleteau wrote in an e-mail.
The question, then, is whether rating committees were focused on credit analysis, or whether other concerns were at play, aside even from typical business interests. The concerns for rating agencies, from a legal perspective, can become quite real when the debate centers not on ratings accuracy, but on whether the rating accurately reflected their then-current publicly available methodology. There may be substantial risks, therefore, in delaying a downgrade of a systemically important sovereign or institution (such as a too-big-to-fail bank or a key insurance company) if such a downgrade is appropriate per the financial condition of the company or sovereign, or in providing favorable treatment to certain companies or sovereigns based on their relative level of interconnectedness.
The allegation of misrepresenting factors considered in their analysis opens another can of worms for rating agencies, as they'll subsequently need to be increasingly focused on disclosing the sources of the information relied upon. There's substantial concern, to the extent they're relying on the issuing entity (in cases in which the issuing entity is itself the paying customer), that such reliance becomes a disclosure issue to the extent the investor may otherwise have assumed the rating agency was independently verifying such information. This was a frequent problem in the world of structured finance CDOs such as those described in the AG's complaint.
Last, but not least, the complaint focuses on the effectiveness of ratings surveillance. This is a topic of importance to us, as we feel that proper surveillance, alone, may have substantially diminished the magnitude of the crisis. At the very least, certain securitizations that ultimately failed may not have been executed had underlying ratings been appropriately monitored, and several resecuritizations may have become impossible, limiting the proliferation of so-called toxic assets. See for example: Barriers to Adequate Ratings Surveillance
That’s all for now. There’s a lot more to this complaint, so we suggest you check it out here.
Monday, October 10, 2011
Will S&P's Wells Notice Change Their Behavior?
The various forms of media speculation have focused on Delphinus CDO, a crisis-era structure backed by subprime mortgage bonds, as the focal point of the SEC’s investigation. This transaction was highlighted by Senator Levin’s PSI report as a “striking example” of how banks and ratings firms branded mortgage-linked products safe even as the housing market worsened in 2007.
The implications of S&P’s internal emails, made public through the Senate’s investigative process, are that the rater may have known, at the time it was issuing its ratings on Delphinus, that the ratings being provided were inconsistent with its then-current methodology. In essence, S&P’s model and ratings were contingent on certain preliminary information made available to them by the underwriting bank. When that information changed at a late stage of the deal’s construction, S&P’s model was no longer able to produce results consistent with the desired rating.
That S&P nevertheless issued the originally-requested rating, despite its being incompatible with the new information, opens a line of questioning into whether S&P’s ultimate rating adequately reflected its own analysis. In issuing the Wells Notice, the SEC may at this juncture reasonably suspect S&P of committing a Rule 10b-5 violation.
The SEC would be pressed to show that S&P knew, at the time it was providing the rating, that the rating was ill-deserved (and thus misleading to investors). If the SEC is able to show that S&P intentionally provided a misleading rating, it would distinguish this case from several of the other rating agency complaints that have been dismissed.
Importantly, the SEC’s case would almost certainly survive the rating agency’s preferred defense motion that invokes their First Amendment rights to express an opinion. Floyd Abrams, S&P’s external counsel on First Amendment issues, himself confesses in his September 2009 testimony before the House that “[the] First Amendment provides no defense against sufficiently pled allegations that a rating agency intentionally misled or defrauded investors,” … “nor does it protect a rating agency if it issues a rating that does not reflect its actual opinion.” (emphasis added)
Aside from the abovementioned Wells Notice, the SEC is showing a keen focus on each rater's application of its own public methodology: its annual rating agency examination report, released late last week, cites as an "essential finding" that "[one] of the larger NRSROs reported that it had failed to follow its methodology for rating certain asset-backed securities."
The result of the Delphinus investigation notwithstanding, the mere threat of legal action alters a rater’s approach to issuing ratings and maintaining current ratings. We have already seen the fruits borne of pressure instilled by the new regulatory landscape. In late July of this year, S&P stunned the market by pulling away from rating a commercial mortgage-backed securities (CMBS) transaction, led by Goldman Sachs and Citigroup, on the evening prior to the deal’s closing. S&P reportedly needed to adjust its model to reflect “multiple technical changes,” ultimately leading to the deal being shelved.
The manner in which S&P dealt with a controversial and costly methodological change suggests a new-found sensitivity towards violating the 10b-5 legal standard. Such a drastic action, bringing significant embarrassment to S&P in addition to the loss of market share, would have been inconceivable in the prior, revenue-centric, competitive landscape.
With raters seeking at all costs to avoid a 10b-5 violation, we foresee them increasingly turning away bankers who pressure them with “last-minute” demands. Bankers, risking their deals falling through, will be driven to accommodate the rater’s requirements in providing their supporting data in a more timely fashion, well in advance of a deal’s closing. Perhaps we’ll have fewer deals done as a result; but perhaps those deals will be safer, supported by less-hurried analyses.
As we have seen before, it continues to be legal risk, and not reputational risk, that has encouraged oligopolistic rating agencies to re-focus their attentions on the quality of the product being provided. With Damocles' sword swinging overhead, we can only hope for more objective ratings going forward – and fewer stale ones.
Thursday, May 19, 2011
A Telling Tale of Two Tables
There are certain difficulties we all know about – having to rely on the ratings data being given to you by the parties whose performance you're measuring. So there is naturally the potential for the sample to be biased or slanted, and that's very difficult to uncover. (We've discussed this hurdle at great length here.)
The next issue we've written about is the inability to separate the defaulting assets from those that didn't default. In our regulatory submission, we called for transparency as to what happened to each security BEFORE its rating was withdrawn (see Transparency of Ratings Performance).
Let's have a look at a stunning example today that brings together a few of these challenges, and leaves one with a few unanswered questions as to the meaningfulness of ratings performance, as it's currently displayed, and the incentives rating agencies have to update their own ratings.
Here’s a Bloomberg screenshot for the rating actions on deal Stack 2006-1, tranche P.

Starting from the bottom up, it seems Moody's first rated this bond in 2006 whereas Standard & Poor's first rated it in 2007. If we try to verify this rating history on S&P's website, we come to realize how difficult verification can be:

So let’s suppose everything about the Bloomberg chart is accurate.
As verified on their website, Moody’s rated this bond Aaa in August ’06 and acted no further on this bond until it withdrew the rating in June 2010. (They don’t note whether the bond paid off in full, or whether it defaulted.)
S&P, meanwhile, shows its original AAA rating of 2007 being downgraded in 2008 (to BBB- and then to B-) and in 2009 to CC and again in 2010 to D, which means it defaulted (according to S&P).
For Moody’s, the first year for which the bond remains in the sample for a full year is 2007; thus, it wouldn’t be included in 2006 performance data. For S&P, the first complete year is 2008.
So if we consider how Moody’s would demonstrate its ratings performance on this bond, it would say:
Year 2007: Aaa remains Aaa
Year 2008: Aaa remains Aaa
Year 2009: Aaa remains Aaa
Year 2010: Aaa is withdrawn (WR)
No downgrades took place, according to Moody’s … while at the same time S&P shows it as having defaulted:
Year 2008: AAA downgraded to B-
Year 2009: B- downgraded to CC
Year 2010: CC downgraded to D
Here’s what their respective performance would look like, if one were to apply their procedures (at least as far as we understand them):

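To make the mechanics concrete, here is a minimal sketch of how each agency's performance statistics could be tabulated from the two histories above. The cohort rules are our simplified reading of the agencies' published procedures, not their actual code:

```python
# Year-end ratings for Stack 2006-1, tranche P, per the histories above.
# "WR" = rating withdrawn. Simplified cohort rule: a bond is counted for
# year Y only if it was rated at the start of Y.
moodys = {2007: "Aaa", 2008: "Aaa", 2009: "Aaa", 2010: "WR"}
snp    = {2008: "B-",  2009: "CC",  2010: "D"}

def summarize(history, start):
    for year in sorted(history):
        end = history[year]
        action = ("unchanged"  if end == start else
                  "withdrawn"  if end == "WR"  else
                  "defaulted"  if end == "D"   else
                  "downgraded")
        print(f"  {year}: {start} -> {end} ({action})")
        if end != "WR":
            start = end

print("Moody's:"); summarize(moodys, "Aaa")
print("S&P:");     summarize(snp, "AAA")
```

Run it and Moody's history registers no downgrade and no default – the bond simply exits the sample as "WR" – while S&P's history shows the same bond marching straight to D.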
Friday, November 19, 2010
Meredith Whitney and the Future of Credit Rating Agencies
Rather, the regulatory proposals in both the U.S. and Europe have been quite severe on the rating agencies, demanding improved transparency while increasing the potential for legal liability; and the very reason that players like Kroll and Meredith Whitney are entering the rating environment is that the established agencies are particularly vulnerable to competition.
To be fair, it is always a challenge to compete against a well-established company. But Kroll and Whitney are seizing the opportunity while the raters are weakened by poor ratings performance, and distracted by the significant increase in both the “volume and cost of defending such [related] litigation.” - from Moody’s (MCO) 10Q
Ratings, as we all know, are interwoven throughout our financial framework. It takes time to remove references to them and there remains a modicum of inertia among market participants in moving away from the Big Three or away from credit raters in general. But there has been a tangible change in momentum, with raters like Canada’s DBRS having already secured a large (majority) share of the U.S. residential mortgage-backed securities (RMBS) market — hardly a sign of “difficulty competing.” (WSJ May 2010)
Rather than being anxious about seeing immediate changes despite the lengthy history, and deeply embedded nature, of credit ratings, we urge the media instead to applaud the substantial regulatory improvements that have been made in respect of reducing reliance on ratings and heightening the integrity of the ratings process. We caution, however, that a material increase in the number of rating agencies leads to greater competition but not to higher quality ratings. More accurately, the readier the supply of ratings, the higher the inflation of the ratings provided.
Notes:
(1) Aside from the 11 SEC-approved NRSROs, there are already, according to our calculations, 108 other debt rating companies worldwide, 18 of which are affiliated in some way or other with one of the NRSROs.
(2) To visit submissions to the SEC, including our submission, on the credit rating reform proposals put forth in the Dodd-Frank Act, click here.
Monday, August 9, 2010
FDIC Esoterica – It's Capital
"[among] the options being discussed is a greater use of credit spreads, having supervisors develop their own risk metrics and a reliance on existing internal models...The other option likely to be mulled is the use of service providers and rating agencies outside of the Big Four (Moody's, S&P, Fitch and DBRS)."

We have contemplated some of the elements of rating agency operational due diligence, here, but ultimately this process would require an understanding of the varying levels of expertise within each rating agency, its level of accuracy and stability, and the various limitations of its model. (For example, this morning's WSJ reports on Morningstar research that concludes, oddly, that "using low fees as a guide [to a mutual fund's future success] would give investors better results than even Morningstar's own star-rating system...[because while] the stars system has typically guided investors to better results, it isn't as effective in predicting future returns at times of big market swings.")
The movement towards relying fully on credit default swaps, as contemplated above, seems to us a distant longing: first, CDS liquidity (not to mention maturity) isn't always comparable to that of the securities being referenced, resulting in an imperfect spread-to-risk mapping; next, we have yet to witness substantial research evidencing the predictive content of CDS spreads on unrated securities; last, it remains questionable to what extent CDS spreads directly mimic the underlying's fundamental credit risk. (See Credit Ratings vs. Credit Default Swaps for more on this.)
There are at least three reasons why we would welcome the FDIC's direct participation in analyzing the securities internally:
1 - If the FDIC decides to create its own models, they'll be better equipped to appreciate the risks inherent in the securities being purchased by the banks they're to an extent insuring. They'll be less reliant not only on credit rating agencies, but also on broker-dealers' and third-party providers' differing opinions.
2 - With investor sophistication levels having (unfortunately) softened from a legal perspective, it augurs well to have a regulator play an active role in overseeing the investments being allowed by its underlying members. They would be better positioned to push back on riskier investment activities, or to encourage improved hedging or risk mitigation techniques on the bank or system level.
3 - Aside from the obvious advantages of treating all banks equally, the other benefit of having regulators play a more active role in examining capital adequacy reserves is the informational advantage they bring to the table:
For a real-life example, consider the $60bn world of trust preferred securities CDOs (or TruPS CDOs), whose performance depends on the performance of their underlying banks. Who would be better positioned than the FDIC to have an accurate handle on future bank defaults? With a model to support them, the FDIC can estimate the effect of default across all banks they consider to be poorly positioned or undercapitalized. As it happens, in somewhat circular a fashion, banks (like Zions Bancorp) often also hold TruPS CDOs, which are themselves supported by other banks. Thus, the FDIC will be able to model the exact public utility of allowing a bank to prepay on its preferred securities at a discount, as they'll be able to measure the overall effect of that bank's prepayment on all TruPS CDOs holding that bank's preferreds; and they'll know how much every other bank holds of those TruPS CDOs which hold the original bank's preferreds. Ah, the beauty of information!
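A stylized sketch of that circularity, with all names and holdings invented: given complete holdings data, a regulator could trace one bank's discounted prepayment through the CDOs that hold its preferreds, and on to the other banks that hold those CDOs' notes.

```python
# Invented holdings, purely for illustration.
# CDO -> {issuing bank: face amount of that bank's preferreds in the pool}
cdo_collateral = {
    "TruPS_CDO_1": {"BankA": 50, "BankB": 30, "BankC": 20},
    "TruPS_CDO_2": {"BankB": 40, "BankC": 60},
}
# Bank -> {CDO: face amount of CDO notes held}
bank_holdings = {
    "BankA": {"TruPS_CDO_2": 10},
    "BankC": {"TruPS_CDO_1": 15},
}

def trace_prepayment(bank, discount):
    """Collateral each CDO loses if `bank` prepays its preferreds at a
    discount, and which other banks hold notes of the affected CDOs."""
    for cdo, pool in cdo_collateral.items():
        face = pool.get(bank, 0)
        if face:
            print(f"{cdo}: writes down {face * discount:.1f} of {sum(pool.values())} collateral")
            for holder, notes in bank_holdings.items():
                if cdo in notes:
                    print(f"  knock-on: {holder} holds {notes[cdo]} of {cdo} notes")

trace_prepayment("BankB", discount=0.35)
```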
As an alternative, banks holding TruPS CDOs have been subjected to abysmal ratings performance, which has come in tremendous waves of downgrades by rating agencies that in some cases do not rate the underlying banks and in other cases are guessing at how the models will perform.
From a recent American Banker article entitled "TruPS Leave Buyers in CDO Limbo:"
According to Fitch's Derek Miller, the agency is "in the process of reviewing all our assumptions" on trust-preferred CDO defaults and deferrals. For recent rating actions, he said, Fitch did not do precise cash-flow modeling, because it felt that the nuances of the capital structure have been drowned out by the sheer volume of defaults and deferrals that determine payouts.

Knowing just how sensitive some banks are to ratings changes en masse, we're excited at the prospect of the FDIC becoming more hands-on, and potentially smoothing the shocks. In this way, not every ratings failure need precipitate a liquidity crunch, a lending freeze, or a public-sector intervention.
Wednesday, April 28, 2010
Credit Ratings vs. Credit Default Swaps
I’m not convinced.
Firstly, with ratings being so deeply embedded throughout our financial structure, the ratings of the assets themselves become an integral component of the market-implied risk assessment. For example, even when analyzing securitized products, Vink and Fabozzi (2009) show credit ratings to be a major factor accounting for the movement of primary market spreads. Thus, for any proposal to be convincing it would have to test the accuracy and reliability of CDS spreads on unrated bonds or companies. Alternatively, a study would need to compare the performance of traded securities whose ratings are not publicly known (so-called shadow ratings) to the performance of those shadow ratings.
Secondly, bond yields (or spreads-to-swaps) and credit default swap premiums are largely incomparable to credit ratings for many reasons. These differences will have to be tackled in a separate piece, but at the very least there’s that non-insignificant concept of liquidity. Both CDS premiums and bond yields include the various risks – not just credit risks – that come with investing in, or buying protection on, a security. Credit ratings speak solely to long-term credit risks.
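One way to see the incomparability: the standard "credit triangle" approximation backs an annualized default intensity out of a CDS spread, but it treats the entire spread as compensation for credit risk, when some portion is liquidity and other premia. A toy example with invented numbers:

```python
def implied_hazard_rate(spread_bp, recovery=0.40):
    """Credit-triangle approximation: default intensity lambda = s / (1 - R)."""
    return (spread_bp / 10_000) / (1 - recovery)

# If, say, 40bp of a 200bp spread is liquidity premium rather than credit
# (an invented split), the naive reading overstates default risk:
print(f"raw spread:   {implied_hazard_rate(200):.2%} per year")
print(f"credit only:  {implied_hazard_rate(160):.2%} per year")
```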
One may argue that the ratings were far less accurate than CDS spreads during the crisis, and that this (i.e., during a market dislocation) is the only time we depend on accurate default projections and we should therefore abolish rating agencies in general. While I don’t wish to complain of these proposals, I fear that they complain unfairly of the rating agencies.
Yes, CDS spreads may better reflect default probability during a crisis. By definition they're more adaptive to changing market conditions than ratings, which are long-term predictors. But would you want ratings to change in as volatile a fashion as CDS spreads? Would you want ratings to depend on headline news, or on audited (or lightly audited) financial data? Also, one shouldn't forget that CDS spreads on CDOs and RMBS tranches were just as poor reflections of market-perceived asset quality before the crisis. The crisis could only occur, in part, because the banks were able to buy protection so cheaply from the monolines while holding the underlying bonds -- the infamous negative basis trades.
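As a toy illustration of those negative basis trades (numbers invented): when a AAA tranche's spread exceeded the cost of monoline protection, the difference could be booked as apparently risk-free carry.

```python
def negative_basis_carry(bond_spread_bp, cds_premium_bp, notional):
    """Annual carry from holding a bond and buying CDS protection on it.
    Ignores funding, collateral and counterparty risk - which is the point:
    the trade only looked risk-free because protection was so cheap."""
    return (bond_spread_bp - cds_premium_bp) / 10_000 * notional

# Invented numbers: AAA CDO tranche at 60bp over funding, protection at 15bp
print(f"Carry: ${negative_basis_carry(60, 15, 500_000_000):,.0f} per year")
```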
But even if these proposals made sense and even if their hypotheses were correct, they would be missing at least one crucial point: we need ratings. Meaningful ratings are essential – certainly now. Let me explain why, albeit by way of a long-winded explanation.
For financial reform to be successful it needs ultimately to deal with the flaws in our banks’ risk management procedures – and to deal with them in an environment in which the very serious practice of risk mitigation is left by senior management to risk managers, just as the serious business of growing revenues while attending to shareholder pressure is left by risk managers to upper management.
That these two functions are more adversarial than independent in nature is a concept not to be lost on us. Overly cautious risk management might hinder the implementation of growth opportunities, or the extent thereof. At times, indeed, they may be thought by the skeptic to be mutually exclusive.
Indeed the overpowering pressures that come with business initiatives can influence even the most judicious risk manager’s ability to perform her function in an objective manner, even though her function ought to be both separate from and independent of the business strategies. (See for example “Lehman’s Worst Offense: Risk Management.”)
With both traders and management being compensated for revenue generation, and with prudent risk managers acting only as a hindrance to the initiation and exploitation of growth opportunities, there remains little incentive for senior managers to maintain a healthy risk management environment. Instead of cultivating an environment in which risk managers are educated in monitoring the real risks (which requires expensive resources including personnel, data and systems) they are seen rather as a burden and a cost center, and are therefore starved of the resources necessary to question traders, trades, and trading strategies.
In sum, we remain in the infancy of putting a functioning risk control practice in place at our major banks. We have yet to promote adequate business-peer challenge processes, and our price verification processes remain immature. Credit ratings, if created and applied properly, can provide a healthy starting point for internal skepticism; they can provide the independent credit risk assessment that supplements an analysis performed by the front-office or by the back-office.
Conclusion
CDS spreads are untested as a predictor of long-term default probability on unrated securities. Perhaps the reliability of CDS spreads depends on the underlying referenced entity being rated. There’s no doubt that CDS spreads are useful indicators – but I seriously doubt that they’re anywhere near as useful as ratings in predicting long-term default probabilities or losses.
I remain convinced there's an important place in our market for one or more independent agencies to provide their objective opinions in the form of a rating. For ratings reform to be successful, however, requires that the necessary measures be put in place to ensure that rating analysts are unfettered by market share concerns, and are incentivized only by ratings quality and accuracy. If we can achieve these objectives, ratings will return to providing a meaningful utility.