We've included a set of slides to help you get through this. There's also a white paper taking you through the construction and describing our approach. It's open-source, so feel free to have a bash.
~ PF2
PSCF - Press Release
––– a weblog focusing on fixed income financial markets, and disconnects within them
Philosophers from Aristotle to Ayn Rand have contended that “A is A.” Apparently none of these thinkers worked at a credit rating agency - in which “A” in one department may actually mean AA or even BBB in another. While the uninitiated might naively assume that various types of bonds carrying the same rating have the same level of credit risk, history shows otherwise.
During the credit crisis, AAA RMBS and ABS CDO tranches experienced far higher default rates than similarly rated corporate and government securities. Less well known is the fact that municipal bonds have for decades experienced substantially lower default rates than identically rated corporate securities – and that the rating agencies never assumed that a single A-rated issuer ought to carry the same credit risk in both sectors. This discrepancy was noted in Fitch’s 1999 municipal bond study and confirmed by Moody’s executive Laura Levenstein in 2008 Congressional testimony on the topic. Later in 2008, the Connecticut attorney general sued the three major rating agencies for under-rating municipal bond issues relative to other asset categories. (The suit was recently settled for $900,000 in credits for future rating services, but without any admission of responsibility). Last year, three economists – Cornaggia, Cornaggia and Hund – reported that government credit ratings were harsher than those assigned to corporates, which, in turn, were more severe than those assigned to structured finance issues.
One might ask why it matters whether ratings in different credit classes carry the same expectation of default probability or expected loss. Perhaps we should accept the argument that ratings are intended simply to provide a relative measure of risk among bonds within a given asset class.
There are at least two problems with this approach. First, it is unnecessarily confusing to the majority of the population that is unaware of technical distinctions in the ratings world. Second, it creates counterproductive arbitrage opportunities.
If an insurer is rated AAA on a more lenient scale than insurable entities in another asset class, the insurer can profitably "sell" its AAA rating to those entities without creating any real value in the process.
Municipal bond insurance is a great example. Monoline bond insurers like AAA-rated Ambac, FGIC and MBIA insured bonds issued by states, cities, counties and other municipal issuers for three decades prior to the 2008 financial crisis. In some cases, the entities paying for insurance were of a stronger credit quality than the insurers. As it happened, the insurers often failed while the issuers survived, leaving one to wonder why the insurance was necessary.
During this period, general obligation bonds had very low overall default rates. According to Kroll Bond Rating Agency’s Municipal Default Study, estimated annual municipal bond default rates by issuer count have been consistently below 0.4% since 1941. Similar findings for the period 1970-2010 are reported in The Bloomberg Visual Guide to Municipal Bonds by Robert Doty. This sub-0.4% annual rate applies to all municipal debt issues, including unrated issues and revenue bonds; the annual default rate for rated general obligation bonds is below 0.1%.
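To get a feel for what annual rates of this size imply over a bond's life, consider a simple back-of-the-envelope calculation. Assuming (simplistically) constant and independent annual default events, the cumulative default probability over a multi-year horizon follows directly:

```python
# Illustrative only: converts a constant annual default rate into a
# cumulative default probability over a horizon, assuming independent
# annual default events. The rates below are the study's headline figures.

def cumulative_default_prob(annual_rate: float, years: int) -> float:
    """Probability of at least one default within `years` periods."""
    return 1.0 - (1.0 - annual_rate) ** years

# 0.4% annual rate (all municipal issues), 10-year horizon: ~3.9%
print(round(cumulative_default_prob(0.004, 10), 4))
# 0.1% annual rate (rated general obligation bonds), 10 years: ~1.0%
print(round(cumulative_default_prob(0.001, 10), 4))
```

Even over a decade, the implied cumulative default probabilities remain in the low single digits, which is the sense in which the sector's performance has been "excellent."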
Given this long record of excellent performance, one might reasonably expect most states and other large municipal issuers with diversified revenue bases to be rated AAA. No state has defaulted on its general obligation issues since 1933, and most have relatively low debt burdens compared to their tax bases. Despite these facts, the modal rating for states is AA/Aa, with several in the A range. (This remains the case despite certain rating agencies’ claims that they have recently scaled up their municipal bond ratings to place them on a par with corporate ratings.)
The depressed ratings created an opportunity for municipal bond insurers to sell policies to states that did not really need them. For example, the State of California paid $102 million for municipal bond insurance between 2003 and 2007. Negative publicity notwithstanding, the facts are that single-A-rated California has a debt-to-gross-state-product ratio of 5% (in contrast to a 70% debt/GDP ratio for the federal government) and that interest costs represent less than 5% of the state’s overall expenditures. While pension costs are a concern, they are unlikely to consume more than 12.5% of the state’s budget over the long term – not nearly enough to crowd out debt service.
California provides but one example. The Connecticut lawsuit mentioned above also cited unnecessary bond insurance payments on the part of cities, towns, school districts, and sewer and water districts.
Meanwhile, AAA-rated municipal bond insurers carried substantial risks, evident to many not working at rating agencies. For example, Bill Ackman found in 2002 that MBIA was 139 times leveraged. As reported in Christine Richard’s book Confidence Game, Ackman repeatedly shared his research with rating agencies – to no avail.
This imbalance between the ratings of risky bond insurers and those of relatively safe municipal issuers essentially created the monoline insurance business – a business that largely disappeared with the mass bankruptcy and downgrading of insurers during the 2008 crisis.
Inconsistent ratings across asset classes thus have real-world costs. In the US, taxpayers across the country paid billions of dollars over three decades for unneeded bond insurance. Individual municipal bond investors, often directed by their advisors to focus only on AAA securities, missed opportunities to invest in tens of thousands of bonds that credibly should have carried AAA ratings, but whose ratings were depressed by the raters’ inopportune choice of scale.
We believe that one reason for the persistent imbalance between municipal, corporate and structured ratings is the dearth of analytics directed at government securities. Rating agencies and analytic firms offer models (and attendant data sets) that estimate default probabilities and expected losses for corporate and structured bonds. Such tools are relatively rare for government bonds. Consequently, the market lacks independent, quantitatively based analytics that compute credit risks for these instruments. This lack of alternative, rigorously researched opinions allows the mis-rating of US municipal bonds to continue unchecked.
Next month, PF2 will do its part to address this gap in the marketplace with the release of a free, open-source Public Sector Credit Framework, designed to enable users to estimate government default probabilities through a multi-period budget simulation. The framework allows a wide range of parameterizations, so you may find it useful even if you disagree with the characterization of municipal bond risk offered above. If you wish to participate in beta testing or to learn more about this technology, please contact us at info@pf2se.com, or call +1 212-797-0215.
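To illustrate the general idea of a multi-period budget simulation, here is a minimal Monte Carlo sketch. All parameter names, values, and the default trigger (interest expense exceeding a fixed share of revenue) are illustrative assumptions for this post, not the actual PSCF specification:

```python
# A hypothetical sketch of estimating a government default probability by
# simulating budgets over many periods. Revenue growth is random; spending
# grows at trend; deficits accumulate as debt; a trial "defaults" when
# interest expense exceeds a threshold share of revenue. Parameters and
# the trigger rule are illustrative assumptions only.
import random

def simulate_default_prob(
    revenue=100.0,       # initial annual revenue
    spending=95.0,       # initial annual non-interest spending
    debt=50.0,           # initial debt stock
    rate=0.05,           # interest rate on debt
    growth_mean=0.03,    # mean revenue growth
    growth_sd=0.04,      # revenue growth volatility
    threshold=0.25,      # default trigger: interest / revenue
    periods=30,          # simulation horizon in years
    trials=10_000,       # number of Monte Carlo trials
    seed=42,
):
    rng = random.Random(seed)
    defaults = 0
    for _ in range(trials):
        rev, spend, d = revenue, spending, debt
        for _ in range(periods):
            rev *= 1.0 + rng.gauss(growth_mean, growth_sd)
            spend *= 1.0 + growth_mean
            interest = rate * d
            d = max(d + spend + interest - rev, 0.0)  # deficit adds to debt
            if rev > 0 and interest / rev > threshold:
                defaults += 1  # default triggered in this trial
                break
    return defaults / trials

print(simulate_default_prob())
```

Because the framework described above allows a wide range of parameterizations, a sketch like this is best read as a shape of the computation: the interesting modeling questions (revenue processes, spending rules, the default trigger itself) are exactly the knobs a user would turn.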
--------------------------------------------
Contributed by PF2 consultant Marc Joffe. Marc previously researched and co-authored Kroll Bond Rating Agency’s Municipal Default Study.
“We subsequently withdrew our ratings on certain affected classes that are backed by a pool with a small number of remaining loans. If any of the remaining loans in these pools default, the resulting loss could have a greater effect on the pool's performance than if the pool consisted of a larger number of loans. Because this performance volatility may have an adverse affect on our outstanding ratings, we withdrew our ratings on the related transactions.”

While it may frustrate note holders to see the ratings withdrawn, it augurs well that a rating agency is able and willing to say that it cannot have confidence in the outcome, and therefore chooses to withdraw its rating rather than have investors rely, perhaps falsely, on a rating in which it lacks confidence.
First, put a “fire wall” around ratings analysis. The agencies have already separated their rating and non-rating businesses. This is fine but not enough. The agencies must also separate the rating business from rating analysis. Investors need to believe that rating analysis generates a pure opinion about credit quality, not one even potentially influenced by business goals (like building market share). Even if business goals have never corrupted a single rating, the potential for corruption demands a complete separation of rating analysis from bottom-line analysis. Investors should see that rating analysis is virtually barricaded into an “ivory tower,” and kept safe from interference by any agenda other than getting the answer right. The best reform proposal must exclude business managers from involvement in any aspect of rating analysis and, critically, from any role in decisions about analyst pay, performance and promotions.

Two other elements jump out immediately from the complaint:
This brings back to mind, disturbingly, a recent New York Times article (Ratings Firms Misread Signs of Greek Woes) which focuses on the deliberations within Moody’s (MCO) and their concerns about the deeper repercussions of downgrading Greece – rather than the specifics of credit analysis:
Section 2.1 of S&P’s Code states: “[S&P] shall not forbear or refrain from taking a Rating Action, if appropriate, based on the potential effect (economic, political, or otherwise) of the Rating Action on [S&P], an issuer, an investor, or other market participant.”
and…
Section 2.1 of S&P’s Code states: “The determination of a rating by a rating committee shall be based only on factors known to the rating committee that are believed by it to be relevant to the credit analysis.”
“The timing and size of subsequent downgrades depended on which position would dominate in rating committees — those that thought the situation had gotten out of control, and that sharp downgrades were necessary, versus those that thought that not helping Greece or assisting it in a way that would damage confidence would be suicidal for a financially interconnected area such as the euro zone,” Mr. Cailleteau wrote in an e-mail.
Defendants JPMorgan, Bear Stearns, WaMu, and Long Beach knew about the poor quality of the loans they securitized and sold to investors like Plaintiffs, because in order to continue to keep their scheme running, they completely vertically integrated their RMBS operations by having affiliated entities at every stage of the process. In addition, Defendants JPMorgan, Bear Stearns, WaMu, and Long Beach were aware of lending abuses on the part of the third party originators they purchased loans from due to, inter alia, their financial ties to the third party originators and their reviews of loan documentation and performance.
Morgan Stanley knew or recklessly disregarded that those lenders were issuing high-risk loans that did not conform to their respective underwriting standards. Morgan Stanley did, in fact, conduct extensive due diligence on the loans it purchased for securitization, as represented in the Offering Materials. In the course of that extensive due diligence process, which, in many instances, included an extensive re-underwriting review of the loans it purchased by an independent third-party due diligence provider, Clayton Holdings, Inc. (“Clayton”), Morgan Stanley learned that the originators routinely and flagrantly disregarded their own underwriting guidelines, originated loans based on wildly inflated appraisal values, and manipulated the underwriting process in order to issue loans to borrowers who had no plausible means to repay them. Indeed, both the President of Clayton and the head of Morgan Stanley’s own due diligence arm testified as to the extensive deficiencies identified through Morgan Stanley’s due diligence. Specifically, over one-third of the loans Morgan Stanley evaluated for purchase and securitization at the height of the mortgage boom (from 2006 through mid-2007) failed to meet the originators’ own underwriting guidelines.