
Saturday, February 8, 2020

Leveraged Loan CLOs and Rating Agencies - Policy Solutions


Over the last couple of years, financial market commentators have become concerned that leveraged loans and Collateralized Loan Obligations (CLOs) are becoming the newest “financial weapons of mass destruction”. The fear is that mispricing and over-production of these assets could lead to a bubble that would ultimately take down our financial system – just as subprime mortgage-backed securities did a dozen years ago.

Further, critics worry that rating agencies – still following the traditional issuer-pays model – lack the incentive to protect us from a leveraged lending meltdown. Instead, agencies are thought to be engaged in a "race to the bottom," lowering their rating standards to usher even less creditworthy corporate borrowers into (or keep them in) the Investment Grade category.

If SEC-licensed Nationally Recognized Statistical Rating Organizations (NRSROs) – the so-called credit rating agencies – are not up to the task, investors could turn to non-licensed analytics firms to more objectively evaluate leveraged loans and the securitization vehicles that house them. Outside of the market for debt and credit-based financial products, we see many types of ratings published by non-licensed providers. For example, Consumer Reports assigns ratings to a wide array of products, US News ranks colleges, and Yelp rates service establishments. These systems are imperfect and sometimes deservedly attract criticism, but no rating system is perfect, and the widespread use of these assessments suggests that users find them valuable.

The main barrier to entry for non-NRSROs that would want to assess leveraged loans and CLOs specifically is lack of access to data, and this is an issue that the SEC could rectify. The leveraged loans at the center of CLOs are often borrowings made by privately held companies – including holdings of private equity firms – that are not required to make their financial statements public. Only current investors and the rating agencies hired to rate these entities can see these financial statements.

First, the SEC could simply require all companies that borrow on the leveraged loan market, subject to a minimum borrowing size, to file their 10-Q and 10-K statements on the EDGAR system. That way, independent firms could assess their financial status and estimate default probabilities and expected losses on their loan facilities.

Second, CLO issuers should be required to post both their loan portfolios and details of their capital structures on EDGAR as well. In such a scenario, CLOs could no longer be exempted under Section 4(a)(2) of the Securities Act and sold as Rule 144A private securities. Instead, they would be regulated as public securities.

Finally, many CLOs have complex rules governing how proceeds from the collateral pool should be distributed among the various classes of noteholders and the firms – like the asset manager – that provide services to the CLO deal. These “priority of payment” provisions are outlined in dense legalese included in the CLO's offering documents. Rather than compelling investors and analysts to decipher these legal provisions, issuers should be required to code them as computer algorithms, which would also be published as part of the deal’s disclosure. CLOs could then operate like any other “smart contract,” easing the work of deal participants and third parties who need to analyze the many “what-ifs” that can occur over the life of a transaction.
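To make the idea concrete, here is a minimal sketch of what a machine-readable priority of payments might look like. The tranche names, amounts and overcollateralization trigger are invented, and a real indenture's waterfall is far more elaborate; the point is only that the logic can be expressed as code that anyone can run and stress.

    # Minimal, hypothetical interest waterfall (not any actual CLO's indenture).
    # Proceeds are applied top-down: senior fees, then Class A interest, then either
    # Class B interest and equity (OC test passing) or Class A paydown (OC test failing).

    def interest_waterfall(proceeds, senior_fees, class_a_due, class_b_due,
                           oc_ratio, oc_trigger=1.20):
        """Return a dict of payments made from `proceeds` under a toy waterfall."""
        payments = {}

        def pay(name, amount_due):
            nonlocal proceeds
            paid = min(proceeds, amount_due)
            payments[name] = paid
            proceeds -= paid

        pay("senior_fees", senior_fees)
        pay("class_a_interest", class_a_due)

        if oc_ratio >= oc_trigger:            # test passing: pay B, remainder to equity
            pay("class_b_interest", class_b_due)
            payments["equity"] = proceeds
        else:                                 # test failing: divert remainder to pay down Class A
            payments["class_b_interest"] = 0.0
            payments["class_a_principal"] = proceeds
            payments["equity"] = 0.0
        return payments

    # Example: a period with $10m of interest proceeds and a failing OC test.
    print(interest_waterfall(10_000_000, 500_000, 6_000_000, 2_000_000, oc_ratio=1.15))

Published alongside the legal documents, logic of this kind would let any deal participant or third party replay the "what-ifs" directly, rather than re-deriving the waterfall from the legalese.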

Leveraged loans and CLOs may or may not be the ticking time bomb that will blow up our economy. One way to limit the potential for bubble creation is to remove dependence on parties (like the incumbent credit rating agencies) that are financially motivated to provide the high ratings that keep issuers and other market participants coming back for their services. Our solution, in short, is to make these transactions more transparent, so that other third parties can access information on the securities and analyze them in a cost-effective manner.

Wednesday, March 15, 2017

Can Deregulation and Open Data Solve the Credit Ratings Problem?

This blog is provided by guest contributor Marc Joffe.  The following views are his own, and do not necessarily reflect those of PF2.
~~~   
Credit rating agency scandals, widely blamed for the 2008 financial crisis, now seem to be a distant memory. We have gone several years without another major ratings failure, so casual observers may be forgiven for thinking that the underlying problem has been solved. But as a new Brookings study shows, the defective rating agency market structure that triggered the crisis remains in place. Further reforms would seem unlikely under unified Republican government, but bipartisan support for open financial data may offer a way forward.
Since 2008, the government has taken several steps to address credit rating agency problems. In the waning days of the Obama Administration, the Department of Justice and State Attorneys General settled complaints against Moody’s for $864 million. This followed a larger settlement with S&P and a series of regulatory changes spurred by the 2010 passage of Dodd Frank. That law mandated the removal of credit ratings from regulations, tighter control of SEC-licensed Nationally Recognized Statistical Ratings Organizations (NRSROs) and an SEC study of possible changes to the way investment banks choose rating agencies to rate newly-issued structured finance securities.  
Writing for Brookings, former CBO Director Alice Rivlin and researcher John Soroushian criticize the SEC for failing to execute its Dodd Frank mandate to change the rating agency business model. By not acting, the SEC has left in place a regime under which rating agencies have an incentive to competitively dumb down their standards so that they can sell more ratings to bond issuers. Rivlin and Soroushian recommend that the SEC implement a process under which new structured finance issues are randomly assigned to rating agencies.
Joe Pimbley, a risk analyst who, like me, used to work at a credit rating agency, goes even further, calling for an outright prohibition of issuer payments for credit ratings. This change would oblige rating agencies to serve investors first, as they did in the years before the transition to the “issuer-pays” model around 1970. (Pimbley and I have both consulted for PF2 Securities, which publishes this blog.)
Such interventions would seem unlikely under a Republican-led government. If anything, President Trump and Congressional Republicans have expressed the desire to roll back many aspects of Dodd Frank. But, led by Congressman Darrell Issa, Congressional Republicans have shown an interest in more open financial data – and this could be a way forward toward further reform.
To understand the relevance of open data, we must first realize that the credit rating business is not a standalone industry. Instead, the rating agencies are part of a larger industry: the business of credit risk assessment.  This field includes in-house credit analysts at banks, independent credit advisors, and analytics firms, as well as the NRSROs.
By mandating the elimination of credit ratings from federal regulation, Dodd Frank has helped to level the playing field between rating agencies and alternate credit assessment providers. (But, as Rivlin and Soroushian remind us, this process of removing credit ratings from regulations is incomplete.)
Deregulators can go further by pursuing Pimbley’s suggestion of removing credit rating agencies from the list of entities that can receive non-public disclosures from securities issuers, as provided under Regulation FD. Such a reform would allow credit analysts not employed by rating agencies to receive all of the same data at the same time as their agency counterparts.
Indeed, Republicans could completely eliminate the special status of credit rating agencies by scrapping the NRSRO certification entirely, as recommended by NYU’s Lawrence J. White. If ratings are not required by regulation and all credit consultants and analytics firms have equal data access, there would seem to be little benefit to NRSRO status anyway.
But even without regulatory-conferred privileges, rating agencies would still have an advantage over upstart providers of credit analysis. The incumbents’ size allows them to invest in systems and manual procedures to assimilate the large volume of issuer and security data needed to rate and review a large number of debt issues.
Reforms that lower the cost of collecting this data would enable new analytic firms to compete against incumbent rating agencies despite their relatively small size. Representative Issa’s new bill, the Financial Transparency Act of 2017 (HR 1530), would make all financial regulatory data available in machine-readable form. Right now, much of this data is available only in PDFs, which are costly to process. If financial filings were instead provided as structured, machine-readable data, new credit data sets would become available at little or no cost.
One regulator affected by the Act is the Municipal Securities Rulemaking Board (MSRB), which oversees the municipal bond market. Right now, offering materials and continuing disclosures such as annual financial reports are published as PDFs. Anyone hoping to analyze them must either mine the PDFs for relevant data or buy data sets from third parties, usually at high cost and with tight restrictions on redistribution (effectively preventing smaller firms from showing how their opinions are driven by issuer fundamentals). If the MSRB switched to structured data, the cost of analyzing municipal securities would drop, making it easier for municipal analytics startups such as MuniTrend to provide insight across a broad range of instruments.
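To make the cost difference concrete, here is a toy illustration; the issuer and field names are invented, and a real machine-readable filing would follow a published schema such as XBRL. Once a disclosure is structured, extracting a credit metric takes a few lines, whereas the same figure buried in a PDF is a scraping-and-hand-checking project.

    import json

    # Hypothetical structured disclosure for a municipal issuer (field names invented).
    filing = json.loads("""
    {
      "issuer": "Example City",
      "fiscal_year": 2016,
      "total_revenue": 250000000,
      "interest_expense": 12500000,
      "debt_outstanding": 400000000
    }
    """)

    # With structured data, computing a credit metric is a one-liner.
    interest_to_revenue = filing["interest_expense"] / filing["total_revenue"]
    print(f"{filing['issuer']}: interest/revenue = {interest_to_revenue:.1%}")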
The MSRB is one of ten regulators affected by the proposed act. If passed and implemented, the bill would trigger a wave of free and low-cost data sets that could help analysts outside of the credit rating agencies keep up with these powerful incumbents. When combined with reforms that remove the special privileges now enjoyed by licensed NRSROs, open financial data could usher in a new era of competition and innovation in the field of credit assessment.

Tuesday, February 3, 2015

S&P Settles: Now How About that US Bond Rating?

In its settlement with the Department of Justice, S&P has backed off its assertion that the federal lawsuit was filed in retaliation for its 2011 downgrade of US Treasury debt. But the downgrade subjected S&P to a barrage of criticism both at the time and ever since, raising the question of whether the decision was appropriate. My view is that the downgrade was the correct credit decision in 2011, but that it is now time for S&P to restore the US to AAA status.

Since rating agencies earn minimal revenue from sovereign ratings, the downgrade was clearly not in the firm’s short-term commercial interest. This speaks well for the sovereign group and senior management at the time: the analysts looked at the numbers, decided that the US was no longer a triple-A credit, and were allowed to implement and publicize their decision. While we often hear negative generalizations about rating agencies, it is worth noting that these firms are heavily siloed; the behavior of the structured finance group does not necessarily reflect on the work of the sovereign team.

Not only was the downgrade principled, but it was also justified. In 2011, the US debt-to-GDP ratio was skyrocketing, the country had an unsustainable fiscal outlook and it lacked the political will to deal with the imbalance between future revenues and swelling entitlements arising from baby boomer retirements. While the British and Italian governments were able to address population aging by raising taxes and delaying eligibility for social insurance programs, divided government in the US prevented a grand bargain from occurring here.

Further evidence that the US did not merit a top credit rating emerged in October 2013, when both parties engaged in brinkmanship over raising the debt ceiling. Had the debt ceiling not been raised, the Treasury would have been forced to prioritize payments. While I believe that the Treasury would have prioritized debt service over other obligations, my confidence in this belief does not approach the 99.9%+ level normally associated with AAA ratings.

So what has changed and why am I suggesting an upgrade? First, after taking widespread blame for the debt ceiling debacle, Republicans have changed tactics. It is now extremely unlikely that they will trigger a similar confrontation when the debt ceiling has to be raised again. Since failure to raise the debt ceiling and failure to prioritize debt service are both low probability events, the chances of both occurring seem to be within the AAA risk band.

More relevant to S&P’s original downgrade decision, the nation’s long-term fiscal outlook has changed since 2011. When I say this, I am not referring to the marked decline in headline deficit numbers. The fact that the annual deficit declined from $1.3 trillion in fiscal 2011 to a projected $468 billion in fiscal 2015 is not a surprise. Looking back at CBO’s ten-year projection from 2011, we find that the agency estimated a $551 billion deficit for the current fiscal year – pretty close to what we are actually seeing. While politicians from both parties may be congratulating themselves for this improvement, the downward deficit trend is exactly what one would expect from an improving economy. Rising tax revenues and lower unemployment insurance costs – not any major reform – are reducing the deficit. There was no grand bargain, nor is there likely to be one anytime soon.

But two developments since 2011 have greatly altered the country’s longer term outlook: reduced healthcare cost inflation and persistently lower interest rates. Between 2000 and 2007, annual healthcare expenditure growth averaged 8.5%. Since 2009, the rate of growth has averaged only 3.9% and health expenditures have stopped rising as a percentage of GDP. Back in 2011, the decline in health cost inflation could be dismissed as a temporary effect of the Great Recession – but now that it has persisted into the recovery, we apparently have a lower baseline rate. Since healthcare costs are such a large component of future federal spending, less cost escalation in this sector is a very important factor in the long term fiscal outlook. 

Last year CBO projected that in 2039 the US debt/GDP ratio would reach 106% under current law and 183% under a likely set of alternative policies. As healthcare disinflation persists these forecast levels are likely to fall. 

Lower interest rates should also slow the accumulation of debt. After years of recovery and many months after the end of quantitative easing, Treasury rates remain near record lows. Rather than assume that rates will return to pre-recession levels, it now seems more reasonable to assume that we have entered a new normal of ultra-low rates just as Japan did after 1990.

While discussions of government solvency often revolve around a nation’s debt-to-GDP ratio, a better measure is the ratio of interest expense to revenue, because it focuses on the government’s ability to maintain debt service. Just after World War II, Britain reached a debt-to-GDP ratio of 250% but did not default because it faced very low interest rates. If interest rates remain low in the US, the federal government can comfortably service the 183% debt load envisaged by CBO’s most pessimistic scenario.

In 1991, the nation’s interest/revenue ratio peaked at 18%, but there was no discussion of a default. Currently, the ratio is below 8%. My study of fiscal history suggests that a Western-style government becomes vulnerable to default once this ratio reaches 30%. While the US can always avoid a default by printing money, it is possible that an independent Fed Chair would refuse to do so, out of fear that the resulting price inflation would have worse consequences than a Treasury default.

Under the CBO’s most pessimistic scenario, the interest/revenue ratio reaches 46% in 2039, well into the danger zone. But this outcome assumes an average interest rate on federal debt of 4.85%. Right now this average is below 2% and falling as higher coupon bonds mature and are replaced by new low-rate issues. Even if the government’s average financing rate drifts up to 3%, its interest/revenue ratio will remain below the critical threshold.
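As a back-of-the-envelope check, one can take only the figures cited above (CBO's 183% debt-to-GDP, its 4.85% rate assumption and the resulting 46% interest/revenue ratio), back out the implied revenue share of GDP, and see how the ratio moves with the assumed rate. This is a rough illustration of the sensitivity, not a fiscal forecast.

    # Back-of-the-envelope sensitivity check using only figures cited in this post.
    debt_to_gdp = 1.83               # CBO's most pessimistic 2039 scenario
    cbo_rate = 0.0485                # average interest rate assumed by CBO
    cbo_interest_to_revenue = 0.46   # CBO's resulting interest/revenue ratio

    # Revenue share of GDP implied by CBO's own numbers (roughly 19%).
    revenue_to_gdp = debt_to_gdp * cbo_rate / cbo_interest_to_revenue

    for rate in (0.02, 0.03, 0.0485):
        ratio = debt_to_gdp * rate / revenue_to_gdp
        print(f"average rate {rate:.2%}: interest/revenue = {ratio:.0%}")
    # Roughly 19% at a 2% rate and 28% at 3% -- under the 30% threshold discussed above.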

S&P had good reason to downgrade the US in 2011. If health cost inflation and interest rates had returned to pre-recession historical norms, the case for the lower rating would still be strong. But now that we have entered a new normal of quiescent healthcare cost escalation and low interest rates, it appears that the US is due for an upgrade.

Thursday, June 20, 2013

AAAs still Junk -- in 2013!

Breaking news: Several securities, boasting ratings higher than France and Britain as of two weeks ago, are now thought quite likely to default.

Moody's changed its opinion on a number of residential mortgage-backed securities (RMBS), with about 13 of them being downgraded from Aaa to Caa1.

The explanation provided: "Today's rating action concludes the review actions announced in March 2013 relating to the existence of errors in the Structured Finance Workstation (SFW) cash flow models used in rating these transactions. The rating action also reflects recent performance of the underlying pools and Moody's updated expected losses on the pools."

In short: the model was wrong - oops!  ($1.5bn, yes that's billion, in securities were affected by the announced ratings change, with the vast majority being downgraded.)

Okay, so everybody gets it wrong some time or other.  What's the big deal?  The answer is there's no big deal.  You probably won't hear much of a fuss about this - nothing in the papers.  Life will go on.  The collection of annual monitoring fees on the deal will continue unabated, and none of the fees already collected, however undeserved, will be returned. Some investors may be a little annoyed at the sudden, sharp movement, but so what, right?  They should have modeled this anyway, they might be told, and should not be relying on the rating.  (But why are they paying, out of the deal's proceeds, for rating agencies to monitor the deals' ratings?)

What is almost interesting (again no big deal) is that these erroneously modeled deals were rated between 2004 and 2007.  So roughly six or more years ago.  And for the most part, if not always, their ratings have been affirmed or revisited at several junctures since the initial "mis-modeled" ratings were provided.  How does that happen without the model being validated?

A little more interesting is that in many or most cases, Fitch and S&P had already downgraded these same securities to CCC or even C levels, years ago!  So the warning was out there.  One rating agency says triple A; the other(s) have it deep in "junk" territory.  Worth checking the model?  Sadly not - it's probably not "worth" checking.  This, finally, is our point: absent a reputational model to differentiate among the players in the ratings oligopoly, the existing raters have no incentive to check their work. There's no "payment" for checking, or for being accurate.

Rather, it pays to leave the skeletons buried for as long as possible.

---------------

For more on rating agencies disagreeing on credit ratings by wide differentials, click here.
For more on model risk or model error, click here.

---------------
Snapshot of certain affected securities (data from Bloomberg)



Monday, May 13, 2013

An Open Source Alternative to No Bid Contracts


At Muniland, Cate Long reports that the US Treasury Department’s Office of the Comptroller of the Currency (OCC) awarded Municipal Market Advisors (MMA) a contract to evaluate the risk of municipal bond holdings by banks it regulates.  OCC did not find any credible alternatives and thus is awarding the contract to MMA on a no-bid basis.  Quoting at length from Cate’s excellent blog post:
So federal regulators, who can no longer use credit ratings for evaluations of the municipal bond holdings of the commercial banks that they regulate, just gave a no bid contract to MMA, a relatively small firm with four principals. In essence the OCC will be substituting the opinions of MMA for those of the credit ratings agencies. Federally chartered banks held $363 billion of municipal securities as of 4th quarter 2012 according to the Federal Reserve ... The federal bank regulator will essentially be substituting the work of credit rating agencies, which issue over 1 million individual municipal ratings, with “research” from a small private shop. Is this wise? I think restricting themselves to such limited information is short-sighted given that muniland has over 80,000 issuers with $3.7 trillion of municipal debt outstanding. High quality credit analysis for even the debt of 50 states requires a shop bigger than MMA. Let alone all the other issuers. … Of course all the folks at MMA are nice, informed market professionals. But this process of hiring independent municipal research is ridiculous. No bid contracts have no place in our new, more transparent, post Dodd-Frank regulatory framework. The municipal bond market is facing its toughest challenges since the Great Depression and this BPD/OCC process needs more public input and openness.
As I will discuss at Tuesday’s SEC Credit Ratings Roundtable, there is a better alternative to this kind of arrangement. For almost 50 years, academics have been churning out corporate default models. This modeling effort could be extended to structured and government bonds. If the modeling data and software were fully open, these academic tools could undergo rapid, iterative improvement through a process of mass collaboration, much as Wikipedia and Linux do. If the SEC were to create a standards board for open source credit models, a group of experts would be empowered to separate the wheat from the chaff among these open source products. Regulators could further encourage the development of such tools by allowing results of certified models to be used in lieu of ratings as a creditworthiness standard – meeting the spirit of Dodd-Frank Section 939A. I make this argument at greater length here.
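One classic example of these academic tools is Altman's Z-score for publicly traded manufacturers, published in 1968. A minimal implementation follows: the coefficients and zones are Altman's published ones, the issuer figures are invented, and the sketch is offered only to show how compact an open source credit model can be, not as a certified or regulatory-grade tool.

    # Altman's (1968) Z-score for public manufacturers -- one of the long line of
    # academic corporate default models referred to above. Illustration only.

    def altman_z(working_capital, retained_earnings, ebit, market_equity,
                 sales, total_assets, total_liabilities):
        return (1.2 * working_capital / total_assets
                + 1.4 * retained_earnings / total_assets
                + 3.3 * ebit / total_assets
                + 0.6 * market_equity / total_liabilities
                + 1.0 * sales / total_assets)

    def zone(z):
        # Altman's published cutoffs: above 2.99 "safe", below 1.81 "distress".
        return "safe" if z > 2.99 else "grey" if z >= 1.81 else "distress"

    # Hypothetical issuer (all figures invented, in $ millions).
    z = altman_z(working_capital=150, retained_earnings=200, ebit=100, market_equity=800,
                 sales=1100, total_assets=1000, total_liabilities=600)
    print(round(z, 2), zone(z))   # 2.69 grey

A certified open source model would of course need published data definitions, validation studies and version control, but none of that requires the resources of an incumbent rating agency.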
What should supplement or replace ratings?  Confidential, non-reproducible findings from a proprietary vendor, or transparent tools developed using academic research protocols and benefiting from peer review? I think the answer is clear.

Saturday, January 26, 2013

The Weak Underbelly of Capitalism

Appraisals. Auditing. Equity Research. Credit Ratings.

These four seemingly unrelated disciplines serve a common purpose. They inform investors about the value and risk of potential investments. If executed well, these services ensure that capital is invested wisely and in a way that promotes economic growth. If executed poorly, these services produce inefficiencies that hinder growth and, at worst, trigger recessions.

Headlines from the last two decades provide us with ample reason to believe that these services are not always performed well. Shoddy audits of Enron were a major enabler of that company’s massive fraud. The internet bubble was abetted by compromised research issued by analysts receiving a share of underwriting fees. Exaggerated appraisals and lenient credit ratings created the subprime bubble and heightened the magnitude of the reversal in home prices, a collapse that still resonates.

The problem is that these four (supposedly independent) “gate-keepers” are often compromised by business considerations. The people who conduct these types of analysis are rarely at the top of the food chain in the industries they serve. They can be bullied or bribed by rainmakers at their firms or by clients to distort their findings. While outright fudging of the numbers often occurs, the more widespread problem is the selective use of “facts” to produce a desired result in line with preconceived notions. The product may not be an obvious, outright fraud, but it is often harmful nonetheless. Fraudulent and incomplete analysis causes the ongoing misallocation of trillions of dollars of savings. One might call this situation “the weak underbelly of capitalism” – if you are willing to apply the term “capitalism” to today’s economic system.

The problem of biased, inadequate analysis is difficult to address through regulation alone. Even the best regulators can’t be in the room every time an analyst is encouraged to “massage” his or her findings. Much of the analysis is specialized and complex, rendering it difficult for individual regulators to identify shortcomings. Further, like analysts themselves, regulators are also not at the top of the financial industry food chain. Since neither analyzing nor regulating offers the compensation afforded by managing and rainmaking, members of the first two groups are often outsmarted or manipulated by those in the latter two.

Although these four services are products of the market, they can nonetheless be healed through market processes. How? It is often said that “sunshine is the best disinfectant.” In the financial industry, intermediaries maintain their margins by keeping information to themselves. But if more eyes are available to review any given analysis, the biases and distortions affecting this analysis are more likely to be identified and fixed. Further, best practice in each analysis profession can evolve rapidly through peer review, just as the highest visibility Wikipedia articles evolve rapidly toward accuracy and completeness.

The internet, and the Wikis and open source projects it nurtures, can provide the remedy to the “weak underbelly of capitalism” identified here. By making analyses public, and thus subject to widespread review, discussion and editing, these work products can converge toward an optimum.

This outlook motivated me to create an open source government bond assessment tool, the Public Sector Credit Framework (PSCF). This framework enables a user to build a multi-year budget simulation for any government and to use the results to estimate a default probability as well as an implied rating for that government. All source code for PSCF is posted on GitHub, a popular open source repository.

While I was getting started on this project, I learned about a parallel effort launched by a Swiss-based mathematician named Dorian Credé. His web site, Wikirating, directly applies Wiki technology to assessing a broad range of credit instruments. In November, Dorian and I announced a content sharing partnership. Maybe this can be the beginning of a network of mass collaboration efforts focused on improving the quality of credit ratings. And, perhaps, lessons learned in these endeavors can be applied to the other disciplines that inform investors.

Credit ratings, appraising, auditing and securities analysis are all important functions that need reform. Rather than seek top down solutions to improve these services – solutions which often come with adverse unintended consequences – let’s use the organizing power of the internet to find voluntary, collaborative alternatives.

We welcome any responses, and look forward to working with any academics or market participants out there who share a similar interest in creating an alternative, transparent framework that supports investment analysis.

An earlier version of this post appeared on The Progress Report.

Thursday, December 27, 2012

Sovereign Debt Ratings - How High is High?

The ratings agencies' country debt ratings have received a double blow from Bloomberg News in the last 10 days.

First, Bloomberg unleashed a piece entitled "Moody’s Gets No Respect as Bonds Shun 56% of Country Ratings," which argued that movements in bond yields more often disagreed than agreed with ratings changes. If the argument is true, then flipping a coin would be more useful for predicting future market movements than relying on a ratings action.

Less than a week later, Bloomberg released "BlackRock Sees Distortions in Country Ratings Seeking Revamp" which claims upfront that "Credit rating companies are distorting capital markets by assigning the same debt ranking to countries from Italy to Thailand and Kazakhstan, according to BlackRock Inc. (BLK), the world’s biggest money manager."

These are both punishing blows, but they also force us to reconsider what a rating is, what a rating means. Unfortunately, if we cannot ascertain what the rating describes, we cannot reasonably judge a rating agency's performance.

How High is High?

On a fundamental level, imagine two systems that rank restaurants differently. One ranking may take only the quality of the meal into account. Another may also consider the peripherals of the meal, or the ambience: the relative comfort of the seats, the temperature, the noise level, the view, and so on. Can we really compare two ratings produced under different measures?

Unfortunately, it is not only that investors are failing to assess what the rating depicts; the rating agencies are also choosing to keep the question open. Why box themselves in?

When things go wrong, rating agencies advertise that they actually got it right - their ratings are only RELATIVE measures of risk. They are trying to predict which countries or companies will be more likely than others to default or suffer impairment. But if you look at their actual ratings actions, and their action definitions, there is often little evidence of relative measurement.

Let's suppose an agency downgrades the UK. A relative rationale might read something like this: "the UK has grown its debt-to-income ratio more than other countries." But one seldom sees this - rating actions are almost always cardinal (i.e., absolute) and seldom ordinal (i.e., relative rankings).

One does well to ask: if ratings are relative, why, when times are rough, are we seeing more downgrades than upgrades? Are all countries getting relatively worse than each other? In a relative system, one would expect upgrades and downgrades roughly to balance.
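To see why the distinction matters, consider a toy sketch; the default probabilities and cutoffs below are invented for illustration. Under a cardinal (absolute) scale, a broad deterioration downgrades everyone; under an ordinal (relative) scale, the rank ordering, and hence the ratings, can be left unchanged.

    # Toy contrast of cardinal (absolute) vs ordinal (relative) rating scales.
    # Default-probability figures and cutoffs are invented for illustration only.

    sovereigns = {"A-land": 0.002, "B-land": 0.010, "C-land": 0.040}   # 1-year default probabilities

    def cardinal(pd):
        # Absolute thresholds: the rating depends only on the issuer's own risk level.
        return "AAA" if pd < 0.005 else "BBB" if pd < 0.02 else "BB"

    def ordinal(pds):
        # Relative ranking: best, middle, worst -- regardless of absolute levels.
        order = sorted(pds, key=pds.get)
        return {name: ["AAA", "BBB", "BB"][i] for i, name in enumerate(order)}

    print({k: cardinal(v) for k, v in sovereigns.items()})   # {'A-land': 'AAA', 'B-land': 'BBB', 'C-land': 'BB'}
    print(ordinal(sovereigns))                               # same answer

    # Now let every country deteriorate threefold, preserving the rank order.
    stressed = {k: v * 3 for k, v in sovereigns.items()}
    print({k: cardinal(v) for k, v in stressed.items()})     # cardinal: broad downgrades
    print(ordinal(stressed))                                 # ordinal: ratings unchanged

A purely relative system behaves like the second function: in rough times the rank order may barely move, so downgrades and upgrades should roughly offset. The wave of one-way downgrades we actually observe looks much more like the first.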


An Example

When Chinese rating agency Dagong put the USA on negative watch on Christmas, we looked a little deeper into the reasons. Their actual release is a little more detailed, but the Financial Times breaks it down for us as follows:
In placing the US rating on negative watch, Dagong cited five factors: 
1. The US is at an impasse in budget negotiations 
2. With no plan for maintaining solvency, the US is monetising its debt 
3. US government debt is growing much faster than fiscal revenue 
4. The fiscal cliff could lead to a US recession in 2013 
5. Frequent emergencies such as the fiscal cliff and debt ceiling deadlines add to the risks
Nothing relative there, so we went to Dagong's website to look at their definitions.

See the following excerpts:

AAA denotes the "lowest expectation of default risk"
AA "ratings denote expectations of very low default risk"
BB "ratings indicate that the issuer faces major ongoing uncertainties..."
B "ratings indicate that expectations of default risk are relatively high but a limited margin of safety remains." (emphasis added by us)

What does "very low" mean - very low relative to others, or to some absolute standard? To borrow from Arturo Cifuentes, what would it mean to restrict a company from building a "very high" building in New York? Would it be relative to the other buildings or relative to a specific measurement of "high"?

"Lowest" sounds absolute.  "Very low" could go either way.  "Faces major ongoing uncertainties" sounds cardinal, or absolute.  And "Relatively high" is certainly ordinal.  So, we're still not sure - relative or absolute, or a little of each? Does anybody know?

Friday, December 14, 2012

Monitoring Ratings Monitoring

Today's "Administrative Action" announced against Standard & Poor’s Ratings Japan K.K. homes in on just the sorts of problems our transparent ratings "drive" would protect against.

For those of you who haven't been following, PF2 consultant Marc Joffe's open-source PSCF model has been gaining widespread attention, with Marc being recently commissioned by the California State Treasurer's office to further develop the PSCF default probability model for municipal bonds. 

Japan's FSA makes a strong case for promoting ratings transparency. Having a transparent model allows others to catch ratings errors before they become too problematic - a positive feedback loop that adds to market stability and to investor confidence.

It also encourages (or forces) rating agencies to keep their ratings current, minimizing the possibility of larger rating changes (mostly downgrades) if and when raters notice their ratings are out of sync.

An excerpt from the Japanese FSA's Administrative Release reads as follows (emphasis added):
"[S&P failed] to properly confirm important information that affects Synthetic CDO ("SCDO") credit ratings
 ...

[S&P] did not properly take stock of the cumulative loss amount pertaining to the reference obligations that affects the credit rating of SCDOs. The Company did not take measures such as confirming with arrangers of SCDOs whether there had been any credit events relating to the reference obligations.

Therefore, some cases were identified where incorrect credit ratings had been assigned to certain SCDO products for a significant period of time until just before the withdrawal of the credit ratings due to the redemption of those SCDO products.
 ...
The Company has continued publishing the credit ratings of the relevant SCDOs without confirming whether there were any credit events.
... 
The Company incorrectly maintained until October 2010 a credit rating of a SCDO product that should have been downgraded in January and further in February of that year. This is due to input to the Company’s system of incorrect notional amount data in relation to the reference obligations in the monitoring process of the SCDO credit rating.

The Company has not implemented a verification process whereby a second person checks the accuracy of the data input."

Wednesday, April 11, 2012

Credit Rating Agency Models and Open Source

When S&P downgraded the US from AAA to AA+, the US Treasury accused the rating agency of making a $2 trillion mathematical error. S&P initially denied this accusation, but adjusted some of its estimates in a subsequent press release. Economist John Taylor defended S&P, contending that its calculations were based on a defensible set of assumptions, and thus could not be categorized as a mistake. S&P’s model, which projected future debt-to-GDP ratios, has not been made public. As a result, it is difficult for outside observers to decide whom to believe: the rater or the rated.

There are at least three ways a model’s results can be wrong: the model’s code may not function as intended, the known inputs may be entered incorrectly, or the assumptions may be misapplied. In cases as important as the evaluation of US sovereign debt, we think rating agencies and the investing public would be better off if the relevant models were publicly available. Some may argue that the inputs to the models are proprietary or that they reflect qualitative assumptions valuable to the ratings agencies – i.e., that they are a “secret sauce.” But, even if rating agencies want to keep their assumptions proprietary, making the models themselves available would decrease the likelihood of rating errors arising from software defects.

Keeping one’s internal processes internal is the traditional way. Manufacturers assume that consumers don’t want to see how the sausages are made. In the internet era, it is now much easier to produce the intellectual equivalent of sausages in public – and, as it happens, many consumers are interested in the production process and even want to get involved. Wikipedia provides an excellent example of the open, collaborative production of intellectual content: articles are edited in public and the results are often subject to dispute. Writers get almost instantaneous peer review and the outcome is often rapid iteration moving toward the truth. In their books, Wikinomics and Macrowikinomics, Don Tapscott and Anthony Williams suggest that Wikipedia’s mass collaboration style is the wave of the future for many industries – including computer software.

Many rating methodologies, especially in the area of structured finance, rely upon computer software. At the height of the last cycle, tools that implemented rating methodologies, such as Moody’s CDOROM™, were popular with both issuers and investors wondering how agencies might look at a given transaction. While the algorithms used by these programs are often well documented, the computer source code is usually not released into the public domain.

Over the last two decades, the software industry has seen a growing trend toward open source technology, in which all of a system’s underlying program code is made public. The best-known example of an open source system is Linux, a computer operating system used by most servers on the internet. Other examples of popular open source programs include Mozilla’s Firefox web browser, the WordPress content management system and the MySQL database.

In financial services, the Quantlib project has created a comprehensive open source framework for quantitative finance. The library, which has been available for more than 11 years, includes a wide array of engines for pricing options and other derivatives.

Open source allows users to see how programs work and, with the help of developers, to customize software fully to meet their specific needs. Open source communities, such as those hosted on GitHub and SourceForge, enable users and programmers from all over the world to participate in the process of debugging and enhancing the software.

So how about credit rating methodologies? Open source seems especially appropriate for rating models. Rating agencies realize relatively little revenue from selling rating models; the models are used mainly to facilitate revenue generation through issuer-paid ratings.

Open source enables a larger community to identify and fix bugs. If rating model source code were in the public domain, investors and issuers would have a greater chance to spot issues. Rating agencies would be prevented from covering up modeling errors by surreptitiously changing their methodologies. In 2008, The Financial Times reported that Moody’s errantly awarded Aaa credit ratings to a number of Constant Proportion Debt Obligations (CPDOs) due to a software glitch. The error was fixed but, according to the FT report, the incorrectly rated securities were not immediately downgraded. Had the rating software been open source, it would have been much more difficult to conceal this error, and there would have been the possibility of a positive feedback loop – an investor or other interested party could have found and fixed the bug on Moody’s behalf.

Not only do open source rating models promote quality, they may also reduce litigation. The SEC issued Moody’s a Wells Notice in respect of the above-mentioned CPDO issue, and might well have brought suit. (A Wells Notice is a notification of intent to recommend that the US government pursue enforcement proceedings, and is sent by regulators to a company or a person.) Investors have brought suit against the rating agencies to the extent they felt the ratings were inappropriate, for model-related errors or otherwise. By unveiling the black box, the rating agencies would be taking an active approach to buffering against litigation, and would enjoy the material defense that, “yes, we may have erred, but you were afforded the opportunity to catch our error – and didn’t.”

Unlike the CPDO model employed by Moody’s, the S&P US sovereign "model" likely took the form of a simple spreadsheet containing adjusted forecasts from the Congressional Budget Office. In contrast to the structured and corporate sectors, there are relatively few computer models for estimating sovereign and municipal default probabilities. While little modeling software is available for this sector, accurate modeling of government credit can be seen as a public good. Bond investors, policy makers and citizens themselves could all benefit from more systematic analysis of government solvency.

Open source communities are a private response to public goods problems: individuals collaborate to provide tools that might otherwise appear in the realm of licensed software. Thus open source government default models populated with crowd-sourced data may be the best way to fill an apparent gap in the bond analytics market.

On May 2nd, PF2 will contribute an open source Public Sector Credit Framework, which is aimed at filling this analytical gap, while demonstrating how future rating models can be distributed and improved in an iterative, transparent manner. If you wish to participate in beta testing or learn more about this technology please contact us at info@pf2se.com, or call +1 212-797-0215.

--------------------------------------------
Contributed by PF2 consultant Marc Joffe. Marc previously researched and co-authored Kroll Bond Rating Agency’s Municipal Default Study. This posting is the second in a series of posts leading up to May 2nd. The prior piece can be accessed by clicking here.

Wednesday, April 4, 2012

Multiple Rating Scales: When A Isn’t A

Philosophers from Aristotle to Ayn Rand have contended that “A is A.” Apparently none of these thinkers worked at a credit rating agency - in which “A” in one department may actually mean AA or even BBB in another. While the uninitiated might naively assume that various types of bonds carrying the same rating have the same level of credit risk, history shows otherwise.

During the credit crisis, AAA RMBS and ABS CDO tranches experienced far higher default rates than similarly rated corporate and government securities. Less well known is the fact that municipal bonds have for decades experienced substantially lower default rates than identically rated corporate securities – and that the rating agencies never assumed that a single A-rated issuer ought to carry the same credit risk in both sectors. This discrepancy was noted in Fitch’s 1999 municipal bond study and confirmed by Moody’s executive Laura Levenstein in 2008 Congressional testimony on the topic. Later in 2008, the Connecticut attorney general sued the three major rating agencies for under-rating municipal bond issues relative to other asset categories. (The suit was recently settled for $900,000 in credits for future rating services, but without any admission of responsibility). Last year, three economists – Cornaggia, Cornaggia and Hund – reported that government credit ratings were harsher than those assigned to corporates, which, in turn, were more severe than those assigned to structured finance issues.

One might ask why it is important for ratings in different credit classes to carry the same expectation in terms of either default probability or expected loss. Perhaps we should accept the argument that ratings are intended simply to provide a relative measure of risk among bonds within a given asset class.

There are at least two problems with this approach. First, it is unnecessarily confusing to the majority of the population that is unaware of technical distinctions in the ratings world. Second, it creates counterproductive arbitrage opportunities.

If an insurer is rated AAA on a more lenient scale than insurable entities in another asset class, the insurer can profitably "sell" its AAA rating to those entities without creating any real value in the process.

Municipal bond insurance is a great example. Monoline bond insurers like AAA-rated Ambac, FGIC and MBIA insured bonds issued by states, cities, counties and other municipal issuers for three decades prior to the 2008 financial crisis. In some cases, the entities paying for insurance were of a stronger credit quality than the insurers. As it happened, the insurers often failed while the issuers survived, leaving one to wonder why the insurance was necessary.

During this period, general obligation bonds had very low overall default rates. According to Kroll Bond Rating Agency’s Municipal Default Study, estimated annual municipal bond default rates by issuer count have been consistently below 0.4% since 1941. Similar findings for the period 1970-2010 are reported in The Bloomberg Visual Guide to Municipal Bonds by Robert Doty. This 0.4% annual rate applies to all municipal debt issues, including unrated issues and revenue bonds. The annual default rate for rated, general obligation bonds is less than 0.1%.

Given this long period of excellent performance, one might reasonably expect most states and other large municipal issuers with diversified revenue bases to be rated AAA. No state has defaulted on its general obligation issues since 1933, and most have relatively low debt burdens when compared to their tax base. Despite these facts, the modal rating for states is AA/Aa, with several in the A range. (This remains the case despite certain rating agencies’ claims that they have recently scaled up their municipal bond ratings to place them on a par with corporate ratings.)

The depressed ratings created an opportunity for municipal bond insurers to sell policies to states that did not really need them. For example, the State of California paid $102 million for municipal bond insurance between 2003 and 2007. Negative publicity notwithstanding, the facts are that single A rated California has a Debt to Gross State Product ratio of 5% (in contrast to a 70% Debt/GDP ratio for the federal government) and that interest costs represent less than 5% of the state’s overall expenditures. While pension costs are a concern, they are unlikely to consume more than 12.5% of the state’s budget over the long term – not nearly enough to crowd out debt service.

California provides but one example. The Connecticut lawsuit mentioned above also cited unnecessary bond insurance payments on the part of cities, towns, school districts, and sewer and water districts.

Meanwhile, AAA-rated municipal bond insurers carried substantial risks, evident to many not working at rating agencies. For example, Bill Ackman found in 2002 that MBIA was 139 times leveraged. As reported in Christine Richard’s book Confidence Game, Ackman repeatedly shared his research with rating agencies – to no avail.

This imbalance between the ratings of risky bond insurers and those of relatively safe municipal issuers essentially created the monoline insurance business – a business that largely disappeared with the mass bankruptcy and downgrading of insurers during the 2008 crisis.

Inconsistent ratings across asset classes thus do have real world costs. In the US, taxpayers across the country paid billions of dollars over three decades for unneeded bond insurance. Individual municipal bond investors, often directed by their advisors to focus on AAA securities only, missed opportunities to invest in tens of thousands of bonds that should credibly have carried AAA ratings, but were depressed by the raters’ inopportune choice of scale.

We believe that one reason for the persistent imbalance between municipal, corporate and structured ratings is the dearth of analytics directed at government securities. Rating agencies and analytic firms offer models (and attendant data sets) that estimate default probabilities and expected losses for corporate and structured bonds. Such tools are relatively rare for government bonds. Consequently, the market lacks independent, quantitatively-based analytics that compute credit risks for these instruments. This lack of alternative, rigorously researched opinions allows the incorrect rating of US municipal bonds to continue, with no positive feedback loop to correct it.

Next month, PF2 will do its part to address this gap in the marketplace with the release of a free, open source Public Sector Credit Framework, designed to enable users to estimate government default probabilities through the use of a multi-period budget simulation. The framework allows a wide range of parameterizations, so you may find it useful even if you disagree with the characterization of municipal bond risk offered above. If you wish to participate in beta testing or learn more about this technology please contact us at info@pf2se.com, or call +1 212-797-0215.
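To give a flavor of what a multi-period budget simulation looks like, here is a toy Monte Carlo sketch. Every parameter below is invented and the default trigger is an assumption chosen for illustration; the actual framework allows these to be replaced with researched inputs for the government being analyzed.

    import random

    # Toy multi-period budget simulation in the spirit of the framework described above.
    # All starting values, growth rates and the default trigger are invented.

    def default_probability(years=25, trials=10_000, seed=1):
        random.seed(seed)
        defaults = 0
        for _ in range(trials):
            debt, revenue, spending = 50.0, 20.0, 21.0     # hypothetical starting point ($bn)
            rate = 0.04                                    # average interest rate on debt
            for _ in range(years):
                revenue *= 1 + random.gauss(0.04, 0.03)    # stochastic revenue growth
                spending *= 1.045                          # assumed spending growth
                interest = debt * rate
                debt += spending + interest - revenue      # each year's deficit adds to debt
                if interest / revenue > 0.30:              # assumed solvency trigger
                    defaults += 1
                    break
        return defaults / trials

    print(f"Simulated 25-year default probability: {default_probability():.1%}")

Because every input is an explicit parameter, a user who disagrees with any assumption can change it and rerun the simulation, which is precisely the kind of scrutiny a closed rating model cannot receive.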

--------------------------------------------
Contributed by PF2 consultant Marc Joffe. Marc previously researched and co-authored Kroll Bond Rating Agency’s Municipal Default Study.

Thursday, January 26, 2012

Illinois Attorney General Sues S&P – Initial Thoughts

Credit risk ratings are becoming a risky business.

Yesterday’s filing of the complaint against S&P (MHP) centers, in essence, on the allegation of false advertising. Stepping into this issue for a moment, one of the key defenses offered by the raters is that their ratings are protected under the First Amendment right to express an opinion (their “speech”). But as law professor Eugene Volokh opines in his letter to the House Committee (May 2009), it is within the framework of commercial advertising that “speech aimed at proposing a commercial transaction is much less constitutionally protected than other kinds of speech.”

In this case, the AG is not really focusing on whether the ratings were wrong so much as on the claim that S&P advertised it was following a certain code of conduct, one meant to ensure that the appropriate levels of independence and integrity were being brought to the ratings process.

A former SEC enforcement official, Pat Huddleston, once explained that "[when] I say the [financial] industry is dirty, I don't mean to imply everyone in the industry is dirty," … "[only] that the industry typically promises something it has no intention of delivering, which is a client-first way of operating." This is essentially what the complaint argues: that S&P “misrepresented its objectivity” while offering a service that was “materially different from what it purported to provide to the marketplace.”

This goes back, really, to the key reform measure Mark proposed before the Senate in 2009 – that rating agencies would do well to separate themselves from commercial interests, by building a formidable barrier around the ratings process.


First, put a “fire wall” around ratings analysis. The agencies have already separated their rating and non-rating businesses. This is fine but not enough. The agencies must also separate the rating business from rating analysis. Investors need to believe that rating analysis generates a pure opinion about credit quality, not one even potentially influenced by business goals (like building market share). Even if business goals have never corrupted a single rating, the potential for corruption demands a complete separation of rating analysis from bottom-line analysis. Investors should see that rating analysis is virtually barricaded into an “ivory tower,” and kept safe from interference by any agenda other than getting the answer right. The best reform proposal must exclude business managers from involvement in any aspect of rating analysis and, critically also, from any role in decisions about analyst pay, performance and promotions.
Two other elements jump out immediately from the complaint:

First, the complaint specifically argues that the rating agency “misrepresented the factors it considered when evaluating structured finance securities.” Next, the complaint tries to tie S&P’s actions to its publicly-advertised code of conduct, arguing that its actions were inconsistent with the advertised code.

In respect of actions being inconsistent with the code, certain of these arguments are commonplace, such as the contention that the rating agencies did not allocate adequate personnel, contrary to what’s advertised in the code. This of course becomes a contentious issue – you can see S&P coming back with copious evidence of situations in which they did “allocate adequate personnel and financial resources.” But the complaint homes in on the factors considered in producing a rating, and it focuses on two parts of the code:



Section 2.1 of S&P’s Code states: “[S&P] shall not forbear or refrain from taking a Rating Action, if appropriate, based on the potential effect (economic, political, or otherwise) of the Rating Action on [S&P], an issuer, an investor, or other market participant.”

and…

Section 2.1 of S&P’s Code states: “The determination of a rating by a rating committee shall be based only on factors known to the rating committee that are believed by it to be relevant to the credit analysis.”
This brings back to mind, disturbingly, a recent New York Times article (Ratings Firms Misread Signs of Greek Woes) which focuses on the deliberations within Moody’s (MCO) and their concerns about the deeper repercussions of downgrading Greece – rather than the specifics of credit analysis:


“The timing and size of subsequent downgrades depended on which position would dominate in rating committees — those that thought the situation had gotten out of control, and that sharp downgrades were necessary, versus those that thought that not helping Greece or assisting it in a way that would damage confidence would be suicidal for a financially interconnected area such as the euro zone,” Mr. Cailleteau wrote in an e-mail.

The question, then, is whether rating committees were focused on credit analysis, or whether other concerns were at play, aside even from typical business interests. The concerns for rating agencies, from a legal perspective, can become quite real when the debate centers not on ratings accuracy, but on whether the rating accurately reflected their then-current publicly available methodology. There may be substantial risks, therefore, in delaying a downgrade of a systemically important sovereign or institution (such as a too-big-to-fail bank or a key insurance company) if such a downgrade is appropriate per the financial condition of the company or sovereign, or in providing favorable treatment to certain companies or sovereigns based on their relative level of interconnectedness.

The allegation that S&P misrepresented the factors considered in its analysis opens another can of worms for rating agencies, as they’ll subsequently need to be increasingly focused on disclosing the sources of the information they rely upon. There’s substantial concern that, where a rater relies on information from the issuing entity (which is often itself the paying customer), such reliance becomes a disclosure issue if investors would otherwise have assumed the rating agency was independently verifying that information. This was a frequent problem in the world of structured finance CDOs such as those described in the AG’s complaint.

Last, but not least, the complaint focuses on the effectiveness of ratings surveillance. This is a topic of importance to us, as we feel that proper surveillance alone may have substantially diminished the magnitude of the crisis. At the very least, certain securitizations that ultimately failed may not have been executed had underlying ratings been appropriately monitored, and several resecuritizations may have become impossible, limiting the proliferation of so-called toxic assets. See for example: Barriers to Adequate Ratings Surveillance

That’s all for now. There’s a lot more to this complaint, so we suggest you check it out here.

Monday, October 10, 2011

Will S&P's Wells Notice Change Their Behavior?

The role of the credit rating agencies in the recent financial crisis has been highlighted by numerous investigations, including those of the Financial Crisis Inquiry Commission and the Senate Permanent Subcommittee on Investigations (PSI). It therefore comes as no surprise that Standard & Poor’s receipt of a Wells Notice, indicating the SEC’s intention to sue, has garnered its fair share of attention. Challenges presented by the issuance of the so-called Wells Notice, and the circumstances surrounding it, will likely culminate in rating agencies and issuing entities having to adjust their approaches to the ratings process.

Media speculation has pointed to Delphinus CDO, a crisis-era structure backed by subprime mortgage bonds, as the focal point of the SEC’s investigation. This transaction was highlighted by Senator Levin’s PSI report as a “striking example” of how banks and ratings firms branded mortgage-linked products safe even as the housing market worsened in 2007.

The implications of S&P’s internal emails, made public through the Senate’s investigative process, are that the rater may have known, at the time it was issuing its ratings on Delphinus, that the ratings being provided were inconsistent with its then-current methodology. In essence, S&P’s model and ratings were contingent on certain preliminary information made available to it by the underwriting bank. When that information changed at a late stage of the deal’s construction, S&P’s model was no longer able to produce results consistent with the desired rating.

That S&P nevertheless issued the originally-requested rating, despite its being incompatible with the new information, opens a line of questioning into whether S&P’s ultimate rating adequately reflected its own analysis. In issuing the Wells Notice, the SEC may at this juncture reasonably suspect S&P of committing a Rule 10b-5 violation.

The SEC would be pressed to show that S&P knew, at the time it was providing the rating, that the rating was ill-deserved (and thus misleading to investors). If the SEC can show that S&P intentionally provided a misleading rating, that would distinguish this case from several of the other rating agency complaints that have been dismissed.

Importantly, the SEC’s case would almost certainly survive the rating agencies’ preferred defense: a motion invoking their First Amendment right to express an opinion. Floyd Abrams, S&P’s external counsel on First Amendment issues, himself conceded in his September 2009 testimony before the House that “[the] First Amendment provides no defense against sufficiently pled allegations that a rating agency intentionally misled or defrauded investors,” … “nor does it protect a rating agency if it issues a rating that does not reflect its actual opinion.” (emphasis added)

Aside from the abovementioned Wells Notice, the SEC is showing a keen focus on each rater’s application of its own public methodology: its annual rating agency examination report, released late last week, cites as an “essential finding” that “[one] of the larger NRSROs reported that it had failed to follow its methodology for rating certain asset-backed securities.”

Whatever the result of the Delphinus investigation, the mere threat of legal action alters a rater’s approach to issuing new ratings and maintaining existing ones. We have already seen the fruits of the pressure exerted by the new regulatory landscape. In late July of this year, S&P stunned the market by pulling away from rating a commercial mortgage-backed securities (CMBS) transaction, led by Goldman Sachs and Citigroup, on the evening prior to the deal’s closing. S&P reportedly needed to adjust its model to reflect “multiple technical changes,” ultimately leading to the deal being shelved.

The manner in which S&P dealt with a controversial and costly methodological change suggests a new-found sensitivity towards violating the 10b-5 legal standard. Such a drastic action, bringing significant embarrassment to S&P in addition to the loss of market share, would have been inconceivable in the prior, revenue-centric, competitive landscape.

With raters seeking at all costs to avoid a 10b-5 violation, we foresee them increasingly turning away bankers who pressure them with “last-minute” demands. Bankers, risking their deals falling through, will be driven to accommodate the rater’s requirements in providing their supporting data in a more timely fashion, well in advance of a deal’s closing. Perhaps we’ll have fewer deals done as a result; but perhaps those deals will be safer, supported by less-hurried analyses.

As we have seen before, it continues to be legal risk, and not reputational risk, that has encouraged the oligopolistic rating agencies to re-focus their attention on the quality of the product being provided. With Damocles’ sword swinging overhead, we can only hope for more objective ratings going forward – and fewer stale ones.

Thursday, May 19, 2011

A Telling Tale of Two Tables

Just how difficult is it to measure ratings performance, and how useful are the measures, really?

There are certain difficulties we all know about – having to rely on ratings data provided by the very parties whose performance you’re measuring. There is naturally the potential for the sample to be biased or slanted, and that’s very difficult to uncover. (We’ve discussed this hurdle at great length here.)

The next issue we’ve written about is the inability to separate the defaulting assets from those that didn’t default. In our regulatory submission, we called for transparency as to what happened to each security BEFORE its rating was withdrawn (see Transparency of Ratings Performance).

Let’s look at a stunning example today that brings together a few of these challenges, and leaves one with unanswered questions about the meaningfulness of ratings performance as it is currently displayed, and about the incentives rating agencies have to update their own ratings.

Here’s a Bloomberg screenshot for the rating actions on deal Stack 2006-1, tranche P.


Starting from the bottom up, it seems Moody’s first rated this bond in 2006, whereas Standard & Poor’s first rated it in 2007. If we try to check this against the ratings history on S&P’s website, we come to realize how difficult verification can be:


So let’s suppose everything about the Bloomberg chart is accurate.

As verified on their website, Moody’s rated this bond Aaa in August ’06 and took no further action until it withdrew the rating in June 2010. (They don’t note whether the bond paid off in full or whether it defaulted.)

S&P, meanwhile, shows its original AAA rating of 2007 being downgraded in 2008 (to BBB- and then to B-), in 2009 to CC, and again in 2010 to D, which means the bond defaulted (according to S&P).

For Moody’s, the first year for which the bond remains in the sample for a full year is 2007; thus, it wouldn’t be included in 2006 performance data. For S&P, the first complete year is 2008.

So if we consider how Moody’s would demonstrate its ratings performance on this bond, it would say:

Year 2007: Aaa remains Aaa
Year 2008: Aaa remains Aaa
Year 2009: Aaa remains Aaa
Year 2010: Aaa is withdrawn (WR)

No downgrades took place, according to Moody’s … while at the same time S&P shows it as having defaulted:

Year 2008: AAA downgraded to B-
Year 2009: B- downgraded to CC
Year 2010: CC downgraded to D

Here’s what their respective performance would look like, if one were to apply their procedures (at least as far as we understand them):
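Since the two tables themselves do not reproduce well here, below is a minimal sketch, in Python, of how each agency's tally might be assembled from the rating histories listed above. The histories are hard-coded from the Bloomberg data; everything else (the function name, the cohort convention) is an illustrative assumption, not either agency's actual procedure. The point it makes: under Moody's history the bond exits the sample as a withdrawal and no default is ever counted, while under S&P's it exits as a default.

    # Year-end ratings hard-coded from the histories listed above.
    # "WR" = rating withdrawn, "D" = default.
    moodys_history = {2007: "Aaa", 2008: "Aaa", 2009: "Aaa", 2010: "WR"}
    sandp_history = {2008: "B-", 2009: "CC", 2010: "D"}  # initial rating: AAA (2007)

    def one_year_transitions(initial_rating, history):
        """Yield (year, start-of-year rating, end-of-year rating) for each full year."""
        start = initial_rating
        for year in sorted(history):
            end = history[year]
            yield year, start, end
            start = end

    print("Moody's view of the bond:")
    for year, start, end in one_year_transitions("Aaa", moodys_history):
        print(f"  {year}: {start} -> {end}")
    # The bond leaves the sample via withdrawal; no default is ever tallied.

    print("S&P's view of the same bond:")
    for year, start, end in one_year_transitions("AAA", sandp_history):
        print(f"  {year}: {start} -> {end}")
    # The same bond is tallied as a default in 2010.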

Friday, November 19, 2010

Meredith Whitney and the Future of Credit Rating Agencies

The popular media are harping on the wrong issue in respect of the future of the credit rating agencies. They say that despite financial overhaul aimed at reducing their influence, “credit raters keep their power” (WSJ Nov. 16) and that “[for] potential newcomers, … , it is difficult to compete against established agencies [like Moody’s and S&P].” (FT Nov. 19)

Rather, the regulatory proposals in both the U.S. and Europe have been quite severe on the rating agencies, demanding improved transparency and increasing their potential legal liability; and the very reason that players like Kroll and Meredith Whitney are entering the ratings business is that the established agencies are particularly vulnerable to competition.

To be fair, it is always a challenge to compete against a well-established company. But Kroll and Whitney are seizing the opportunity while the incumbent raters are weakened by poor ratings performance and distracted by the significant increase in both the “volume and cost of defending such [related] litigation” (from Moody’s (MCO) 10-Q).

Ratings, as we all know, are interwoven throughout our financial framework. It takes time to remove references to them and there remains a modicum of inertia among market participants in moving away from the Big Three or away from credit raters in general. But there has been a tangible change in momentum, with raters like Canada’s DBRS having already secured a large (majority) share of the U.S. residential mortgage-backed securities (RMBS) market — hardly a sign of “difficulty competing.” (WSJ May 2010)

Rather than fretting over the absence of immediate change, given the lengthy history and deeply embedded nature of credit ratings, we urge the media to applaud the substantial regulatory improvements that have been made in reducing reliance on ratings and heightening the integrity of the ratings process. We caution, however, that a material increase in the number of rating agencies leads to greater competition, not to higher-quality ratings. More precisely, the readier the supply of ratings, the greater the ratings inflation.


Notes:
(1) Aside from the 11 SEC-approved NRSROs, there are already, by our count, 108 other debt rating companies worldwide, 18 of which are affiliated in one way or another with one of the NRSROs.
(2) To visit submissions to the SEC, including our submission, on the credit rating reform proposals put forth in the Dodd-Frank Act, click here.

Monday, August 9, 2010

FDIC Esoterica – It's Capital

The FDIC is following the NAIC's lead in considering alternatives to credit rating agencies for determining capital reserve standards for the banks it oversees. According to the WSJ's "Regulators Plan First Steps on Credit Rating:"

[among] the options being discussed is a greater use of credit spreads, having supervisors develop their own risk metrics and a reliance on existing internal models...
The other option likely to be mulled is the use of service providers and rating agencies outside of the Big Four (Moody's, S&P, Fitch and DBRS). We have contemplated some of the elements of rating agency operational due diligence here, but ultimately this process would require an understanding of the varying levels of expertise within each rating agency, its level of accuracy and stability, and the various limitations of its models. (For example, this morning's WSJ reports on Morningstar research that concludes, oddly, that "using low fees as a guide [to a mutual fund's future success] would give investors better results than even Morningstar's own star-rating system...[because while] the stars system has typically guided investors to better results, it isn't as effective in predicting future returns at times of big market swings.")

The movement towards relying fully on credit default swaps, as contemplated above, seems to us a distant longing: first, CDS liquidity (not to mention maturity) isn't always comparable to that of the securities being referenced, resulting in an imperfect spread-to-risk mapping; next, we have yet to see substantial research evidencing the predictive content of CDS spreads on unrated securities; last, it remains questionable to what extent CDS spreads directly mimic the underlying's fundamental credit risk. (See Credit Ratings vs. Credit Default Swaps for more on this.)
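For readers unfamiliar with the spread-to-risk mapping referred to above, the crudest version is the so-called credit triangle: a flat CDS spread s and an assumed recovery rate R imply a constant default intensity of roughly s / (1 - R). Below is a minimal sketch of that approximation; the 40% recovery assumption and the 300bp example spread are illustrative only, and the shortcut ignores exactly the liquidity and term-structure effects discussed above.

    import math

    def implied_default_probability(spread_bps, recovery=0.40, horizon_years=5.0):
        """Credit-triangle shortcut: hazard rate = spread / (1 - recovery);
        cumulative default probability = 1 - exp(-hazard * horizon)."""
        spread = spread_bps / 10_000.0        # basis points to a decimal spread
        hazard = spread / (1.0 - recovery)    # implied constant default intensity
        return 1.0 - math.exp(-hazard * horizon_years)

    # A 300bp spread with 40% recovery implies roughly a 22% five-year default probability.
    print(f"{implied_default_probability(300):.1%}")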

There are at least three reasons why we would welcome the FDIC's direct participation through analyzing the securities internally:

1 - If the FDIC decides to create its own models, it will be better equipped to appreciate the risks inherent in the securities being purchased by the banks it is, to an extent, insuring. It will be less reliant not only on credit rating agencies, but also on the differing opinions of broker-dealers and third-party providers.

2 - With investor sophistication standards having (unfortunately) softened from a legal perspective, it augurs well to have a regulator play an active role in overseeing the investments permitted for its member banks. The regulator would be better positioned to push back on riskier investment activities, or to encourage improved hedging and risk-mitigation techniques at the bank or system level.

3 - Aside from the obvious advantages of treating all banks equally, the other benefit of having regulators play a more active role in examining capital adequacy reserves is the informational advantage they bring to the table:

For a real-life example, consider the $60bn world of trust preferred securities CDOs (or TruPS CDOs), whose performance depends on the performance of their underlying banks. Who would be better positioned than the FDIC to have an accurate handle on future bank defaults? With a model to support it, the FDIC could estimate the effect of a default by every bank it considers poorly positioned or undercapitalized. As it happens, in a somewhat circular fashion, banks (like Zions Bancorp) often also hold TruPS CDOs, which are themselves supported by other banks. Thus, the FDIC would be able to model the exact public utility of allowing a bank to prepay its preferred securities at a discount, as it could measure the overall effect of that bank's prepayment on all TruPS CDOs holding that bank's preferreds; and it would know how much each other bank holds of the TruPS CDOs that hold the original bank's preferreds. Ah, the beauty of information!
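To make the circularity concrete, here is a minimal sketch of the two look-ups such a model would need: which TruPS CDOs hold a given bank's preferreds, and which other banks in turn hold those CDOs' notes. The bank names, CDO names and amounts are entirely hypothetical.

    # Hypothetical holdings, for illustration only.
    cdo_collateral = {  # CDO -> {issuing bank: par of that bank's preferreds in the pool}
        "TruPS CDO I": {"Bank A": 20, "Bank B": 30},
        "TruPS CDO II": {"Bank A": 10, "Bank C": 40},
    }
    bank_holdings = {  # bank -> {CDO: par of that CDO's notes held by the bank}
        "Bank B": {"TruPS CDO II": 15},
        "Bank C": {"TruPS CDO I": 25},
    }

    def exposure_to_bank(bank):
        """CDOs exposed to `bank`, and the banks exposed to those CDOs in turn."""
        exposed_cdos = {cdo: pool[bank] for cdo, pool in cdo_collateral.items() if bank in pool}
        second_order = {
            holder: {cdo: amt for cdo, amt in holdings.items() if cdo in exposed_cdos}
            for holder, holdings in bank_holdings.items()
        }
        return exposed_cdos, {h: hs for h, hs in second_order.items() if hs}

    # If Bank A prepays (or defers on) its preferreds, which CDOs are affected,
    # and which other banks hold those CDOs' notes?
    print(exposure_to_bank("Bank A"))
    # ({'TruPS CDO I': 20, 'TruPS CDO II': 10},
    #  {'Bank B': {'TruPS CDO II': 15}, 'Bank C': {'TruPS CDO I': 25}})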

As things stand, by contrast, banks holding TruPS CDOs have been subjected to abysmal ratings performance, which has come in tremendous waves of downgrades by rating agencies that in some cases do not rate the underlying banks and in other cases are guessing at how the structures will perform.

From a recent American Banker article entitled "TruPS Leave Buyers in CDO Limbo:"

According to Fitch's Derek Miller, the agency is "in the process of reviewing all our assumptions" on trust-preferred CDO defaults and deferrals. For recent rating actions, he said, Fitch did not do precise cash-flow modeling, because it felt that the nuances of the capital structure have been drowned out by the sheer volume of defaults and deferrals that determine payouts.
Knowing just how sensitive some banks are to ratings changes en masse, we're excited at the prospect of the FDIC becoming more hands-on, and potentially smoothing the shocks. In this way, not every ratings failure need precipitate a liquidity crunch, a lending freeze, or a public-sector intervention.

Wednesday, April 28, 2010

Credit Ratings vs. Credit Default Swaps

As an alternative to relying overly on ratings produced by credit rating agencies, several ratings reform proposals suggest using bond or credit default swap (CDS) prices or spreads instead. Some of these proposals assert outright that market prices are both more accurate and more predictive than credit ratings.

I’m not convinced.

Firstly, with ratings so deeply embedded throughout our financial structure, the ratings of the assets themselves become an integral component of the market-implied risk assessment. For example, even for securitized products, Vink and Fabozzi (2009) show credit ratings to be a major factor accounting for the movement of primary-market spreads. Thus, for any such proposal to be convincing, it would have to test the accuracy and reliability of CDS spreads on unrated bonds or companies. Alternatively, a study would need to compare the performance of traded securities whose ratings are not publicly known against the performance of those non-public ("shadow") ratings.

Secondly, bond yields (or spreads-to-swaps) and credit default swap premiums are largely incomparable to credit ratings for many reasons. These differences will have to be tackled in a separate piece, but at the very least there’s that non-insignificant concept of liquidity. Both CDS premiums and bond yields include the various risks – not just credit risks – that come with investing in, or buying protection on, a security. Credit ratings speak solely to long-term credit risks.

One may argue that ratings were far less accurate than CDS spreads during the crisis, that a market dislocation is precisely when we depend on accurate default projections, and that we should therefore abolish rating agencies altogether. While I don't wish to disparage these proposals, I fear they are unfair to the rating agencies.

Yes, CDS spreads may better reflect default probability during a crisis. By definition they are more adaptive to changing market conditions than ratings, which are long-term predictors. But would you want ratings to move in as volatile a fashion as CDS spreads? Would you want ratings to depend on headline news, or on audited (or lightly audited) financial data? Nor should one forget that CDS spreads on CDO and RMBS tranches were just as poor a reflection of market-perceived asset quality before the crisis. The crisis could only occur, in part, because the banks were able to buy protection so cheaply from the monolines, going long the CDS in the infamous negative basis trades.
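A quick illustration of the arithmetic behind such a trade, with made-up numbers (the spreads below are illustrative, not crisis-era market levels): the bank holds the bond or tranche, buys CDS protection on it, and books the spread difference as carry.

    def negative_basis_carry(bond_spread_bps, cds_premium_bps, notional):
        """Annual carry from holding the bond and buying protection on it.
        The basis is negative when protection costs less than the bond pays."""
        basis = cds_premium_bps - bond_spread_bps
        carry = (bond_spread_bps - cds_premium_bps) / 10_000.0 * notional
        return basis, carry

    # e.g. a senior tranche paying 50bp over funding, with protection bought at 12bp:
    print(negative_basis_carry(50, 12, 100_000_000))  # (-38, 380000.0)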

But even if these proposals made sense and even if their hypotheses were correct, they would be missing at least one crucial point: we need ratings. Meaningful ratings are essential – certainly now. Let me explain why, albeit by way of a long-winded explanation.

For financial reform to be successful it needs ultimately to deal with the flaws in our banks’ risk management procedures – and to deal with them in an environment in which the very serious practice of risk mitigation is left by senior management to risk managers, just as the serious business of growing revenues while attending to shareholder pressure is left by risk managers to upper management.

That these two functions are more adversarial than independent in nature is a concept not to be lost on us. Overly cautious risk management might hinder the implementation of growth opportunities, or the extent thereof. At times, indeed, they may be thought by the skeptic to be mutually exclusive.

Indeed the overpowering pressures that come with business initiatives can influence even the most judicious risk manager’s ability to perform her function in an objective manner, even though her function ought to be both separate from and independent of the business strategies. (See for example “Lehman’s Worst Offense: Risk Management.”)

With both traders and management compensated for revenue generation, and with prudent risk managers acting only as a hindrance to the initiation and exploitation of growth opportunities, there remains little incentive for senior managers to maintain a healthy risk management environment. Rather than cultivating an environment in which risk managers are trained to monitor the real risks (which requires expensive resources, including personnel, data and systems), firms treat them as a burden and a cost center, starving them of the resources necessary to question traders, trades, and trading strategies.

In sum, we remain in the infancy of building a functioning risk control practice at our major banks. We have yet to promote adequate business-peer challenge processes, and our price verification processes remain immature. Credit ratings, if created and applied properly, can provide a healthy starting point for internal skepticism; they can provide the independent credit risk assessment that supplements an analysis performed by the front office or the back office.

Conclusion

CDS spreads are untested as a predictor of long-term default probability on unrated securities. Perhaps the reliability of CDS spreads depends on the underlying referenced entity being rated. There’s no doubt that CDS spreads are useful indicators – but I seriously doubt that they’re anywhere near as useful as ratings in predicting long-term default probabilities or losses.

I remain convinced there's an important place in our market for one or more independent agencies to provide their objective opinions in the form of a rating. For ratings reform to be successful, however, requires that the necessary measures be put in place to ensure that rating analysts are unfettered by market share concerns, and are incentivized only by ratings quality and accuracy. If we can achieve these objectives, ratings will return to providing a meaningful utility.