Wednesday, May 2, 2012

PF2 Launches Open-Source Sovereign and Muni Rating Tool

Hi everyone

For those of you who have been following our last few posts, you'll be happy to know that we've launched PSCF today.

In conjunction with the launch, we're making available sample models for the United States and California.

We've included a set of slides to help you get through this. There's also a white paper taking you through the construction and describing our approach. It's open-source, so feel free to have a bash.

Click around at http://www.publicsectorcredit.org/pscf.html and share your thoughts.

 ~ PF2

PSCF - Press Release

Wednesday, April 25, 2012

Substandard and Porous? A Belated Response to Nate Silver

After S&P downgraded the US last August, Nate Silver analyzed the agency’s record on sovereign debt and found it wanting (See Why S&P’s Ratings Are Substandard and Porous). Silver ran a number of statistical tests, and determined that S&P’s ratings were serially correlated, highly related to the Corruption Perceptions Index and less predictive of default than simple quantitative measures.

Silver’s analysis is impressive for an industry outsider, but it suffers from some deficiencies. For example, he applied a linear scale (AAA = 9, AA = 8, A = 7, etc.) when mapping ratings to decimal values. Given the structure of historical default rates, some sort of geometric scaling would have been more appropriate.
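To see why, consider a quick sketch in Python (the default rates below are rough, made-up placeholders, not any agency's published statistics). A linear scale treats the step from AAA to AA as equal in risk to the step from BB to B, even though the underlying default rates differ by orders of magnitude; converting each grade to the log of its default rate preserves those proportional differences.

```python
import math

# Illustrative cumulative default rates by rating band (made-up, order-of-magnitude only).
default_rates = {
    "AAA": 0.001, "AA": 0.005, "A": 0.02,
    "BBB": 0.05, "BB": 0.15, "B": 0.30, "CCC": 0.50,
}

linear_scores = {r: 9 - i for i, r in enumerate(default_rates)}      # AAA=9, AA=8, A=7, ...
log_scores = {r: -math.log10(p) for r, p in default_rates.items()}   # geometric (log) spacing

for rating in default_rates:
    print(f"{rating:>4}: linear score = {linear_scores[rating]}, "
          f"-log10(default rate) = {log_scores[rating]:.2f}")
```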

Our criticism, however, should not detract from Silver’s core point: that statistical analysis can probably tell us more about sovereign default probabilities than the traditional rating agency approach can.

An Argument for a Model-Based Approach

Certainly, an intensive statistical analysis avoids several of the pitfalls facing sovereign ratings, as currently implemented.

First, a rating methodology that relies heavily on qualitative techniques is vulnerable to bias. The serial correlation Silver found stems from a natural bias at rating agencies against extreme actions. Rather than imposing a large-scale, multiple-notch downgrade, rating committees may be predisposed to implementing a lesser, often single-notch change in the hope that subsequent events will obviate further action. While the biased rater might thus apply a rating inconsistent with the methodology, a computer model, lacking the capacity to “hope,” reverts directly to the honest, brutal truth.

There’s also a more fundamental objection to the rating agency model: a qualitative approach requiring the human touch is labor intensive, yet for-profit rating agencies are notorious for understaffing their sovereign rating groups.

A model-based approach enables more frequent, more intensive analysis – as opposed to the infrequent reviews sovereigns now receive. New data can be loaded into the model at regular intervals and new results calculated. Analysts should still oversee the model parameters and check any results that may look suspicious.

The Ingredients of a Sovereign Debt Model

Model-based approaches to sovereign risk often involve credit default swap spreads, as the independent or dependent variable. A model can either extract default probabilities from CDS spreads, or attempt to predict those spreads on the grounds that they are a proxy for actual risk. Silver takes this latter approach in his piece.

The use of market inputs in credit models is fairly common. Such a modeling choice often implicitly or explicitly relies on the Efficient Market Hypothesis – the idea that market prices incorporate all relevant information and are thus the best available estimate of value.

Since the financial crisis, critics of EMH have sharpened their attacks. But whether or not you subscribe to rational expectations, the use of sovereign CDS is hard to defend. Most EMH advocates recognize that only liquid markets are efficient. Since liquid markets have numerous participants, their equilibrium prices incorporate substantial amounts of information.

This is not the case with sovereign CDS markets. Kamakura Corporation examined sovereign trading volumes reported by DTCC for late 2009 and 2010, and found that the vast majority of sovereign CDS contracts were traded fewer than five times per day (excluding inter-dealer trades). Five transactions per day falls well short of a liquid market, and thus the information content of sovereign CDS spreads is doubtful at best.

Absent meaningful CDS spread data, what else can a government credit model rely upon?

While one might look at Corruption Perception Indices, per capita GDP and/or terms of trade, it is not clear that these inputs will differentiate between advanced economy sovereigns and sub-sovereigns. Fortunately, government issuers produce reams of actual and projected fiscal data. This information, combined with demographic inputs and economic forecasts, can take us a long way.

When we suggest that budget forecasts can be employed in government credit modeling, skeptics point out accuracy issues with government forecasters.

The most famous forecasting error is attributed to the US CBO, which predicted trillions of dollars in surpluses for the first decade of the 21st century, instead of the trillions of dollars in deficits that actually appeared.

CBO forecasts are usually published in the form of point estimates. To be reliable, they have to reflect accurate forecasts of interest rates, GDP, tax levels and a host of other macroeconomic and policy variables. Given the number of variables and our (collective) limited capacity to predict, the point estimate is bound to be wrong. That notwithstanding, we can be pretty certain that these variables will fall within a given range. For example, it is almost certain that US GDP growth will be somewhere between -3% and +6% next year (2013). If we run a large number of scenarios with different GDP growth rates within this range, it is likely that some of the trials will closely approximate the ultimate fiscal outcome.

We can run a large number of budget scenarios by using a Monte Carlo simulation – in which scenarios are created by generating random numbers. Budget simulation forms the basis for PF2’s Public Sector Credit Framework that we will release next week. The tool allows the user to enter a default threshold in the form of a fiscal ratio; create macroeconomic series that vary with each trial through linkages to random numbers; and design fiscal series that rely on one or more of these macroeconomic elements. If you would like to learn more about this technology, please contact us at info@pf2se.com, or call +1 212-797-0215.
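To give a flavor of what such a simulation looks like, here is a deliberately simplified sketch in Python. It is not the PSCF implementation: the hypothetical government, the default threshold, the growth range and the fiscal linkages below are all illustrative assumptions, but the mechanics – random macroeconomic draws feeding dependent fiscal series, checked against a fiscal-ratio threshold – are the same in spirit.

```python
import random

def fiscal_crisis_probability(trials=10_000, years=30, threshold=3.0, seed=42):
    """Share of trials in which a hypothetical government's debt exceeds
    `threshold` times its revenues (a default threshold expressed as a fiscal ratio)."""
    rng = random.Random(seed)
    crises = 0
    for _ in range(trials):
        debt, revenue, spending = 0.60, 0.30, 0.30   # starting shares of GDP (illustrative)
        for _ in range(years):
            g = rng.uniform(-0.03, 0.06)             # random real GDP growth between -3% and +6%
            revenue *= 1 + 0.5 * g                   # revenue/GDP drifts with growth (assumed linkage)
            spending *= 1 - 0.2 * g                  # spending/GDP is counter-cyclical (assumed linkage)
            interest = 0.04 * debt                   # flat 4% effective interest rate (assumption)
            deficit = spending + interest - revenue
            debt = (debt + deficit) / (1 + g)        # debt/GDP rolls forward with the deficit and growth
            if debt > threshold * revenue:           # fiscal-ratio default threshold breached
                crises += 1
                break
    return crises / trials

print(f"Probability of breaching the fiscal threshold within 30 years: {fiscal_crisis_probability():.1%}")
```

The output is not a point estimate but a probability of breaching the threshold at some point over the horizon – which is much closer to what a credit rating is supposed to summarize.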
------------------------
Contributed by PF2 consultant Marc Joffe. Marc previously researched and co-authored Kroll Bond Rating Agency’s Municipal Default Study. This is the last of four blog posts introducing PF2’s Public Sector Credit Framework. Previous posts on this topic may be found here, here and here.

Wednesday, April 18, 2012

Pro Bono Finance

Lawyers fight to save death row inmates. Doctors provide charity care – treating the indigent, often without government reimbursement. In the financial services industry, volunteer work usually takes the form of pitching in at schools and cleaning parks. We finance folks lack a tradition of using our skills for public service. With our reputation in tatters, perhaps it is time to begin such a tradition.

Fears of sovereign and municipal debt crises offer a worthy volunteer opportunity. Government debt problems can easily become matters of life and death. Argentina’s sovereign debt crisis claimed 24 lives in December 2001. In Greece, crisis-related protests have claimed at least 5 lives and caused over 300 injuries. Annual suicide rates are up about 20 percent, as people despair over their diminished circumstances.

A US federal debt crisis could similarly lead to violent protests, fatalities and widespread psychological damage; it could also be accompanied by high levels of inflation and sudden, sharp cuts in benefits. Older people dependent on savings and social insurance payments would be especially hard hit.

Because hurricanes and tornados kill and injure, scientists have invested substantial time and effort in forecasting these natural disasters and helping members of the public avoid them. Fiscal crises – a type of human-made disaster – can also be anticipated and potentially curbed, or even avoided.

Last summer’s debt ceiling debate was a failed opportunity to avoid a US fiscal crisis. As we look back on the debate, it becomes evident that false and misleading rhetoric frequently crowded out accurate information. Among the myths that plagued last year’s discussion:

  • Failing to raise the debt ceiling would have inevitably triggered a default.1

  • The US government has never defaulted in its entire history (see our earlier blog post for the facts).

  • The nation’s long-term budget imbalance can be resolved simply by controlling domestic discretionary spending or allowing the Bush tax cuts on high earners to expire (neither of these steps generates enough savings to avoid future problems).

  • A 90% Debt-to-GDP ratio will trigger a fiscal crisis (see Japan).

  • We need to balance the budget to avoid a fiscal crisis (the Debt-to-GDP ratio will improve as long as the stock of debt grows more slowly than GDP; see the short sketch after this list).

  • Rapid economic growth is impossible if the federal government spends more than 20% of GDP (see 1999 economic and fiscal statistics for a refutation of this contention).
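On the balanced-budget myth in particular, the arithmetic is worth spelling out. In the toy example below (with made-up growth rates and starting figures), the government runs a deficit every year, yet its Debt-to-GDP ratio falls steadily because GDP grows faster than the debt stock:

```python
debt, gdp = 10_000.0, 15_000.0       # made-up starting figures ($bn)
for year in range(1, 11):
    debt *= 1.02                     # the debt stock grows 2% a year (i.e., deficits persist)
    gdp *= 1.04                      # nominal GDP grows 4% a year
    print(f"Year {year:2d}: debt/GDP = {debt / gdp:.1%}")
# The ratio declines every year even though the budget never balances.
```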

The financial community can provide a useful community service by educating the public about sovereign, state and municipal credit issues. And when I say educate, I don’t mean pontificate. Many of us - this writer included - hold views about what should be done about taxes and spending. Mixing these opinions with facts is not an unambiguous public service. Just as we strive to dispassionately evaluate credit and select investment opportunities, we can and should separate fact from opinion when informing voters about their fiscal options.

Mary Meeker and her colleagues issued a free report entitled “USA, Inc.” that provided the type of service I am suggesting. The report, which received substantial publicity, analyzed the nation’s fiscal position in a manner similar to that of an equity investor analyzing a business – with hundreds of slides describing revenue and expense drivers.

The next challenge is to broaden the scope of analysis to provide a credit perspective with its focus on default risk and recovery. Also, rather than misapply corporate or structured debt analytics, we need a fresh approach that directly addresses the unique challenges of assessing government debt. We’ll also be better positioned if the analysis is ongoing, or even real-time, rather than a “snapshot” analysis provided in the Meeker report.

Ideally, policy positions would be grounded in a well-structured, up-to-date model. Such a model would assist Congress and the Administration in defining the task at hand and dispelling the myths surrounding it, and it would encourage an environment in which commentators are required to support their opinions with quantifiable data, not simply foggy criteria.

At PF2, we will kick-start the effort to dispel the fog of opinion by offering a free, open source Public Sector Credit Framework (PSCF). Our framework will be accompanied by a timely, transparent US federal budget simulation model, which we’ve designed to estimate the likelihood of a fiscal crisis in each of the next thirty years. Although we don’t know everything about this topic, we do know that many financial professionals are equipped to improve the software and the model. We encourage you all to join us in enhancing the analysis.

Few can afford to provide pro bono services exclusively – and we are no exception. If the framework generates interest, we may use it as a platform for valuing sovereign and municipal bonds – a service we hope to monetize. But that notwithstanding, the software and model are being supplied at no charge under GNU’s Lesser General Public License, for anyone to use and improve. Assuming interest is sufficient, we will regularly update the US federal model as a public service.

Many of us in the financial service industry have done quite well. Having reaped some of the rewards, we think we have found a great way to give back. We look forward to your collaboration.

------------------------------------
1 Treasury could have avoided a default through some combination of asset sales and spending reductions. The President could have invoked a clause in the 14th Amendment of the Constitution to mandate principal and interest payments that would have exceeded the debt ceiling. Congress would then have had to file a legal case to overturn the President’s order.

Contributed by PF2 consultant Marc Joffe. Marc previously researched and co-authored Kroll Bond Rating Agency’s Municipal Default Study. This posting is the third in a series of posts leading up to May 2nd. The prior pieces can be accessed by clicking here and here.

Wednesday, April 11, 2012

Credit Rating Agency Models and Open Source

When S&P downgraded the US from AAA to AA+, the US Treasury accused the rating agency of making a $2 trillion mathematical error. S&P initially denied this accusation, but adjusted some of its estimates in a subsequent press release. Economist John Taylor defended S&P, contending that its calculations were based on a defensible set of assumptions, and thus could not be categorized as a mistake. S&P’s model, which projected future debt-to-GDP ratios, has not been made public. As a result, it is difficult for outside observers to decide whom to believe: the rater or the rated.

There are at least three ways a model’s results can be wrong: the model’s code may not function as intended; the known inputs may be entered incorrectly; or the assumptions may be misapplied. In cases as important as the evaluation of US sovereign debt, we think rating agencies and the investing public would be better off if the relevant models were publicly available. Some may argue that the inputs to the models are proprietary or that they reflect qualitative assumptions valuable to the rating agencies – i.e., that they are a “secret sauce.” But, even if rating agencies want to keep their assumptions proprietary, making the models themselves available would decrease the likelihood of rating errors arising from software defects.

Keeping one’s internal processes internal is the traditional way. Manufacturers assume that consumers don’t want to see how the sausages are made. In the internet era, it is now much easier to produce the intellectual equivalent of sausages in public – and, as it happens, many consumers are interested in the production process and even want to get involved. Wikipedia provides an excellent example of the open, collaborative production of intellectual content: articles are edited in public and the results are often subject to dispute. Writers get almost instantaneous peer review and the outcome is often rapid iteration moving toward the truth. In their books Wikinomics and Macrowikinomics, Don Tapscott and Anthony Williams suggest that Wikipedia’s mass collaboration style is the wave of the future for many industries – including computer software.

Many rating methodologies, especially in the area of structured finance, rely upon computer software. At the height of the last cycle, tools that implemented rating methodologies, such as Moody’s CDOROM™, were popular with both issuers and investors wondering how the agencies might look at a given transaction. While the algorithms used by these programs are often well documented, the computer source code is usually not released into the public domain.

Over the last two decades, the software industry has seen a growing trend toward open source technology, in which all of a system’s underlying program code is made public. The best-known example of an open source system is Linux, a computer operating system that runs most servers on the internet. Other examples of popular open source programs include Mozilla’s Firefox web browser, the WordPress content management system and the MySQL database.

In financial services, the Quantlib project has created a comprehensive open source framework for quantitative finance. The library, which has been available for more than 11 years, includes a wide array of engines for pricing options and other derivatives.

Open source allows users to see how programs work and, with the help of developers, to customize software fully to meet their specific needs. Open source communities, such as those hosted on GitHub and SourceForge, enable users and programmers from all over the world to participate in the process of debugging and enhancing the software.

So how about credit rating methodologies? Open source seems especially appropriate for rating models. Rating agencies realize relatively little revenue from selling rating models directly; the models serve mainly to facilitate revenue generation through issuer-paid ratings.

Open source enables a larger community to identify and fix bugs. If rating model source code were in the public domain, investors and issuers would have a greater chance to spot issues, and rating agencies would be prevented from covering up modeling errors by surreptitiously changing their methodologies. In 2008, The Financial Times reported that Moody’s errantly awarded Aaa credit ratings to a number of Constant Proportion Debt Obligations (CPDOs) due to a software glitch. The error was fixed, but, according to the FT report, the incorrectly rated securities were not immediately downgraded. Had the rating software been open source, it would have been much more difficult to conceal this error, and openness would have offered the possibility of a positive feedback loop – an investor or other interested party could have found and fixed the bug on Moody’s behalf.

Not only do open source rating models promote quality, they may also reduce litigation. The SEC issued Moody’s a Wells Notice in respect of the above-mentioned CPDO issue, and may yet bring suit. (A Wells Notice is a notification of intent to recommend that the US government pursue enforcement proceedings, and is sent by regulators to a company or a person.) Investors have brought suit against the rating agencies where they felt the ratings were inappropriate, for model-related errors or otherwise. By unveiling the black box, the rating agencies would be taking an active approach to buffering against litigation, and would enjoy the material defense that, “yes, we may have erred, but you were afforded the opportunity to catch our error – and didn’t.”

Unlike the CPDO model employed by Moody’s, the S&P US sovereign "model" likely took the form of a simple spreadsheet containing adjusted forecasts from the Congressional Budget Office. In contrast to the structured and corporate sectors, there are relatively few computer models for estimating sovereign and municipal default probabilities. While little modeling software is available for this sector, accurate modeling of government credit can be seen as a public good. Bond investors, policy makers and citizens themselves could all benefit from more systematic analysis of government solvency.

Open source communities are a private response to public goods problems: individuals collaborate to provide tools that might otherwise appear only in the realm of licensed software. Thus, open source government default models populated with crowd-sourced data may be the best way to fill an apparent gap in the bond analytics market.

On May 2nd, PF2 will contribute an open source Public Sector Credit Framework, which is aimed at filling this analytical gap, while demonstrating how future rating models can be distributed and improved in an iterative, transparent manner. If you wish to participate in beta testing or learn more about this technology please contact us at info@pf2se.com, or call +1 212-797-0215.

--------------------------------------------
Contributed by PF2 consultant Marc Joffe. Marc previously researched and co-authored Kroll Bond Rating Agency’s Municipal Default Study. This posting is the second in a series of posts leading up to May 2nd. The prior piece can be accessed by clicking here.

Wednesday, April 4, 2012

Multiple Rating Scales: When A Isn’t A

Philosophers from Aristotle to Ayn Rand have contended that “A is A.” Apparently none of these thinkers worked at a credit rating agency - in which “A” in one department may actually mean AA or even BBB in another. While the uninitiated might naively assume that various types of bonds carrying the same rating have the same level of credit risk, history shows otherwise.

During the credit crisis, AAA RMBS and ABS CDO tranches experienced far higher default rates than similarly rated corporate and government securities. Less well known is the fact that municipal bonds have for decades experienced substantially lower default rates than identically rated corporate securities – and that the rating agencies never assumed that a single A-rated issuer ought to carry the same credit risk in both sectors. This discrepancy was noted in Fitch’s 1999 municipal bond study and confirmed by Moody’s executive Laura Levenstein in 2008 Congressional testimony on the topic. Later in 2008, the Connecticut attorney general sued the three major rating agencies for under-rating municipal bond issues relative to other asset categories. (The suit was recently settled for $900,000 in credits for future rating services, but without any admission of responsibility). Last year, three economists – Cornaggia, Cornaggia and Hund – reported that government credit ratings were harsher than those assigned to corporates, which, in turn, were more severe than those assigned to structured finance issues.

One might ask why it is important for ratings in different credit classes to carry the same expectation in terms of either default probability or expected loss. Perhaps we should accept the argument that ratings are intended simply to provide a relative measure of risk among bonds within a given asset class.

There are at least two problems with this approach. First, it is unnecessarily confusing to the majority of the population that is unaware of technical distinctions in the ratings world. Second, it creates counterproductive arbitrage opportunities.

If an insurer is rated AAA on a more lenient scale than insurable entities in another asset class, the insurer can profitably "sell" its AAA rating to those entities without creating any real value in the process.
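A stylized illustration (with made-up yields and premiums, not actual market data) shows how the arbitrage works even when the insurance adds no real protection:

```python
# Hypothetical figures for illustration only.
issue_size = 100_000_000        # a $100mm general obligation bond
yield_single_A_muni = 0.045     # market yield at the issuer's single-A municipal-scale rating
yield_wrapped_AAA = 0.042       # market yield once wrapped with the insurer's AAA label
insurance_premium = 0.0015      # annual premium charged by the monoline insurer

interest_saved = issue_size * (yield_single_A_muni - yield_wrapped_AAA)
premium_paid = issue_size * insurance_premium
print(f"Issuer saves ${interest_saved:,.0f} a year in interest and pays ${premium_paid:,.0f} in premium.")
# Both sides appear to gain; yet if the insurer is actually riskier than the issuer,
# no credit protection has been created. The profit comes purely from the fact that
# the two letter grades sit on inconsistent rating scales.
```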

Municipal bond insurance is a great example. Monoline bond insurers like AAA-rated Ambac, FGIC and MBIA insured bonds issued by states, cities, counties and other municipal issuers for three decades prior to the 2008 financial crisis. In some cases, the entities paying for insurance were of a stronger credit quality than the insurers. As it happened, the insurers often failed while the issuers survived, leaving one to wonder why the insurance was necessary.

During this period, general obligation bonds had very low overall default rates. According to Kroll Bond Rating Agency’s Municipal Default Study, estimated annual municipal bond default rates by issuer count have been consistently below 0.4% since 1941. Similar findings for the period 1970-2010 are reported in The Bloomberg Visual Guide to Municipal Bonds by Robert Doty. This 0.4% annual rate applies to all municipal debt issues, including unrated issues and revenue bonds. The annual default rate for rated, general obligation bonds is less than 0.1%.

Given this long period of excellent performance, one might reasonably expect most states and other large municipal issuers with diversified revenue bases to be rated AAA. No state has defaulted on its general obligation issues since 1933, and most have relatively low debt burdens when compared to their tax base. Despite these facts, the modal rating for states is typically AA/Aa, with several in the A range. (This remains the case despite certain rating agencies’ claims that they have recently scaled up their municipal bond ratings to place them on a par with corporate ratings.)

The depressed ratings created an opportunity for municipal bond insurers to sell policies to states that did not really need them. For example, the State of California paid $102 million for municipal bond insurance between 2003 and 2007. Negative publicity notwithstanding, the facts are that single A rated California has a Debt to Gross State Product ratio of 5% (in contrast to a 70% Debt/GDP ratio for the federal government) and that interest costs represent less than 5% of the state’s overall expenditures. While pension costs are a concern, they are unlikely to consume more than 12.5% of the state’s budget over the long term – not nearly enough to crowd out debt service.

California provides but one example. The Connecticut lawsuit mentioned above also cited unnecessary bond insurance payments on the part of cities, towns, school districts, and sewer and water districts.

Meanwhile, AAA-rated municipal bond insurers carried substantial risks, evident to many not working at rating agencies. For example, Bill Ackman found in 2002 that MBIA was 139 times leveraged. As reported in Christine Richard’s book Confidence Game, Ackman repeatedly shared his research with rating agencies – to no avail.

This imbalance between the ratings of risky bond insurers and those of relatively safe municipal issuers essentially created the monoline insurance business – a business that largely disappeared with the mass bankruptcy and downgrading of insurers during the 2008 crisis.

Inconsistent ratings across asset classes thus do have real-world costs. In the US, taxpayers across the country paid billions of dollars over three decades for unneeded bond insurance. Individual municipal bond investors, often directed by their advisors to focus only on AAA securities, missed opportunities to invest in tens of thousands of bonds that should credibly have carried AAA ratings, but whose ratings were depressed by the raters’ inopportune choice of scale.

We believe that one reason for the persistent imbalance between municipal, corporate and structured ratings is the dearth of analytics directed at government securities. Rating agencies and analytic firms offer models (and attendant data sets) that estimate default probabilities and expected losses for corporate and structured bonds. Such tools are relatively rare for government bonds. Consequently, the market lacks independent, quantitatively based analytics that compute credit risks for these instruments, and this lack of alternative, rigorously researched opinions allows the incorrect rating of US municipal bonds to continue unchecked by any corrective feedback loop.

Next month, PF2 will do its part to address this gap in the marketplace with the release of a free, open source Public Sector Credit Framework, designed to enable users to estimate government default probabilities through the use of a multi-period budget simulation. The framework allows a wide range of parameterizations, so you may find it useful even if you disagree with the characterization of municipal bond risk offered above. If you wish to participate in beta testing or learn more about this technology please contact us at info@pf2se.com, or call +1 212-797-0215.

--------------------------------------------
Contributed by PF2 consultant Marc Joffe. Marc previously researched and co-authored Kroll Bond Rating Agency’s Municipal Default Study.

Thursday, February 16, 2012

Withdrawing, Confidently

As structured finance deals wind down and the asset pools grow smaller, a situation often arises in which the reliability of outstanding tranche ratings – previously supported by portfolio-level diversification – hinges on the performance of one or two bonds. The problem is compounded, of course, by the fact that many of the models work best for large, diverse portfolios and often break down when the portfolios become very small.
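A quick simulation makes the point (this is a sketch, not any agency's methodology; the 5% default probability and the pool sizes are arbitrary assumptions). With 100 names, realized pool losses cluster near the expected 5%; with two names, the only possible outcomes are 0%, 50% or 100%, so the "right" rating hinges entirely on one or two bonds:

```python
import random

def loss_fraction_distribution(pool_size, prob_default=0.05, trials=10_000, seed=1):
    """Distribution of the fraction of a pool that defaults, for a given pool size."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(trials):
        defaults = sum(rng.random() < prob_default for _ in range(pool_size))
        frac = round(defaults / pool_size, 2)
        counts[frac] = counts.get(frac, 0) + 1
    return dict(sorted(counts.items()))

print("100-name pool:", loss_fraction_distribution(100))   # outcomes cluster around 5%
print("  2-name pool:", loss_fraction_distribution(2))     # outcomes: 0.0, 0.5 or 1.0 only
```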

The question then becomes, if a rated tranche can just as easily be rated AAA or D, what does one do?

This tricky situation, now part skill and part luck, calls into question the predictive content of highly sophisticated rating models when the outcome is not really a model-driven result but simply a short-term occurrence (e.g., a payoff or a default) or, in a credit default swap environment, the lack of an occurrence.

Moody’s and S&P suffered severe blushes in January when a well-structured CDO, backed heavily by other CDOs and RMBS (including substantial subprime, yes subprime), paid off in full – with their outstanding ratings on all tranches having been in the CC to CCC range. What was interesting was that both rating agencies had revisited this deal as recently as June of last year.
From our conversation with analysts at one of the agencies, what happened here was simply that as the deal was winding down, the manager was able to sell the few remaining assets at prices high enough to pay down all the notes, rendering irrelevant the Monte Carlo default simulation trials being run by the raters. In other words, the model let them down.

In an interesting, perhaps prudent decision, S&P took a different course in an announcement they made earlier today, entitled “S&P Takes Various Ratings Actions on 30 U.S. RMBS Deals.” As certain deals dwindled down, compromising the predictive content of their ratings, they chose to simply withdraw the ratings.
“We subsequently withdrew our ratings on certain affected classes that are backed by a pool with a small number of remaining loans. If any of the remaining loans in these pools default, the resulting loss could have a greater effect on the pool's performance than if the pool consisted of a larger number of loans. Because this performance volatility may have an adverse affect on our outstanding ratings, we withdrew our ratings on the related transactions.”
While noteholders may be frustrated to see the ratings withdrawn, it augurs well that a rating agency is able and willing to say that it cannot have confidence in the outcome, and therefore chooses to withdraw its rating rather than have investors rely, perhaps falsely, on it.

Friday, February 3, 2012

Analysis of The Shortcomings of Statistical Sampling in the Mortgage Loan Due Diligence Process

This is a popular litigation-related piece on our website we thought we'd share through this post (pdf version available here) - enjoy the read.



Introduction

Financial institutions, when assembling mortgage pools for the purpose of inclusion in residential mortgage-backed securities (RMBS), often hire independent analytical companies, like Clayton Holdings LLC (“Clayton”), to perform due diligence on the loans and flag any that are problematic.

Leading up to the financial downturn, Clayton reviewed mortgages for its clients - investment and commercial banks and lending platforms, including those of Bear Stearns, Barclays, Bank of America, C-Bass, Countrywide, Credit Suisse, Citigroup, Deutsche Bank, Doral, Ellington, Freddie Mac, Greenwich, Goldman, HSBC, JP Morgan, Lehman, Merrill Lynch, Morgan Stanley, Nomura, Société Générale, UBS and Washington Mutual (the “Issuers”). As such Clayton was purportedly one of the larger due diligence companies that analyzed whether these loans met specifications like loan-to-value ratios, credit scores and the income levels of borrowers.

Clayton describes, in the presentation it provided to the Financial Crisis Inquiry Commission (“FCIC”), the results of its review of a total of 911,039 mortgage loans between Q1 2006 and Q2 2007 1. As can be seen from the chart, of the loans shown to Clayton, Clayton determined approximately 72% of them to be in compliance, and 28% of them to be out of compliance with the standards tested, or “non-conforming.”

Upon determining that a loan failed to meet its guidelines, an Issuer (i.e., Clayton’s client) would have the ability to exercise its contractual right to “put back” these non-conforming loans to the mortgage lenders – New Century, Fremont, Countrywide, Decision One Mortgage – rather than include them in securitizations.

The regulatory bodies, and the media, have concentrated heavily on the sizeable portions of non-conforming loans, and the lowering of underwriting standards throughout this period; but for this analysis, we concentrate on a more illuminating aspect of the way in which non-conforming loans ultimately found their way into the securitized RMBS pools.

There are at least two ways that non-conforming loans can find their way into the securitizations:
  • First, the Issuer may choose to waive the loan back into the pool, despite its being originally rejected by Clayton.
  • Second, and a far more consequential mechanism: the Issuer may simply not show the loan to Clayton at all.


The Intricacies of Loan Sampling

Importantly, it seems to have been common practice for Issuers to show only a sample of the loans to Clayton. A sample risks being unrepresentative of the population of loans, but under strict conditions random sampling can provide an effective statistical approximation. Sampling is cheaper than a full review and, if the sample is well chosen, can accurately reflect the pool.

The objective of sampling is satisfied if the randomly-selected sample is sufficiently large, and is deemed to be in order. Alternatively, if the sample fails to meet expectations, the entire portfolio ought to be revisited. However, in the mortgage due diligence process the samples were often deemed to be problematic – they resulted in an average of 28% of loans failing their criteria. Importantly, the samples were then adjusted, as we understand it, but the original portfolios were not: the Issuers would only put back certain non-conforming loans from that sample.

In this case, the resulting sample, after throwing out certain non-conforming loans, fails to accurately depict the remaining portfolio of loans it was chosen to represent.



How the Sampling Process Worked, and Difficulties Therewith

Former President and COO of Clayton, D. Keith Johnson, explained to the FCIC during its hearing of September 2010 that in the 2004 to 2006 time period, sample sizes fell to the region of two to three percent.2 As sample sizes decrease, which they did, sampling alone begins to undermine the effectiveness of the due diligence process.

The media have focused their attention on what happened to the 28% of loans found to be non-conforming – the slices in red in the associated charts. Indeed, many of these non-conforming loans, approximately 39%, were not “kicked out” or put back to the mortgage lenders, but were “waived” back into the to-be-securitized portfolio. This 39% is substantial, and a factor worthy of the media’s attention.

But the game-changing fact lies not with these 28% non-conformers, nor with the 39% of them that remained in the securitized pool. They are only part of a sample, and when the sample is very small, its contribution to the composition of the portfolio as a whole is correspondingly small. Rather, it is more prudent to consider the composition of the pool as a whole.

For illustrative purposes, let us assume that the sample loans shown to Clayton represented 3% of the pools, on the higher end of those referred to in the abovementioned Johnson hearing. Let us conservatively assume that the sample provided to Clayton was truly randomly selected.3

For a pool of 10,000 loans, Clayton would have been presented with approximately 300 loans, or 3%.

As we can see from the analysis performed, the effect of “throwing out” 61% of the sampled non-conforming loans is marginal: the pool’s overall composition decreased only from 28% non-conforming to 27.67% non-conforming thanks to the due diligence process. Even had the Issuers returned all 84 non-conforming loans, the overall portfolio would not have been greatly altered – non-conforming loans would have declined from 28% to 27.16%.

When a random sample is tampered with, the final product, by definition, no longer represents the original pool. Here, the adjusted sample suggests that ultimately 89% of the pool is conforming and 11% non-conforming (11% ≈ 28% x 39%). But given the reality of the situation, with the rest of the original pool left untouched, roughly 27.7% (not 11%) of the overall pool remained non-conforming, even after the put-backs administered as part of the due diligence process.
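The arithmetic above can be reproduced in a few lines. The sketch below uses the illustrative assumptions already stated (a 10,000-loan pool, a 3% sample, 28% non-conforming, 61% of the sample's failures put back); the exact percentages shift slightly with rounding and with whether the put-back loans are removed from the denominator.

```python
pool_size = 10_000
nonconforming_rate = 0.28
sample_size = int(pool_size * 0.03)                      # 300 loans shown to Clayton

pool_nonconforming = pool_size * nonconforming_rate      # 2,800 loans in the full pool
sample_nonconforming = sample_size * nonconforming_rate  # 84 loans flagged in the sample
put_back = round(sample_nonconforming * 0.61)            # ~51 loans returned to the lender

remaining_pool = pool_size - put_back
remaining_nonconforming = pool_nonconforming - put_back

print(f"Sample after adjustment: "
      f"{(sample_nonconforming - put_back) / sample_size:.1%} non-conforming")   # ~11%
print(f"Securitized pool:        "
      f"{remaining_nonconforming / remaining_pool:.1%} non-conforming")          # still ~28%
```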

A well-selected random sample can effectively capture the characteristics of a pool under certain conditions. But an altered sample seldom accurately reflects the original pool.



1 http://fcic-static.law.stanford.edu/cdn_media/fcic-testimony/2010-0923-Clayton-All-Trending-Report.pdf
2 http://fcic.law.stanford.edu/resource/interviews#J
3 Of course, the sample sizes used and the percentages rejected by Clayton will differ from Issuer to Issuer. So too will the waiver rate.

Thursday, January 26, 2012

Illinois Attorney General Sues S&P – Initial Thoughts

Credit risk ratings are becoming a risky business.

Yesterday’s filing of the complaint against S&P (MHP) centers, in essence, on an allegation of false advertising. Stepping into this issue for a moment: one of the key defenses offered by the raters is that their ratings are protected under the First Amendment as the expression of an opinion (their “speech”). But as law professor Eugene Volokh opined in his May 2009 letter to the House Committee, commercial advertising – “speech aimed at proposing a commercial transaction” – is “much less constitutionally protected than other kinds of speech.”

In this case, the AG is not really focusing on whether the ratings were wrong so much as on the claim that S&P advertised it was following a certain code of conduct, one meant to ensure that the appropriate levels of independence and integrity were brought to the ratings process.

A former SEC enforcement official, Pat Huddleston, once explained that "[when] I say the [financial] industry is dirty, I don't mean to imply everyone in the industry is dirty," … "[only] that the industry typically promises something it has no intention of delivering, which is a client-first way of operating." This is essentially what the complaint argues: that S&P “misrepresented its objectivity” while offering a service that was “materially different from what it purported to provide to the marketplace.”

This goes back, really, to the key reform measure Mark proposed before the Senate in 2009 – that rating agencies would do well to separate themselves from commercial interests, by building a formidable barrier around the ratings process.


First, put a “fire wall” around ratings analysis. The agencies have already separated their rating and non-rating businesses. This is fine but not enough. The agencies must also separate the rating business from rating analysis. Investors need to believe that rating analysis generates a pure opinion about credit quality, not one even potentially influenced by business goals (like building market share). Even if business goals have never corrupted a single rating, the potential for corruption demands a complete separation of rating analysis from bottom-line analysis. Investors should see that rating analysis is virtually barricaded into an “ivory tower,” and kept safe from interference by any agenda other than getting the answer right. The best reform proposal must exclude business managers from involvement in any aspect of rating analysis and, critically also, from any role in decisions about analyst pay, performance and promotions.
Two other elements jump out immediately from the complaint:

First, the complaint specifically argues that the rating agency “misrepresented the factors it considered when evaluating structured finance securities.” Next, the complaint tries to tie S&P’s actions to its publicly-advertised code of conduct, arguing that its actions were inconsistent with the advertised code.

In respect of actions being inconsistent with the code, certain of these arguments are commonplace, such as the contention that the rating agencies did not allocate adequate personnel, contrary to what is advertised in the code. This of course becomes a contentious issue – you can see S&P coming back with copious evidence of situations in which it did “allocate adequate personnel and financial resources.” But the complaint homes in on the factors considered in producing a rating, and it focuses on two parts of the code:



Section 2.1 of S&P’s Code states: “[S&P] shall not forbear or refrain from taking a Rating Action, if appropriate, based on the potential effect (economic, political, or otherwise) of the Rating Action on [S&P], an issuer, an investor, or other market participant.”

and…

Section 2.1 of S&P’s Code states: “The determination of a rating by a rating committee shall be based only on factors known to the rating committee that are believed by it to be relevant to the credit analysis.”
This brings back to mind, disturbingly, a recent New York Times article (Ratings Firms Misread Signs of Greek Woes) which focuses on the deliberations within Moody’s (MCO) and their concerns about the deeper repercussions of downgrading Greece – rather than the specifics of credit analysis:


“The timing and size of subsequent downgrades depended on which position would dominate in rating committees — those that thought the situation had gotten out of control, and that sharp downgrades were necessary, versus those that thought that not helping Greece or assisting it in a way that would damage confidence would be suicidal for a financially interconnected area such as the euro zone,” Mr. Cailleteau wrote in an e-mail.

The question, then, is whether rating committees were focused on credit analysis, or whether other concerns were at play, aside even from typical business interests. The concerns for rating agencies, from a legal perspective, can become quite real when the debate centers not on ratings accuracy, but on whether the rating accurately reflected their then-current publicly available methodology. There may be substantial risks, therefore, in delaying a downgrade of a systemically important sovereign or institution (such as a too-big-to-fail bank or a key insurance company) when such a downgrade is warranted by the entity’s financial condition, or in providing favorable treatment to certain companies or sovereigns based on their relative level of interconnectedness.

The allegation that they misrepresented the factors considered in their analysis opens another can of worms for rating agencies, as they’ll subsequently be increasingly focused on disclosing the sources of the information they relied upon. To the extent they’re relying on the issuing entity (in cases in which the issuing entity is itself the paying customer), there’s substantial concern that such reliance becomes a disclosure issue, as the investor may otherwise have assumed the rating agency was independently verifying the information. This was a frequent problem in the world of structured finance CDOs such as those described in the AG’s complaint.

Last, but not least, the complaint focuses on the effectiveness of ratings surveillance. This is a topic of importance to us, as we feel that proper surveillance alone may have substantially diminished the magnitude of the crisis. At the very least, certain securitizations that ultimately failed may not have been executed had underlying ratings been appropriately monitored, and several resecuritizations may have become impossible, limiting the proliferation of so-called toxic assets. See for example: Barriers to Adequate Ratings Surveillance.

That’s all for now. There’s a lot more to this complaint, so we suggest you check it out here.

Monday, January 23, 2012

The Mortgage Litigation Hangover (and who knew what, when)

Aside from the relatively new foreclosure disputes, including those relating to MERS, the big banks continue to suffer the ill-effects of lawsuits relating to their original portfolio selection and sale of mortgage-backed securities (and derivatives thereof, like ABS CDOs).

A number of these cases were dismissed last year, as judges often failed to sympathize with the plaintiffs' arguments that they were "duped." On the back of Congressional and FCIC-related testimony, plaintiffs have been able to strengthen their arguments as they search for viable legal theories that satisfy these, potentially higher, pleading standards.

The two cases filed today focus on what the plaintiffs believe were material informational asymmetries between the buyers and the sellers, and on possible scienter on the part of the sellers.

In John Hancock Life Insurance Co. v. JPMorgan Chase & Co., 650195/2012, New York state Supreme Court (Manhattan), the complaint argues that:
Defendants JPMorgan, Bear Stearns, WaMu, and Long Beach knew about the poor quality of the loans they securitized and sold to investors like Plaintiffs, because in order to continue to keep their scheme running, they completely vertically integrated their RMBS operations by having affiliated entities at every stage of the process. In addition, Defendants JPMorgan, Bear Stearns, WaMu, and Long Beach were aware of lending abuses on the part of the third party originators they purchased loans from due to, inter alia, their financial ties to the third party originators and their reviews of loan documentation and performance.

In Sealink Funding Ltd. v. Morgan Stanley, 650196/2012, New York state Supreme Court (Manhattan), the plaintiff contends that because the seller never disclosed certain of its practices to the investors, the "investors were not compensated for the additional risks that they unknowingly took on in purchasing those Morgan Stanley RMBS."
Morgan Stanley knew or recklessly disregarded that those lenders were issuing high-risk loans that did not conform to their respective underwriting standards. Morgan Stanley did, in fact, conduct extensive due diligence on the loans it purchased for securitization, as represented in the Offering Materials. In the course of that extensive due diligence process, which, in many instances, included an extensive re-underwriting review of the loans it purchased by an independent third-party due diligence provider, Clayton Holdings, Inc. (“Clayton”), Morgan Stanley learned that the originators routinely and flagrantly disregarded their own underwriting guidelines, originated loans based on wildly inflated appraisal values, and manipulated the underwriting process in order to issue loans to borrowers who had no plausible means to repay them. Indeed, both the President of Clayton and the head of Morgan Stanley’s own due diligence arm testified as to the extensive deficiencies identified through Morgan Stanley’s due diligence. Specifically, over one-third of the loans Morgan Stanley evaluated for purchase and securitization at the height of the mortgage boom (from 2006 through mid-2007) failed to meet the originators’ own underwriting guidelines.

We concentrated on the effectiveness of the Clayton due diligence sampling in our piece late last year. (See Analysis of the Shortcomings of Statistical Sampling in the Mortgage Loan Due Diligence Process.)

But watch this space for more coverage on how legal teams representing investors seek to survive threshold challenges by showing that – even if their client is or was a sophisticated investor – they may not have been privy to the types of information available to the structuring banks.

Thursday, January 12, 2012

A Fair (Value) Solution

Hello readers, and a belated welcome to 2012!

In yesterday's FT, Citigroup CEO Vikram Pandit advocates heightened transparency across the banking system, enabling an apples-to-apples comparison that “[clears] some of the obscurity that causes people to believe the system is a game rigged against their interests.”

He is not alone. Late last year, Barclays' Group Finance Director Chris Lucas called for greater transparency in the financial reporting of liability valuations. The idea, of course, is that once-bitten investors demand transparency before they can again get comfortable investing in banks.

Pandit proposes a solution that involves creating a benchmark portfolio against which banks can measure their relative risk.

Asset valuation became the SEC's number one concern in 2011: through mid-December the SEC had, according to the WSJ, issued a total of 874 “comment” letters to 802 distinct companies concerning their fair valuation and estimation of assets and contracts. Meanwhile, audit firms PwC, KPMG, Deloitte and others have been criticized over their oversight of their clients' valuations and valuation processes (see Contested Pricing List). And so it is not surprising that banks are trying to overcome these substantial hurdles, though understandably in ways that suit them best.

The problem here is akin to the one faced by technology companies: valuing their patent portfolios is no mean feat and is highly subjective – yet it is a crucial component of their stock price, especially in an acquisition or dismantling process.

Pandit’s solution is one solution. Another is more arduous, but overcomes the dual problems of inconsistency and subjectivity. It would also combat the sizeable regulatory arbitrage business that has grown up around minimizing capital reserves. We would be able to say good-bye to a whole business of utility-free resecuritizations, structured solely to game the ratings models to achieve, or manufacture, lower reserves. It would be the end of certain Re-REMICs and perhaps even of AAA-rated principal protected notes.

The solution, of course, is to evaluate each and every bond. This is already being done (to an extent) by the NAIC, and would add stability and assurance to our investor base.

Asset valuation may be more art than science, especially in the world of illiquid assets – but at least it's not a game. If well-performed, it can provide the cross-company valuation consistency even our bankers are calling for.

But it won’t be cheap.




----------------------------------------------
For our Central Pricing Solution, click here.