Tuesday, January 13, 2009

Static Measures for a Dynamic Environment

It is with a touch of disappointment that we read S&P's announcement yesterday that they're downgrading certain leveraged super senior (LSS) notes based on a change to their model.

What's particularly disappointing is that they're applying a (new) static measure for volatility -- a variable we all know is anything but static -- to long-term transactions. We call this a model "plug." Essentially, each time volatility changes, S&P could legitimately recalibrate its model and either upgrade or downgrade tranches (typically downgrade) on the basis of that change alone. From their release...


Today's rating actions reflect a recalibration of the model we use to rate these transactions. Specifically, we have increased the volatility parameter we use when simulating spread paths.

In future, we will use CDO Evaluator v4.1 and a volatility parameter of 60% to model all LSS transactions that have spread triggers. Before this recalibration we used a volatility parameter between 35% and 40%.
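
We don't know the internals of CDO Evaluator v4.1, but a toy Monte Carlo illustrates why this single parameter moves ratings so much. Here's a minimal sketch, assuming driftless lognormal spread paths and a hypothetical unwind trigger -- the starting spread, trigger level, and horizon are all invented for illustration and are not S&P's actual inputs:

```python
import math
import random

def trigger_hit_probability(s0=60.0, trigger=300.0, sigma=0.40,
                            years=7.0, steps_per_year=52,
                            n_paths=10000, seed=42):
    """Fraction of simulated spread paths that breach the trigger.

    All numbers are hypothetical: s0 is a starting spread (bps),
    trigger is an assumed LSS spread-trigger level, and the dynamics
    are plain driftless geometric Brownian motion -- an illustration,
    NOT the actual CDO Evaluator model.
    """
    rng = random.Random(seed)
    dt = 1.0 / steps_per_year
    n_steps = int(years * steps_per_year)
    hits = 0
    for _ in range(n_paths):
        s = s0
        for _ in range(n_steps):
            # Lognormal step: higher sigma means wider spread swings.
            s *= math.exp(-0.5 * sigma**2 * dt
                          + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
            if s >= trigger:
                hits += 1
                break
    return hits / n_paths

if __name__ == "__main__":
    for vol in (0.35, 0.40, 0.60):
        p = trigger_hit_probability(sigma=vol)
        print(f"sigma = {vol:.0%}: P(trigger hit) ~ {p:.1%}")
```

Bumping sigma from 40% to 60% can multiply the simulated probability of breaching a spread trigger, which is exactly why a one-time recalibration of this parameter translates directly into downgrades.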


Given that the rating agencies typically hide behind the mantra that their ratings are long-term "opinions" (expected-loss or default-probability measures, as the case may be), it's odd that they would adapt their model to incorporate short-term "plugs." Each time volatility changes going forward, can we expect a corresponding adjustment to rating levels?

The ratings are assigned based on a static volatility measure, then updated at S&P's whim to paper over the model's inherent flaw... and that's not to mention the elephant in the room: static correlation assumptions for all environments.

Let's look at this from another viewpoint: when trading stock options using Black-Scholes, you could legitimately argue that you're trading volatility. The option's price is out there in the market; it's a given. The Black-Scholes formula lets you solve for the implied volatility, and you can then buy or sell the option based on your assessment of the current and future volatility of the stock's price. Volatility, in other words, is assumed to change over time; if it were truly constant (as S&P assumes), options wouldn't trade. Black-Scholes itself suffers from having a static volatility parameter (sigma), but if you're building a model where the price is not observable in the market and volatility is a huge unknown, it can't suffice to apply a static measure that then gets updated over time, at the agency's discretion, potentially adversely affecting your supposedly long-term ratings.
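
To make the contrast concrete, here's a minimal sketch of backing implied volatility out of an observed option price, the standard way Black-Scholes gets used in practice. The market numbers are hypothetical:

```python
import math

def bs_call_price(s, k, t, r, sigma):
    """Black-Scholes price of a European call (no dividends)."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * N(d1) - k * math.exp(-r * t) * N(d2)

def implied_vol(price, s, k, t, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Back out the volatility the market is quoting, by bisection.

    Call price is monotonically increasing in sigma, so bisection
    converges: volatility is the model's OUTPUT here, not a fixed input.
    """
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call_price(s, k, t, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical quote: at-the-money 1y call at 10.45 on a 100 stock, 5% rates.
print(implied_vol(price=10.45, s=100.0, k=100.0, t=1.0, r=0.05))  # ~0.20
```

The market price moves every day, so the implied volatility moves every day; nobody who trades options treats sigma as a number to be set once and revisited years later.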

What do we recommend?
If rating consistency is important (and we believe it is, especially when pension funds and endowments are substantial investors, and when institutions hold regulatory capital -- and hedge funds post margin -- against assets based on their ratings), then it's suboptimal to apply static, long-term measures for both correlation and volatility, as these are key inputs to the model. We're not suggesting it's easy to model each variable as path-dependent or time-dependent (a toy sketch of what that might look like follows below), but if you can't accurately estimate the key parameters in your model, the only solution is to stop rating the instrument until you feel you can.
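
For illustration only, a time-dependent treatment might look something like this: let volatility itself evolve as a mean-reverting process rather than sitting at 35%, 40%, or 60% for the life of the trade. Every parameter here is invented, and this is a sketch of the general idea, not any agency's actual model:

```python
import math
import random

def spread_path_stochastic_vol(s0=60.0, v0=0.40, v_mean=0.40,
                               kappa=1.5, vol_of_vol=0.6,
                               years=7.0, steps_per_year=52, seed=7):
    """One spread path where volatility follows a mean-reverting
    (Ornstein-Uhlenbeck) process in log space -- an illustration of
    time-dependent volatility with hypothetical parameters.
    """
    rng = random.Random(seed)
    dt = 1.0 / steps_per_year
    s, log_v = s0, math.log(v0)
    path = [s]
    for _ in range(int(years * steps_per_year)):
        # Volatility drifts back toward its long-run mean, with noise.
        log_v += (kappa * (math.log(v_mean) - log_v) * dt
                  + vol_of_vol * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        v = math.exp(log_v)
        # Spread then diffuses under whatever volatility prevails today.
        s *= math.exp(-0.5 * v**2 * dt + v * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        path.append(s)
    return path

# e.g. max(spread_path_stochastic_vol()) -> worst spread seen on one path
```

Hard to calibrate? Certainly. But that difficulty is the point: a parameter this uncertain shouldn't be frozen at a single number for the life of a long-term rating.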

Summary
Volatility, like correlation, is best understood as a time-dependent measure. Having said that, if S&P genuinely believes the new (60%) measure is good forever, we would like to see them admit that the original measure was flawed (and show us how and why they arrived at the incorrect level), and reimburse all note issuers who paid, and continue to pay, for these flawed ratings. Ah, the responsibilities that come with collecting fees for your assumptions and modeling capabilities!
