
Tuesday, April 28, 2020

Rating Agency Déjà Vu


Short Shrift for Surveillance

There seems to be a scramble on at the rating agencies, but we think we’ve seen this scramble before. 
Beginning late in 2006, the subprime and alt-A residential mortgage-backed securities (RMBS) market began its long downward slide into unimaginable losses.
The credit rating agencies – of which S&P, Moody’s, and Fitch have the leading market shares – appeared slow to respond.  Perhaps there was good reason in the early months of 2007.  Or perhaps not.  An argument against promptly downgrading the RMBS was to wait for a few monthly reporting periods to validate the trend of rising mortgage delinquencies.  The risk in waiting, of course, is that it suddenly becomes “too late,” and you are forced into larger, more frantic downgrades than if you had been downgrading incrementally, if and when appropriate.  But while ratings had yet to be downgraded, there was no doubt that the values and prospects of both the RMBS and their underlying mortgages were tumbling violently.
We’re reminded of the iconic J.P. Morgan himself, who once said (to paraphrase):  “Every man has two reasons for doing something.  There’s the reason he gives that sounds good, and then there’s the real reason.”
The rating agencies had at least one “real” reason for not downgrading RMBS bonds promptly: for RMBS and other structured products, they were ill-equipped to provide the services!  From our perspective – having been in the industry, reviewed Congressional testimony and materials from the Financial Crisis Inquiry Commission, and conferred with others playing similar roles at other agencies – we can tell you that the rating agencies did not have the capacity to methodically or efficiently oversee their models and ratings on the RMBS bonds they purported to be monitoring.  There was very little that was programmatic about it.  The process for the supposedly continuous, ongoing surveillance of thousands of RMBS bonds was (shockingly) manual.  Or it was not done at all.
The rating agencies were much more concerned with clipping revenue-generating new-deal rating tickets, and much less concerned with providing the required surveillance on existing deals.  Monies for surveillance, after all, get paid whether or not the service is provided.
So the RMBS downgrades came in waves spaced out over many months in 2007.  The rating agencies placed many bonds on downgrade watch prior to performing the requisite modeling and analysis based on mortgage characteristics, transaction vintage, and broad market performance.  This “watch period” supposedly bought the rating agencies time to perform the actual analysis before specifying the downgrades.
Of course, while the existing ratings were intermittently “wrong” or awaiting correction, there was much ado about how to rate new deals or other existing deals that depended on those outstanding – but yet to be addressed – ratings.
Now it’s 2020 and we wonder if the ratings surveillance process has improved at all.  The tremendous economic stress of the coronavirus (“COVID-19”) raises doubts about homeowners’ ability to continue making mortgage payments.  Many of these debtors are now unemployed, with much uncertainty about the future.  House prices themselves may be greatly depressed – we won’t know until the current suspension of real estate transactions ends.  Falling home prices in 2007-2008 prompted many mortgage defaults.  Thus, it’s fair to say that RMBS bonds face a much greater risk of default than they did at the beginning of the year.
Moody’s acknowledges the impact of the coronavirus, and the substantial uncertainty it brings with it.  
“Our analysis has considered the increased uncertainty relating to the effect of the coronavirus outbreak on the US economy as well as the effects of the announced government measures which were put in place to contain the virus. We regard the coronavirus outbreak as a social risk under our ESG framework, given the substantial implications for public health and safety.”[1]
Importantly, Moody’s accepts that “It is a global health shock, which makes it extremely difficult to provide an economic assessment.”  However, Moody’s is in the business of making these assessments, meaning it needs a robust and methodical way of tackling the new variable, and accounting for the uncertainty that comes with it. 
The rating agencies are engaged in two very interesting activities at present.
First, they’re actively downgrading RMBS (and most other bonds too).
Next, they’re actively rating new deals, even though some of those new deals are backed by assets that they’re simultaneously downgrading.  For example, rating agencies rated five new CLO deals, backed by loans, in April; meanwhile, as of early April, JPMorgan was showing that loan downgrades were outpacing upgrades at a rate of 3.5-to-1.

Part 1: Ratings Downgrades
Moody’s announced, on April 15, its placement of 356 RMBS bonds on review for downgrade and the actual downgrade of 48 bonds to the Baa3 level.[2]  The downgraded certificates remain on watch for further downgrade.
Placing bonds on watch for potential downgrade is not, alone, all that interesting.  But the downgrades were interesting.  Moody’s rationale:
The rating action reflects the heightened risk of interest loss in light of slowing US economic activity and increased unemployment due to the coronavirus outbreak. In its analysis, Moody's considered the sensitivity of the bonds' ratings to the magnitude of projected interest shortfalls under a baseline and stressed scenarios. In addition, today's downgrade of certain bond ratings to Baa3 (sf) is due to the sensitivity of the ratings to even a single period of missed interest payment […]
There are a few things that are interesting here. 
First, the lack of specificity. Moody’s does not specify any quantitative elements of its analysis – it cites no changes to default rates, severity rates, recovery rates or prepayment speeds – that were adjusted to accommodate COVID-19, that justify the downgrade action, or that explain the specific rating of Baa3, which is (curiously) the lowest level of investment grade.  It is all rather vague.
Second, you might have noticed some unusual language there.  The “downgrade of … ratings … is due to the sensitivity of the ratings…”  That is all meaningless. One does not downgrade a rating because of the sensitivity of the rating.  All ratings, aside perhaps from some Aaa ratings, are “sensitive.”  One downgrades because of a higher potential for default or loss, as ascertained by an analysis performed – not because of the sensitivity. 
That brings us to the next issue: what analysis was performed?  The short answer is, well, that there was no proper analysis performed!  Aside from Moody’s failure to describe the specific analysis performed, we have at least two reasons to believe that no analysis was performed. 
  1. Moody’s does not have a methodology for incorporating the coronavirus stress: Moody’s explicitly acknowledges that the principal methodology used in these ratings was the "US RMBS Surveillance Methodology" published in February 2019, which of course was pre-COVID-19.
  2. Moody’s acknowledges that it did no modeling: “Moody's did not use any models, or loss or cash flow analysis, in its analysis. Moody's did not use any stress scenario simulations in its analysis.”  
Why does it matter that Moody’s did no modeling?  Because the Feb. 2019 methodology referenced is a quantitative one, and it requires extensive modeling.  One simply cannot apply a quantitatively-heavy methodology while applying no models.  Ergo, we suspect that Moody’s did not apply, or could not have properly applied, the Feb. 2019 methodology.
Altogether, we see 48 bonds downgraded to Baa3, absent:
  • an updated methodology that reflects the key new coronavirus risk, despite acknowledgment of the key importance of this risk to the deals;
  • a proper application of the existing methodology from Feb 2019 or any explanation as to the deviations from the methodology; and
  • the application of any models at all.
It is commendable to tell investors that you have not modeled cash flows or performed any stress scenario analyses.  But the questions then become: what was done, and how exactly did you arrive at a Baa3 rating? What is the point of publishing a ratings methodology if you are not going to follow it, or describe specific deviations from it, so that investors can follow your reasoning?
Strictly speaking and adhering to the meaning of Moody’s ratings, it is extraordinarily difficult to imagine how Moody’s could determine that 48 bonds should simultaneously earn the new, lower rating of Baa3 without analysis.  With no quantitative estimate for increased loss or diminished and re-directed cash flow, how can Moody’s determine the precise new rating level?
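To make concrete what “applying a model” would mean here, consider the toy sketch below.  It is purely our own illustration – not Moody’s methodology – and every number in it (the expected-loss thresholds, the waterfall mechanics, the stressed default and severity rates) is hypothetical.  The point is simply that a surveillance model projects collateral losses under assumed defaults and severities, passes them through the bond’s credit support, and maps the resulting bond-level expected loss to a rating band.

```python
# Toy illustration only (not Moody's methodology): project collateral losses
# under a stressed scenario, run them through the bond's credit support, and
# map the bond-level expected loss to a rating band.

# Hypothetical expected-loss thresholds (as fractions of bond balance).
RATING_BANDS = [
    (0.000, "Aaa"),
    (0.005, "Aa2"),
    (0.015, "A2"),
    (0.030, "Baa3"),   # lowest investment-grade band in this toy table
    (0.060, "Ba2"),
    (0.120, "B2"),
]

def project_bond_loss(pool_balance, bond_balance, subordination,
                      default_rate, severity):
    """Very rough waterfall: pool losses hit the bond only after exhausting subordination."""
    pool_loss = pool_balance * default_rate * severity
    credit_support = pool_balance * subordination
    bond_loss = max(0.0, min(bond_balance, pool_loss - credit_support))
    return bond_loss / bond_balance   # expected loss as a fraction of the bond

def implied_rating(expected_loss):
    rating = "Caa or below"
    for threshold, band in reversed(RATING_BANDS):
        if expected_loss <= threshold:
            rating = band
    return rating

# Hypothetical stressed scenario: 15% defaults at 35% severity push the bond's
# projected loss to ~2.5%, which this toy table maps to Baa3.
el = project_bond_loss(pool_balance=100e6, bond_balance=10e6,
                       subordination=0.05, default_rate=0.15, severity=0.35)
print(f"projected expected loss: {el:.1%} -> implied rating: {implied_rating(el)}")
```

Some explicit step of this kind – whatever its precise form – is what separates a derived rating from a guess; without it, there is no principled way to land on Baa3 rather than Baa2 or Ba1.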
One thought, however, is that Moody’s guessed at the rating it believed the methodology would imply, perhaps because there were no models to apply: in the rush of rating new deals, maybe Moody’s hadn’t gotten around to building well-functioning models to replace the old ones?  Or, equally plausibly, Moody’s models may not exist in the cloud, meaning that the Moody’s analysts and supervisors working from remote (home) locations could not access the necessary models, data or tools to perform their reviews.  What an astounding disclosure this would be, if true, for a global, data-centric organization!
Whatever the reason, the Baa3 ratings are flawed.  No outsider – nor perhaps even a Moody’s insider – applying Moody’s methodology can credibly arrive at an outcome of Baa3.  It is just a guess.

Part 2: New Ratings
The rating agencies are actively downgrading bonds, loans, and credit instruments across most sectors and industries.[3] Many of these downgrades may well be warranted, given newly introduced COVID-19-related risks.  But while they are downgrading these (newly unstable) credits, they are rating new structured finance deals that are supported by these very same unstable credits.  Their structured finance ratings problematically look to their own ratings of the underlying credits, which are being downgraded daily.  Thus, the new ratings cannot be said to be robust to the degree that the rating agencies lack confidence in the sturdiness of their own underlying ratings.  Moreover, the new structures are not being rated according to any new, post-coronavirus methodology.  So we might expect those newly created bonds to be downgraded, too, in short order.
When S&P rated Harriman Park CLO, a new deal, on April 20th, S&P said nothing at all about the coronavirus in its press release.  In explaining how it arrived at its ratings, S&P cited only related criteria and research from 2019 and earlier.  There is no evidence that S&P ran any scenario differently.  No mention is made of any newly imposed stress scenario.
When Fitch rated this same deal, Fitch explained its thought process and how it was deviating from its pre-existing methodologies.  That’s a whole lot better than S&P in this case, but even Fitch failed to explain the “why.”  Fitch explained what it was doing, but not why what it was doing makes sense.  How has it calibrated its models (if at all)?  How do we know the new assumptions are adequate or comprehensive?
“Coronavirus Causing Economic Shock: Fitch has made assumptions about the spread of the coronavirus and the economic impact of related containment measures. As a base-case scenario, Fitch assumes a global recession in 1H20 driven by sharp economic contractions in major economies with a rapid spike in unemployment, followed by a recovery that begins in 3Q20 as the health crisis subsides. As a downside (sensitivity) scenario provided in the Rating Sensitivities section, Fitch considers a more severe and prolonged period of stress with a halting recovery beginning in 2Q21. 
Fitch has identified the following sectors that are most exposed to negative performance as a result of business disruptions from the coronavirus: aerospace and defense; automobiles; energy, oil and gas; gaming and leisure and entertainment; lodging and restaurants; metals and mining; retail; and transportation and distribution. The total portfolio exposure to these sectors is 9.9%. Fitch applied a common base scenario to the indicative portfolio that envisages negative rating migration by one notch (with a 'CCC-' floor), along with a 0.85 multiplier to recovery rates for all assets in these sectors. Outside these sectors, Fitch also applied a one notch downgrade for all assets with a negative outlook (with a 'CCC-' floor). Under this stress, the class A notes can withstand default rates of up to 61.8%, relative to a PCM hurdle rate of 53.9% and assuming recoveries of 40.6%.”
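To pin down what those adjustments amount to in practice, the sketch below is our simplified reading of the stress Fitch describes in its disclosure.  It is not Fitch’s proprietary Portfolio Credit Model, the truncated rating scale omits modifiers, and the sample portfolio is invented; it simply mechanizes the stated rules: one notch down (floored at 'CCC-') for assets in the exposed sectors plus a 0.85 multiplier on their recoveries, and one notch down for negative-outlook assets outside those sectors.

```python
# Simplified sketch of the stress Fitch describes (our reading of the public
# disclosure, not Fitch's proprietary model); the rating scale and sample
# assets are illustrative only.

RATING_SCALE = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC+", "CCC", "CCC-"]

EXPOSED_SECTORS = {
    "aerospace and defense", "automobiles", "energy, oil and gas",
    "gaming, leisure and entertainment", "lodging and restaurants",
    "metals and mining", "retail", "transportation and distribution",
}

def notch_down(rating, notches=1, floor="CCC-"):
    """Move a rating down the scale by `notches`, never past the floor."""
    idx = min(RATING_SCALE.index(rating) + notches, RATING_SCALE.index(floor))
    return RATING_SCALE[idx]

def apply_coronavirus_stress(portfolio):
    """portfolio: list of dicts with 'rating', 'sector', 'outlook', 'recovery'."""
    stressed = []
    for asset in portfolio:
        a = dict(asset)
        if a["sector"] in EXPOSED_SECTORS:
            a["rating"] = notch_down(a["rating"])
            a["recovery"] *= 0.85                    # recovery haircut in exposed sectors
        elif a["outlook"] == "Negative":
            a["rating"] = notch_down(a["rating"])    # notch-down outside exposed sectors
        stressed.append(a)
    return stressed

# Example: the lodging credit is notched down and its recovery cut by 15%;
# the negative-outlook credit outside the exposed sectors is notched down only.
portfolio = [
    {"rating": "B",  "sector": "lodging and restaurants", "outlook": "Stable",   "recovery": 0.60},
    {"rating": "BB", "sector": "technology",              "outlook": "Negative", "recovery": 0.55},
]
for asset in apply_coronavirus_stress(portfolio):
    print(asset["rating"], round(asset["recovery"], 3))
```

Even granting a stress of this shape, the open questions remain: why a single notch, why a 15% recovery haircut, and how do we know those calibrations are adequate for the severity of the shock?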
While it continues to rate new deals, S&P (for example) has placed roughly 9% of all its CLO ratings on watch negative since March 20th.[4]
The rating agencies suffered tremendous reputational damage when they were found to have been rating new CDO deals in 2007 while they already knew they could no longer rely on the RMBS ratings supporting those deals, as those ratings would imminently be downgraded.  Once the CDO analysts knew the RMBS ratings were unstable and about to be downgraded, they were taking a legal risk in producing ratings they knew would not be robust.  It can be difficult to turn away the sizable revenues that come with rating new deals – even when you do not have a sustainable ratings methodology or any conviction in the credibility of the data (including ratings) you are relying on.
Formulating credit ratings for any entity (structured, corporate, municipal, etc.) requires the ability to estimate future revenues, expenses, and liabilities over a multi-year horizon.  Such estimates are critical to the rating process.  As we write, forecasting for the broad economy and for most specific entities is highly uncertain.  We do not see how it is possible to perform meaningful rating analysis amidst this uncertainty.  Hence, the credit rating agencies should arguably pause all new ratings until the uncertainty declines or until they develop rating methodologies that fully incorporate it.

Part 3: Summary/Conclusions
Stale ratings, inflated ratings and other faulty ratings can sometimes go unnoticed during an upswing, but their shortcomings become most pronounced during an economic downturn or crisis.  We are concerned that we are (again) seeing the result of years of weak ratings management.
When crises occur, rating agencies should be able to swiftly update their methodologies, models and ratings.  And rating agencies should stave off rating new deals until and unless they have strong and current methodologies in place to explain their new ratings and how their ratings accommodate the upheaval, whatever it may be.
Rating agencies seem quickly to have forgotten (or simply to have ignored) the lessons they should have learned from the last crisis.  Their regulatory settlements with the DOJ for subprime crisis-era misconduct (S&P for $1.375 billion[5], and Moody’s for $864 million[6]) concerned instances in which they deviated from their own codes of conduct, which required them to provide objective, independent ratings.  Those settlements were not about the rating agencies being wrong, or failing to predict the downturn; they were closer to the argument that the rating agencies failed to believe their own ratings.[7]
In the case above, we have to ask how Moody’s can be sure that Baa3 is the right rating – for all 48 bonds, across different deals, structure types and vintages – consistent with its methodology, given that it applied no models at all.





This article was co-written by Joe Pimbley, who consults for PF2.

  1. https://www.moodys.com/research/Moodys-places-404-classes-of-legacy-US-RMBS-on-review--PR_422633
  2. https://www.moodys.com/research/Moodys-places-404-classes-of-legacy-US-RMBS-on-review--PR_422633
  3. https://www.wsj.com/articles/bond-downgrades-begin-amid-coronavirus-slowdown-11585045800
  4. https://www.opalesque.com/industry-updates/5969/96-reinvesting-clo-ratings-placed-on-creditwatch.html
  5. https://www.justice.gov/opa/pr/justice-department-and-state-partners-secure-1375-billion-settlement-sp-defrauding-investors
  6. https://www.justice.gov/opa/pr/justice-department-and-state-partners-secure-nearly-864-million-settlement-moody-s-arising
  7. https://www.bloomberg.com/news/articles/2013-02-05/s-p-won-t-employ-first-amendment-defense-in-u-s-ratings-lawsuit

Monday, February 26, 2018

Florida Shootings Require Cultural & Mindset Changes

Our failure to prevent the Florida school shooting illustrates a pervasive problem in modern societies: we often have access to ample warning signs but all too frequently fail to leverage this information to avoid disaster. The issue impacts not only law enforcement agencies but our financial institutions as well. To handle all the intelligence available to them more effectively, organizations will require major structural and cultural change.

The FBI and local law enforcement reportedly had more than enough information to legally disarm and detain confessed school shooter Nikolas Cruz before he killed 17 people at Marjory Stoneman Douglas High School in Parkland on February 14. This is not the first such intelligence failure, and it won’t be the last. Consider these examples:
  • 9/11 could have been prevented had the CIA and FBI done a better job of sharing and handling intelligence.
  • Russian intelligence warned the FBI about Tamerlan Tsarnaev long before he carried out the Boston Marathon bombing.
  • In France, authorities failed to act on multiple clues that would have enabled them to prevent the Paris attacks that claimed 130 lives in November 2015.

In the financial industry, rating agencies and bank risk management teams failed to act in their or their clients’ best interests when they continued to create and sell residential mortgage-backed securities, despite the deterioration in mortgage lending standards, and the increasing and disturbing amount of mortgage fraud being reported by the FBI in its annual mortgage fraud reports.

A well-operating risk-management function, with a voice, would most likely have limited the potential for the cultural failures seen at the Royal Bank of Scotland, as detailed in the recently published report commissioned by the Financial Conduct Authority. The extraordinary activities of the gung-ho Global Restructuring Group at RBS in London could immediately have been stymied, as they posed reputational and business risks far outweighing the group's short-term revenue-generating interests. As the report explains: 
“GRG enjoyed an unusual independence of action for a customer-facing unit of a major bank. It saw the delivery of its own narrow commercial objectives as paramount: objectives that focused on the income GRG could generate from the charges it levied on distressed customers. In pursuing these objectives, GRG failed to take adequate account of the interests of the customers it handled and, indeed, of its own stated objective to support the turnaround of potentially viable customers.” 
These assorted failures suggest that we have a systemic problem with risk monitoring, or a failure to incorporate it appropriately within institutions. And because the problem is systemic it won’t be solved by firing a few bad apples. Instead, we need to understand and address the root cause. 

One feasible argument is that jobs involving risk monitoring and mitigation generally come with a relatively low social status and thus do not necessarily attract the most motivated applicants.

This phenomenon is epitomized by our (often unfair) stereotype of security guards: that they are ineffective and prone to sleeping on the job. Because security jobs are low paying, they don’t often attract type “A” individuals. The job itself is quite boring: most of the time nothing happens. While a more proactive security guard could find and act upon many clues during the course of his or her day, almost all of the extra effort will be for naught. At least 99 times out of 100 that suspicious backpack won’t contain an explosive device. 

Although bank risk managers and FBI call handlers undoubtedly have higher social status than security guards, they are still likely to be subordinated within their organizations. At a bank, monitoring credit risk is much less glamorous and lucrative than acquiring or merging companies, underwriting deals or trading securities. And, as with the case of the seemingly suspicious backpack, most clues won’t lead anywhere anyway: for every legitimate call law enforcement departments receive, there are many that lead nowhere; a missed charge card payment, similarly, often doesn’t presage a mortgage foreclosure.

Ideally, we should elevate the status of risk monitoring jobs and make them more exciting. More attention from senior management may help. Although most money-center banks took massive losses during the financial crisis, Goldman Sachs came out relatively unscathed. A major reason is that the bank’s Chief Financial Officer reviewed daily risk management reports and held a meeting in his office to call for immediate action once it was detected that mortgage-backed securities had begun to underperform in 2006. Goldman is also an exceptional case in that it rotated fast-track talent between moneymaking and risk management roles, and it empowered risk management staff to veto certain trading activities.

Although more high-level attention might help those charged with receiving and sifting through raw intelligence, the job is still a tedious one – akin to looking for a needle in a haystack. 

Conviction Dilemma 

In addition to the possibility that risk monitoring personnel are, as a group, less motivated, risk personnel tend to be more introverted than their front-office colleagues. This may manifest in apprehension when expressing themselves to their comparatively more aggressive colleagues, and they may come off as indecisive or speculative.  Leaders often like a strong, definitive opinion (“hedge this risk!”) and may shun or ignore a more nuanced opinion coming from a more cautious analyst.

In short, the personalities hired into risk-management roles often suffer from what we will term the “conviction dilemma,” which emanates from the work of Philip Tetlock and others who have studied predictive expertise. Tetlock’s research found that those whose expertise was most valued and sought out – for example, pundits on TV shows like The McLaughlin Group – were those with vocal, unequivocal opinions that could be articulated with utter conviction, but that were often wrong.

Altogether, even strong and motivated risk experts may be introverted and may express themselves indecisively. Playing a role that management regards as subordinate, they might struggle to make convincing and resolute “do this!” arguments, and management might therefore be less likely to take them seriously and to act on their advice expeditiously.

Applying Technology 

In the 21st century, we have learned to assign boring or laborious jobs to computers. We can identify potential attackers earlier by entering all the clues law enforcement receives into shared databases, and we have state-of-the-art data science tools built for analyzing this mass of information. This approach need not violate privacy: social media posts, calls from tipsters and prior arrests are all legitimately available to law enforcement today. 

Palantir is among the most prominent of companies offering software that enables intelligence agencies to find needles in the haystacks of raw data they receive. Unfortunately, Palantir is not an inexpensive solution, and may thus be beyond the budgets of smaller law enforcement agencies. 

Governments and NGOs may wish to invest in the development of free, open-source data analysis tools. Aleph is an open-source tool that can analyze large volumes of unstructured data. Although designed for investigative journalists, it could be customized for use by law enforcement or for counterparty tracking. Whether they use licensed or open-source solutions, law enforcement and intelligence agencies should establish and apply technical standards for data sharing. Because financial firms are overtly competitive, sharing financial intelligence between competing firms may be less appropriate, but such sharing can be more prevalent within institutions.

Often the information needed to prevent mass killings is hiding in plain sight. By improving organizational structures and leveraging technology, financial firms and law enforcement agencies can harvest more actionable data from legally available information. Armed with this data, they can prevent certain future acts of carnage. While no single policy solution – VaR levels or gun control included – can ever guarantee success, we need to be thoughtful and dynamic in limiting the frequency and the magnitude of these catastrophes, and we would do well to use our tools effectively in pursuing the goal not only of making money, winning clients and awards, but also of limiting the downside.


---------------------

This piece was co-written by Marc Joffe, who consults for PF2, and members of PF2’s staff. For more on this topic, visit our 2016 piece on the detrimental impact of short-term thinking patterns on conduct within financial firms. Among other things, we recommend a rethinking of the design of incentive structures: “The approach we put forward here is the studious linking of profit-sharing to successful and honest risk-taking and business practices.”

Marc Joffe is a Senior Policy Analyst at the Reason Foundation and a researcher in the credit assessment field. He previously worked as a Senior Director at Moody’s Analytics. 

Tuesday, January 17, 2017

Moody's Settlement and Its Wider Implications for Finance and Beyond

This blog is provided by guest contributor Marc Joffe.  Marc also studies and writes extensively on debt issues in sovereign and sub-sovereign markets.  His recent commentary on the relative strength of US cities can be found here.  The following views are his own, and do not necessarily reflect those of PF2.


~~~     

On Friday afternoon, Moody’s settled DOJ and state attorneys general charges that it inflated ratings on toxic securities in the run-up to the financial crisis. Moody’s paid $864 million to resolve certain pending (and potential) civil claims, considerably less than S&P’s settlement of $1.375 billion. While the settlement appears to close the book on federal investigations of rating agency malfeasance, the episode deserves consideration because it has something to teach us all about broader institutional failures, and their implications for both the economy and news coverage.

Moody’s received better treatment than S&P despite the fact that its malpractice was painstakingly documented in 2010 by the Financial Crisis Inquiry Commission. I suspect that Moody’s achieved a better outcome than S&P for some combination of three reasons: (1) its employees were more disciplined about what they committed to email, so the DOJ lacked some of the smoking guns S&P analysts handed it (including the infamous message sent by one S&P analyst to another: "We rate every deal. It could be structured by cows and we would rate it."); (2) its senior management and corporate counsel took a less confrontational approach to prosecutors; and (3) Moody’s carried water for Democrats at critical times during the Obama administration. Moody’s Analytics economist Mark Zandi was a vocal proponent of the 2009 stimulus bill and other Obama policies. Meanwhile, the rating agency declined to follow S&P in downgrading US debt from AAA in 2011.

Having worked in Moody’s structured finance group in 2006 and 2007 – but not in a ratings role – I recall that most rating analysts didn’t think they were doing anything wrong (although it is also true that some left exasperated). I believe this was the case because the corruption of the rating process occurred gradually. In the 1990s, ratings techniques were primitive, but they appear to have been motivated by an intention to objectively assess the then-novel mortgage-backed securities and collateralized debt obligations. After the company went public in 2000, quarterly earnings became a concern for the many Moody’s professionals who were now eligible for equity-based compensation.

As the structured finance market soared during the early part of the last decade, the pressure to dumb down ratings standards increased. As portrayed in The Big Short, analysts at S&P and Moody’s understood that the failure to give investment banks AAA ratings for the junk bonds they were assembling from poorly underwritten mortgages would place their companies at a competitive disadvantage. 

The collapse of rating agency standards is one case of a much larger set of problems that are threatening our economy and social fabric: professionals who we expect to provide objective information prove to be biased. It’s like a baseball umpire calling a strike when a batter lays off a wild pitch. While that type of behavior could ruin a ball game, the loss of integrity by financial umpires has more earth-shaking implications.

Aside from rating agency bias, the financial crisis was also triggered by a spate of dodgy appraisals. Inflated home appraisals, made at the behest of originators trying to qualify new mortgages, also contributed to the 1980s Savings and Loan crisis.

Malpractice by supposedly unbiased professionals also exacerbated the 2001-2002 recession. The values of dot com stocks were inflated when securities analysts issued misleading reports exaggerating the companies’ earnings potential. The analysts’ judgment was clouded by incentives at their investment banks, which profited from underwriting stocks issued by these overrated companies. Meanwhile, Arthur Andersen’s shortcomings in auditing Enron's books magnified the impact of that firm’s spectacular 2001 crash.

So the credit rating agency problem is part of a more generalized issue that encompasses appraisers, auditors and security analysts. It can occur whenever professionals are asked to provide objective evaluations: they can succumb to their own biases or pressure from those who have a vested interest in the outcome of the review. Because the judges usually receive less compensation and have lower social status than those who are judged, they are especially vulnerable to temptation.

And the problem is not limited to finance. Journalistic institutions which have built stellar reputations for objective, fact-based news reporting have let their standards slip, especially during the contentious 2016 election and its aftermath.  For example, the Washington Post recently embarrassed itself by hastily reporting that the Russians had hacked a Vermont power utility. Ultimately, it turned out that a computer virus created in Russia was found on a laptop at the utility’s offices. The laptop was not connected to the power system, and the virus was typical of Eastern European computer worms that proliferate across the internet. Adding insult to injury, the newspaper failed to issue a proper retraction when the story collapsed.

Like those working at Moody’s, I suspect that most WaPo reporters didn’t imagine that they were doing anything wrong. They may have been guided by a belief that the public needed to be more wary of the Russians, especially now that they appear to have influence within the Trump administration.  Some mainstream media reporters may honestly believe that protecting America from the Russians and from Donald Trump is more important than living up to the ideal of objectivity that had been the gold standard of 20th century news coverage. It is also possible that reporters are influenced by pundits, government sources and political power brokers: there have been many cases of journalists cycling in and out of government roles, so a victory by one’s favored party can offer career benefits.

But whether it’s Moody’s and S&P or a major news outlet, we all suffer when systemically important providers of allegedly objective information lose their bearings. Even the most primitive organisms need facts to survive. Prey that lack knowledge of the position and trajectory of predators become dinner. Today’s most complex social organism, American society, needs institutions that provide just the facts in a dispassionate manner. The rot destroying the foundations of these organizations should worry all of us, regardless of our position in the financial hierarchy or on the political spectrum.

Friday, September 16, 2016

The Influence of Short-Termism on Corporate Culture

Our first piece on corporate culture at financial firms is up and ready (click here).

We managed to grab the first bit of news in the ongoing Wells Fargo saga, but there should be further developments next Tuesday, when Wells' CEO, John Stumpf, appears before a Senate Banking Committee hearing.  He will face some tough questions.

(Meanwhile the House Financial Services Committee announced today that it too will be investigating the allegedly illegal activity by Wells Fargo employees, and says it too will summon CEO Stumpf to testify later this month.)

We don't want to give it away, but here's a short blurb, enticing you to read the piece.  We hope you'll dig in!

                                                                       --------------

Bank of America is cutting costs. Its headcount has decreased by roughly 15,000 employees annually over the last three years. Things are moving fast. CEO Moynihan recently explained: "We're driving a thing we call responsible growth. You've got to grow, no excuses."

Meanwhile, it has become known that numerous Wells Fargo employees have been creating unauthorized accounts for their clients. Millions of accounts. More than 5,000 employees are said to have been fired.

Perhaps growth has its costs. 

Short-Termism and its Potent Influence on Corporate Culture at Financial Firms tries to frame the issue: how and to what extent can short-term thinking patterns hurt the long-term goals of a firm?

Financial firms are under pressure, including from increased regulatory oversight (and non-compliance risk) and a renewed focus on cultural issues within the industry.

We analyze some of the problem zones and propose a framework for tackling short-term thinking patterns and the associated cultural concerns -- appreciating that the mindsets, incentive structures, and personality types at financial firms can make matters all the more challenging. 

In addition to Wells Fargo, we include examples and thought-pieces from: 
  • Credit Suisse
  • Citigroup
  • Green Tree Financial 
  • JP Morgan 
  • Platinum Partners 
  • Société Générale 
  • Visium Asset Management
As always, all feedback is welcome and appreciated!

~PF2