Yes, I know, throughout this book I keep referring to rating agencies but, like it or not, they are an integral part of this market. Rightly or wrongly, throughout the credit crisis they were heavily criticized for failing to see the problems looming; their reputation was severely dented and I fear they may never be able to restore it to pre-crisis levels.
However, they are by no means the only culprits: investment banks pressurized the agencies to churn out ratings and, prior to the crisis, I kept saying to people that the agencies had to some extent turned into ‘‘conveyor belt analysts’’—meaning they sit there, get the information (and the pressure) from investment bankers, digest the information quickly, convene hastily arranged rating committees, assign the ‘‘desired’’ ratings, publish the new-issue report, and move on to the next deal, only to do it all over again.
And then, at the other end of the spectrum, there were investors that were also under pressure from arrangers who ‘‘invited’’ them to participate in ‘‘heavily oversubscribed’’ deals, which, putting it bluntly, meant ‘‘you had better be quick in coming back to us to let us know whether or not you would like to invest in such bonds, but we need your feedback pretty much pronto, almost kind of now.’’
What would investors do in such a pressurized scenario? Well, they would have a look at the rating agencies’ presale reports, quickly identify the bond’s strengths and weaknesses (mainly based on the agencies’ rating designator), ensure the pricing looks appropriate for such a bond, and then make a quick decision—based to a large extent on the rating agencies’ analyses.
9.3.1 What the ‘‘users’’ of ratings think . . .
Although this appeared all very well pre-credit crunch, it was certainly not good enough, and many investors were gutted when they discovered during the credit crisis that sole reliance on AAA ratings was actually a dangerous approach that should have been avoided in the first place.
Throughout the credit crisis I gave a series of talks and seminars on the topic of ‘‘rating agencies and overreliance on ratings’’ and was surprised at how many myths were out there with regard to ratings, sometimes leading to a dangerous level of overreliance by the various users of ratings. The questions I asked during this series included the following:
Question: Do you use rating agencies and, if so, what is the frequency of your use?
The respective responses were as follows: a total of 24% answered that they do not use them; 16% use them once a day or more; 16% use them once a week or more; and 44% said they use them, but only infrequently. This means that 76% of respondents use rating agencies and, hence, should be familiar with credit ratings. However, the following answers indicate that they do not necessarily understand the meaning of ratings.
In the survey, 32% believed that a Fitch AAA rating is the same as a AAA from Standard & Poor’s and a Aaa from Moody’s. They were not aware that Fitch’s and S&P’s ultimate default risk view, which uses the probability of default (PD)—in other words, the first dollar of loss—differs from Moody’s expected loss (EL) methodology, which focuses on the amount of net loss suffered (i.e., obtained by multiplying the PD by the loss-given default (LGD)).
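To make the distinction concrete, here is a minimal sketch in Python; the PD and LGD figures are purely illustrative assumptions, not any agency’s actual numbers:

```python
# Minimal sketch: probability of default (PD) vs. expected loss (EL).
# All numbers below are illustrative assumptions, not agency data.

pd_bond = 0.02                # 2% probability of default ("first dollar of loss")
lgd_bond = 0.40               # 40% loss-given default (severity once default occurs)

el_bond = pd_bond * lgd_bond  # expected loss = PD x LGD

print(f"PD-based view (Fitch/S&P style): {pd_bond:.2%} chance of any loss")
print(f"EL-based view (Moody's style):   {el_bond:.2%} expected net loss")
```

Two instruments with identical PDs but very different recovery prospects can therefore, in principle, sit on the same PD-based rating level yet merit different EL-based ratings.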
Even more revealing, a stark 47% thought that all the agencies mark a defaulted instrument with a D rating and that the D is the same for all three agencies. A close look at the lowest rating category, however, reveals that Moody’s lowest rating category is the C rating (it does not have a D rating). An easy way to remember that ‘‘Fitch’’ and ‘‘S&P’’ have D ratings is that there is no letter ‘‘D’’ in their names, in contrast to Moody’s, which has no D rating but does have a ‘‘D’’ in its name.1

1 Yes, I know ‘‘Standard & Poor’’ has a letter ‘‘D’’, but ‘‘S&P’’ does not. The key is to remember the difference.
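For quick reference, the mnemonic boils down to this trivial mapping; a minimal sketch:

```python
# Lowest category on each agency's long-term rating scale, as described
# above: Fitch and S&P go all the way down to D, Moody's stops at C.
lowest_rating = {
    "Fitch": "D",    # no letter "D" in the name, but a D rating
    "S&P": "D",      # likewise
    "Moody's": "C",  # a "D" in the name, but no D rating
}

for agency, rating in lowest_rating.items():
    print(f"{agency}: lowest rating category is {rating}")
```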
A considerably more confident 87% concluded that two different structured finance bonds that are both rated AAA by all three agencies cannot directly be compared against each other, even though both ratings are on the same level. However, the respondents had no explanation as to why that was so and were largely unaware of the AAA rating cap that exists in structures that have so-called ‘‘super senior’’ bond tranches.
9.3.2 Rating agency failures
Clearly, rating agencies have occasionally failed in the past and the list of failures for particular names as well as asset classes is long: AIG, Alt-A bonds, Bear Stearns, Bradford & Bingley, CDO of ABS, CDO², CDO³, Enron, Icelandic banks, Lehman Brothers, monolines, Northern Rock, Parmalat, subprime bonds, etc.
The agencies publicly admitted their failures in front of various official committees such as the U.K. Treasury Select Committee and the U.S. House Committee on Oversight and Government Reform.
In their own words, they said the following about the credit crisis:
- Moody’s admitted it ‘‘. . . did not . . . anticipate the magnitude and speed of the deterioration in mortgage quality or the suddenness of the transition to restrictive lending.’’
- S&P admitted ‘‘. . . it is now clear that a number of assumptions used in preparing ratings on mortgage-backed securities issued between 2005 and mid-2007 did not work.’’
- Fitch admitted it ‘‘. . . did not foresee the magnitude of the decline . . . or the dramatic shift in borrower behavior . . .’’
In the meantime, there have been more ‘‘public’’ commissions, enquiries, and unsettling admissions and discoveries. However, the purpose of this particular chapter is not to point the finger of blame, but to ensure that when you use credit ratings you are fully aware of the limitations that come hand in hand with ratings as analytical tools.
9.3.3 Ratings scope
Credit ratings, as the name already suggests, have limited scope and normally only capture ‘‘credit risk’’. Ratings do not capture market risk, liquidity risk, operational risk, or basis risk. Nevertheless, credit ratings do play an important role since they have been ‘‘hard-wired’’ by Basel II into banks’ credit-rating models and are also an obligatory requirement in many investment guidelines and asset management mandates. By the way, this is where the distinction between investment grade (i.e., ratings good enough to undertake investments) and non-investment grade or speculative grade (i.e., ratings not permitted for investments) comes from.
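To illustrate what this ‘‘hard-wiring’’ looks like in practice, here is a minimal sketch based on the Basel II standardized approach for corporate exposures; the risk weights are the commonly cited standardized figures, simplified here for illustration (securitization exposures use a different, harsher table):

```python
# Sketch: mapping S&P-style ratings to Basel II standardized risk weights
# for corporate exposures (simplified for illustration).
RISK_WEIGHTS = [
    ({"AAA", "AA+", "AA", "AA-"}, 0.20),
    ({"A+", "A", "A-"}, 0.50),
    ({"BBB+", "BBB", "BBB-", "BB+", "BB", "BB-"}, 1.00),
]

INVESTMENT_GRADE = {"AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
                    "BBB+", "BBB", "BBB-"}  # the classic BBB- cutoff

def risk_weight(rating):
    for bucket, weight in RISK_WEIGHTS:
        if rating in bucket:
            return weight
    return 1.50  # below BB- (the real table treats unrated names as 100%)

exposure = 10_000_000  # USD, hypothetical position size
for r in ("AAA", "BBB-", "BB+", "CCC"):
    ig = "yes" if r in INVESTMENT_GRADE else "no"
    print(f"{r:5s} investment grade: {ig:3s} RWA: {exposure * risk_weight(r):>12,.0f}")
```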
9.3.4 Use of credit ratings
Ratings are used by banks, other financial institutions, originators and issuers, investors, financial regulators, other rating agencies, and many other market participants. They are used by such market participants to ‘‘outsource analysis’’ (which can be dangerous and will be costly under the Basel III regime), determine required economic and regulatory capital charges, manage individual credit risks as well as portfolio-related risks, and occasionally as input into various structured finance models. By the way, this is one of the reasons for the failure of some instruments (such as CDO of ABS, CDO², and CDO³) that used other rated instruments and, in doing so, placed too much reliance on the ratings of the underlying instruments.
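To see why this reliance is dangerous, consider the deliberately naive sketch below: a model for a hypothetical CDO of ABS that maps each underlying bond’s rating to an assumed probability of default. All figures are hypothetical; the point is only that an error in the assumed PDs propagates one-for-one into the pool analysis:

```python
# Naive sketch of rating overreliance: a CDO-of-ABS model that trusts
# each underlying bond's rating and maps it to an assumed PD.
# All numbers are hypothetical illustrations.
ASSUMED_PD = {"AAA": 0.001, "AA": 0.003, "A": 0.008, "BBB": 0.020}

pool = ["AAA"] * 100  # 100 equally sized underlying ABS bonds, all rated AAA

expected_defaults = sum(ASSUMED_PD[r] for r in pool)
print(f"Model expects {expected_defaults:.1f} defaults in the pool")

# If the true default risk of that "AAA" paper is 10x the assumed figure,
# the whole pool analysis, and every tranche sized off it, is out by the
# same factor.
true_defaults = sum(10 * ASSUMED_PD[r] for r in pool)
print(f"Reality delivers roughly {true_defaults:.0f} defaults instead")
```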
9.3.5 Common criticism
Rating agencies are often criticized for a variety of reasons. One is that they are too slow to react in their analysis of business models. Example 9.1 illustrates what lies behind this complaint: although fictitious, very similar cases happened in the run-up to the credit crisis.
Example 9.1. How ‘‘timely’’ are ‘‘timely rating actions’’?
This example concerns a monthly reporting U.S. residential mortgage-backed transaction where the underlyings are subprime mortgages. It illustrates the ‘‘timeliness of rating actions’’ and clearly shows very practical limitations to timely rating decisions.
July 31. This is the reporting cutoff date for the pool of underlying subprime mortgages. Any loan that defaults on, say, July 28 makes it into the report. If a loan defaults on August 3, it’ll make it into the next report.
August 15. Distribution date for the investor report. The 15th of each month (or, if it falls on a weekend, usually the next working day) is what is agreed in the transaction’s documents. This is the day when the investor report is formally ‘‘distributed’’ by the trustee—either sent out or provided on the trustee’s website for download. We assume here it’s been sent to all relevant parties.
August 20. This is when the responsible analyst at the rating agency ‘‘receives’’ the investor report. It may have been sitting in his inbox for a couple of days. The rating agency has a collective email alias for receipt of these reports and the relevant analyst will pick ‘‘his’’ report up from there. Don’t forget they receive an awful lot of reports each day because they rate many thousands of different transactions. This can be a bit of an administrative nightmare and it can take time for a report to reach the responsible analyst’s desk.
August 27. Our analyst now has a chance to enter all the key performance indicators into the rating agency’s proprietary model and has the results of the first performance analysis on his desk.
The results look odd: suddenly there seems to be a high level of delinquencies and defaults, and our analyst is concerned about the overall performance of this transaction. The names involved are pretty good: a well-known originator, a robust servicer, and a well-diversified subprime mortgage loan portfolio. The previous months showed little tendency toward rising delinquencies and, overall, it didn’t look too bad. Maybe it requires looking into a bit further. Anyway, it looks as if it may be an adequate move to place the transaction on rating watch negative (RWN) in order to investigate further. RWN signals to investors that the rating agency is currently undertaking further analysis and may or may not need to amend the ratings once this analysis has been completed.
This can take up to 6 months, so an RWN does not signal an imminent rating change but is thought of more as a thumbs up (or down?). ‘‘A good compromise,’’ thinks our analyst, and he finishes the committee paper in order to put this deal on RWN.
August 31. The paper is finished and submitted to the rating committee with a view to placing the underperforming bond on RWN. It’s a rather awkward situation right now: with the market having been so buoyant over the past 2 years or so, quite a few senior directors and managing directors have decided to move on and are now working for banks that pay a lot more.
There seems to be an underlying pattern whereby investment banks frequently snap up staff from rating agencies. They suppose that analysts from agencies have well-honed analytical skills, know the rating agency business, and can add considerable value to them—and investment banks are usually able and willing to pay them considerably more, particularly in a market that is doing so well. Back to our transaction: the committee papers to place the transaction on RWN have been submitted, but our analyst finds it quite difficult to achieve the required quorum at this time of year: the main summer holiday season combined with the recent departure of senior agency staff do not mix very well.
September 15. More than 3 weeks have now passed since the analyst submitted the committee papers and today is the first time the committee has considered the performance of the underlying collateral and whether or not the instrument should be placed on RWN. Normally, the quorum would have agreed with the analyst’s proposal, but the most experienced and senior quorum member is deeply concerned about some of the performance trends. Delinquencies and defaults appear to have jumped to an uncomfortable level and, if they continue increasing at the same pace, there really will not be sufficient credit enhancement in the next quarter. The committee feels that the observed underperformance is more severe than previously thought and refrains from placing the deal on RWN. Instead, they brief the analyst to have a chat with the originator, check the report from August that should have become available today, and then consider a couple of additional stress scenarios to see the likely impact on the erosion of credit enhancement.
October 9. Our analyst has returned from his holidays; fortunately, he was only away for a couple of weeks. He knows he had better get stuck into the further analysis he was dealing with before he went away. So, off he goes to have a chat with the originator/servicer, and it looks like the performance is worsening. Falling house prices appear to be one of the reasons so many of these mortgages are now becoming delinquent: with falling house prices, many borrowers seem to be realizing that all they have in their property now is negative equity. With that in mind, what’s the incentive for them to pay off their debt? Well, there isn’t any; hence, the analyst prepares more serious steps.
October 12. The results of the second analysis and model run are reflected in an updated committee paper.
October 16. In addition to working on this particular deal, our analyst has also had a chat with the in-house economist, and they both looked at the latest economic data only to realize that the decline in U.S. house prices appears to be on a much larger scale than originally anticipated. Looking at the data, you could almost say this is a problem of national scale rather than just regional pockets. As a result, the analyst decides to recommend a downgrade (DG) for many of this particular deal’s bond tranches—including some of the currently AAA-rated mezzanine notes—and expects a somewhat controversial rating committee meeting.
October 23. Second committee meeting. This goes more smoothly than originally thought, not least because the analyst brought the agency’s in-house economist with him to support his view that national house prices are in decline. The committee is not particularly happy that it has to downgrade some AAA-rated tranches to A+, takes the view that AA- looks better than such a steep downgrade, and opts for the AA- rating. But this is really the least of their worries: declining house prices seem to be a bigger problem, as there had been similar committees like this one over the past week or so.
October 23. Some of the senior committee members raise their concerns with the internal credit policy department, as well as with legal counsel, about declining house prices and the wider implications for the thousands of similar transactions they rate.
October 25. Our analyst wonders why this has become such an internal hot topic. Literally everybody in the rating agency is now talking about the impact the downgrades of ‘‘his’’ bond seem to have had. The rating agency is now considering placing all the subprime deals it rates, which total several hundred million USD, on rating watch negative and reviewing all the ratings for this particular asset class. The market seems to be aware of the underlying issues; at least, that’s what some of the investors were telling him on the phone after he published the press release for the downgrade of those bonds. They were asking why it took the rating agency so long to do anything about this transaction and saying that downgrades like this should be much timelier. Little do they know what’s coming their way . . .
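Just to put a number on the delay, here is a small sketch that tallies the elapsed days across the timeline above (the year is assumed to be 2007, as the next paragraph suggests):

```python
from datetime import date

# Key dates from the fictitious timeline in Example 9.1 (assumed year: 2007).
events = [
    ("Pool reporting cutoff",               date(2007, 7, 31)),
    ("Investor report distributed",         date(2007, 8, 15)),
    ("Analyst receives report",             date(2007, 8, 20)),
    ("First model run",                     date(2007, 8, 27)),
    ("First committee meeting",             date(2007, 9, 15)),
    ("Second committee meeting, downgrade", date(2007, 10, 23)),
]

cutoff = events[0][1]
for label, day in events:
    print(f"{label:38s} {day}  (+{(day - cutoff).days} days)")
```

Almost 3 months pass between the reporting cutoff and the eventual downgrade, and that is with an analyst who spotted the problem on his very first model run.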
Whilst there is no indication in Example 9.1 of the year in which this fictitious sequence of events happened, given the circumstances it is probably pretty clear to you by now that the supposed year was 2007. However, something similar can happen again with different asset classes, although hopefully not on such a large scale.
The lesson that needs to be learned is that, even if rating agencies claim to be timely with their decisions, there are many impracticalities that really do detract from this objective. Furthermore, rating agencies are corporate entities comprising human beings. People make errors: they get analyses wrong, overlook important facts, procrastinate over actions (which can delay rating decisions), have internal arguments over who can do what and who has the most ‘‘power’’, play politics as a result of either internal or external pressures, and are driven by market share, profit, and sometimes plain greed.
Where rating agencies differ as companies is that their rating decisions can have far-reaching impacts on the entities affected: these can be corporates with a few hundred or many thousands of employees, or sovereigns where literally millions may be affected (such as when the AAA rating of a country is downgraded, as seen with several European countries in late 2010 and early 2011).
It is of utmost importance that rating agencies apply the necessary professional due diligence and care that comes with such a powerful position and do whatever is necessary to ensure that ratings decisions are conveyed in a timely fashion.
Further criticisms
One major critique concerns the rating agency business model (this applies to all four large rating agencies—Moody’s, Standard & Poor’s, Fitch Ratings, and Dominion Bond Rating Service—but not necessarily to the smaller of the 75 or so rating agencies that exist globally), also known as the ‘‘issuer pays’’ model. Key here is the common perception that a credit rating for an issuer that is being paid for by that issuer will never come across as unbiased and impartial. There are many examples to illustrate this, but my favorite is where a film company, prior to the release of its new blockbuster, asks a professional film reviewer to review the film and, of course, pays the reviewer for his work. The film reviewer, happy to have another paying client, reviews the film—which transpires to be ‘‘average’’ but he manages to spice his review up a little, hoping that he may get repeat business with this producer—and gives the film company his ‘‘independent review’’ to use as they please. Of course, there are differences, but the underlying principle is the same: there is a perceived conflict of interest and, even with regulation and IOSCO’s code of conduct, it naturally taints the perceived impartiality of the rating agencies.
Another major critique is that credit ratings can only capture credit risk, not market risk, operational risk, basis risk, or liquidity risk. This means that the risk captured and expressed by ratings is limited. Whilst this may be sufficient for a credit officer, it certainly is not sufficient for someone looking at risk from a more holistic angle. Furthermore, some of these risks are interconnected and, whilst they may initially manifest themselves in one form (e.g., liquidity risk during the first months of the credit crisis), they cannot always be clearly separated and can, of course, contaminate other risk categories (such as credit risk, which manifested itself in later months of the credit crisis). Having a view on credit risk alone via a credit rating is still only a ‘‘limited view’’.
Relying on just one external rating is fine as long as you only ever look at one agency’s view. In practice, however, most banks and other financial institutions use three external ratings. Typically, at inception of a structure you would find the external ratings to be the same—if they are not, look closely at them to understand why. However, throughout the life of the deal, the credit rating agencies will undertake their analyses independently and, hence, may or may not change the ratings that were originally assigned.
You can then end up holding a bond tranche that is rated AAA by one agency, AA- by another, and A3 by the third, essentially resulting in so-called split ratings. In the case of split ratings the question is which agency’s analysis you trust most and, hence, which rating you use as ‘‘the’’ rating for capital charge calculations, internal risk analysis, and internal and external portfolio analysis.
This depends largely on the institution you are working for and whether or not they have internal ratings that can be used instead of external ratings.
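For what it’s worth, one widely used convention for resolving split ratings in regulatory capital calculations, broadly the Basel II standardized approach rule, is to take the worse of two ratings and the second-best of three or more. A minimal sketch, using a simplified S&P-style scale and treating Moody’s A3 as broadly equivalent to A-:

```python
# Sketch: resolving split ratings with a Basel II-style rule
# (worse of two, second-best of three or more). Scale simplified.
SCALE = ["AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
         "BBB+", "BBB", "BBB-", "BB+", "BB", "BB-"]  # best to worst

def resolve_split(ratings):
    ranked = sorted(ratings, key=SCALE.index)  # best first
    # Worse of two and second-best of three are both index 1;
    # a single rating is simply used as-is.
    return ranked[min(1, len(ranked) - 1)]

# The split-rated tranche from the text: AAA / AA- / A3 (~A-).
print(resolve_split(["AAA", "AA-", "A-"]))  # -> AA-
```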
Prior to the credit crunch, one other practice led to considerable criticism of the agencies—the practice of so-called ‘‘notching’’. Standard & Poor’s and Moody’s used to be very active in this area.
Notching describes the practice whereby one rating agency revises another agency’s rating downward prior to using it in its own models—just because it comes from one of its competitors.
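In code terms, notching is as crude as it sounds. A minimal sketch (one notch down, on a simplified scale):

```python
# Sketch of "notching": knocking a competitor's rating down a notch
# before feeding it into one's own model. Scale simplified.
SCALE = ["AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
         "BBB+", "BBB", "BBB-"]  # best to worst

def notch_down(rating, notches=1):
    idx = min(SCALE.index(rating) + notches, len(SCALE) - 1)
    return SCALE[idx]

print(notch_down("AA"))  # a competitor's AA is treated as AA-
```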
Rating agency models are nothing more than that: models. The fuel that powers these models is rating criteria, methodologies, and assumptions, which are only a reflection of how the agency believes an instrument works.
More recently, there has been another interesting development in the rating agency space: increasing internal ‘‘competition’’, where some agencies ramp up their analytics and consultancy businesses—which are also owned by the relevant agency’s holding company—but conduct analysis that may differ from traditional rating agency work. Moody’s Analytics and Fitch Solutions are two such companies. These companies are usually independent of the core ratings business and are fire-walled, or have Chinese wall policies in place, to separate them from the ratings business. However, these consultancy companies do use traditional credit ratings to compare their products against. They offer a wide range of products, including implied ratings (market, CDS, or equity implied) and integrated tools that use models and cash flow engines, giving users the ability to replicate some of the work that agency staff would undertake in order to rate transactions. Equally, they make available the same underlying data that are used by the rating agencies’ own analytical staff when rating the original deals.
Companies such as Moody’s Analytics claim that their products are capable of forecasting rating changes before the agency actually changes the ratings, and they support their claims with various samples. I leave it to you to determine whether or not such products can add value to your business, but they certainly do not come cheap. The point I am making, however, is that these companies are now in internal competition with their own sister companies by offering fairly similar products. I wonder whether this is a good way forward, but the agencies’ parent companies seem to think there is certainly business to be made, and there is also considerable interest in the market.
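To give a flavor of what such an implied rating product does under the hood, here is a toy sketch of a CDS-implied rating; the spread thresholds are entirely hypothetical and not any vendor’s actual calibration:

```python
# Toy sketch of a CDS-implied rating: bucket a 5-year CDS spread
# (in basis points) into a rating band. Thresholds are hypothetical.
BUCKETS = [(30, "AAA"), (60, "AA"), (120, "A"), (250, "BBB"), (500, "BB")]

def cds_implied_rating(spread_bp):
    for ceiling, rating in BUCKETS:
        if spread_bp <= ceiling:
            return rating
    return "B or below"

# A name officially rated AA but trading at 300bp would be flagged:
print(cds_implied_rating(300))  # -> BB: the market "implies" a lower rating
```

If the implied rating sits persistently below the official one, that is precisely the kind of early-warning signal these products claim to provide.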