An important concept of modern banking is risk diversification. In a simplified setting, the outcome of a single loan is binary: non-default or default, with possibly a high loss as a result. For a well-diversified portfolio with hundreds of loans, the probability of such a high loss is much smaller because the probability that all loans default together is many times smaller than the default probability of a single loan. The risk of high losses is reduced by diversifying the investment over many uncorrelated obligors. By the law of large numbers the expected loss in both strategies is exactly equal. The risk of high losses is not equal. Because bank capital serves to provide protection for depositors in case of severe losses, the first lending strategy of one single loan requires the bank to hold much more capital than the second lending strategy with a well-diversified portfolio. The diversification impacts the capital the bank is expected to hold and also performance measures like return on capital and risk-adjusted return on capital.
Portfolio models provide quantified information on the diversification effects in a portfolio and allow calculation of the resulting probabilities of high losses. On a portfolio level, the risk of the portfolio is determined by single facility risk measures PD, LGD and EAD and by concentration and correlation effects. On a more global view, migrations, market price movements and interest rate changes can also be included in the portfolio risk assessment to measure the market value of the portfolio in the case of a liquidation. Portfolio models have become a major tool in many banks to measure and control the global credit risk in their banking portfolios. Idealized and simplified versions of portfolio models are rating-based portfolio models, where the portfolio loss depends only on general portfolio parameters and the exposure, default risk and loss risk of each loan, represented by the PD and LGD ratings, respectively. Exposure risk in such simplified models is currently represented by an equivalent exposure amount that combines on- and off-balance sheet items.
Such a risk calculation based on ratings is practically very useful and allows calculation of portfolio-invariant capital charges that depend only on the characteristics of the loan and not on the characteristics of the portfolio in which the loan is held. Rating-based portfolio models and the resulting portfolio-invariant capital charges are of great value in the calculation of regulatory capital. In early stages,52 loans were segmented based on rough criteria (sovereign, firm, mortgage, ...) and risk weights for each segment were prescribed. The proportional amount of capital (8% of the risk-weighted assets) was prescribed by the regulators. The new Basel II Capital Accord calculates the risk of the bank using a simplified portfolio model calibrated on the portfolio of an average bank. In addition, the Basel II Capital Accord encourages banks to measure their portfolio risk and determine their economic capital internally using portfolio models.
The main components of the risk of a single loan, exposure at default, loss given default and probability of default, impact on an aggregated level the portfolio loss distribution, as explained in section 5.2. Common measures of portfolio risk are reviewed in section 5.3. Section 5.4 illustrates the impact of concentration and correlation on portfolio risk measures. Portfolio model formulations are reviewed conceptually in section 5.5 and an overview of the current industry models is given in section 5.6. Some of these models also include the risk of changing interest rates and spreads. The Basel II portfolio model for regulatory capital calculation is explained in detail in section 5.7. Application and implementation issues are reviewed in section 5.8. The concepts of economic capital calculation and allocation are summarized in section 5.9 and a survey of risk-adjusted performance measures is given.
5.2.1 Individual loan loss distribution
Banks charge a risk premium for a loan to cover, among others, its expected loss. The expected loss reflects the expected or mean value of the loss of the loan. The expected loss depends on the default risk of the borrower, the loss
52 A comparison between Basel I and Basel II risk weights is made in the next chapter.
Trang 3percentage of the loan in case the borrower defaults and the exposure at the
time of default The loss L for a given time horizon or holding period is a
stochastic variable that is
with
EAD: the exposure at default can be considered as a stochastic or a deterministic variable; the stochastic aspect is most important for credit cards and liquidity lines.
LGD: the loss given default is a stochastic variable that typically ranges between 0 and 100%. The LGD distribution is typically assumed to follow a beta-distribution or a bimodal distribution that can be fitted using kernel estimators. Sometimes, the LGD distribution is represented by combining a discrete distribution at 0 and 100% and a continuous distribution in between. The LGD represents the severity of the loss in the case of default.
PD: the probability of default follows a Bernoulli distribution with events either 1 (default) or 0 (non-default). The probability of default is equal to PD (P(δPD = 1) = PD), while the probability of non-default is equal to 1 − PD (P(δPD = 0) = 1 − PD). The expected value of δPD is equal to E(δPD) = PD, the variance is equal to V(δPD) = PD(1 − PD).
For credit risk applications, one typically applies a holding period equal to one year. In the case of independent distributions EAD, LGD and δPD, the expected value of the loss probability distribution equals

E(L) = E(EAD) × E(LGD) × E(δPD)
     = EAD × LGD × PD,

with expected or average probability of default PD, the expected loss given default LGD and the expected exposure at default EAD. The expected loss is the expected exposure times the loss in the case of default multiplied by the default probability. The expected loss is typically used for provisioning and/or included in the risk premium of the loan. Proportional to the exposure, the risk premium should cover the LGD × PD. This explains the appetite to invest in loans with low default risk, low loss risk or both, on condition the margin is sufficiently profitable. The proportional loss distribution of a single loan with constant LGD is depicted in Fig. 5.1a.
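As a numerical illustration of E(L) = EAD × LGD × PD, a minimal Python sketch; all parameter values below are hypothetical and only serve to show the arithmetic:

```python
# Hypothetical loan parameters (illustration only, not from the text).
EAD = 1_000_000   # expected exposure at default
LGD = 0.45        # expected loss given default (45%)
PD = 0.02         # one-year probability of default (2%)

# E(L) = E(EAD) x E(LGD) x E(PD), assuming independent EAD, LGD and default event.
EL = EAD * LGD * PD
print(f"Expected loss: {EL:,.0f} per year")  # 9,000
```

Proportional to the exposure, the risk premium here should cover at least LGD × PD = 0.9% of the exposure per year.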
Trang 4reported below each graph in terms of percentage of the total exposure N With the same
expected loss, the portfolio distribution is less risky because high losses are less likely to occur compared to the loss distribution of the individual loan Note the different scales of the axes.
5.2.2 Portfolio loss distribution
The loss distribution of a portfolio composed of a large number of loans N is obtained by summing up the loss distributions of the individual loans:

LP = Σ(i=1..N) Li = Σ(i=1..N) EADi × LGDi × δPD,i. (5.2)

The expected loss of the portfolio is the sum of the expected losses of the individual loans:

E(LP) = E(L1) + E(L2) + · · · + E(LN).
However, the portfolio loss distribution can be totally different from the loss distribution of the individual loan. Indeed, the distribution of the sum of two independent random variables corresponds to the convolution of the two individual distributions. The convolution will smooth the discrete individual distribution in the case of deterministic EAD and LGD (Fig. 5.1a) into a quasicontinuous portfolio loss distribution.
Consider, e.g., a homogeneous portfolio of N loans with deterministic EADi = EAD and LGDi = LGD that are equal for all counterparts. Assume for the moment that the Bernoulli distributions δi are independent. The more general case of dependent distributions will be further discussed in section 5.4. The distribution of the loan portfolio is obtained as the convolution of the individual loan loss distributions. The proportional loss distribution of the portfolio is given by the following formula:

P(LP = LGD × j/N) = (N choose j) PD^j (1 − PD)^(N−j),  j = 0, 1, ..., N.
By the central limit theorem, the proportional number of defaults j/N tends to a normal distribution with mean PD and variance PD(1 − PD)/N.
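The binomial expression above can be evaluated directly. A Python sketch, using the same PD and LGD values as in the Fig. 5.1 setting (N = 10 here):

```python
import math

def portfolio_loss_pmf(N, PD, LGD):
    """P(L_P = LGD * j / N) for j defaults out of N independent,
    homogeneous loans: the binomial portfolio loss distribution."""
    return {LGD * j / N: math.comb(N, j) * PD**j * (1 - PD)**(N - j)
            for j in range(N + 1)}

pmf = portfolio_loss_pmf(N=10, PD=0.05, LGD=0.5)
print(f"P(no loss)       = {pmf[0.0]:.4f}")   # 0.95**10, about 0.599
print(f"P(loss of 5%)    = {pmf[0.05]:.4f}")  # exactly one default out of ten
```

Repeating this for larger N shows the smoothing effect of the convolution: the probability mass concentrates around the expected proportional loss LGD × PD.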
Figures 5.1a–d depict the loss distribution of a homogeneous portfolio of N = 1, 10, 100 and 1000 independently distributed loans with EAD = 1, LGD = 50% and PD = 5%. For small portfolios, the graphs already depict some important properties of the portfolio loss distribution: the distribution is fat-tailed53 and skewed to the right. This is not surprising given the interpretation of a loan as a combination of risk-free debt and a short position on an option, as explained in paragraph 4.3.1.1. The shape of the distribution is further influenced by concentration and correlation properties, as will be discussed in section 5.4. First, common risk measures are reviewed in the next section.
53 In a fat-tailed distribution function, extreme values have higher probabilities than in the corresponding normal distribution with the same mean and variance.
5.3 Measures of portfolio risk
The portfolio loss distribution summarizes all information of the risk in the credit portfolio. For practical purposes, calculations, investment decisions, management and regulatory reporting, the loss distribution needs to be summarized into risk measures. These risk measures highlight one or more aspects of the risk in the portfolio [25, 82, 124, 260, 291, 468].
A risk measure ρ is said to be a coherent risk measure if it satisfies the following four properties [25]:

Subadditivity: the risk of the sum is less than the sum of the risks, ρ(X + Y) ≤ ρ(X) + ρ(Y). By combining various risks, the risk is diversified.

Monotonicity: the risk increases with the variables;54 if X ≤ Y, then ρ(X) ≤ ρ(Y). Riskier investments have a higher risk measure.

Positive homogeneity: the risk scales with the variables; ρ(λX) = λρ(X), with λ ≥ 0. The risk measure scales linearly with a linear scaling of the variable.

Translation invariance: the risk translates up or down by subtraction or addition of a multiple of the risk-free discount factor; ρ(X ± αrf) = ρ(X) ± α, with α ∈ R and rf the risk-free discount factor.
The variables X and Y are assumed to be bounded random variables. In the next sections, different portfolio risk measures are discussed. An overview of their most interesting properties is given in Table 5.1. Some are illustrated in Fig. 5.2. Ideally, a practical risk measure should comply with all four properties. Some practical risk measures may not satisfy all of them. This means that there exist circumstances in which the interpretation of the risk measure becomes very difficult. For classical portfolios, such circumstances occur rather seldom.
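To make the subadditivity property concrete, the following Python sketch builds a hypothetical two-loan example in which a quantile-based measure (the VaR of section 5.3.3, computed directly from its definition) fails to be subadditive:

```python
def var(pmf, alpha):
    """VaR(alpha) = min{L : P(loss > L) <= 1 - alpha} for a discrete
    loss distribution given as a dict {loss level: probability}."""
    for level in sorted(pmf):
        tail = sum(p for loss, p in pmf.items() if loss > level)
        if tail <= 1 - alpha:
            return level
    return max(pmf)

# Hypothetical loan: lose 100 with probability 4%, nothing otherwise.
single = {0: 0.96, 100: 0.04}
# Two such independent loans combined.
combined = {0: 0.96**2, 100: 2 * 0.96 * 0.04, 200: 0.04**2}

alpha = 0.95
print(var(single, alpha), var(combined, alpha))  # 0 and 100
```

Each loan alone has a 95% VaR of 0 (the 4% default probability fits within the 5% tail), but the combined portfolio defaults at least once with probability 7.84% > 5%, so its 95% VaR jumps to 100: ρ(X + Y) > ρ(X) + ρ(Y).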
5.3.1 Expected loss (EL)
The expected loss (EL) of a portfolio of N assets or loans is equal to the sum of the expected losses of the individual loans:

ELP = Σ(i=1..N) ELi = Σ(i=1..N) EADi × LGDi × PDi.
Table 5.1 Advantages and disadvantages of portfolio risk measures. The last column indicates whether it is a coherent risk measure.

Expected loss — Advantages: information on average portfolio loss; direct relation with provisions. Disadvantages: no information on the shape of the loss distribution. Coherent: yes.

Loss standard deviation — Advantages: information on loss uncertainty and scale of the loss distribution. Disadvantages: less informative for asymmetric distributions. Coherent: no.

Value-at-risk — Advantages: intuitive and commonly used; confidence level interpretation; actively used in banks by senior management, capital calculations and risk-adjusted performance measures. Disadvantages: no information on shape, only info on one percentile; difficult to compute and interpret at very high percentiles. Coherent: no.

Economic capital — Disadvantages: no information on shape, only info on one percentile; difficult to compute and interpret at very high percentiles. Coherent: no.

Expected shortfall — Disadvantages: less intuitive than VaR; only tail and distribution information for the given percentile; computational issues at very high percentiles. Coherent: yes.

The expected loss is a coherent measure of risk.
55 See the Appendix for the definition of the concepts location, dispersion and shape.
5.3.2 Loss standard deviation (LSD, σL)
The loss standard deviation (LSD, σL) is a dispersion measure of the portfolio loss distribution. It is often defined56 as the standard deviation of the loss distribution:

σLP = √(E[(LP − ELP)²]).
Because a normal distribution is completely defined by its first two moments, the EL and σL would characterize the full distribution when the loss distribution is Gaussian. However, credit loss distributions are far from normally distributed, as can be seen from Fig. 5.1b.
The loss standard deviation of a single loan with deterministic EAD = EAD and independent PD and LGD distributions is given by:

σL = EAD × √(PD(1 − PD) × LGD² + PD × σ²LGD).
The loss standard deviation of the loan increases with the uncertainty on the LGD and PD. Observe that for some commercial sectors, e.g., firms, the assumption of independent LGD and PD may be too optimistic. Empirical studies on PD and LGD mention correlations for large firms [16, 133, 227, 432]. However, it is not yet clear how these results depend on the LGD calculations of Chapter 4 (market LGD, work-out LGD) and how they can be extrapolated to loans of retail counterparts or other counterpart types.
The loss standard deviation of a portfolio with N facilities is given by

σLP = √( Σ(i=1..N) Σ(j=1..N) ρij × σLi × σLj ), (5.4)

where ρij = ρji denotes the correlation between the loss distributions of the facilities i and j. In matrix form, with σL the vector of facility loss standard deviations and R the correlation matrix, the above expression becomes

σLP = √(σLᵀ R σL). (5.5)
Given a portfolio, one also wants to identify which positions cause most of the risk. The marginal loss standard deviation (MLSDf) measures the risk contribution of facility f to the portfolio loss standard deviation LSDP. Given expression (5.4), the marginal loss standard deviation is

MLSDf = Σ(j=1..N) ρfj × σLf × σLj / σLP.
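The portfolio loss standard deviation and the marginal contributions of expression (5.4) can be sketched numerically. The stand-alone σ values and the loss correlation matrix below are hypothetical; the script verifies that the marginal contributions add up to the portfolio loss standard deviation:

```python
import numpy as np

# Hypothetical three-facility portfolio: stand-alone loss standard
# deviations sigma_i and an assumed loss correlation matrix R.
sigma = np.array([10.0, 20.0, 5.0])
R = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 1.0]])

cov = np.outer(sigma, sigma) * R      # Cov(L_i, L_j) = rho_ij * sigma_i * sigma_j
lsd_p = np.sqrt(sigma @ R @ sigma)    # portfolio loss standard deviation, eqn (5.4)
mlsd = cov.sum(axis=1) / lsd_p        # marginal contribution of each facility

print(f"portfolio LSD        = {lsd_p:.2f}")   # well below sum(sigma) = 35
print(f"sum of contributions = {mlsd.sum():.2f}")
```

The gap between the portfolio LSD and the sum of the stand-alone σ values is the diversification effect; the facility with the largest marginal contribution is the natural candidate for risk reduction.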
Part of the loss standard deviation can be reduced by a better diversification, e.g., by increasing the number of loans, as can be seen from Fig. 5.1. Nevertheless, a part of the risk cannot be diversified, e.g., macroeconomic fluctuations will have a systematic impact on the financial health of all counterparts. It is part of the bank's investment strategy to what extent one wants to diversify the bank's risk and at what cost. From a macroeconomic perspective, the bank fulfills the role of risk intermediation, as explained in Chapter 1.

5.3.3 Value-at-risk (VaR)
The value-at-risk (VaR) at a given confidence level α and a given time horizon is the level or loss amount that will only be exceeded with a probability of 1 − α on average over that horizon. Mathematically, the VaR on the portfolio with loss distribution LP is defined as

VaR(α) = min{L | P(LP > L) ≤ (1 − α)}. (5.8)

One is α per cent confident not to lose more than VaR(α) over the given time period. The VaR is the maximum amount at risk to be lost over the time horizon given the confidence level. The time horizon or holding period for market risk is usually 10 days, for credit risk it is 1 year. The VaR depends on the confidence level and the time horizon. Figure 5.2 illustrates58 the VaR concept. VaR measures are typically reported at high percentiles (99%, 99.9% or 99.99%) for capital requirements. The management is typically also interested in the lower percentiles, e.g., the earnings-at-risk measure indicates the probability of a risk event that is not severe enough to threaten solvency, but will have a major impact on the profitability.
58 For readability purposes, losses are reported on the positive abscissa.
Trang 11ES VaR
EC EL
(a) EL, VaR, EC, ES (b) VaR measures
Fig 5.2 Expected loss (EL), value-at-risk (VaR), economic capital (EC) and expected shortfall (ES) are numerical measures to describe the main features of the loss distribution Pane (a) illustrates the VaR, EC and ES at the 95th percentile The right pane (b) illustrates that two loss distributions can have the same VaR, but different averages and tail distributions.
VaR is a well-known and widely adopted measure of risk, in particular for market risk (market VaR). The Basel II Capital Accord [63] also uses the concept of a 99.9% credit risk VaR and of a 99.9% operational risk VaR. Unfortunately, the VaR measure has important drawbacks. A major drawback is that the VaR does not yield information on the shape of the distribution and no information on the (expected) loss that can occur in the (1 − α) per cent of the time when the portfolio loss L exceeds the VaR. For credit and operational risk, one typically uses very high confidence levels in the deep tail of the distribution. At these levels, all assumptions regarding correlations and distributions may have an important impact on the VaR. The VaR estimate can become unstable at high confidence levels. Moreover, VaR is not a coherent measure of risk: it does not satisfy the subadditivity property [25, 195].
Incremental VaR and marginal VaR are related risk measures that capture the effect of a facility f on the portfolio VaRP [215]. The incremental VaR (IVaRf) measures the difference between the VaRP of the full portfolio and the VaRP−f of the portfolio without the facility f:

IVaRf(α) = VaRP(α) − VaRP−f(α).

The IVaR is a measure to determine the facilities that contribute most to the total risk of the portfolio. Its disadvantage is that the sum of the incremental VaRs does not add up to the total VaR of the portfolio, Σf IVaRf ≠ VaRP.
An alternative risk measure, intuitively closely related to the IVaR, is the marginal VaR that measures the sensitivity of the portfolio VaRP to the facility f with assets Af:

MVaRf(α) = (∂VaRP(α)/∂Af) × Af.

The sum of the marginal VaRs adds up to the portfolio VaR, Σf MVaRf = VaRP. The marginal VaR is also known as delta VaR.
5.3.4 Economic capital (EC)

The economic capital (EC) at a given confidence level α is defined as the difference between the value-at-risk and the expected loss:

EC(α) = VaR(α) − ELP.

It measures the capital required to support the risks of the portfolio. As the EC measure is based on the VaR measure, it has the same properties (not subadditive, instability at high confidence levels). In some applications, one uses a capital multiplier mα to approximate the economic capital as a multiple of the loss standard deviation σL:

EC(α) ≈ mα × σLP.

For a normal distribution, the capital multiplier at 99%, 99.9% and 99.99% is equal to 2.3, 3.1 and 3.7, respectively. For more fat-tailed distributions, capital multipliers between 5 and 15 have been reported [133].
The extensions to incremental economic capital IECf(α) = IVaRf(α) − ELf and marginal economic capital MECf(α) = MVaRf(α) − ELf are easily made, where it should be noted that these measures depend on the portfolio they are part of.
When more portfolios P1, P2, ..., Pn are combined, the EC of the whole is lower than the sum of the individual portfolio ECs (assuming subadditivity). The diversification benefit (DB) is equal to

DB = (EC(P1) + EC(P2) + · · · + EC(Pn)) / EC(P1 + P2 + · · · + Pn). (5.11)

The diversification benefit indicates the reduction in economic capital from a diversified investment strategy. Economic capital at the firm level will be discussed in section 5.9.
5.3.5 Expected shortfall (ES)

Expected shortfall (ES) measures the expected loss when the portfolio loss exceeds the VaR limit:

ES(α) = E[LP | LP > VaR(α)].

The VaR limit will be exceeded in the small number of cases with probability 1 − α. VaR and ES are complementary risk measures that describe the tail of the loss distribution. The ES indicates the average loss given a severe loss event, i.e., when the economic capital is not sufficient to absorb the losses. Expected shortfall takes a conditional average. As such, it is a more stable estimate than VaR measures. Therefore, ES is often preferred over VaR for capital allocation. Expected shortfall is a coherent measure of risk [1, 468]. Other names for expected shortfall are expected tail loss, conditional VaR and worst conditional expectation.
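The risk measures of this section can be estimated jointly from simulated portfolio losses. A Monte Carlo sketch for a hypothetical portfolio of 1000 independent loans (PD = 1%, LGD = 50%, EAD = 1 each; all choices illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical homogeneous portfolio of independent loans.
N, PD, LGD, n_sim, alpha = 1000, 0.01, 0.5, 200_000, 0.999

defaults = rng.binomial(N, PD, size=n_sim)  # number of defaults per scenario
losses = LGD * defaults                     # portfolio loss per scenario

EL = losses.mean()                          # expected loss (about 5 here)
VaR = np.quantile(losses, alpha)            # 99.9% value-at-risk
EC = VaR - EL                               # economic capital
ES = losses[losses > VaR].mean()            # expected shortfall beyond the VaR

print(f"EL={EL:.2f}  VaR(99.9%)={VaR:.2f}  EC={EC:.2f}  ES={ES:.2f}")
```

Note the practical drawback discussed above: at very high percentiles only a few hundred of the 200,000 scenarios lie beyond the VaR, so the VaR and ES estimates become noisy unless the number of simulations is increased.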
5.4 Concentration and correlation

Apart from the individual loan characteristics EAD, PD and LGD, correlation59 and concentration are key elements that shape the portfolio loss distribution. These effects are illustrated in Figs. 5.5 and 5.6 for deterministic EAD and LGD values on the unexpected loss. While the UL is a measure of the width of the distribution, also the distribution shape and the tail fatness can depend on the correlation and concentrations. A detailed description is provided in book II.
5.4.1 Correlation effect on unexpected loss
Consider a homogeneous portfolio of N equal-sized loans with equal and deterministic EAD and LGD, equal PD distributions and known default correlation ρ. The unexpected portfolio loss is

ULP = EADP × LGD × √(PD(1 − PD)) × √(1/N + ρ − ρ/N). (5.13)
59 Note that it is mathematically more correct to speak about dependence rather than correlation [165].
No correlation (ρ = 0): the unexpected portfolio loss (eqn 5.13) becomes

ULP = EADP × LGD × √(PD(1 − PD)) / √N.

The risk reduces inversely proportional to the square root of the number of loans in the portfolio. In general, in the case of no correlation, the unexpected loss on portfolio level is the square root of the summed squared facility unexpected losses, ULP = (Σ(i=1..N) UL²i)^(1/2). For homogeneous portfolios, this yields the factor 1/√N.
Perfect anticorrelation (ρ = −1, N = 2): suppose that two loans are perfectly anticorrelated.60 This corresponds to a perfect hedge and the unexpected loss (eqn 5.13) reduces to zero. It should be noticed, however, that default correlations are typically positive.

The correlation effect is depicted in Fig. 5.6. It clearly shows the significant increase in risk for high correlations.
The expression for the homogeneous portfolio also illustrates the dependence of the risk in terms of correlations and granularity. The granularity and correlation influence the unexpected loss via the 3-term expression under the

60 Note that this is assumed subject to the feasibility constraint. In the case of, e.g., three loans, it is not possible that ρ = −1.
Trang 150 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0
5 10
Fig 5.3 Impact of the PD correlation on the unexpected loss for a homogeneous portfolio
of N= 10, 100, 1000 loans, total exposure EADP= 1000, PD of 1% (BBB-range) and LGD
of 50% The expected loss is equal to 5, indicated by the horizontal dashed-dotted line.
square root in eqn (5.13). The expression consists of the granularity 1/N, the correlation ρ and their cross-term −ρ/N. The impact is depicted in Fig. 5.3. The unexpected loss decreases rapidly for small N; for larger N, the reduction is less important.
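The three terms 1/N, ρ and −ρ/N under the square root of eqn (5.13) can be tabulated for the Fig. 5.3 parameters. A Python sketch:

```python
import math

def unexpected_loss(ead_p, pd, lgd, n, rho):
    # Homogeneous-portfolio unexpected loss, eqn (5.13):
    # UL_P = EAD_P * LGD * sqrt(PD*(1-PD)) * sqrt(1/N + rho - rho/N)
    return ead_p * lgd * math.sqrt(pd * (1 - pd)) * math.sqrt(1/n + rho - rho/n)

# Fig. 5.3 setting: EAD_P = 1000, PD = 1%, LGD = 50% (expected loss = 5).
for n in (10, 100, 1000):
    row = [unexpected_loss(1000, 0.01, 0.5, n, rho) for rho in (0.0, 0.05, 0.2)]
    print(n, [round(ul, 2) for ul in row])
```

At ρ = 0 the rows show the pure 1/√N effect; for positive ρ the unexpected loss quickly stops decreasing with N because the correlation term ρ does not diversify away.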
5.4.2 Concentration effect on unexpected loss
Assume a homogeneous portfolio with N loans with EADi (i = 1, ..., N), where the individual loans have the same PD and LGD characteristics and zero PD correlation ρ = 0. The expression for the unexpected portfolio loss ULP from (eqn 5.6) becomes

ULP = LGD × √(PD(1 − PD)) × √(Σ(i=1..N) EAD²i) = EADP × LGD × √(PD(1 − PD)) × √HHI,

with

√HHI = √(1/N*) = √(Σ(i=1..N) EAD²i) / EADP.
Trang 16Fig 5.4 The Herfindahl–Hirschman index (HHI) is a standard index to measure the degree
of market concentration of a particular industry in a particular geographic market [252, 253,
255, 256, 414] The index is computed as the sum of the squared market shares compared
to the squared total market share Graphically, it compares the total area of the shaded small squares to the area of the large square For market shares expressed as a value between 0 and 100%, the HHI ranges from 0–1 moving from a very large number of small firms to one single monopolist The US Department of Justice considers values between 1% and 18% as moderately concentrated For this example, the HHI is equal to 22.6% For some applications, the market shares are scaled between 0 and 100 and the HHI ranges from 0 to 10000.
The Herfindahl–Hirschman index, HHI (Fig. 5.4), measures the concentration effect of the portfolio:

HHI = Σ(i=1..N) EAD²i / EAD²P.

The measure is widely used in antitrust analysis to measure the concentration in the market. A value close to zero indicates low concentration, a value close to 1 indicates high concentration. For the homogeneous portfolio, the
Fig. 5.5 (caption fragment) The expected loss is equal to 5, indicated by the horizontal dashed-dotted line.
equivalent number of loans N* in the portfolio is defined as

N* = 1/HHI = EAD²P / Σ(i=1..N) EAD²i.

The equivalent number of loans lies in between 1 (one single exposure) and N (equal-sized exposures). The granularity of the portfolio can be expressed as 1/N*. A fine granular exposure has a high N*.
The impact of concentration on the unexpected loss is depicted in Fig. 5.5. It is seen that the concentration effect is especially important in absolute numbers for small concentrations. Each time the number of equivalent loans in the portfolio doubles, the unexpected loss reduces by about 30%. Of course, this holds in the case of zero PD correlation. Note that credit portfolios tend to be quite lumpy: in [107] it is reported that the largest 10% of exposures account for about 40% of the total exposure.
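Computing the HHI and the equivalent number of loans N* is straightforward. A Python sketch with hypothetical exposures, one of them deliberately lumpy:

```python
# Hypothetical exposures (illustration only): one large loan among small ones.
exposures = [500, 100, 100, 100, 100, 100]

total = sum(exposures)
hhi = sum(e * e for e in exposures) / total**2  # Herfindahl-Hirschman index
n_star = 1 / hhi                                # equivalent number of loans

print(f"HHI = {hhi:.3f}, N* = {n_star:.2f} (vs {len(exposures)} actual loans)")
```

Here the single large exposure pulls N* down to about a third of the actual loan count: in terms of unexpected loss, the portfolio behaves like one of only N* equal-sized loans.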
5.4.3 Combined correlation-concentration effect
Consider a homogeneous portfolio of N loans with exposures EADi (i = 1, ..., N), identical PD and LGD characteristics, and PD correlation ρ. The unexpected portfolio loss (eqn 5.6) becomes

ULP = EADP × LGD × √(PD(1 − PD)) × √(1/N* + ρ − ρ/N*).
Comparison of this expression with the expression (5.13) for a homogeneous portfolio with equal-sized exposures indicates that, in terms of expected and unexpected loss, the portfolio with non-equal-sized exposures has the same risk as a homogeneous portfolio with the same total exposure equally spread over N* loans. The joint effect is illustrated in Fig. 5.6. Cross-sections are reported in Figures 5.3 and 5.5.
In the case of non-homogeneous portfolios that consist of facilities with different PDs, LGDs and correlations, the expressions become more complex. It becomes more difficult to find analytical expressions to match the first moments of the loss distribution of the non-homogeneous portfolio to the moments of an equal-sized, homogeneous portfolio. This will be further discussed in section 5.7, where the granularity adjustment is discussed that was proposed in an earlier consultative paper on the new Basel II Capital Accord. Of course, one can always use expressions of the form (5.4) and (5.5) to calculate the unexpected loss.
5.4.4 Correlations
Correlations limit the benefit of concentration reduction. Diversification means that one should take care to spread the portfolio investment across many investments that exhibit low correlation.
Correlations or dependence in general indicate that the stochastic components of the portfolio loss (eqn 5.2) exhibit a joint behavior [165]. The stochastic components or random variables are (partially) driven by common factors. Important types of dependence are known as default correlations and correlation between PD and LGD.

The default correlation reflects the property that default events are concentrated in time. There are years with many defaults during recessions and there are expansion years with a low number of defaults, as is illustrated in Fig. 3.1. Because of the low number of defaults, the measurement of PD correlations is a difficult task. Therefore, the PD correlation is often expressed in correlations that are more intuitive and easier to measure: correlations of asset or equity evolutions; correlations between rating migrations.
The dependence between PD and LGD has been reported in some empirical research on LGD modelling [16, 133, 227, 432]. In recession periods with a high number of defaults, the LGD for each default is sometimes observed to be higher. This indicates that in downturn periods, the capital buffer has to absorb two elements: a high number of defaults and high losses for each default.
Correlations also exist between market prices of non-defaulted issues, which is important in a mark-to-market portfolio explained below. The estimation and representation of correlations is a complex task because of the low number of observations and because correlation is only observed and measured indirectly. Dependence modelling is explained further in book II. An additional difficulty is that correlation tends to increase in times of stress [19, 108, 148, 164, 290].
5.5 Portfolio model formulations
5.5.1 Taxonomy
Portfolio models are widely used to analyze portfolios of assets. Their use can be motivated by regulatory purposes,61 internal economic capital calculation, capital allocation, performance measurement, fund management, and pricing and risk assessment of credit derivatives and securitization instruments62 (e.g., collateralized debt obligations, CDOs).
5.5.1.1 Classification
There is a wide variety of portfolio model formulations, each with different properties. Generally, the models are classified according to the following properties:
Risk definitions: The risk of the portfolio can be considered as pure default risk only or as loss due to changes in market values and rating changes. Default-mode models only take into account default risk; movements in the market value or the credit rating are not relevant. Mark-to-market models consider the impact of changes in market values, credit ratings and the impact of default events. These models allow a fair market value to be given to the portfolio. Because a value has to be computed for surviving loans as well, mark-to-market models are computationally more intensive. For trading portfolios, mark-to-market models are more appropriate. For hold-to-maturity portfolios, with typically illiquid loans, default-mode models are more applicable. When no market prices are readily available, mark-to-model approaches are a good alternative for mark-to-market approaches. Advanced models go beyond the pure credit risk and include interest rate scenarios.
Conditional/unconditional models: In conditional models, key risk factors (PD, LGD, ...) are explicitly conditional on macroeconomic variables. In unconditional models, the (average) key risk factors are assumed to be constant; the focus is more on borrower and facility information. In conditional models, typically the PD is made dependent on macroeconomic variables.
61 The pillar 1 credit risk charges of the Basel II Capital Accord are computed based on a simplified portfolio model, while pillar 2 recommends advanced banks to check the consistency of the regulatory model results with results from internal portfolio models.
62 Securitization instruments are discussed in section 1.8.4.
Structural/reduced-form default correlations: In structural models, correlations are explained by joint movements of assets that are possibly inferred from equity prices. Changes in the asset values represent changes in the default probability. In reduced-form models, the correlations are modelled using loadings on common risk factors like country or sector risk factors. Because dependencies are obtained in a different way, some distribution properties are different, as explained in book II.
Distribution assumption: Bernoulli mixture models consider the loss distribution as a mixture of binary Bernoulli variables. Poisson mixture models use a Poisson intensity distribution as the underlying distribution. For the same mean and variance, Bernoulli mixture models have fatter tails than Poisson mixture models.
Top-down/bottom-up: In top-down models, exposures are aggregated and considered as homogeneous with respect to risk sources defined at the top level. Details of individual transactions are not considered. Bottom-up models take into account the features of each facility and counterpart in the portfolio. Top-down models are mostly appropriate for retail portfolios, while bottom-up models are used more for firm models.
Large/small portfolios: Portfolio models are defined for a collection of credits. In most cases, this concerns thousands of facilities and large-sample effects are important. During the last decade, structured products emerged, as discussed in section 1.8.4. Such structured products allow exchange of credit risk and are defined upon an underlying portfolio of a much smaller number of facilities compared to the whole bank's portfolio. Except when large-sample assumptions are made, the portfolio models are also applicable to structured products.
Analytical/simulation models: Analytical models make well-chosen simplifying assumptions on the loss distributions of the asset classes. Exposures are grouped into homogeneous asset classes on which the loss distributions are calculated and afterwards aggregated to the full portfolio level. Given the assumptions made, the results are obtained from the analytical expressions, allowing fast computation. A disadvantage of these models is the restrictive assumptions that have to be made in order to obtain closed-form solutions from analytical expressions. Simulation-based models aim to approximate the true portfolio distribution by empirical distributions from a large number of Monte Carlo simulations. Because the portfolio losses are obtained from simulations, one does not have to rely upon the stringent assumptions that one sometimes has to make in analytical models. The main disadvantages are the high computation time and the volatility of the results at high confidence levels.
A detailed mathematical description is provided in book II. In section 5.5.2 the Vasicek one-factor model is explained, an analytical default-mode model formulation that serves as the basis for the Basel II capital requirements. A related simulation-based model is explained in section 5.5.3.
5.5.2 Vasicek one-factor model
Consider a Merton model in which the asset A_i follows a standard normal distribution [355]. The asset defaults when the asset value A_i drops below the level L_i. The default probability P_i equals P_i = P(A_i ≤ L_i).
Consider a one-factor model⁶³ with systematic factor η and idiosyncratic noise ε_i. The asset values are driven by both η and ε_i [499–501]:

A_i = √ρ η + √(1 − ρ) ε_i. (5.18)

The stochastic variables A_i, η and ε_i follow a standard normal distribution. The asset correlation ρ denotes the correlation between the assets A_i and A_j:

ρ[A_i, A_j] = E[(√ρ η + √(1 − ρ) ε_i)(√ρ η + √(1 − ρ) ε_j)]
            = ρ E[η²] + √(ρ(1 − ρ))(E[η ε_i] + E[η ε_j]) + (1 − ρ) E[ε_i ε_j] = ρ.

The asset correlation ρ is constant between all assets A_i. It is the common factor between all considered assets that reflects, e.g., the overall state of the economy.
The unconditional default probability PD_i = P(δ_i = 1) is the probability that the asset value A_i drops below the threshold value L_i:

PD_i = P(A_i ≤ L_i) = N(L_i). (5.19)

These unconditional probabilities can be obtained from long-term default rate statistics as reported in Fig 3.2. Given the idealized PD_i = 0.20% for a BBB rating, the default threshold becomes L_i = N⁻¹(0.20%) = −2.878.
63 The one-factor model is a mathematical representation of asset evolutions. The representation here allows negative asset values, which is financially not feasible. It is often opted to define a mathematical process that reflects the basic drivers in a mathematically convenient way. Positive assets can be obtained by a constant shift or transformation.
The conditional default probability given the systematic factor η is

PD_i(η) = P(A_i ≤ L_i | η) = N((N⁻¹(PD_i) − √ρ η) / √(1 − ρ)). (5.20)

Given that the systematic risk factor η follows a standard normal distribution, substituting its worst-case realization at the (1 − α) confidence level yields

PD_i(1 − α) = N((N⁻¹(PD_i) + √ρ N⁻¹(1 − α)) / √(1 − ρ)). (5.21)

The expression (5.21) allows computation of a worst-case PD at the (1 − α) confidence level given the systematic risk factor.
Fig 5.7 Probability density function and cumulative probability distribution function of the portfolio default rate (DR) for the Vasicek one-factor model. The PD of 1% is indicated by the vertical line. The higher the asset correlation, the fatter is the tail and the more likely very high default rates become.
For a very large portfolio, the cumulative distribution function becomes [499–501]

P(L_P ≤ 1 − α) = N((√(1 − ρ) N⁻¹(1 − α) − N⁻¹(PD)) / √ρ), (5.22)

where α indicates the confidence level. The proportional loss L_P denotes the default rate DR. The limiting loan loss distribution has corresponding density function

p(DR) = √((1 − ρ)/ρ) exp(½ (N⁻¹(DR))² − (√(1 − ρ) N⁻¹(DR) − N⁻¹(PD))² / (2ρ)),
with mean value E[DR] = PD, median value N((1 − ρ)^(−1/2) N⁻¹(PD)) and mode N(√(1 − ρ)/(1 − 2ρ) N⁻¹(PD)) for ρ < 1/2. This limiting loan loss distribution is highly skewed. Its probability density and cumulative distribution are depicted in Fig 5.7. For very large portfolios the uncertainty of the binomial distribution reduces to zero and the worst-case default rate at the 1 − α confidence level is obtained from eqn 5.22. Conditional on the systematic factor η, the expected default rate or conditional PD is given by eqn 5.20, reflecting
the time-varying economic condition represented by η. An illustration is available from (eqn 3.1) in section 3.6.3 [3].
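The worst-case calculation of eqn 5.21 can be sketched numerically with the standard library; the function name and the parameter values (a BBB-like PD of 0.20%, asset correlation 12%, 99.9% confidence level, written as alpha) are illustrative choices, not taken from the text:

```python
from statistics import NormalDist

N = NormalDist()  # standard normal: N.cdf(x), N.inv_cdf(p)

def worst_case_dr(pd, rho, alpha):
    """Worst-case default rate at confidence level alpha for the Vasicek
    one-factor model: N((N^-1(PD) + sqrt(rho) N^-1(alpha)) / sqrt(1 - rho))."""
    num = N.inv_cdf(pd) + (rho ** 0.5) * N.inv_cdf(alpha)
    return N.cdf(num / (1.0 - rho) ** 0.5)

wc = worst_case_dr(pd=0.002, rho=0.12, alpha=0.999)
# wc is roughly an order of magnitude above the 0.20% unconditional PD
```

The worst-case rate grows with both the asset correlation and the confidence level, which is exactly the tail-fattening effect shown in Fig 5.7.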
2. Generate a simulation of correlated asset realizations.
3. Compute for each facility the migration events.
4. Compute the loss realized with each migration. In the case of default, the loss is computed via a simulation from the LGD distribution that can be conditionally dependent on the macroeconomic situation.
5. Compute the full portfolio loss aggregating the losses of the assets.

This scheme is then realized for many simulations and the empirical distribution is obtained. A flow chart of the simulation scheme is depicted in Fig 5.8. In the next sections, the main elements of the simulation framework are discussed.
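The scheme above can be sketched for the simplest default-mode case with the homogeneous one-factor correlation of eqn 5.25; everything here (function name, a 2% PD, 15% asset correlation, a portfolio of 500 obligors) is an illustrative sketch using only the standard library:

```python
import random
from statistics import NormalDist

N = NormalDist()

def simulate_default_rates(pd, rho, n_obligors, n_sims, seed=7):
    """Monte Carlo default rates under the one-factor model:
    r_i = sqrt(rho)*eta + sqrt(1-rho)*eps_i, default when r_i <= N^-1(pd)."""
    rng = random.Random(seed)
    threshold = N.inv_cdf(pd)
    a, b = rho ** 0.5, (1.0 - rho) ** 0.5
    rates = []
    for _ in range(n_sims):
        eta = rng.gauss(0.0, 1.0)                      # systematic factor
        defaults = sum(1 for _ in range(n_obligors)
                       if a * eta + b * rng.gauss(0.0, 1.0) <= threshold)
        rates.append(defaults / n_obligors)
    return rates

rates = simulate_default_rates(pd=0.02, rho=0.15, n_obligors=500, n_sims=1000)
mean_dr = sum(rates) / len(rates)     # close to the 2% input PD
```

The empirical distribution of `rates` is the simulated counterpart of the skewed limiting distribution of Fig 5.7: its mean stays near the PD while the right tail fattens as the correlation rises.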
5.5.3.1 Correlated asset realizations
Consider the case of a one-factor model for a portfolio with homogeneous asset correlation ρ. The standardized returns of the assets are obtained as

r_i = √ρ η + √(1 − ρ) ε_i, (5.25)

where η and ε_i are simulations from independent standard normal distributions. The systematic part of the asset returns is equal to √ρ η, while the asset- or firm-specific part √(1 − ρ) ε_i is also known as the idiosyncratic part.

Other applications have non-homogeneous asset correlations. Counterparts exhibit a higher or lower dependence depending on whether they
operate in the same industrial sector or geographic region. Let Q ∈ R^(N×N) be the correlation matrix of the asset returns r_i, with q_ij = corr(r_i, r_j) (i, j = 1, ..., N). The Cholesky factorization Q = R^T R is a generalization of the square root √ρ in eqn 5.25. The matrix R is an upper triangular matrix
Fig 5.8 Flow chart of a simulation framework for a credit risk portfolio model. The Monte Carlo engine generates simulated correlated asset realizations. For each asset, the migration is calculated and the resulting loss is determined either in a mark-to-market or default-mode setup. These individual gains or losses are aggregated to the portfolio loss. The procedure is repeated many times to obtain the empirical loss distribution. Apart from the technicalities of the simulation engine itself, an important aspect of the portfolio model design is the model calibration. Simple implementations apply only correlations between PD rating migrations; advanced models take into account PD-LGD and PD-EAD correlations and even dependencies with spread evolutions and interest rates. The latter is not depicted on the graph, but can be taken into account similarly to the PD-LGD dependence.
such that Q = R^T R. An example of the Cholesky factorization is given in the Appendix. The correlated asset returns are then generated from

r_i = Σ_{j=1}^{i} R(j, i) η_j + √(1 − ρ_i) ε_i, (5.26)

with ρ_i = Σ_{j=1}^{i} R(j, i)². The vector η ∈ R^N of dependent factors and the idiosyncratic noise ε_i are simulations from independent standard normal distributions. In the case of non-homogeneous asset correlations, it is computationally not straightforward to compute all asset correlations for large portfolios. Indeed, for N assets the correlation matrix has O(N²) correlations to be calculated and the Cholesky decomposition requires O(N³) computations. An alternative approach, used by CreditMetrics [225], is to regress the asset return of facility i on a number of factors f_j (j = 1, ..., n):

r_i = w_i1 f_1 + w_i2 f_2 + · · · + w_in f_n + σ_i ε_i, (5.27)

where the factors f_j are standardized realizations of sectorial and geographical risk factors. The factor loadings w_ij (j = 1, ..., n) represent the weight of factor j to explain the asset returns r_i. The loadings can be obtained from least-squares regression (see book II), where one typically imposes that 0 ≤ w_ij ≤ 1 or −1 ≤ w_ij ≤ 1. The variance σ_i² is obtained from the unit variance constraint on the asset return r_i:

σ_i² = 1 − Σ_{k=1}^{n} Σ_{l=1}^{n} w_ik w_il corr(f_k, f_l). (5.28)
Based on the Cholesky decomposition of the factor correlation matrix [corr(f_k, f_l)]_{k,l=1:n}, one needs to store the factor loadings w_ij for each asset i and the idiosyncratic variance σ_i². The factor correlations can be obtained from a historical time series on sector returns or country returns, where one can emphasize stressed periods or choose the dependence together with the financial experts. One can then generate a simulation of correlated factors from the Cholesky decomposition and a simulation for the idiosyncratic noise ε_i for all assets i = 1, ..., N. The correlated asset returns are then obtained from eqn 5.27.
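The factor-based construction can be sketched in plain Python; the two-factor correlation matrix and the loadings below are made-up illustrative numbers. The helper computes a lower-triangular L with L L^T = Q, i.e. R = L^T in the text's upper-triangular notation:

```python
import random

def cholesky(Q):
    """Plain-Python Cholesky: lower-triangular L with L L^T = Q."""
    n = len(Q)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = ((Q[i][i] - s) ** 0.5 if i == j
                       else (Q[i][j] - s) / L[j][j])
    return L

def draw_factors(L, rng):
    """One draw of correlated standard-normal factors f = L z."""
    z = [rng.gauss(0.0, 1.0) for _ in L]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(L))]

# Two sector factors with correlation 0.4 (illustrative).
Q = [[1.0, 0.4], [0.4, 1.0]]
L = cholesky(Q)
rng = random.Random(0)
draws = [draw_factors(L, rng) for _ in range(20000)]
corr_hat = sum(f1 * f2 for f1, f2 in draws) / len(draws)   # near 0.4

# Asset return per eqn 5.27: loadings w and idiosyncratic variance from
# the unit-variance constraint sigma_i^2 = 1 - w^T Q w.
w = [0.6, 0.3]
var_sys = sum(w[i] * Q[i][j] * w[j] for i in range(2) for j in range(2))
sigma_i = (1.0 - var_sys) ** 0.5
```

For large factor sets one would store only the loadings and σ_i per asset, as the text notes, rather than the full O(N²) asset correlation matrix.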
5.5.3.2 Migration events
The correlated asset realization r_i generated from eqns 5.25–5.27 follows a standard normal distribution. Based upon the current rating R_t, e.g., BBB, the 1-year migration matrix yields the conditional migration probabilities
Fig 5.9 Migration events (left) and resulting losses (right) for a BBB company depending on the realization of r. For large r, an upgrade to A or AA is possible. For very low realizations of r, the company defaults. In 84% of the cases, the rating remains stable and the losses are zero. The right pane indicates the resulting losses in the case of a migration. The exact value of the loss in the case of migration depends on facility properties and spread structure.
P(AAA|BBB), P(AA|BBB), ..., P(CCC|BBB) and P(D|BBB) from BBB to the rating classes⁶⁴ AAA, AA, ..., CCC and the default state D, respectively. Given the standard normal asset return r_i, the next rating is then assigned.
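A minimal sketch of this threshold construction follows; the BBB migration row is a made-up illustration (chosen so that the default probability matches the 0.20% example and the stay-probability the 84% of Fig 5.9, but not taken from an actual matrix):

```python
from statistics import NormalDist

N = NormalDist()

# Hypothetical 1-year migration row for a BBB issuer (probabilities sum to 1).
migration_bbb = {"AAA": 0.0002, "AA": 0.003, "A": 0.06, "BBB": 0.84,
                 "BB": 0.07, "B": 0.02, "CCC": 0.0048, "D": 0.002}

def thresholds(probs):
    """Cut-offs on the standard-normal asset return: cumulate probabilities
    from the default state upwards and apply N^-1."""
    cuts, cum = {}, 0.0
    for state in reversed(list(probs)):          # D, CCC, ..., AAA
        cum += probs[state]
        cuts[state] = N.inv_cdf(min(cum, 1.0 - 1e-12))
    return cuts

def next_rating(r, probs):
    """Assign the new rating for an asset-return realization r."""
    cuts = thresholds(probs)
    for state in reversed(list(probs)):          # check worst states first
        if r <= cuts[state]:
            return state
    return list(probs)[0]                        # numerically above every cut-off
```

A realization below N⁻¹(0.002) ≈ −2.878 sends the issuer to default, while realizations near zero leave it at BBB, mirroring the left pane of Fig 5.9.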
The estimation of the migration matrix (including default rates) is an important element of the model calibration. One may even choose to put some migration probabilities equal to zero in cases where one considers it impossible or unrealistic for the bank to hold capital for such events. Such examples can result from country-ceiling effects or rating-support floors
64 Observe that one may also consider larger migration matrices including rating modifiers. The main elements in the trade-off are accuracy of the model results, computation cost, data availability and reliability for model calibration, and consistency across different sectors. The 1-year migration matrix is chosen here consistent with a 1-year horizon for the loss calculation. If appropriate, one may also use multiple-year migration matrices estimated from rating migrations or derived from the 1-year migration matrix under the Markov assumption.
Trang 29from mother or state intervention Advanced systems are equipped withflexible limits based on the rating of the country and/or mother.
Migration matrices estimated from historical migrations may suffer from statistical noise or may not be meaningful financially speaking. Statistical results indicate that, except for the main diagonal elements, the uncertainty on estimated elements can be very high due to the limited number of observations. Typical approaches to obtain coherent migration matrices are smoothing per row the left and right part of the diagonal or the estimation of consistent migration generator matrices [225, 279].
5.5.3.3 Mark-to-market losses
A zero-coupon bond does not pay any coupon and repays the face value F at the maturity date M. The bond price P relates the yield y to the face value and maturity:

P = F / (1 + y)^M. (5.30)

When discounting the face value at the yield y, the present value of the face value F equals the bond price P. Given the bond price P, face value F and maturity M, the yield is obtained from eqn 5.30 as y = (F/P)^(1/M) − 1. For different maturities, different yields are obtained. The relation between yield and maturity is referred to as the term structure of interest rates. The yield curve has typically an upward trend, as depicted in Fig 1.6. The yield for a zero-coupon bond is called the spot rate.
The cash flows from other bond types can be considered as a combination of zero-coupon bonds with different maturities. Consider now a fixed-coupon bond with face value F, coupon C, (remaining) maturity M and price P. For a given term structure of yields y_1, y_2, ..., y_M, the relation⁶⁵ between bond prices, yields, coupons, face value and maturity is

P_M = Σ_{i=1}^{M−1} C / (1 + y_i)^i + (F + C) / (1 + y_M)^M. (5.31)
65 For simplicity, it is assumed here that the previous coupon has just been paid and that annual coupons are paid. Formulas for other coupon-payment frequencies have a similar form. When more time has passed since the previous coupon payment, one may also take into account the accrued interest, which is the time-weighted proportion of the next coupon. Prices without accrued interest are clean prices; with accrued interest they are called dirty prices. See [173, 291] for more details.
Trang 30The iterative estimation of y i based upon P i , F, C and yields with shorter maturity y1, , y i−1is called a bootstrapping procedure From the calcu-lated yield/maturity relation the theoretical spot rate curve is obtained It isreferred to as the term structure of interest rates In practice, the calculationbecomes more complex because there are more observations at the samematurity trading at different yields and because prices are not available forall maturities More details on yield curves and modelling can be found
in [173, 376]
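The bootstrapping recursion can be sketched as follows; the bond prices are generated here from a made-up spot curve and then recovered, so all numbers are illustrative:

```python
def bootstrap_spot_rates(prices, coupon, face=100.0):
    """Strip spot rates y_1..y_M from prices of annual-coupon bonds,
    prices[m-1] being the price of the m-year bond (eqn 5.31 solved for y_m)."""
    spots = []
    for m, price in enumerate(prices, start=1):
        # discount the first m-1 coupons with the spot rates already solved
        pv_coupons = sum(coupon / (1 + spots[i]) ** (i + 1) for i in range(m - 1))
        # the final cash flow (face + coupon) then pins down y_m
        y_m = ((face + coupon) / (price - pv_coupons)) ** (1.0 / m) - 1.0
        spots.append(y_m)
    return spots

true_spots = [0.03, 0.035, 0.04]            # illustrative 1y/2y/3y spot curve
prices = [sum(5.0 / (1 + true_spots[i]) ** (i + 1) for i in range(m - 1))
          + 105.0 / (1 + true_spots[m - 1]) ** m
          for m in (1, 2, 3)]
recovered = bootstrap_spot_rates(prices, coupon=5.0)  # equals true_spots
```

Real curve stripping must additionally handle multiple quotes per maturity and missing maturities, as the text notes; this sketch assumes exactly one clean price per annual maturity.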
The forward rate f_{m,M} is the rate at a future date m for remaining maturity M − m. It indicates the rate that will be charged in the future for a given remaining maturity. The forward rate is related to the spot rate (Fig 5.10):

y_1 = f_{0,1},
(1 + y_M)^M = (1 + y_m)^m (1 + f_{m,M})^{M−m},
(1 + y_M)^M = (1 + y_1)(1 + f_{1,M})^{M−1}.
The fair value V of the bond next year (at the end of the 1-year horizon for the portfolio modelling evaluation) is calculated as follows:

V = Σ_{t=2}^{M−1} C / (1 + f_{1,t})^{t−1} + (F + C) / (1 + f_{1,M})^{M−1}.
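Both the forward-rate relation and the 1-year-ahead fair value can be sketched directly; the flat 5% curve used below is an illustrative choice that makes the result easy to check:

```python
def forward_rate(spots, m, M):
    """f_{m,M} implied by the spot curve:
    (1 + y_M)^M = (1 + y_m)^m (1 + f_{m,M})^(M - m)."""
    y_m, y_M = spots[m - 1], spots[M - 1]
    return ((1 + y_M) ** M / (1 + y_m) ** m) ** (1.0 / (M - m)) - 1.0

def fair_value_next_year(face, coupon, maturity, spots):
    """Fair value V at the end of the 1-year horizon: remaining coupons at
    t = 2..M-1 and the final cash flow, discounted at forward rates f_{1,t}."""
    v = sum(coupon / (1 + forward_rate(spots, 1, t)) ** (t - 1)
            for t in range(2, maturity))
    return v + (face + coupon) / (1 + forward_rate(spots, 1, maturity)) ** (maturity - 1)

flat = [0.05] * 10
v = fair_value_next_year(face=100.0, coupon=5.0, maturity=3, spots=flat)
# on a flat 5% curve a 5%-coupon bond stays at par, so v == 100
```

In the portfolio model, the same V would be recomputed with the spread curve of the simulated rating, which is how rating migrations translate into mark-to-market gains or losses.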
Fig 5.10 Spot and forward rates for different maturities. The forward rates are illustrated for various start dates and remaining maturities. The forward rate f_{m,M} is related to the spot rates y_m and y_M via (1 + y_M)^M = (1 + y_m)^m (1 + f_{m,M})^{M−m}.
Fig 5.11 Monthly average spread values for USD-denominated US firm bonds from January 1999 till June 2006 (source: CreditDelta). The top 2 graphs report the spreads for AAA-, AA-, A-, BBB-, BB- and B-rated bonds for a maturity of 1 and 5 years, respectively. The bottom left graph (c) illustrates the evolution of the spreads for a BBB rating for various maturities. The bottom right graph depicts the spread term structure for January 2003.

A rating downgrade will reduce the fair value V, whereas a rating upgrade will increase the fair value V. An indicative shape of the percentage losses in the case of migration is depicted in Fig 5.9b. Figure 5.11 depicts the bond spreads for different rating grades. The spread is reported for USD-denominated US firm bonds. It is observed that the risk premium charged for different rating grades is highly volatile. The volatility of spreads and the corresponding impact on the bond prices is known as spread risk. Spread risk models are on the borderline between market and credit risk models. The time horizon varies from 10 days to 1 year.
Bonds with longer maturity are more sensitive to rate changes because the mark-to-market losses of bonds with longer maturity are more sensitive to forward and yield changes. For a bond with face value F = 100, maturity
Fig 5.12 Illustration of bond price P sensitivity to the yield y for a bond with face value 100, coupon rate 5% and different maturities. The modified duration is reported at yield y = 5% and its linear approximation is reported for a maturity M = 20.
M and fixed yearly coupon of C = 5%, the price P when discounting future cash flows at a constant yield y is illustrated in Fig 5.12.

Note the increasing sensitivity for a longer maturity. The modified duration (eqn 1.1) expresses the sensitivity of the price P to the yield y, relative to the current price P. For smaller (remaining) maturities, the prices are pulled to the face value and the price sensitivity is small. Note that for larger yield changes, the convex relation becomes more important. The price–yield function is said to have positive convexity, which is due to the exponential discounting formula.
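The price–yield curve and the modified duration of Fig 5.12 can be reproduced numerically; the duration is taken here by finite difference rather than eqn 1.1's closed form, and the at-par example values match the figure's setup:

```python
def bond_price(face, coupon_rate, maturity, y):
    """Price of an annual fixed-coupon bond discounted at a flat yield y."""
    c = face * coupon_rate
    pv_coupons = sum(c / (1 + y) ** t for t in range(1, maturity))
    return pv_coupons + (face + c) / (1 + y) ** maturity

def modified_duration(face, coupon_rate, maturity, y, h=1e-6):
    """-(dP/dy)/P via a central finite difference."""
    up = bond_price(face, coupon_rate, maturity, y + h)
    dn = bond_price(face, coupon_rate, maturity, y - h)
    return -(up - dn) / (2 * h) / bond_price(face, coupon_rate, maturity, y)

p20 = bond_price(100.0, 0.05, 20, 0.05)        # an at-par bond prices at 100
d20 = modified_duration(100.0, 0.05, 20, 0.05)
d5 = modified_duration(100.0, 0.05, 5, 0.05)   # shorter maturity, smaller duration
```

Comparing `d20` and `d5` shows the maturity effect described above: the 20-year bond loses roughly 12.5% of its value per percentage-point yield rise, the 5-year bond only about 4.3%.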
The bond market has many different bond types, e.g., bonds with semi-annual fixed coupons, bonds with floating coupon rates, callable bonds, etc. For each of these bond types, an appropriate pricing and market sensitivity are determined. Loans may have similar features, as discussed in section 1.8. More complex bonds may include option characteristics. An overview of the pricing and risk analysis of bonds is available in [173, 291].

5.5.3.4 Default losses
In the event of default, one needs to assign the loss. In the case of default-mode models, this is the only case in which a loss is registered. In the case of a fixed average or stressed LGD, the loss is easily determined. In the more general case, an LGD distribution needs to be or has been estimated, and a random LGD value is drawn from the estimated distribution in the case of a default event. Typically, a beta distribution is used to fit LGD distributions. Advanced approaches use kernel density estimation methods to fit bimodal LGD distributions and/or combine continuous beta or kernel distributions with discrete distributions for the case of 0% or 100% LGD. In the case of beta distributions, the parameters can be obtained by fitting the mean and variance of the beta distribution on the empirical mean and variance of the observed LGDs. This is particularly interesting because these measures are often reported, e.g., in rating agencies' reports.
Given the cumulative LGD distribution F_LGD, the loss in the case of a default is randomly obtained as follows:

LGD = F_LGD⁻¹(N(x)),

with x a standard normal distributed stochastic variable, N the cumulative standard normal distribution and F_LGD⁻¹ the inverse of the cumulative LGD distribution. Stochastic LGDs are more realistic, as the LGD may be dependent on many sources. The use of stochastic LGDs increases the computational requirements and the additional uncertainty increases the tail fatness of the loss distribution. A dependence with the default risk factors is introduced when x = √ρ η + √(1 − ρ) ε. Instead of varying the LGD, one can also reduce the collateral value for secured facilities [201, 202].
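The method-of-moments fit mentioned above can be sketched as follows. The 45% mean and 25% standard deviation are illustrative numbers of the kind reported by rating agencies; and since the standard library has no inverse beta CDF, this sketch draws LGDs independently with `random.betavariate` rather than through the systematic link x = √ρ η + √(1 − ρ) ε described in the text:

```python
import random

def beta_params_from_moments(mean, var):
    """Method-of-moments fit of beta parameters (alpha, beta) to an
    empirical LGD mean and variance (requires var < mean * (1 - mean))."""
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common

# Illustrative numbers: workout LGDs with mean 45% and std dev 25%.
a, b = beta_params_from_moments(0.45, 0.25 ** 2)

rng = random.Random(42)
lgds = [rng.betavariate(a, b) for _ in range(50000)]
mean_hat = sum(lgds) / len(lgds)
var_hat = sum((x - mean_hat) ** 2 for x in lgds) / len(lgds)
```

The fitted beta reproduces the empirical mean and variance exactly by construction, which is why the simulated sample moments land close to the 45%/6.25% targets.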
5.5.4 Model calibration
Although there exist different model formulations, the differences in the resulting risk measures are reduced by a proper calibration, as is explained, e.g., in [124]. Apart from the methodology choices and the IT implementation of the (Monte Carlo) calculation engine, the calibration is an important aspect of the setup of a portfolio model [82, 124, 196, 212, 313].

The calibration concerns the decision on the key risk parameters. Some parameters, like default, loss and exposure risk, are relatively easy to calibrate. This work needs to be done anyway for banks that adopt the Basel II advanced internal-ratings-based approach. The correlation and dependence are only observed indirectly. The calibration of these parameters has an important impact on the tail distribution and requires special care and understanding. In complex portfolio models, the calibration task requires a careful selection of each parameter and verification of the global result. The latter is
Table 5.2 Comparison of the basic portfolio model formulations.

                    KMV PM         CreditMetrics   PRT             CPV Macro       CreditRisk+
Originator          KMV            JP Morgan       S&P⁶⁶           McKinsey        Credit Suisse
Documentation       [84]           [225]           [131]           [352]           [121]
Defaults/migration  Asset values   Equity values   Asset values    Macroeconomic   Sector default
driven by           (country,      (country,       (country,       factors         intensities
                    industry)      industry)       industry)
PD correlation      Asset value    Equity value    Asset value     Macroeconomic   Default intensity
                    factor model   factor model    factor model    factor model    model
LGD distribution    Beta           Beta            Beta            Random          Constant
Calculation         Monte Carlo    Monte Carlo     Monte Carlo     Monte Carlo     Analytical
                    simulation     simulation      simulation      simulation      solution
achieved via benchmarking and backtesting. The literature on the backtesting of portfolio models is rather limited and is still an area of research.

Table 5.2 summarizes the basic features of some well-known and popular portfolio model formulations. Each of these models and its possible extensions are discussed individually.

5.6.1 CreditMetrics
This model considers an asset value process like eqns 5.25 and 5.27. Default or migrations occur when the firm's asset value exceeds one of the critical threshold values (eqn 5.29).

66 The model formulation has been commercialized by S&P.

The asset value process is assumed to be dependent upon industrial sector and regional factors φ_j (j = 1, ..., J):
r_i = R_i η_i + √(1 − R_i²) ε_i, (5.32)

with η_i = Σ_{j=1}^{J} w_ij φ_j the firm's composite factor and ε_i the firm-specific effect or idiosyncratic noise part. Compared to eqn 5.18, one has R_i = √ρ; the value R_i² denotes how much of the asset return volatility r_i is explained by the volatility of the composite factor η_i. It measures the correlation with the systematic influences of the industry and country indices ψ_j. The parameters w_ij measure the factor loadings.
Conditional upon z = η_i, the default probability becomes N((N⁻¹(PD_i) − R_i z) / √(1 − R_i²)).

The simulation tool is not only default-mode oriented, but also mark-to-market. Mark-to-market price changes and losses are triggered by rating changes. The random generator generates correlated transitions dependent on the correlated assets (eqn 5.32) that would correspond in default mode to the loss distribution (eqn 5.22). Given the correlated transitions, the market losses are calculated. For (simulated) default events, the recovery can be drawn from a beta distribution and the losses are obtained. The LGDs are assumed not to be correlated with each other or with the default probabilities or other risk factors. The exposure at default is obtained via average exposures, or using loan-equivalent average exposures for more complex instruments (swaps, options, ...).
5.6.2 Portfolio Manager
KMV's Portfolio Manager is similar to the CreditMetrics model using a default-mode approach. It relies on a multivariate normal distribution of the asset returns, whereby the one-factor η_i is composed of sector and country factor indices φ_{S,j} and φ_{C,j} reflecting the J_S and J_C sectors and countries in which the firm operates. The factor loadings satisfy similar constraints as in CreditMetrics, and also the other model specificities are similar.
5.6.3 Portfolio Risk Tracker
Portfolio Risk Tracker (PRT) is a recent ratings-based model with a similar structure as CreditMetrics. It has been commercialized by Standard & Poor's. As it is a more recent model, new advanced functions are included. Apart from static computations that report loss distributions at the end of the time horizon, intermediate results at fixed time intervals can also be provided. It additionally includes stochastic interest rates, such that interest-rate-sensitive instruments like floating rate notes can be taken into account in the loss distribution computation without using loan equivalence. Stochastic spreads are also included, which is a novelty compared to the other models. In this sense, this portfolio model is able to capture default risk, migration risk and spread risk.

Other new elements include different ways to choose correlations, modelling of sovereign ceilings, correlations between PD and LGD and the possibility to include equities, Treasury bonds and interest rate options.

5.6.4 Credit Portfolio View
Credit Portfolio View (CPV) [518, 519] is a conditional macroeconomic model used by the international management consulting firm McKinsey & Company to support its projects in credit risk management. It is a simulation-based and ratings-based portfolio model in which the default and migration probabilities depend upon macroeconomic variables x like Gross Domestic Product growth, inflation, interest rates, savings rates and unemployment.
The whole portfolio is subdivided into N_s segments that correspond to sectors and/or geographic zones. In each segment s and at time index t, the default rate PD_{s,t} ∈ [0, 1] is dependent on a general macroeconomic index y_{s,t} ∈ R via the logistic link function

PD_{s,t} = 1 / (1 + exp(−y_{s,t})) and y_{s,t} = −ln(1/PD_{s,t} − 1).

The logistic link function maps a real-valued variable into the interval [0, 1]: as y_{s,t} → ∞, PD_{s,t} → 1, and as y_{s,t} → −∞, PD_{s,t} → 0.
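The link function and its inverse are one-liners; the 2% unconditional default rate and the +0.5 index shock below are illustrative values, not taken from the text:

```python
import math

def pd_from_index(y):
    """Logistic link: maps a real-valued index y to a default rate in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-y))

def index_from_pd(pd):
    """Inverse link: y = -ln(1/PD - 1)."""
    return -math.log(1.0 / pd - 1.0)

# A segment with unconditional PD_s = 2%; a recession shock raises the index.
y_avg = index_from_pd(0.02)
pd_stressed = pd_from_index(y_avg + 0.5)
risk_index = pd_stressed / 0.02          # r > 1 signals recession
```

The shock pushes the segment default rate from 2% to roughly 3.3%, giving a risk index above one, which is the recession signal used in the migration-matrix shift below.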
The macroeconomic index y_{s,t} itself is obtained based upon the macroeconomic variables x_{i,t} via

y_{s,t} = β_{s,0} + β_{s,1} x_{1,t} + · · · + β_{s,N_x} x_{N_x,t} + ε_{s,t}.

Each segment default rate PD_{s,t} is compared to the average (unconditional) default rate PD_s. One defines the risk index r_{s,t} = PD_{s,t}/PD_s. The segment is in recession when r_{s,t} > 1 and in expansion when r_{s,t} < 1. The risk index determines the migration matrix

M_{s,t}(i, j) = M_s(i, j) + (r_{s,t} − 1) ΔM_s(i, j),
which in turn determines the mark-to-market loss due to migrations. The conditional migration matrix M_{s,t} consists of an average part M_s and of a conditional part (r_{s,t} − 1)ΔM_s. The shift matrix ΔM_s satisfies ΔM_s(i, j) ≥ 0 for i < j and ΔM_s(i, j) ≤ 0 for i > j, as upward migrations become more plausible during expansions, whereas downward migrations become less plausible. The CPV algorithm ensures that M_{s,t} is a stochastic matrix at all times, with positive elements and rows summing up to one.
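The shift construction can be sketched with a three-state example; the matrices below are made up, with the shift rows summing to zero so that the conditional rows keep summing to one:

```python
def conditional_migration(m_avg, m_shift, r):
    """M_{s,t}(i,j) = M_s(i,j) + (r_{s,t} - 1) * dM_s(i,j)."""
    n = len(m_avg)
    return [[m_avg[i][j] + (r - 1.0) * m_shift[i][j] for j in range(n)]
            for i in range(n)]

# Three states: investment grade, speculative grade, default (absorbing).
M_avg = [[0.90, 0.08, 0.02],
         [0.10, 0.80, 0.10],
         [0.00, 0.00, 1.00]]
# Shift: dM(i,j) >= 0 above the diagonal (downgrades amplified when r > 1),
# dM(i,j) <= 0 below it, and each row sums to zero.
M_shift = [[-0.020, 0.015, 0.005],
           [-0.030, 0.005, 0.025],
           [0.000, 0.000, 0.000]]

recession = conditional_migration(M_avg, M_shift, r=1.5)   # r > 1
expansion = conditional_migration(M_avg, M_shift, r=0.7)   # r < 1
```

In the recession scenario the investment-grade default entry rises above its 2% average and upgrades become rarer; in the expansion scenario the opposite happens, while every row still sums to one.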
There exist two model formulations: the CPV macro and CPV direct model. The CPV macro model was developed first and works as follows. First, it generates (possibly correlated) noise sequences e_{i,t+1}, e_{i,t+2}, ... and ε_{s,t+1}, ε_{s,t+2}, ... for i = 1, ..., N_x and s = 1, ..., N_s. The time indices can, e.g., be yearly data, and if one wants to simulate over a 5-year prediction horizon, one uses 5 lags for each sequence. For each simulation, one computes the macroeconomic indices, segment default rates and risk indices. The 5-year conditional migration matrix is then obtained as the product of the conditional migration matrices. Simulating many sequences e and ε yields the distribution of migrations and default probabilities for any initial rating. Together with an assumption on the LGD, one can approximate the average loss distribution. Observe, however, that the model returns aggregate default rates and not obligor-specific default probabilities. The model is a top-down model, while other models start bottom-up from obligor- and facility-specific data.
Although the macroeconomic variables are intuitive and relatively easy to obtain, the model calibration of the CPV macro model can be a complex task, as many parameters have to be estimated while data may not always be readily available. Indeed, one has to estimate the parameters α_{i,j}, β_{s,j} given a time series of the macroeconomic variables x_{i,t} and default rates PD_{s,t}. Especially the latter may be less easy to obtain. As an alternative to the CPV macro, the CPV direct formulation has been developed to avoid all the difficulties of the calibration of the CPV macro model. CPV direct allows one to obtain the segment-specific default rates directly, drawn from a gamma distribution for which the calibration can be done via the method of moments as explained in book II.

This CPV model description has to be considered as a general framework. It is tailored to the client's needs in the implementation.
5.6.5 CreditRisk+
The original CreditRisk+ formulation focuses on default-mode portfolios. It is an actuarial model that was developed by Credit Suisse Financial Products. The mathematical formulation is conceived to obtain a fully analytical description of the portfolio loss distribution. No simulations are required. The model uses default intensities λ = −ln(1 − PD) that are approximately equal to the PD for small PD values: λ ≈ PD. The default intensities are made dependent on sector factors S_i. The conditional default risk of obligor j is obtained by scaling its intensity with the sector factors, where the factor loadings w_{ji} ∈ [0, 1] denote the importance of the sector i for the risk evaluation of obligor j. The remainder 1 − Σ_i w_{ji} reflects the obligor-specific risk. The model has been further extended by its developers. Other developers in industry and academia have made many adaptations and extensions to the approach [223]. A mathematical description is provided in book II. It is available from the technical documentation [121] and the literature [82, 132, 223, 514].
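The intensity approximation for small PDs is easy to verify numerically (the PD values below are arbitrary illustrations):

```python
import math

def default_intensity(pd):
    """lambda = -ln(1 - PD); for small PD, lambda is approximately PD."""
    return -math.log(1.0 - pd)

small_gap = default_intensity(0.01) - 0.01    # tiny: the approximation holds
large_gap = default_intensity(0.20) - 0.20    # noticeably worse for large PD
```

The gap grows with the PD, which is why the Poisson-intensity formulation of CreditRisk+ is most accurate for portfolios of low-default-probability obligors.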
5.6.6 Structured product models
The asset pool of structured products is a small portfolio with hundreds to thousands of assets, as depicted in Fig 1.17. Many models nowadays use the Monte Carlo simulation techniques depicted in Fig 5.8. Some differences with the classical portfolio models are the different time horizon, pre-payment risks and legal risks. The maturity of the structured products is typically larger than the one-year horizon for portfolio models. For some products, like mortgage-backed assets, interest rate and pre-payment risks are also evaluated. For CDOs, correlation and dependence modelling is the key challenge. For ABSs, the granularity aspect is less important and one can use historical loss statistics of the originator. There exists a rich literature on alternatives for Monte Carlo simulation, a.o., [82, 133, 223, 382, 391]. A comparison between CDO models is made in [143].
Trang 40A well-known pioneering model for evaluating CDOs is the binomialexpansion technique (BET) from Moody’s [114] The portfolio loss distri-
bution of N assets is approximated by a binomial probability distribution
of D ≤ N assets, where D depends on the concentration and correlation When reducing N to D, an important step is the computation of the diversity
score that depends, a.o., on the correlation between the different sectors ofthe asset pool
The risk analysis also includes a legal risk analysis and an analysis of the different parties involved in the deal, especially for complex deals with many parties involved.
5.7 Basel II portfolio model
The Basel II risk weights (RW) are a function of the issuer and issue risk. The risk weight determines a lower floor on the minimum required capital for credit risk:

bank capital ≥ 8% × Σ_i risk weight_i.
The 8% proportionality factor was introduced in the Basel I Capital Accord to have sufficient capital buffers in the banking sector [49, 63]. The 8% can be increased by local regulators. The Basel II risk weight factors are derived from a specific portfolio model developed by the Basel Committee on Banking Supervision. Given that the model and the corresponding formulae are used for capital adequacy supervision, the derivation was developed subject to an important restriction in order to fit supervisory needs.
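As a numerical illustration of the floor (the exposure amounts and risk weights below are made up, with the weights applied to the exposures to obtain risk-weighted amounts):

```python
def minimum_capital(exposures, risk_weights, ratio=0.08):
    """Lower floor on bank capital for credit risk: ratio times the sum of
    risk-weighted exposures (8% under Basel, possibly raised by local regulators)."""
    return ratio * sum(w * e for w, e in zip(risk_weights, exposures))

# Three exposures of 100 each at risk weights of 20%, 50% and 100%
cap = minimum_capital([100.0, 100.0, 100.0], [0.20, 0.50, 1.00])
# cap = 0.08 * (20 + 50 + 100) = 13.6
```

A lower risk weight for a better-rated obligor thus translates directly into a lower capital requirement for that loan, independent of the rest of the portfolio, which is the portfolio-invariance property discussed next.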
The model has to be portfolio invariant, i.e. the capital required for any given loan should only depend upon the risk of that loan and should not depend on the portfolio to which the loan is added or belongs. Indeed, for supervisory needs, it is too complex for banks and supervisors to take into account the actual portfolio composition for determining capital for each loan. It has been proven that under certain conditions a one-factor portfolio model is portfolio invariant when the number of loans in the bank goes to infinity [213].

Note that such a formulation does not take into account the diversification of the portfolio, as is done with more sophisticated portfolio models mentioned in section 5.6. The Basel II model therefore assumes that the bank's portfolios are well diversified. The lack of diversification is expected to be taken into account under pillar 2, as explained in the next chapter. The