Fixed Income Performance Attribution
Diploma thesis submitted to
Swiss Federal Institute of Technology Zürich and
University of Zürich, Swiss Banking Institute
for the degree of
Master of Advanced Studies in Finance
Abstract
The two key asset classes available to investment managers are equities and bonds. Equity attribution has been around for a while and well-established methods of attribution have been developed. It is therefore tempting to generalize these methods to fixed income attribution. However, in doing this the performance analyst ignores essential characteristics of fixed income investments. In many respects, risk factors in fixed income investments are fundamentally different from those in equity. Some of them do not even have an equivalent in the equity attribution universe – these include yield curves and credit spreads. Furthermore, the effect of yield curve moves and spread changes on bond value is non-trivial. The first part of this paper reviews the different factor decompositions and methodologies used in the fixed income industry. A special emphasis is put on the yield curve shift effects (parallel, twist, butterfly, reshape), which play a central role in performance attribution. In the second part we discuss the practical data quality problems that usually occur when implementing a fixed income performance attribution. We then run a Fixed Income Performance Attribution (FIPA) analysis on a real portfolio and interpret the results obtained. We finish by checking which FIPA factors are the main drivers of excess returns and whether the excess returns identified are still present on a risk-adjusted basis.
Contents
1 Introduction
1.1 Performance attribution
1.2 Fixed income performance attribution
2 Theoretical framework
2.1 Fixed income return decomposition
2.1.1 Carry return – Coupon income
2.1.2 Carry return – Roll-down
2.1.3 Market return – Yield curve
2.1.4 Market return – Spread
2.1.5 Market return – Volatility
2.1.6 Market return – FX rate
2.1.7 Timing return
2.2 Yield curve construction
2.2.1 Yield to maturity (YTM) curve
2.2.2 Zero coupon yield curve
2.3 Yield curve decomposition
2.3.1 Principal component analysis method
2.3.2 Empirical method
2.3.3 Polynomial method
2.3.4 Duration based method
2.4 Linking return effects to multiple periods
2.4.1 The arithmetic model
2.4.2 The geometric model
3 Issues in practice
3.1 Data quality
3.1.1 Assets without price or with an incorrect price
3.1.2 Corporate actions
3.2 Cash flows and management fees
3.2.1 Management fees
3.2.2 Accounting of reclaimable withholding taxes
3.2.3 Reinvestment of coupons
3.3 Gross / Net basis
3.4 Replicating the benchmark in general
4 Characteristics of the portfolio analyzed
4.1 Constraints on the portfolio
4.2 Style of the portfolio manager
4.3 Set up of the fixed income performance analysis
4.3.1 The yield curve
4.3.2 The YC decomposition factors
4.3.3 Linking method
5 The results
5.1 The FIPA attribution for the global portfolio
5.1.1 Global return (TWR)
5.1.2 Direct return
5.1.3 Roll-down
5.1.4 YC shift 1 (parallel shift)
5.1.5 YC reshape
5.1.6 Sector spread return (credit spread)
5.1.7 YC spread return (issue spread)
5.1.8 Fixed income timing
5.1.9 Fixed income currency return
5.2 The key ratios
5.2.1 The Alpha
5.2.2 The Beta
6 Interpretation
6.1 Excess returns and FIPA factors
6.1.1 Distribution of excess returns
6.1.2 Interaction between FIPA factors and excess returns
6.1.3 Multivariate analysis
6.2 Performance on a risk-adjusted basis
6.2.1 Alpha and FIPA factors
6.2.2 Excess returns on a risk-adjusted basis
7 Conclusion
8 Acknowledgments
9 References
10 Appendix
Appendix 1: US government yield curve principal component analysis
Appendix 2: Multivariate analysis of the FIPA factors
1 Introduction
1.1 Performance attribution
The purpose of performance attribution is to understand realized excess returns and to relate this information to the active decisions made in the investment organization, in order to understand the sources of out-performance and identify the active decisions that have generated the excess returns. Attribution models are designed to identify the relevant factors that impact performance and to assess the contribution of each factor to the final result. This information can then be communicated to clients, management and (not least) the portfolio managers that conducted the active bets. In doing so the performance analysis can over time add value by assisting in the identification of the investment management's particular skills and of the areas where skills appear to be lagging.
1.2 Fixed income performance attribution
The slump in equity markets during the last couple of years has changed many investors' attitude towards fixed income. From being a low-return, low-volatility asset class, bond investments are now considered more than just a safe haven. Measured on a risk-adjusted basis, the long-term returns from bond investments compare favorably with equity returns.
In order to understand the active decisions made during the investment process it is essential to understand the characteristics of the underlying asset classes and the relevant risk factors that drive the investments, since it is these asset classes and risk factors that the portfolio manager analyzes when designing portfolios.
Two key asset classes available to investment managers are equities and bonds. Equity attribution has been around for a while and well-established methods of attribution have been developed. It is therefore tempting to generalize these methods to fixed income attribution. However, in doing this the performance analyst ignores essential characteristics of fixed income investments.
In many respects, risk factors in fixed income investments are fundamentally different from those in equity. Some of them do not even have an equivalent in the equity attribution universe – these include yield curves and credit spreads. Furthermore, the effect of yield curve moves and spread changes on bond value is non-trivial.
For all these reasons, fixed income attribution has been one of the key challenges in the portfolio management industry; though there is now an extensive body of research into differing methodologies, there is still no agreed industry standard.
This paper proposes in its first part to review the different factor decompositions and methodologies used in the fixed income industry. A special emphasis is put on the yield curve shift effects (parallel, twist, butterfly, reshape), which play a central role in performance attribution. In the second part we briefly discuss the different problems that usually occur in practice when implementing the attribution. We then run a Fixed Income Performance Attribution (FIPA) analysis on a real portfolio and interpret the results obtained. We finish by checking which FIPA factors are the main drivers of excess returns and whether the excess returns identified are still present on a risk-adjusted basis.
2 Theoretical framework
2.1 Fixed income return decomposition
It is generally admitted that the value generated by holding bonds is composed of three different components. Unlike the case for equities, the return generated from periodic cash flows is significant. In addition to this periodic return, bond returns are sensitive to changes in the fundamental market variables, or fixed income risk factors. Finally, the return is affected by the timing of trades. These three different sources of return are usually denoted carry, market and timing return:
Fig 1 Fixed income return components: carry, market and timing return
2.1.1 Carry return – Coupon income
The carry return is composed of two components. The central component is the (typically annual) coupon being paid out to the investor – we denote this component the direct return. This component is always positive. The direct return is theoretically defined as:
$$r_{Direct} = \frac{C}{P}\cdot\Delta t = y_{Current}\cdot\Delta t$$
where C is the annual coupon, Δt is the time passed, P is the initial price and y_Current denotes the current yield.
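As a hedged numeric illustration (both figures are assumed for the example, not taken from the portfolio analyzed later): a bond with a 5% annual coupon quoted at a price of 102 accrues over one month roughly
$$r_{Direct} \approx \frac{5}{102}\cdot\frac{1}{12} \approx 0.41\%.$$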
More generally, a direct return is computed as follows within an "end of the day cash-flow / geometric model" setting:
$$r_{Direct,t} = \frac{N_{t-1}\cdot\left(P_{t-1}+AI_{t}+C_{t}\cdot 1_{Coupon,t}\right)\cdot X_{t-1}}{N_{t-1}\cdot\left(P_{t-1}+AI_{t-1}\right)\cdot X_{t-1}} - 1$$
where t: time, N: nominal amount, P: price, AI: accrued interest, C: coupon, 1_Coupon: coupon payment indicator, X: FX rate.
Fig 2 Direct return
2.1.2 Carry return - Roll-down
A less pronounced component of carry is the passage of time. Bonds usually do not trade at par, but they are eventually redeemed at par; therefore, at maturity the market price must converge towards par. For longer-dated bonds this effect is minor, whereas it can be significant for shorter-dated bonds trading away from par. This return component is called the roll-down return. The effect is positive for discount bonds (the roll effect will pull the price up towards par) and negative for premium bonds (the roll effect will pull the price down towards par).
The roll-down return can be interpreted as:
$$r_{RollDown,t} = \frac{N_{t-1}\cdot\left(P_{t}\left(YC_{t-1},YCS_{t-1}\right)+AI_{t}+C_{t}\cdot 1_{Coupon,t}\right)\cdot X_{t-1}}{N_{t-1}\cdot\left(P_{t-1}+AI_{t}+C_{t}\cdot 1_{Coupon,t}\right)\cdot X_{t-1}} - 1$$
where t: time, N: nominal amount, P: price, AI: accrued interest, C: coupon, 1_Coupon: coupon payment indicator, X: FX rate, YC: yield curve, YCS: yield curve spread.
Remark: the artificial price "P_t(·)" is calculated as a function of different factors such as the yield curve, the yield curve spread and the volatility (the number of factors depends on the model complexity). Artificial prices are needed to sequentially calculate and decompose the return effects (see Fig 1).
Fig 3 Roll-down return when the bond is overvalued
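To make the role of artificial prices concrete, here is a minimal sketch of the sequential revaluation idea in Python (the language used for all illustrative code in this paper). The pricing function price(yc, spread, t), the flat-curve bond formula and all numerical inputs are assumptions chosen for illustration; they are not the model actually implemented in this thesis.

# Minimal sketch of a sequential return decomposition with artificial prices.
# price(yc, spread, t) is a hypothetical pricing function (an assumption);
# each step changes one factor at a time, so the effects multiply back to the total.

def decompose(price, yc_0, yc_1, sp_0, sp_1, t0, t1):
    p_start = price(yc_0, sp_0, t0)            # observed start price
    p_roll  = price(yc_0, sp_0, t1)            # time passes, old curve/spread -> roll-down
    p_curve = price(yc_1, sp_0, t1)            # new curve, old spread         -> yield curve effect
    p_end   = price(yc_1, sp_1, t1)            # new curve, new spread         -> spread effect

    r_roll   = p_roll  / p_start - 1
    r_curve  = p_curve / p_roll  - 1
    r_spread = p_end   / p_curve - 1
    r_total  = p_end   / p_start - 1
    # geometric model: (1+r_roll)(1+r_curve)(1+r_spread) - 1 == r_total exactly
    return {"roll-down": r_roll, "yield curve": r_curve, "spread": r_spread, "total": r_total}


# toy pricing model: flat discounting of a 5% annual-coupon bullet bond (assumption)
def make_price(coupon=0.05, maturity=5.0):
    def price(yc, spread, t):
        y = yc + spread
        n = maturity - t                        # remaining life in years
        return coupon * (1 - (1 + y) ** -n) / y + (1 + y) ** -n
    return price


if __name__ == "__main__":
    effects = decompose(make_price(), yc_0=0.04, yc_1=0.045, sp_0=0.01, sp_1=0.008, t0=0.0, t1=0.25)
    for name, value in effects.items():
        print(f"{name:12s} {value:+.4%}")

Because each artificial price changes exactly one input relative to the previous step, the effects compound without a residual in the geometric setting used above.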
2.1.3 Market return – Yield curve
In contrast to the carry return components, the market return is less predictable. The market return is driven by the market variables on which bond value depends. In fixed income, the yield curve is the central market variable. Traditionally the yield curve is based on bonds issued by government entities; the rationale has been that this provides a default-free yield curve per country. Therefore the market value of government bonds is normally driven entirely by movements in this curve.
The basic approach to modeling yield curve movements is to calculate the difference between the final and the initial yield curve for the period over which performance is measured.
Fig 4 Yield curve movements
Often portfolio managers decompose yield curve shifts further into basic movements. Typically the number of basic movements varies between 2 and 5. This number is chosen, somewhat arbitrarily, by the portfolio manager who constructed the portfolio and who made bets on yield curve moves. The number of basic movements is consequently a trade-off between the explanatory power of the model and the complexity of the interpretation.
Recent studies suggest that most of the yield curve shift can be explained rather well by essentially three factors: parallel shift, slope (or twist) and curvature (or butterfly). The unexplained shift that remains is normally statistically small and put into a residual factor called reshape. Of course, in market crisis situations these first three factors might be insufficient to keep the reshape small and to produce a good performance attribution.
a) Parallel shift
A parallel shift appears when the rates at standard maturities move uniformly. Note that parallel shifts in yields are captured directly by the bond duration as
$$r_{Parallel} \approx -D\cdot\Delta YC_{Parallel}$$
where r denotes return, D is the modified duration and ΔYC_Parallel is the parallel change in the yield curve.
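As a hedged numeric illustration (both figures assumed): for a bond with modified duration D = 5 and a parallel upward move of 50 basis points,
$$r_{Parallel} \approx -5\times 0.005 = -2.5\%.$$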
b) Twist (steepening / flattening)
We can see a twist effect when short-term and long-term rates move in opposite directions, but proportionately in relation to the distance from some "pivot point" maturity (usually defined at 5 years).
c) Curvature
The curvature or butterfly effect occurs when short-term and long-term rates move in the same direction while medium-term rates move in the opposite direction, still proportionately.
By decomposing the yield curve movements into contributions from these shifts, the part of the fixed income portfolio return that is due to yield curve moves can be decomposed into:
Yield curve return = Parallel + Twist + Curvature + Reshape
Fig 5 Example of parallel, slope (steepness) and curvature shifts
The financial literature identifies several methods to extract these factors and quantify them. At least four can be mentioned:
• Statistical Principal Component Analysis (PCA)
• Empirically constructed user-defined factors
• Polynomial fit mimicking a Taylor decomposition of the yield curve function
• Factor model based on duration analysis
As the returns generated by the yield curve shifts are at the heart of a fixed income attribution analysis, we are going to review these four methods in detail later in the paper.
2.1.4 Market return – Spread
In addition to the general yield levels, non-sovereign debt is also sensitive to credit risk. The market measure of credit risk is the spread: the additional yield that an investor requires in order to invest in such bonds. The Implied Yield Curve Spread (YCS) is the discounting spread that must be added to the yield curve in order to match the market price of the given bond. So the YCS of a bond is the solution to the equation:
$$V = \sum_{t}\frac{C_{t}}{\left(1+y_{t}+YCS\right)^{t}}$$
where C_t denotes the cash flow at time t, V the value of the bond and y_t the zero yield for time t. The return resulting from the yield curve spread is fully defined by the following equation:
$$r_{YCS,t} = \frac{N_{t-1}\cdot\left(P_{t}\left(YC_{t},YCS_{t},Vol_{t-1}\right)+AI_{t}\right)\cdot X_{t-1}}{N_{t-1}\cdot\left(P_{t}\left(YC_{t},YCS_{t-1},Vol_{t-1}\right)+AI_{t}\right)\cdot X_{t-1}} - 1$$
where t: time, N: nominal amount, P: price, AI: accrued interest, X: FX rate, YC: yield curve, YCS: yield curve spread, Vol: volatility.
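As a minimal sketch of how the implied yield curve spread can be backed out from the market price, the following Python snippet uses a simple bisection; the cash flows, zero yields and market value are illustrative assumptions, and the routine is not any particular production implementation.

# Minimal sketch: back out the implied yield curve spread (YCS) by bisection,
# i.e. the flat add-on to the zero curve that reprices the bond to its market value.
# Cash flows, zero yields and the market price below are illustrative assumptions.

def pv(cash_flows, zero_yields, spread):
    # cash_flows: list of (time_in_years, amount); zero_yields: dict time -> zero rate
    return sum(cf / (1 + zero_yields[t] + spread) ** t for t, cf in cash_flows)

def implied_ycs(cash_flows, zero_yields, market_value, lo=-0.05, hi=0.20, tol=1e-10):
    for _ in range(200):                     # bisection on the monotone pricing function
        mid = 0.5 * (lo + hi)
        if pv(cash_flows, zero_yields, mid) > market_value:
            lo = mid                         # price too high -> spread must be larger
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    flows = [(1, 4.0), (2, 4.0), (3, 104.0)]          # 4% annual coupon, 3-year bullet
    zeros = {1: 0.030, 2: 0.033, 3: 0.035}            # illustrative zero curve
    print(f"YCS = {implied_ycs(flows, zeros, market_value=99.0):.4%}")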
The magnitude of the spread reflects the credit quality of the issue. The spread is typically decomposed into two subcategories – the sector spread (industry and rating specifics) and the issue spread (issuer specifics). The first subcategory reflects aspects common across bonds issued by corporations with similar ratings and in similar industries; the second category reflects issue/issuer-specific considerations. Therefore the spread can be decomposed as:
$$Spread = Sector\ spread + Issue\ spread$$
2.1.5 Market return – Volatility
For standard domestic bonds the previous factors are the main drivers. For more complex instruments other market variables can add value. An important category of bonds is bonds with embedded options. Often asset-backed, mortgage-backed and corporate bonds have built-in options in the form of put, call or prepayment options. For such bonds, changes in implied volatility are an important driver of market value, since the volatility drives the option value. For most vanilla bond portfolios the volatility return is small compared to the direct, curve and spread return components. However, for portfolios with large option positions or many mortgage bonds the volatility effect can be significant.
The volatility return is computed as follows:
$$r_{Vol,t} = \frac{N_{t-1}\cdot\left(P_{t}\left(YC_{t},YCS_{t},Vol_{t}\right)+AI_{t}+C_{t}\cdot 1_{Coupon,t}\right)\cdot X_{t-1}}{N_{t-1}\cdot\left(P_{t}\left(YC_{t},YCS_{t},Vol_{t-1}\right)+AI_{t}+C_{t}\cdot 1_{Coupon,t}\right)\cdot X_{t-1}} - 1$$
where t: time, N: nominal amount, P: price, AI: accrued interest, C: coupon, 1_Coupon: coupon payment indicator, X: FX rate, YC: yield curve, YCS: yield curve spread, Vol: volatility.
2.1.6 Market return – FX rate
For foreign investments the development of the FX rate is another key risk factor that impacts performance. The currency effect is generic (not specific to fixed income) and is treated exactly as for equity portfolios.
The FX effect is calculated as:
$$r_{Currency,t} = \frac{N_{t-1}\cdot\left(P_{t}+AI_{t}+C_{t}\cdot 1_{Coupon,t}\right)\cdot X_{t}}{N_{t-1}\cdot\left(P_{t}+AI_{t}+C_{t}\cdot 1_{Coupon,t}\right)\cdot X_{t-1}} - 1$$
where t: time, N: nominal amount, P: price, AI: accrued interest, C: coupon, 1_Coupon: coupon payment indicator, X: FX rate.
2.1.7 Timing return
The timing return component arises from the trading activities in a portfolio. Performance measurement is typically done on the basis of end-of-day prices, whereas trading is conducted during trading hours, so some discrepancy will occur. The effect of this trading is compounded into the timing return component. If the trader has executed at attractive levels relative to end-of-day pricing, the effect will show up as a positive return component.
Timing return can be defined as:
$$r_{Timing,t} = \frac{N_{t}\cdot\left(P_{t}+AI_{t}+C_{t}\cdot 1_{Coupon,t}\right)\cdot X_{t}}{N_{t-1}\cdot\left(P_{t}\left(YC_{t},YCS_{t},Vol_{t}\right)+AI_{t}+C_{t}\cdot 1_{Coupon,t}\right)\cdot X_{t}} - 1$$
where t: time, N: nominal amount, P: price (P_t: observed end-of-day price, P_t(·): artificial price), AI: accrued interest, C: coupon, 1_Coupon: coupon payment indicator, X: FX rate, YC: yield curve, YCS: yield curve spread, Vol: volatility.
Note that in this section 2.1 all formulas come from an "end of the day cash-flow / geometric model". The formulas change slightly in a "beginning of the day" and/or arithmetic setting; however, the logic behind them remains the same. This concludes the study of the fixed income return decomposition as described in Fig 1. The following section is dedicated to the most important component for fixed income attribution – the yield curve effect.
2.2 Yield curve construction
Yield curves are extracted from bonds available on the markets. Basically there are two main types of yield curves – the yield to maturity curve and the zero coupon yield curve. The zero coupon yield curve is easier to model than the YTM curve; therefore a considerable amount of academic research has been done on the zero coupon yield curve, where perhaps the best-known model is the stochastic model of Vasicek.
2.2.1 Yield to maturity (YTM) curve
The yield to maturity (YTM) curve is built from the yield to maturity, which is a security's internal rate of return, or the anticipated yield of the bond if held to maturity. The YTM is the rate used when calculating the present value of all cash flows, so that they add up to the current market price. In other words, it is the compounded rate of return that investors receive if the bond is held to maturity and all cash flows are reinvested at the same rate of interest. If r is the current yield to maturity, then the bond price is given by:
$$V = \frac{C}{1+r} + \frac{C}{\left(1+r\right)^{2}} + \dots + \frac{C}{\left(1+r\right)^{n}} + \frac{F}{\left(1+r\right)^{n}}$$
where C is the annual coupon, F the face value repaid at maturity and n the number of years to maturity.
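Since the YTM is only defined implicitly by this equation, it has to be solved for numerically. The following short Python sketch uses a Newton iteration with a numerical derivative; the coupon, maturity and price are illustrative assumptions.

# Minimal sketch: yield to maturity of an annual-coupon bond via Newton iteration.
# Coupon, maturity and quoted price below are illustrative assumptions.

def bond_price(ytm, coupon, n, face=100.0):
    return sum(coupon / (1 + ytm) ** t for t in range(1, n + 1)) + face / (1 + ytm) ** n

def yield_to_maturity(price, coupon, n, face=100.0, guess=0.05):
    y = guess
    for _ in range(50):
        f = bond_price(y, coupon, n, face) - price
        h = 1e-6                                   # numerical derivative of price w.r.t. yield
        df = (bond_price(y + h, coupon, n, face) - bond_price(y - h, coupon, n, face)) / (2 * h)
        y -= f / df
    return y

if __name__ == "__main__":
    # 5% annual coupon, 7 years to maturity, quoted at 96.50
    print(f"YTM = {yield_to_maturity(96.50, 5.0, 7):.4%}")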
2.2.2 Zero coupon yield curve
The zero coupon yield curve is built from zero coupon yields, i.e. the return a bond would show if all coupons were stripped out. Note that for securities that do not pay coupons, such as zero-coupon bonds or bills, there is only one repayment cash flow at maturity; in this case, the yield to maturity is identical to the zero coupon yield.
Thanks to its simplicity, many evaluation methods have been developed for the zero coupon yield curve. The following list is not exhaustive:
• Bootstrapping
• Cubic spline
• Nelson Siegel
• Cox Ingersoll Ross
• Cox Ingersoll Ross (inflation)
• Vasicek
• Longstaff Schwartz
• Maximum smoothness
• Natural spline
The evaluation principles may be divided into three groups: bootstrapping methods, mathematical methods and term structure models.
For all models except the bootstrapping method, the underlying functional form is estimated using ordinary least squares. Therefore, theoretical and observed prices of the bonds which have provided data for the yield curve will usually deviate. In the bootstrapping method, on the other hand, theoretical and observed prices of the bonds that have provided data for the yield curve are always equal, due to the calculation principle.
In the bootstrapping method, the zero coupon yield curve is approximated using a continuous, piece-wise linear function. The number of pieces is equal to the number of bonds (or money market, FRA/IRF and/or swap quotes) within the segment that provide data for the yield curve. The break points are defined by the times to maturity of the bonds. Therefore, if the segment includes 18 bonds, the yield curve is defined by 18 parameters, i.e. the slopes of the 18 linear pieces.
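The following minimal Python sketch illustrates the bootstrapping idea under simplifying assumptions (annual-coupon par instruments at consecutive maturities); the par yields are made up and the routine is not a full market-convention implementation.

# Minimal sketch of bootstrapping a zero coupon curve from annual-coupon par yields.
# Each step solves for the one new discount factor so that the observed (par) price
# is matched exactly -- hence theoretical and observed prices coincide by construction.
# The par yields below are illustrative assumptions.

def bootstrap_zero_curve(par_yields):
    # par_yields: {maturity_in_years: par coupon rate}, annual payments, price = 1.0
    discount = {}
    zeros = {}
    for n in sorted(par_yields):
        c = par_yields[n]
        pv_coupons = sum(c * discount[t] for t in range(1, n))   # earlier discount factors
        discount[n] = (1.0 - pv_coupons) / (1.0 + c)             # solve for the new one
        zeros[n] = discount[n] ** (-1.0 / n) - 1.0               # annually compounded zero rate
    return zeros

if __name__ == "__main__":
    curve = bootstrap_zero_curve({1: 0.030, 2: 0.034, 3: 0.037, 4: 0.039})
    for n, z in curve.items():
        print(f"{n}y zero = {z:.4%}")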
With the mathematical methods (cubic spline, natural spline, Nelson-Siegel), estimation techniques are used to create yield curves. We invite the reader to refer to a statistical textbook for further details1.
Finally, an alternative is to use term structure models (Cox-Ingersoll-Ross, Cox-Ingersoll-Ross [inflation], Vasicek, Longstaff-Schwartz). They are descriptions of changes to interest rates over time. Some of these models are characterized by having closed-form solutions for the price of zero bonds, which may be used in the yield curve estimation. Basically, the parameters of the interest rate process are used as variables in the estimation; by varying these parameters, it is possible to find the process that best fits the prices of the instruments used in the estimation. With these approaches the interest rate models have good asymptotic behavior (such as converging as the term to maturity grows large) and sometimes the parameters may have a financial interpretation. However, the approach is rather pragmatic: the models are simply used to produce zero curves with ideal features and no further interpretation is attempted.
Vasicek and Longstaff-Schwartz are somewhat more complex than the rest of the models. In these models the zero coupon yield curve is approximated by an equation that is derived as the solution to a stochastic differential equation. The change in interest rates is decomposed into a drift term and a stochastic term; the Longstaff-Schwartz model even includes two stochastic differential equations. In these models the underlying stochastic differential equations relate to so-called factors, which are presumed to describe the pricing in the financial market.
As an example we present here a brief specification of the Vasicek model:
$$dr_{t} = a\left(\bar{r} - r_{t}\right)dt + \sigma\, dW_{t}$$
where r is the interest rate. The other parameters are defined as follows:
• a > 0 : speed of mean-reversion
• r̄ > 0 : level of mean-reversion (the average value towards which the interest rate converges)
• σ > 0 : absolute volatility
• W_t : a standard Brownian motion at time t
Note that with these settings negative interest rates occur with positive probability, which is a weakness of the model.
The solution of the stochastic differential equation given above is, for 0 ≤ s < t:
$$r_{t} = r_{s}e^{-a\left(t-s\right)} + \bar{r}\left(1 - e^{-a\left(t-s\right)}\right) + \sigma\int_{s}^{t}e^{-a\left(t-u\right)}dW_{u}$$
with conditional mean and variance
$$E\left[r_{t}\mid\mathcal{F}_{s}\right] = r_{s}e^{-a\left(t-s\right)} + \bar{r}\left(1 - e^{-a\left(t-s\right)}\right), \qquad Var\left[r_{t}\mid\mathcal{F}_{s}\right] = \frac{\sigma^{2}}{2a}\left(1 - e^{-2a\left(t-s\right)}\right)$$
The zero coupon yield curve can then be modeled with r(t).
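As a minimal sketch of how the model produces a zero coupon curve, the Python snippet below simulates the short rate with a simple Euler scheme and evaluates the standard closed-form Vasicek zero coupon yields; the parameter values (a, r̄, σ, r0) are illustrative assumptions, not estimates used in this thesis.

import math
import random

# Minimal sketch of the Vasicek model: Euler simulation of the short rate and the
# closed-form zero coupon yields.  Parameter values are illustrative assumptions.

def simulate_short_rate(r0, a, r_bar, sigma, dt=1 / 252, n_steps=252, seed=42):
    random.seed(seed)
    path = [r0]
    for _ in range(n_steps):
        dr = a * (r_bar - path[-1]) * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        path.append(path[-1] + dr)            # note: rates can become negative in this model
    return path

def vasicek_zero_yield(r, tau, a, r_bar, sigma):
    b = (1.0 - math.exp(-a * tau)) / a
    ln_a = (r_bar - sigma ** 2 / (2 * a ** 2)) * (b - tau) - sigma ** 2 * b ** 2 / (4 * a)
    price = math.exp(ln_a - b * r)            # zero coupon bond price P(0, tau)
    return -math.log(price) / tau             # continuously compounded zero yield

if __name__ == "__main__":
    a, r_bar, sigma, r0 = 0.8, 0.045, 0.012, 0.03
    print(f"simulated short rate after 1y: {simulate_short_rate(r0, a, r_bar, sigma)[-1]:.4%}")
    for tau in (1, 2, 5, 10, 30):
        print(f"{tau:>2}y zero yield: {vasicek_zero_yield(r0, tau, a, r_bar, sigma):.4%}")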
For a more detailed discussion concerning the characteristics of these term structure models, please refer to relevant financial literature covering these models3
2.3 Yield curve decomposition
As we saw in the sections above, the yield curve shift can be decomposed into three main factors in order to explain the global move – the parallel shift, the twist and the butterfly – plus a residual. We are now going to review the four methods usually used in the industry to decompose the yield curve shift. These methods are generally applied to zero coupon yield curves.
2.3.1 Principal component analysis method
To explain all the possible distortions of a curve defined by n maturity points, n scenarios on each yield curve are required. PCA is a coordinate transformation that reduces the redundancy contained within the data by creating a new series of components in which the axes of the new coordinate system point in the directions of decreasing variance. The resulting components are often more interpretable than the original series. The mean of the original data is the origin of the transformed system, and the transformed axes of the components are mutually orthogonal.
3 See for example [9] Hughston L., "Vasicek and Beyond: Approaches to Building and Applying Interest Rate Models", Risk Books, 1997.
The methodology is as follows:
1 Import the interest rate series, for example daily yield curves of selected maturities up to 30 years.
2 Compute the stationary series of differences.
3 Compute the eigenvalues and eigenvectors of the series. The eigenvectors represent the factor loadings, while the eigenvalues represent the significance of the factors. Eigenvalues are reported in descending order.
4 The relative weight of the eigenvalues gives the explanatory power of the various factors.
5 The matrix of components is created. By construction, because of orthogonality, the components are mutually uncorrelated.
6 Every element of the original series can be reconstructed using the components and the loading matrix.
7 As reported in the interest rate literature4, the first three factors represent different aspects of interest rate movements. Typically, the first factor is responsible for parallel shifts, the second one for twist changes and the third one for butterfly adjustments.
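A minimal Python/numpy sketch of steps 1 to 7 is given below; it assumes a matrix of yield levels with one column per maturity (here filled with simulated random-walk data) and mirrors the procedure only in outline – it is not the code of Appendix 1.

import numpy as np

# Minimal sketch of the PCA procedure above; `yields` is a (T x N) array of yield
# levels, one column per maturity point (illustrative simulated data).

rng = np.random.default_rng(0)
yields = np.cumsum(rng.normal(0, 0.0005, size=(500, 10)), axis=0) + 0.04   # assumption

changes = np.diff(yields, axis=0)                 # step 2: stationary series of differences
changes = changes - changes.mean(axis=0)          # centre on the mean

cov = np.cov(changes, rowvar=False)               # step 3: eigen-decomposition
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]             # descending order of significance
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

explained = eigenvalues / eigenvalues.sum()       # step 4: explanatory power
components = changes @ eigenvectors               # step 5: mutually uncorrelated components
reconstructed = components @ eigenvectors.T       # step 6: reconstruction of the series
assert np.allclose(reconstructed, changes)

# step 7: the first three loadings are usually read as parallel, twist and butterfly
print("explained variance:", np.round(explained[:3], 3))
print("factor 1 loadings :", np.round(eigenvectors[:, 0], 2))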
Below we show a PCA analysis of the US Government yield curve. The time period chosen goes from January 1997 to August 2005, using monthly data. Fig 6 shows the different indexes that compose the US Government yield curve (1 month, 3 months, 6 months, 9 months, 1 year, 2 years, 3 years, 5 years, 10 years and 30 years).
Fig 6 Indexes that compose the USD Government yield curve from Jan 1997 to August 2005
This period is particularly interesting to analyze because it encloses a yield curve inversion (shorter rates higher than longer rates) from May 2000 to January 2001, then a steepening of the curve during 2001 and finally a flattening from May 2004.
4 See for example [10] James J. & Webber N., "Interest Rate Modelling", Ed. J. Wiley, 2001.
By applying a standard PCA5 analysis, we obtain the following factor loadings and the cumulative explained variance:
Fig 7 Factor loadings of the principal components for the USD Government yield curve indexes (1M to 30Y)
Fig 8 Cumulative variance explained by the first three factors
The factor loadings of the first principal component are, as expected, large and similar for all variables. An upward shift in the first principal component therefore induces a roughly parallel shift in all variables; for this reason the first principal component is called the parallel shift. With the PCA method, the parallel component is not strictly speaking a pure translation of the yields but rather a level change impacting the short and long ends slightly differently. Here, for example, the shorter rates are proportionally less affected than the longer rates. The first component explains 97% of the variation over the data period under consideration.
5 The code is available in Appendix 1.
In this example, an upward movement in the second principal component induces a change in the slope of the yield curve, where short maturities move up but long maturities move down, with an unchanged point at approximately 2 years. This second component is called the twist and explains about 2.5% of the variation.
The third principal component influences the convexity of the yield curve. The factor weights are positive for the short rates, decrease and become negative for the medium-term rates, and then increase and become positive again for the longer maturities. This is the butterfly effect, which explains 0.4% of the variation.
The unexplained variation (less than 0.1%) is sometimes called the reshape and is considered the residual of the PCA decomposition.
Note that the PCA method is a purely statistical decomposition and does not involve strong assumptions on the magnitude and direction of the yield changes occurring over a given period. The principal components are perfectly uncorrelated, which makes the performance numbers attached to each curve effect additive and clearly definable, and they explain most of the variance of the yield changes.
PCA does not require that the functional forms of the parallel, twist and butterfly be defined a priori. We generally observe that the first component, identified as the parallel shift, is not even across the term structure of the yield curve and shows more movement at the short end than at the long end. However, a method exists to force the first component to be strictly parallel, by reprocessing the components to orthogonalize them.
Furthermore, we still have to make assumptions on the horizon length. There is a fine line between statistical data relevance and explanatory relevance: statistically speaking, the longer the horizon the better; however, the changes in yield curve shape from a past period may be less relevant than recent events. For a performance attribution, a time window of 3 months seems appropriate.
2.3.2 Empirical method
The empirical method decomposes the returns of the portfolio in a very similar manner to the PCA method. The difference is that instead of using a statistical analysis to define the parallel, twist and butterfly components, the changes in zero-coupon yields between the beginning and the end of the period are measured empirically. It consists of a decomposition of the yield curve changes into a combination of three basic components: parallel, twist and butterfly.
Unlike in the PCA, the components are not statistically determined through a set of axis rotations in the spot rates, but rather by an empirical analysis of the yield curve. A method developed by Lehman Brothers6 uses a piecewise function with 5 maturity points on the yield curve (2, 3, 5, 10, 30 years), the pivot point being at 5 years.
6 See: [5] Dynkin L., Hyman J. & Konstantinovsky V., "A Return Attribution Model for Fixed Income Securities", Handbook of Portfolio Management, 1998.
The method used by Lehman Brothers is:
1 First define the beginning and the end of the period and compute the changes in zero-coupon yields between those two dates for the five reference maturities.
2 The three piecewise functions for parallel, twist and butterfly are defined:
• The parallel shift, called p, is intuitively set equal to the average yield change over the five reference maturities:
$$p = \frac{\Delta y_{2} + \Delta y_{3} + \Delta y_{5} + \Delta y_{10} + \Delta y_{30}}{5}$$
• Twist returns are defined with a pivot point set at the 5-year maturity point; the 5-year point is consequently not affected by the twist change. The twist magnitude is defined relative to this pivot point.
One way to exploit this flexibility at best would be to calibrate the pivot point by using a PCA. Hence we keep the flexibility of the empirical method whilst leveraging the PCA to describe the yield curve environment in a pertinent manner.
When the shift factors (parallel, twist, butterfly, ...) are well defined, the factor loadings are computed from:
$$YC_{t}^{(n)} = YC_{t-1}^{(n)} + \sum_{i=1}^{5} I_{i}\cdot F_{i}^{(n)} + Reshape^{(n)}, \qquad n = 1,\dots,N$$
where N represents the number of maturity points, F_i the factors (shift 1 to shift 5), I_i the factor loadings and Reshape the residual.
With these factor loadings we can therefore quantify the yield curve shift explained by each factor. Two approaches are broadly used to define these loadings – the first is the sequential OLS, the second the standard OLS.
a) Sequential OLS
In the sequential ordinary least squares procedure, the loadings are calculated factor by factor using simple algebra. For example, loading 1 is calculated by maximizing the explained yield curve variance, i.e. by minimizing
$$\sum_{n=1}^{N}\left(\Delta YC^{(n)} - I_{1}\cdot F_{1}^{(n)}\right)^{2}, \qquad\text{which gives}\qquad I_{1} = \frac{\sum_{n=1}^{N}\Delta YC^{(n)}\cdot F_{1}^{(n)}}{\sum_{n=1}^{N}\left(F_{1}^{(n)}\right)^{2}}$$
This process continues until all loadings are calculated (sequential OLS), or the process stops at the desired number of factors and the remaining unexplained yield curve shift is the residual (reshape) change. The basic idea behind this method is that the first factor explains as much of the yield curve variance as possible and leaves a residual; the second factor then explains this residual as well as possible and leaves another, smaller residual, and so on.
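A minimal Python sketch of the sequential OLS idea follows; the maturity grid, the observed yield curve change and the three shift functions are illustrative assumptions chosen to show the mechanics only.

import numpy as np

# Minimal sketch of sequential OLS loadings: each factor is fitted to whatever yield
# curve change is left unexplained by the previous factors.  All inputs are assumptions.

maturities = np.array([2.0, 3.0, 5.0, 10.0, 30.0])
delta_yc = np.array([0.0040, 0.0035, 0.0030, 0.0022, 0.0015])      # observed change

factors = {
    "parallel":  np.ones_like(maturities),
    "twist":     (maturities - 5.0) / 25.0,                         # pivot at 5 years
    "butterfly": ((maturities - 5.0) / 25.0) ** 2 - 0.2,
}

residual = delta_yc.copy()
loadings = {}
for name, f in factors.items():
    loadings[name] = float(residual @ f) / float(f @ f)             # one-factor OLS fit
    residual = residual - loadings[name] * f                        # pass the residual on

print({k: round(v, 5) for k, v in loadings.items()})
print("reshape (residual):", np.round(residual, 5))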
When analyzing factor loadings computed with a sequential OLS, we have to be careful because one factor effect can offset another; for example, a parallel yield curve shift can be partially offset by a twist effect. However, in practice portfolio managers first think in terms of duration (i.e. a parallel shift) and only then in terms of, for example, a twist. Therefore, even though this method seems mathematically less correct, it fits the methodology of portfolio managers better.
b) Standard OLS
Alternatively, all loadings can be calculated simultaneously using an ordinary least squares procedure. Here the vector of loadings is the solution to the following problem:
$$\min_{I_{1},\dots,I_{5}}\ \sum_{n=1}^{N}\left(\Delta YC^{(n)} - \sum_{i=1}^{5} I_{i}\cdot F_{i}^{(n)}\right)^{2}$$
2.3.3 Polynomial method
The polynomial approach relies on defining coefficients from fitting polynomials at the beginning and the end of the period and using them to estimate the magnitude of the changes in the portfolio yield.
1 Fit polynomials of degree zero, one and two to the zero-coupon yield curve at the beginning and at the end of the period:
$$y(t) \approx \alpha,\qquad y(t) \approx \beta_{0} + \beta_{1}t,\qquad y(t) \approx \gamma_{0} + \gamma_{1}t + \gamma_{2}t^{2}$$
giving the coefficient sets (α, β_0, β_1, γ_0, γ_1, γ_2) for both the beginning and the end of the period.
2 Compute for each yield curve component the magnitude of the change due to each effect:
a The parallel magnitude is defined as:
$$p = \alpha_{End} - \alpha_{Begin}$$
b The twist magnitude is defined as:
$$s(t) = \left(\beta_{1,End} - \beta_{1,Begin}\right)\cdot t + \left(\beta_{0,End} - \beta_{0,Begin}\right)$$
c The butterfly magnitude is defined as:
$$b(t) = \left(\gamma_{2,End} - \gamma_{2,Begin}\right)\cdot t^{2} + \left(\gamma_{1,End} - \gamma_{1,Begin}\right)\cdot t + \left(\gamma_{0,End} - \gamma_{0,Begin}\right)$$
where t is the time point on the yield curve term structure.
These parameters are proxies for the parallel (p), twist (s) and butterfly (b) effects at each maturity point of the zero-coupon yield curve.
In practice, relying on zero-degree polynomials to measure the parallel shift effect leads to a poor outcome and a minuscule attribution of the return to the parallel component. This method attributes returns mainly to the twist and, even more, to the residual, emphasizing a redundancy or double-counting.
To explain these deficiencies of the polynomial decomposition we have to understand that each of the polynomial fits is independent of the others and explains as much of the variance as possible in a non-orthogonal space. Without a correction for the correlation between effects, an over-estimation of the total return and a disproportionate residual are therefore obtained.
The only practicable way to use this method is to measure the parallel shift with the empirical method and then apply the first-order polynomial for the twist and the second-order polynomial for the butterfly. This sequential attribution will ensure that the parallel shift explains most of the return.
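A minimal Python sketch of this recommended sequential variant follows: the parallel shift is taken as the average change (empirical method), then a first-degree fit explains the twist and a second-degree fit the butterfly. The two curves used are illustrative assumptions.

import numpy as np

# Minimal sketch of the sequential polynomial decomposition recommended above.
# The two illustrative zero-coupon curves below are assumptions.

t = np.array([2.0, 3.0, 5.0, 10.0, 30.0])
yc_begin = np.array([0.030, 0.032, 0.035, 0.040, 0.045])
yc_end   = np.array([0.036, 0.037, 0.039, 0.042, 0.046])

change = yc_end - yc_begin
parallel = change.mean()                                   # empirical parallel shift

resid = change - parallel
b1, b0 = np.polyfit(t, resid, 1)                           # twist: first-degree fit
twist = b0 + b1 * t

resid2 = resid - twist
g2, g1, g0 = np.polyfit(t, resid2, 2)                      # butterfly: second-degree fit
butterfly = g0 + g1 * t + g2 * t ** 2

reshape = resid2 - butterfly                               # residual (reshape)
print("parallel :", round(parallel, 5))
print("twist    :", np.round(twist, 5))
print("butterfly:", np.round(butterfly, 6))
print("reshape  :", np.round(reshape, 6))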
2.3.4 Duration based method
The duration approach decomposes the returns of the portfolio based on its yield, duration and convexity. The calculation can be applied at every level of the portfolio and is very intuitive and easy to implement.
Following the method detailed in Fong7, the duration method breaks down the yield curve movement using the duration and convexity measures. The duration component explains the parallel effect and the convexity component captures the twist.
The parallel and twist components can be simply calculated as:
$$r_{Parallel} = -D\cdot\Delta y \qquad\text{and}\qquad r_{Twist} = \frac{1}{2}\cdot C\cdot\left(\Delta y\right)^{2}$$
where D is the modified duration and C is the effective convexity.
The advantage of the duration approach is that it does not require the definition of the terms and conditions of the securities. Secondly, portfolio managers and traders have an intuition for YTM, duration and convexity values, as these measures are widely accepted and used in fixed income analytics.
The key assumption is that the distributed cash flows of a fixed income instrument are approximated by a single cash flow concentrated at the duration of the security. Consequently, this method may not work well for a bond with large cash flows scattered across the full term structure.
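As a hedged numeric illustration of the two formulas above (all analytics assumed: modified duration 5.0, effective convexity 40.0, a +50 basis point parallel move):

# Minimal sketch of the duration-based decomposition with assumed analytics.
duration, convexity, dy = 5.0, 40.0, 0.0050

r_parallel = -duration * dy                 # duration term captures the parallel shift
r_twist = 0.5 * convexity * dy ** 2         # convexity term captures the twist (per the text)

print(f"parallel: {r_parallel:+.4%}")       # -2.5000%
print(f"twist   : {r_twist:+.4%}")          # +0.0500%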
2.4 Linking return effects to multiple periods
Two broadly used methods are available to link returns over multiple time periods: the geometric model and the arithmetic model.8
2.4.1 The arithmetic model
In arithmetic attribution the daily excess return contribution is simply obtained by addition of the different factor effects:
$$r_{t} = \sum_{i} r_{i,t}$$
where i indexes the factors, i = {Direct, Roll-down, YC shift 1, …, Currency}.
We can compound this return into multiple periods with:
• Arithmetic linking + a residual
• Geometric linking + a residual
• Logarithmic linking + a residual distributed along each effect with a repartition key
• Optimized approach (similar to the logarithmic one)
8 For a deeper analysis of multiple periods see [14] Spaulding D., "Investment Performance Attribution", McGraw-Hill, 2003.
From the unpublished white papers we read while writing this thesis, it appears that many attribution vendors typically use the geometric method to link the sub-period effects. The geometric method has the merit of being simple and easy to comprehend.
2.4.2 The geometric model
Geometric attribution is not as "linking-challenged" as arithmetic attribution.
In geometric attribution the daily return contribution is obtained by multiplication, and the return can be decomposed as:
$$1 + r_{t} = \prod_{i}\left(1 + r_{i,t}\right)$$
where i indexes the factors, i = {Direct, Roll-down, YC shift 1, …, Currency}.
Compounding into multiple periods is then straightforward:
$$1 + r = \prod_{t}\prod_{i}\left(1 + r_{i,t}\right)$$
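The following minimal Python sketch shows geometric linking of daily factor effects into period effects; the daily numbers are made up for illustration.

from functools import reduce

# Minimal sketch of geometric linking: factor effects multiply within a day and across
# days, so the linked factor effects recombine to the period total without a residual.

daily_effects = [
    {"direct": 0.0002, "roll-down": 0.0001, "yc shift": -0.0015, "spread": 0.0004},
    {"direct": 0.0002, "roll-down": 0.0001, "yc shift": 0.0008, "spread": -0.0002},
    {"direct": 0.0002, "roll-down": 0.0000, "yc shift": 0.0003, "spread": 0.0001},
]

# per-factor geometric compounding over the period
factors = daily_effects[0].keys()
linked = {f: reduce(lambda acc, day: acc * (1 + day[f]), daily_effects, 1.0) - 1 for f in factors}

# total period return: product over days of the product over factors
total = reduce(lambda acc, day: acc * reduce(lambda a, r: a * (1 + r), day.values(), 1.0),
               daily_effects, 1.0) - 1

print({k: round(v, 6) for k, v in linked.items()})
print("total:", round(total, 6))   # equals the product of (1 + linked factor effects) - 1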
For example, let’s say our portfolio had a return of 11% versus a benchmark of 10% Arithmetically, we would have an excess return of 1% Likewise, if our portfolio was 25% versus 24% benchmark, we would show an excess return of 1%
Geometrically, we get different numbers:
%991.01
125
++
The differences occur because the 1% addition earned relative to 10% counts a whole lot more than it does relative to 24% Make sense?
b) Convertibility
Another benefit of the geometric approach is that it reports the same excess return regardless of the currency.
Trang 25For example, let’s say on January 1, our portfolio starts out at $100 On that date, the conversion rate to Euro was 1.13305 (i.e for $1, we get €1.13305) Our conversion to Pounds Sterling is 0.696676 (i.e for $1 we get roughly 69 pence)
Twelve months go by and our US portfolio has gone up 10%, to $110 The benchmark (in US dollars) has gone up 8% during this time The new FX rates are for Euro 1.18970 (i.e for $1
we get €1.18970) and for Pounds Sterling 0.710610 (i.e for 1$ we get £0.710610)
The following table shows the starting and ending values in the three currencies for thee portfolio and benchmark We also show the returns and excess returns
              Starting values      Ending values        Return               Excess return
              Portfolio  Index     Portfolio  Index     Portfolio  Index     Arithmetic  Geometric
US dollars    100.00     100.00    110.00     108.00    10.00%     8.00%     2.00%       1.85%
Euros         113.31     113.31    130.87     128.49    15.50%     13.40%    2.10%       1.85%
Pounds        69.67      69.67     78.17      76.75     12.20%     10.16%    2.04%       1.85%
Fig 9 Comparison of the arithmetic and geometric returns
For example, the geometric and arithmetic excess return for Euro is computed as:
$$r_{Arithmetic} = 15.50\% - 13.40\% = 2.10\%\qquad\text{and}\qquad r_{Geometric} = \frac{1+0.1550}{1+0.1340} - 1 = 1.85\%$$
With these words we end the theoretical part of this master thesis. After having reviewed the theoretical framework underlying a fixed income performance attribution, we continue by briefly describing the different problems usually encountered in practice.
3 Issues in practice
To perform a fixed income performance attribution, the entire portfolio has to be recalculated each day in order to extract the different return effects. Furthermore, a decomposition is typically done with subcategories such as currency and maturity. Consequently, the index benchmarks provided by the market are not sufficient; internal benchmarks at security level have to be constructed as well to match each subcategory. These internal benchmarks have to replicate exactly the index benchmark on which they depend.
We then understand that the IT system, price sources, price quality and cash out- and inflows must be handled in a very rigorous way.
A good performance attribution without excellent performance measurement is worth nothing! Database maintenance is therefore the first obligatory step prior to any performance attribution, and we would highly recommend not to underestimate this data quality issue. The next paragraphs give a quick view of the principal issues that usually arise when implementing a fixed income performance analysis.
3.1 Data quality
Performance attribution requires very high data quality, which is certainly the most sensitive part of this kind of analysis because it is costly and time consuming to monitor and maintain a high-quality database. To give an indication, it is not unusual that firms invest more than a year in cleaning the data history. For example, daily prices coming into the system must be closely monitored. Below is a non-exhaustive list of inputs that require close monitoring:
• The different price sources that feed the FIPA analysis
• The booking of non-standard corporate actions
• The booking of management fees
• The treatment of withholding taxes
• The reinvestment of coupons
• The dynamic changes in ratings
• The dynamic changes of maturity buckets (e.g for multi-step bonds or callable bonds)
• The dynamic changes of business classes
Next we give a more complete description of some issues one will certainly encounter when implementing a FIPA analysis:
3.1.1 Assets without price or with an incorrect price
Databases generally get prices from different sources such as Morgan Stanley, Merrill Lynch, Pictet, Lehman, JP Morgan, etc. Priority lists are set up to prioritize the prices. A problem arises when the delivered prices with the highest priority are wrong – and this will happen for sure.
To remedy this problem, a daily process with the back office should be put in place to correct the wrong prices.
Another source of incorrect prices are banking holidays abroad that are not holidays in the home country of the portfolio. This causes an important number of assets to have no price although it was a working day in the home country. Here again, close monitoring has to be put in place.
3.1.2 Corporate actions
Other minor price errors can be caused by special corporate actions (principally for stocks) like splits, new issue rights, dividends in stocks or convertible bond issues. A timing error in the booking of the corporate action is often responsible for the error. In fact, in most cases the database gets three dates – the ex-date, the recording date and the payable date – which are not always standardized and may cause errors.
3.2 Cash flows and management fees
Another issue is the booking of the different cash in- and outflows of the portfolio, which must be performance neutral. The main sources of cash flows are listed below:
3.2.1 Management fees
Depending on whether you want to compute a performance attribution on a gross or a net basis, management fees have to be taken into account or not. If you choose a net performance, management fees have to be taken into account and your performance relative to the benchmark will be a bit lower; this reflects the point of view of your client. If, on the contrary, you choose a gross attribution without management fees, your portfolio will be directly comparable with the benchmark portfolio, which is what a portfolio manager wants. But by removing management fees, a gap growing roughly linearly over time will appear, on a cumulative basis, between the portfolio value calculated for your performance attribution and the real accounting value.
3.2.2 Accounting of reclaimable withholding taxes
Returns should be calculated net of non-reclaimable withholding taxes; reclaimable withholding taxes should consequently be accrued. The main problem here is that tax policy may differ not only with the country where the bond was issued, but also with the owner of the bond.
3.2.3 Reinvestment of coupons
The methodology for the reinvestment of coupons concerns principally the benchmark portfolios. In fact, if one decides to create benchmark portfolios at security level, these portfolios have to replicate exactly their benchmark index. Unfortunately, different practices are used by the main benchmark index providers, for example:
• Lehman Brothers records coupons on an account without interest and reinvests them every month.
• Merrill Lynch records coupons on an account with interest and reinvests them every month.
• JP Morgan and Morgan Stanley aggregate the coupon with the daily returns of the corresponding security.
3.3 Gross / Net basis
What is treated as a cash flow should be performance neutral. Programs can generally calculate performance with or without taxes and fees, i.e. gross or net.
SPPS, which stands for Swiss Performance Presentation Standards, is the Swiss version of the internationally recognized Global Investment Performance Standards (GIPS). The aim of these standards is to provide fair performance presentations to clients, which allows an objective comparison between investment fund companies. When a company is a member of SPPS, it is asked to follow these SPPS standards.
The differences between gross and net performance are summarized in Fig 10.
[Figure: elements distinguishing gross (SPPS) and net performance – federal direct tax, anticipatory tax, reclaimable withholding tax, index gross / net with dividends reinvested]
Fig 10 Gross / Net performance
3.4 Replicating the benchmark in general
The main problem with security-level benchmark data supplied by most vendors is that rates of return are not available at security level. To calculate the returns of individual securities, one needs to know their prices and all their cash flows. This in turn requires knowledge of the pricing formula used, the ex-coupon conventions, the treatment of cash coming from a coupon payment and non-standard features such as a non-uniform first coupon payment, multi-coupons, step-up coupons, callability and so on.
Perhaps surprisingly, the main problem in replicating fixed income benchmark returns lies in the calculation of coupon timing and amounts. The timing of coupons depends on the bond issuer, the ex-period convention used and the coupon frequency. In addition, the amount of coupon paid can depend on whether the bond has a non-standard first coupon period, in which case the first coupon may be more or less than the standard payment. The same considerations may apply to other coupon payments.
In principle, these coupons may be recalculated from first principles if we know the inception date, the first and last coupon dates, the maturity date, the annual coupon payment, the coupon frequency and the ex-date convention for each bond. In practice, this imposes a substantial burden on the index calculator, who has to obtain and verify large amounts of bond data. In addition, the ex-day conventions for many bonds are obscure. There is no easy answer to these problems, and anyone who wants to implement a FIPA analysis is strongly advised to consult an expert in this field.
In our case, it took us more than one year to replicate almost perfectly every benchmark used in the company, but we will spare the reader the details of this tedious work.