
Statistical Methods of Valuation and Risk Assessment: Empirical Analysis of Equity Markets and Hedge Fund Strategies


DOCUMENT INFORMATION

Title: Statistical Methods of Valuation and Risk Assessment: Empirical Analysis of Equity Markets and Hedge Fund Strategies
Author: Adam Czub
Supervisor: Prof. Alexander McNeil
Institution: Swiss Federal Institute of Technology, Zürich
Field: Finance
Document type: Thesis
Year of publication: 2004
City: Zürich
Pages: 59
File size: 912.68 KB



Swiss Federal Institute of Technology, Zürich
University of Zürich, Swiss Banking Institute

Master of Advanced Studies in Finance

Master Thesis

Statistical Methods of Valuation and Risk Assessment:

Empirical Analysis of Equity Markets

and Hedge Fund Strategies

Adam Czub

*****

January 2004


A master thesis is usually thought of as entirely individualistic work. This is hardly ever the case. Constant support, understanding and enlightenment are required from different people during the process. I will be forever grateful for the emotional support received from my parents, Elzbieta and Wojciech, and the encouragement from my brother Tomasz.

Many thanks to my supervisor, Prof. Alexander McNeil, for his guidance and help on my master thesis. I would also like to thank my co-supervisors and colleagues Valerie Chavez-Demoulin, Bernhard Brabec and Michael Heintze for their explanations and comments. Finally, thanks to all of those who in one way or another helped me make this master thesis become a reality.


The purpose of this paper is first to describe a web-based tool called Riskometer. We designed and implemented its second version, whose statistical methodology is implemented in S-Plus. The tool returns different Value-at-Risk and related measures of risk (Expected Shortfall, volatility) for major equity market indices using standard methods as well as the most recent state-of-the-art methods. This internet tool continually backtests its own performance against the latest data. We analyse the risk measures calculated by Riskometer on September 24, 2003 and January 9, 2004.

In the second part of the paper, we analyse hedge fund strategies over a six-year sample period using the database of indices compiled by Morgan Stanley Capital International. For a better understanding of dependence structures in hedge fund strategies, we focus on analysing their bivariate distributions using Archimedean copulas. To identify style exposures to relevant risk factors, we conduct a return-based style analysis of hedge fund strategies by relaxing the constraints of Sharpe's style analysis, and examine the significance of the style weights. Finally, we compare these results with those obtained by applying the Kalman filter and smoother technique.


1 Introduction
2 Quantification of Equity Market Risk: Riskometer
  2.1 Introduction
  2.2 Data
  2.3 Methods
  2.4 Risk Measures
    2.4.1 Volatility
  2.5 Backtesting
3 Empirical Characteristics of Hedge Fund Strategies
  3.1 Introduction
  3.2 Data
  3.3 Risk-Return Characteristics
  3.4 Dependence Structure Analysis
    3.4.1 Linear Correlation as Dependence Measure
    3.4.2 Alternative Correlation Measures
    3.4.3 Archimedean Copulas
    3.4.4 Statistical Significance of the Copula Parameter
    3.4.5 Tail Dependences
  3.5 Generalised Style Analysis
    3.5.1 Return-Based Style Analysis Model
    3.5.2 Statistical Significance of Style Weights
    3.5.3 Analysis of Style Weights
  3.6 Time-Varying Exposures Analysis
    3.6.1 Kalman Filter and Smoother Algorithm
    3.6.2 Graphical Analysis of Time-Varying Exposures
4 Concluding Remarks
5 References
6 Glossary
7 Fitting Copulas
8 Digital Filtering


1 Introduction

Much of the financial decision making by financial institutions focuses on risk management. Measuring risk and analysing ways of controlling and allocating it require a wide range of sophisticated mathematical and computational tools. Indeed, mathematical models of modern finance practice contain some of the most complex applications of probability, optimisation, and estimation theories.

Mathematical models of valuation and risk assessment are at the core of modern risk management systems. Every major financial institution in the world depends on these models and none could function without them. Although indispensable, these models are by necessity abstractions of the complex real world. Although these models are continually improving, their accuracy as useful approximations varies significantly across time and situation.

2 Quantification of Equity Market Risk: Riskometer

2.1 Introduction

The ETHZ Riskometer is a web-based tool whose statistical methodology is implemented in S-Plus. It returns different Value-at-Risk and related measures of risk (expected shortfall, volatility) using standard methods and the most recent state-of-the-art methods. This educational tool continually backtests its own performance against the latest data.

In the present version Riskometer focuses principally on three major stock indices: the Dow Jones Industrial Average (DJIA), the Standard & Poor's 500 (S&P 500), and the Deutsche Aktienindex (DAX). The data are collected daily and added to the historical daily time series dataset, providing risk measures that may be interpreted as prognoses for a one-day time horizon. The underlying methods included in the Riskometer may be applied to any stock price, exchange rate, commodity price or portfolio comprising combinations of these underlying risk factors, whether linear or non-linear. They may also be applied to daily data or to higher or lower frequency time series data.

In the market risk area, Value-at-Risk estimation involves portfolios of more than one asset. The Riskometer can be extended to such multivariate series.

2.2 Data

The Riskometer works with return data since the beginning of 1998, which represents somewhat less than 1500 days of historical returns. We believe that our time window is long enough to avoid losing statistical accuracy in measuring risk, and that it includes relevant data representing a period characterised by important market up and down moves as well as high and low volatility times. Furthermore, we make the assumption that our data may be considered a realisation of a stationary time series model.

2.3 Methods

The underlying methods implemented in the Riskometer provide either unconditional or conditional quantile estimation. The unconditional methods for calculation of market risk measures are


1 Plain Vanilla: Variance – Covariance

2 Historical Simulation

And the conditional methods are

3 EWMA (Exponentially Weighted Moving Average)

4 GARCH modelling

5 GARCH-style time series modelling with extreme value theory

6 Point process approach

Standard methods (variance-covariance, historical simulation, EWMA, and GARCH modelling) are described in the technical document of RiskMetrics Group, Inc. (2001), available on-line. Concerning the more sophisticated methodologies, GARCH-style time series modelling with extreme value theory, which still needs to be implemented in the present version of Riskometer, is detailed in McNeil and Frey (2000), and the point process approach is detailed in Chavez-Demoulin, Davison and McNeil (2003).
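To make the first three methods concrete, the sketch below computes one-day risk measures in Python. This is not the thesis's own S-Plus code: the function names, the normal-distribution assumption in the variance-covariance method, and the decay parameter λ = 0.94 (the RiskMetrics convention cited later in the text) are our assumptions.

```python
import numpy as np
from scipy import stats

def var_es_normal(returns, p=0.95):
    """Method 1 (variance-covariance): VaR and ES from a fitted normal distribution."""
    mu, sigma = returns.mean(), returns.std(ddof=1)
    z = stats.norm.ppf(p)
    var = sigma * z - mu                           # loss quantile, reported as a positive number
    es = sigma * stats.norm.pdf(z) / (1 - p) - mu  # expected loss given the VaR is exceeded
    return var, es

def var_es_historical(returns, p=0.95):
    """Method 2 (historical simulation): empirical quantile of past losses."""
    losses = -returns
    var = np.quantile(losses, p)
    es = losses[losses >= var].mean()
    return var, es

def ewma_volatility(returns, lam=0.94):
    """Method 3 (EWMA): RiskMetrics-style one-day volatility forecast."""
    var_t = returns[0] ** 2
    for r in returns[1:]:
        var_t = lam * var_t + (1 - lam) * r ** 2
    return np.sqrt(var_t)
```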

2.4 Risk Measures

We describe the risk measures calculated by the different methods of the Riskometer on September 24, 2003 and January 9, 2004. On September 24, 2003 Riskometer gave the results shown in Table 1. For each index and each method five numbers have been calculated, except for the Historical Simulation, GARCH modelling and Point process approach methods, which do not provide volatility figures. The first four are estimates of Value-at-Risk (VaR) and Expected Shortfall (ES) at probability levels of 95% and 99%, respectively. Each of these numbers may be interpreted as a potential daily percentage loss for September 24, being based on closing data up to September 23.

Table 1: Risk Measures on September 24, 2003


If we concentrate on the DJIA, for the Exponentially Weighted Moving Average method (method no. 3), a 95% VaR number of 2.04% indicates that the estimated 5th percentile of the predictive return distribution for that day was –2.04%; we estimate that there is one chance in 20 that the return is a loss of magnitude greater than 2.04%. An ES number of 2.56% indicates that, in the event that such a one-in-20-days loss occurs, this will be its expected size. The 99% VaR and ES estimates are 2.89% and 3.31%, respectively.

The VaR and ES estimates are all driven by the annualised volatility estimate. It is obtained by taking the standard deviation of the one-day distribution and multiplying it by the square root of 260, the approximate number of trading days in a year. This number is best interpreted in relation to other annualised volatility numbers and not necessarily as an absolute measurement. From Table 1 we see that the annualised volatility of the DAX on September 24 is considerably larger than that of the two American indices, showing that the German market was more turbulent on this date.
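As a rough worked example of this conversion, assuming a zero-mean normal predictive distribution (which is what the variance-based methods use), the EWMA figures quoted above imply

$$\sigma_{1d} \approx \frac{\mathrm{VaR}_{95\%}}{z_{0.95}} = \frac{2.04\%}{1.645} \approx 1.24\%, \qquad \sigma_{ann} \approx 1.24\% \times \sqrt{260} \approx 20\%.$$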

Table 2 shows the daily summary for January 9, 2004, by which time American and European markets had calmed down.

Table 2: Risk Measures on January 9, 2004

2.4.1 Volatility

Riskometer allows us to explore the recent historical development of the annualised volatility. Daily annualised volatility estimates of our equity market indices since the start of 1998 are graphically represented online and in Figure 1.


Figure 1: Annualised Volatility Figures of Equity Market Indices

The graph ends with the volatility estimates of January 8, 2004. The forecasts for January 9 are not shown, but the decrease in volatilities for the three indices after an extremely volatile period is obvious. The peak volatility for the year 2003 for both American indices occurs on March 24. The peak volatility for the DAX occurs on April 7, so it is clear that the two American indices follow each other closely, but the DAX has a somewhat different behaviour. Furthermore, we clearly observe that the DAX has been the most volatile index during 2003.

Concentrating on the American indices, we see that the relatively calm present period comes after a long period of extreme volatility. Indeed, it is the first low volatility period since the autumn of 2002. Throughout the second part of 2002 and the first three quarters of 2003, volatilities attained spectacular levels. The highest peaks for the DJIA and S&P 500 indices occur on July 29, 2002 and August 8, 2002 respectively. On those days, volatility figures reached around 40%. Then, during the fourth quarter of 2002, we observe peaks on October 15 for the S&P 500 and October 17 for the DJIA. Once again, after a slight decrease, volatility figures reached levels of 40%.

Concerning the DAX, in the second part of 2002 its volatility peaked on August 8, like the S&P 500, and on October 17, like the DJIA, with values over 60%. The high volatility period of March–April 2003 in the American indices was followed by the DAX around two weeks later. Indeed, on April 7, the German index's annualised volatility once again attained impressive figures over 50%. Since autumn 2003, equity market index volatilities have decayed to more modest levels and settled again below 20%.

2.5 Backtesting

Riskometer backtests itself daily and updates a violation count table that allows a user to see the historical performance of each method since the start of 2001.

As an example, for the DJIA the violations of method 3 are shown graphically in Figure 2. We observe that the last violation of the 95% VaR was on May 19, 2003 and that of the 99% VaR on March 24, 2003. Note that in this picture negative returns are shown as positive values, and positive returns as negative values.

Figure 2: EWMA 95% and 99% VaR estimates

If VaR is being estimated successfully, violations of the 95% VaR should occur once every 20 days on average, and violations of the 99% VaR once every 100 days. Whether this is approximately true is more easily judged in Table 3.


Table 3: Backtesting since start of 2001

Method | observed 95% | expected | p-value | observed 99% | expected | p-value

For method 3 on the DJIA, the observed number of violations is in line with expectation at the 99% level. Although the 95% number is slightly higher than expected, we observe that VaR is being estimated accurately. A binomial test has been carried out and expressed in the table as a p-value. A p-value less than or equal to 0.05 would be interpreted as evidence against the null hypothesis of reliable VaR estimation. For our example this is not the case.
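A minimal sketch of such a binomial backtest (Python; the violation and day counts in the usage line are hypothetical illustrations, not Table 3's figures):

```python
import numpy as np
from scipy import stats

def backtest_pvalue(violations, days, p=0.95):
    """Two-sided binomial test of an observed VaR violation count.

    Under the null of reliable VaR estimation, the number of violations
    over `days` trading days is Binomial(days, 1 - p).
    """
    rate = 1 - p
    cdf = stats.binom.cdf(violations, days, rate)      # P(X <= observed)
    sf = stats.binom.sf(violations - 1, days, rate)    # P(X >= observed)
    return min(1.0, 2 * min(cdf, sf))                  # two-sided p-value, capped at 1

# e.g. 40 violations of the 95% VaR over 700 days (hypothetical figures):
print(backtest_pvalue(40, 700, p=0.95))
```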

In the case of the DJIA for method 1, we observe that, for the 99% VaR, the binomial test p-value indicates evidence against the null hypothesis.

For the S&P 500, we notice that all methods pass the binomial test. This confirms a generally good performance of the Riskometer methods at both the 95% and the 99% levels.

Finally, for the DAX, we observe that only the GARCH modelling method passes the binomial test, and only at the 99% level. Concerning the remaining VaR estimates, we observe that the Riskometer methods perform very badly at both the 95% and the 99% levels, presenting a systematic underestimation of the VaR and too many violations.


3 Empirical Characteristics of Hedge Fund Strategies

3.1 Introduction

The hedge fund industry has grown considerably over the last ten years. Indeed, it is now a more than $700 billion industry with more than 7000 funds worldwide.¹ Given its important development and widespread acceptance, this alternative investment asset class needs particular attention concerning its risk exposures.

Actually, understanding the risk exposures of hedge fund strategies has become a rather important area of research for several reasons. First, a better understanding of hedge fund risks is needed by individuals and institutions desiring to make investment decisions involving hedge funds. A detailed analysis of hedge fund risks and returns is also important from the standpoint of asset pricing theory. Understanding hedge fund risk exposures is also key to the design of optimal risk-sharing contracts between hedge fund managers and investors.

Issues regarding the nature of the risks associated with different hedge fund strategies are challenging because of the complex nature of the strategies, in particular since hedge fund returns exhibit non-linear option-like exposures to traditional asset classes.² Furthermore, partly due to less stringent disclosure requirements and partly due to the freedom granted to the manager regarding his or her investment strategy, it is difficult to obtain detailed information concerning a particular hedge fund's risk exposures. Thus, their identification remains quite problematic.

3.2 Data

One purpose of hedge fund indices is to serve as a proxy for returns to the hedge fund asset class. Particularly well-known hedge fund indices are the Hedge Fund Research Indices (HFR) and the Credit Suisse First Boston/Tremont Indices (CSFB/Tremont Indices). The HFR Index was launched in 1994 with data going back to 1990, and the CSFB/Tremont Index, which is an asset-weighted hedge fund index, was launched in 1999 with data going back to 1994.

In this study we analyse hedge fund strategies using the database of indices compiled by Morgan Stanley Capital International (MSCI). The MSCI Hedge Fund Indices, launched in July 2002, consist of over 190 indices based on the MSCI Hedge Fund Classification Standard. The MSCI hedge fund database currently contains more than 1600 hedge funds representing more than $175 billion in assets.

Our data sample covers a six-year period from October 1997 to September 2003 (N = 72). As shown in Figure 1, this period has been characterised by important market up and down moves as well as high and low volatility periods.

It is well known that using a specific sample from an unobservable universe of hedge funds introduces two types of bias, selection bias and survivorship bias, characterising the main sources of difference between the performance of hedge funds in the database and the performance of hedge funds in the population. It is extremely difficult to completely eliminate the problem of survivorship bias and thus the problem of over-estimation of the true returns in hedge fund strategies.

¹ See Hennessee Group research paper (2002).

² See Fung and Hsieh (1997).


This difficulty is due to the fact that the database only contains the returns of the successful funds, or at least of those that are still in existence. In our study, we focus on style analysis and on dependencies between strategies and with traditional asset classes, and less on performance measurement. Therefore, the effect of the exclusion of funds that did not survive becomes relatively small.

We analyse monthly returns of ten hedge fund strategies representing five different process groups.³

Table 4: Hedge Fund Strategies

Process Group       | Strategies                                                       | Category
Directional Trading | Discretionary Trading, Systematic Trading                        | Directional
Relative Value      | Convertible Arbitrage, Fixed Income Arbitrage, Merger Arbitrage  | Non-Directional
Security Selection  | Long Bias, Short Bias, Variable Bias                             | Directional
Specialist Credit   | Distressed Securities                                            | Directional
Multi-Process       | Event Driven                                                     | Directional/Non-Directional

The selected strategies can principally be segregated into two categories. Thus, six of them are considered directional strategies (Discretionary Trading, Systematic Trading, Long Bias, Short Bias, Variable Bias, and Distressed Securities) and three of them non-directional strategies (Convertible Arbitrage, Fixed Income Arbitrage, and Merger Arbitrage). Event Driven may contain elements of both directional and non-directional strategies.

Returns of our strategies are captured through the corresponding MSCI indices, which are equally weighted performance summaries of funds from the MSCI database. It is important to note that equal weighting of hedge fund returns gives more weight to the smaller funds. We keep in mind that this could affect our analysis to the extent that the inferred style exposures represent the risk exposures of the smaller funds to a greater extent than those of the larger funds.

To represent the broad range of asset classes in which hedge funds invest, we use global market indices. Thus, to incorporate the exposure to the US market, the European and Japanese equity markets (developed markets) and the emerging equity markets, we include the MSCI US index, the MSCI EU index, the MSCI JP index, and the IFC emerging markets index. To assess the exposure to bonds, we use the JP Morgan Government Bonds Index. To account for returns arising from exposure to currencies and commodities, we include the Trade-Weighted Dollar Index and the Goldman Sachs Commodity Index.

³ For definitions of the process groups and strategies see the Glossary.


We also include the CSFB High Yield index to incorporate returns available from investing in high yield securities. Finally, we include the volatility index VIX to incorporate the exposure to volatility. In Tables 5 and 6 we provide information on the descriptive statistics and Spearman's correlation matrix⁴ of the global indices. Furthermore, we add pairwise scatter plots in Figure 3 for a better understanding of the different relationships between the asset classes.

Table 5: Descriptive Statistics of Traditional Asset Classes

Table 6: Traditional Asset Classes Rank Correlations

Rows/columns: MSCI US, MSCI EU, MSCI JP, EM, DWI, GSCI, JPMGBI, CSFBHY, VIX


Figure 3: Relationships between Traditional Asset Classes

Before analysing in more detail the risk-return characteristics of the chosen strategies, it is good practice to test for stationarity of the time series. Therefore, we use the KPSS test⁵ to test the null that our time series are integrated processes of order 0, i.e. I(0), in the case of a constant and a time trend. We also test for normality using the Jarque-Bera statistical test and for the presence of autocorrelation using the Ljung-Box test. Finally, we test for the presence of ARCH effects in the time series using a Lagrange Multiplier test. The different test statistics and p-values are reported in Table 7.

Results of the KPSS test show that the null hypothesis is not rejected at any significance level for any strategy. For the Jarque-Bera test, only in the case of the Systematic Trading strategy do we not reject the null that the data are normally distributed at a level of 5%. Furthermore, results of the Ljung-Box test show that the null of no autocorrelation cannot be rejected at a level of 5% for any strategy. Additionally, correlograms and partial correlograms in Figure 4 provide more information about the sample Autocorrelation Function and Partial Autocorrelation Function of the strategies' returns. These diagnostic plots show, for instance, that for modelling the Convertible Arbitrage and Fixed Income Arbitrage strategies we could fit an AR(1) model. Finally, the LM test, testing the null that there are no ARCH effects, results in p-values smaller than 0.05 in the cases of Discretionary Trading and Merger Arbitrage, thus rejecting the null hypothesis.
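A minimal sketch of this test battery in Python (the thesis works in S-Plus; statsmodels' kpss, acorr_ljungbox and het_arch are stand-ins for the corresponding S-Plus routines, and the random input is a placeholder for one strategy's 72 monthly returns):

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import kpss
from statsmodels.stats.diagnostic import acorr_ljungbox, het_arch

def diagnostics(returns, name="strategy"):
    """Run the four tests reported in Table 7 on one monthly return series."""
    kpss_stat, kpss_p, _, _ = kpss(returns, regression="ct")  # null: I(0) around constant + trend
    jb_stat, jb_p = stats.jarque_bera(returns)                # null: normality
    lb = acorr_ljungbox(returns, lags=[12], return_df=True)   # null: no autocorrelation
    lm_stat, lm_p, _, _ = het_arch(returns, nlags=12)         # null: no ARCH effects
    print(f"{name}: KPSS p={kpss_p:.3f}, JB p={jb_p:.3f}, "
          f"LB p={lb['lb_pvalue'].iloc[0]:.3f}, LM p={lm_p:.3f}")

diagnostics(np.random.default_rng(0).normal(size=72))  # 72 months, as in the sample
```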

⁵ Kwiatkowski, Phillips, Schmidt and Shin test. For details see Zivot and Wang (2003).


Table 7: Statistical Tests

Columns: KPSS (test stat, p-value), Jarque-Bera (test stat, p-value), Ljung-Box (test stat, p-value), LM (test stat, p-value)

⁶ Under the null hypothesis the Jarque-Bera test statistic is asymptotically distributed χ²(2).

⁷ Under the null hypothesis the Ljung-Box test statistic is asymptotically distributed χ²(12).

⁸ Under the null hypothesis the LM test statistic is asymptotically distributed χ²(12).


3.3 Risk-Return Characteristics

To obtain a general idea about the performance of our hedge fund strategies in good and bad times, we report their returns during seven large down moves and seven large up moves of the MSCI World index over the sample period in Tables 8 and 9.

Table 8: Large Down Moves

Columns: Aug-98, Feb-01, Mar-01, Sep-01, Jun-02, Jul-02, Sep-02, Average

Table 9: Large Up Moves

Columns: Feb-98, Oct-98, Dec-99, Mar-00, Apr-01, Oct-02, Apr-03, Average


Clearly, the returns of non-directional strategies are much less exposed to the impact of market turbulence than those of their directional counterparts, in line with our findings on the aggregate return volatility. Moreover, the returns of non-directional strategies are less aligned with the direction of the underlying market move. This provides a first confirmation of the claimed neutrality with respect to market factors.

The returns of the directional strategies are without exception much larger, reflecting the fact that they systematically carry a net exposure to major market factors. We have to distinguish, however, between those strategies which are predominantly net long, like Long Bias or Distressed Securities, or net short, and those strategies that constantly change the direction of their exposure, like Systematic and Discretionary Trading. The returns of the latter strategies are uncorrelated or even negatively correlated with the MSCI World Index. Interestingly enough, the Systematic Trading strategies, by focusing predominantly on systematic trend-following strategies, managed to make substantial positive returns in all down periods and to keep the losses in up periods under control. This behaviour motivates the addition of Systematic Trading to any hedge-fund portfolio despite their relatively low risk-adjusted return profile.

Table 10 reports the summary statistics for our MSCI Hedge Fund indices. Directional strategies have higher monthly returns than the non-directional ones but are obviously also more volatile as measured in terms of the standard deviation of monthly returns. Indeed, during the sample period, the average monthly return of the directional strategies was 0.93% and that of the non-directional ones 0.76%. The average standard deviations were 3.28% and 1.14% respectively.

We also notice that non-directional strategies show a stronger left-side asymmetry in their distribution functions than the directional strategies. Indeed, the average skewness of the non-directional strategies is –2.34, compared to –0.31 for the directional strategies, among which only Discretionary Trading and Distressed Securities show a left-side asymmetry in their distribution functions. Concerning the tails of the distribution functions, once again the non-directional strategies' distributions present heavier tails than those of the directional strategies. The average kurtosis of the non-directional strategies is 14.71 and that of the directional strategies 5.96. The Event Driven strategy's characteristics being clearly in between those described above, it confirms its particular status as a multi-process investment strategy. Please note that the averages for the statistical parameters mentioned above are taken over very disparate populations (i.e. numbers of funds or assets within a strategy class) and thus do not have any economic meaning. They are given only for illustrative purposes.

Fixed Income Arbitrage and Merger Arbitrage show particularly high negative skewness and high kurtosis values. The high kurtosis is induced mainly by large negative returns. In the case of Fixed Income Arbitrage, these high figures are associated with a few large return figures realised during turbulent market situations like the LTCM crisis or the July-August 2003 fixed-income sell-offs. In fact, fixed-income arbitrageurs usually provide relatively small but regular returns. However, by capturing the small relative movements between different assets or rates using high leverage, they expose themselves to losses when markets move beyond the usual fluctuation bands of the assets or interest-rate spreads they arbitrage. Merger arbitrage captures the price differential between the stock prices of the acquiring company and the target company with respect to the agreed conversion ratio. The arbitrageur has a high probability of gaining the price difference known in advance. However, he accepts that in case the merger does not go through, he loses an amount substantially higher than the premium to be made. The strategy therefore resembles short selling a put option. This explains the asymmetry of the return distribution.

Table 10: Descriptive Statistics of Hedge Fund Strategies

The decreasing volatility after Q3 1998 for many of the strategies, and Fixed Income Arbitrage in particular, is due to the wide-scale de-leveraging in the industry after the LTCM crisis as well as an increased emphasis on risk management.

⁹ A negative skewness points to a higher probability of large negative values relative to positive values.

¹⁰ A kurtosis value higher than 3 indicates fat-tailed returns relative to the normal distribution.

¹¹ Annualised volatilities are computed with the Exponentially Weighted Moving Average method with a decay parameter lambda equal to 0.94.


Figure 5: Directional Trading Strategies

Figure 6: Relative Value Strategies


Figure 7: Security Selection Strategies

Figure 8: Specialist Credit and Multi-Process Strategies


To evaluate the risk-return tradeoff, we compute the Sharpe Ratio according to the following formula

$$\text{Sharpe Ratio} = \frac{\frac{1}{N}\sum_{t=1}^{N}\left(R_t - r_t^{rf}\right)}{\sigma_S}$$

where $R_t$ represents the strategy return for month $t$ and $r_t^{rf}$ the risk-free rate for month $t$; $N$ represents the total number of months and $\sigma_S$ is the standard deviation of the strategy's monthly returns.

Given its definition, we note that the Sharpe Ratio, using the standard deviation as the measurement of volatility to adjust for risk, may actually punish a strategy for a month of exceptionally high performance.

To avoid this issue we also compute the Sortino Ratio. Thus, instead of using the standard deviation in the denominator, we use a downside deviation, which is a measurement of the strategy's return deviation below a minimal acceptable rate and is computed as follows

$$\sigma_{dd} = \sqrt{\frac{1}{N}\sum_{t=1}^{N}\min\left(R_t - r_t^{ma},\,0\right)^2}$$

where $R_t$ represents the strategy return for month $t$ and $r_t^{ma}$ the minimal acceptable rate for month $t$; $N$ represents the total number of months.

Through $\sigma_{dd}$, the Sortino ratio thus provides a measurement of return per unit of risk on the downside.

As the risk-free rate we use the 3-month LIBOR, and as minimal acceptable rates for the Sortino Ratio we use the 3-month LIBOR and 0%. We report monthly and annualised figures for both ratios in Table 11.
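In code, the two ratios amount to the following sketch (Python; the √12 annualisation of a monthly ratio is the usual convention and an assumption on our part, since the thesis does not state its annualisation rule):

```python
import numpy as np

def sharpe_ratio(returns, rf):
    """Monthly Sharpe ratio: mean excess return over the risk-free rate
    per unit of total volatility."""
    return (returns - rf).mean() / returns.std(ddof=1)

def sortino_ratio(returns, mar):
    """Monthly Sortino ratio: mean excess return over the minimal acceptable
    rate per unit of downside deviation."""
    downside = np.minimum(returns - mar, 0.0)
    sigma_dd = np.sqrt(np.mean(downside ** 2))
    return (returns - mar).mean() / sigma_dd

# mean scales with 12, volatility with sqrt(12), so the ratio scales with sqrt(12)
annualise = lambda monthly_ratio: monthly_ratio * np.sqrt(12)
```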

In terms of the Sharpe ratio, we observe that the non-directional strategies exhibit better risk-return tradeoffs than the directional ones. Indeed, the non-directional strategies exhibit on average an annualised ratio of 1.35, almost twice that of the directional ones, which equals 0.76. In particular, we observe the highest annualised figure of 1.92 for the Convertible Arbitrage strategy and, at the opposite end, the lowest annualised figure of 0.19 for the Short Bias strategy. We notice the relatively high Sharpe ratio of Discretionary Trading compared to the other directional strategies. Remembering that Discretionary Trading is the only directional strategy with relatively high negative skewness and kurtosis, the higher return per unit of volatility might be considered a compensation for this additional risk.


Table 11: Sharpe and Sortino Ratios

In terms of the Sortino ratio, in the case of a minimal acceptable rate of 0%, the non-directional strategies exhibit on average an annualised ratio of 1.75 whereas the directional strategies exhibit a ratio of 1.22. In particular, we observe large decreases in the Sortino ratios of the Fixed Income Arbitrage and Merger Arbitrage strategies compared to their Sharpe ratios. This is mainly due to the important left asymmetries observed in their distribution functions, and is consistent with the definition and purpose of the Sortino ratio.

Given the above risk-return metrics, the non-directional strategies seem to have delivered better risk-return tradeoffs than the directional strategies. We believe, however, that the Sortino ratio does not entirely capture the embedded risks related to these strategies, in particular event and liquidity risk.

3.4 Dependence Structure Analysis

Understanding relationships among hedge fund strategies is a key issue for investors, especially in the context of hedge fund portfolios. To assess this specific problem, practitioners often use Pearson's linear correlation. Although widely applicable, this method presents several deficiencies. To avoid potential drawbacks and improve our understanding of dependence structures in hedge fund strategies, we focus on analysing the bivariate distributions of pairs of strategies using copulas. These statistical tools allow us to separate the dependence and the marginal behaviour of two hedge fund strategies, which is particularly useful for analysing the behaviour of extreme values.


3.4.1 Linear Correlation as Dependence Measure

To summarise the dependence of two variables, the most popular tool is the linear correlation. However, it is well known that it represents only one particular measure of dependency and does not capture non-linear dependence relationships. Furthermore, it has been shown that linear correlation presents serious deficiencies when one is working with models other than the multivariate normal model. Thus, outside the elliptical world, correlations must be interpreted very carefully.

Given the characteristics of our strategies, we can consider that none of them has a univariate normal distribution. Furthermore, compared to a normal distribution, our strategies' distributions are characterised by fat tails. In our situation, the disadvantages of Pearson's correlation as a measure of dependence between two strategies can be numerous. For instance, the linear correlation would be undefined if the variances of the strategies were not finite; this problem appears especially when working with heavy-tailed distributions. Another problem is that while independence of two random variables implies they are uncorrelated, zero correlation does not in general imply independence. Only in the case of the multivariate normal distribution is it permissible to interpret zero correlation as implying independence. Finally, a single observation can have an arbitrarily high influence on the linear correlation, which decreases the robustness of the dependence measure.

3.4.2 Alternative Correlation Measures

To avoid some potential problems concerning the dependence measure, we use rank correlations. Rank correlations are principally based on the concepts of concordance and discordance, which informally say that a pair of random variables are concordant if large values of one tend to be associated with large values of the other and small values of one with small values of the other.

To analyse relationships among hedge fund strategies we use one rank correlation measure known as Spearman's rank correlation. For two random variables $X$ and $Y$ with distribution functions $F_1$ and $F_2$ and a joint distribution $F$, the sample estimator of Spearman's rank correlation $\rho_S(X,Y)$ is defined as follows

$$\hat\rho_S = \frac{12}{N(N^2-1)}\sum_{t=1}^{N}\left(\operatorname{rank}(X_t)-\frac{N+1}{2}\right)\left(\operatorname{rank}(Y_t)-\frac{N+1}{2}\right)$$

It measures the degree of monotonic dependence between $X$ and $Y$. The transformation of $X_1,\dots,X_N$ to the ranks $1,\dots,N$ has the consequence that the marginal distributions of $X$ and $Y$ are ignored; therefore Spearman's rank correlation is a distribution-independent measure.

Other advantages of Spearman's rank correlation over linear correlation are that it is invariant under monotonic transformations and that perfect dependence corresponds to correlations of 1 and –1. Spearman's rank correlation is also quite robust against outliers. Nevertheless, Spearman's rank correlation does not lend itself to the same elegant variance-covariance manipulations as linear correlation, since it is not a moment-based correlation.
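A direct transcription of this estimator (a Python sketch; scipy.stats.spearmanr gives the same value up to tie handling):

```python
import numpy as np

def spearman_rho(x, y):
    """Sample Spearman rank correlation via the centred-rank formula above."""
    n = len(x)
    rx = np.argsort(np.argsort(x)) + 1   # ranks 1..n (no ties in continuous return data)
    ry = np.argsort(np.argsort(y)) + 1
    c = (n + 1) / 2                      # mean rank
    return 12.0 / (n * (n**2 - 1)) * np.sum((rx - c) * (ry - c))
```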

In addition to the results regarding correlations among hedge fund strategies, reported in Table 12, we report in Table 13 Spearman's rank correlations measuring the degrees of dependence between hedge fund strategies and traditional asset classes.


Concerning rank correlations among hedge fund strategies, we observe that Discretionary Trading seems to be mainly correlated with Variable Bias and Merger Arbitrage. We also notice that Systematic Trading presents a very low degree of dependence with the other strategies considered. Regarding the non-directional strategies, Convertible Arbitrage and Fixed Income Arbitrage present a non-negligible dependence with each other. This can be explained by the fact that both strategies are penalised by turbulent fixed-income markets, often characterised by flight-to-quality situations that generate low market liquidity and wide bid-offer spreads in a wide range of securities. For similar reasons, and given the fact that Convertible Arbitrage books hold significant credit risk, Convertible Arbitrage is significantly correlated with Distressed Securities and Event Driven. Merger Arbitrage presents relatively important degrees of dependence with Long Bias, Variable Bias and Event Driven. Fixed Income Arbitrage, by contrast, presents very low degrees of dependence with the other strategies. The Security Selection strategies present particularly high degrees of dependence with the Distressed Securities and Event Driven strategies. Finally, without any surprise, we observe that the Short Bias strategy presents strong negative correlations with the Long Bias, Variable Bias, Distressed Securities and Event Driven strategies.

Table 12: Hedge Fund Strategies Rank Correlations

Rows/columns: Discretionary Trading, Systematic Trading, Convertible Arbitrage, Fixed Income Arbitrage, Merger Arbitrage, Long Bias, Short Bias, Variable Bias, Distressed Securities, Event Driven


Table 13: Dependence Measures between Hedge Fund Strategies and Traditional Asset Classes

Columns: MSCI US, MSCI EU, MSCI JP, EM, DWI, GSCI, JPMGBI, CSFBHY, VIX

3.4.3 Archimedean Copulas

We saw that linear and rank correlations are not sufficient to measure dependence among hedge fund strategies. This is partly due to the absence of models for our strategies. Thus, to get a deeper idea of hedge fund dependence structures, we need to construct bivariate distributions which are consistent with given marginal distributions and correlations. Knowing the joint distribution of two strategies provides us with information regarding their marginal behaviours and allows us to evaluate the conditional probability that one strategy takes a certain value given that the second strategy takes another value. To reach this aim we determine an appropriate copula function chosen from a class of copulas known as Archimedean copulas.


A copula is said to be an Archimedean copula if its distribution function can be written in the following form

$$C(u,v) = \varphi^{-1}\bigl(\varphi(u) + \varphi(v)\bigr)$$

for some function $\varphi(t): I \to \mathbb{R}^+$ which is continuous, strictly decreasing and convex, with $\varphi(1) = 0$. The function $\varphi$ is called the Archimedean generator. Here $\varphi(0)$ is defined as $\lim_{t\to 0^+}\varphi(t)$, and the inverse is extended by $\varphi^{-1}(z) = 0$ for all $z > \varphi(0)$ whenever $\varphi(0) < \infty$. The Gumbel copula, Frank copula, Kimeldorf and Sampson copula, Clayton copula and Joe copula are some of the most famous Archimedean copulas.

Thus, Archimedean copulas can be constructed using the above formula. For this we need the Archimedean generators. To define Archimedean generators we consider the following characterisation of Archimedean copulas, which is given by Abel's criterion.

Theorem

A copula $C$ is an Archimedean copula if it is twice differentiable and if there exists an integrable function $f : (0,1) \to (0,\infty)$ such that

$$f(u)\,\frac{\partial C(u,v)}{\partial v} = f(v)\,\frac{\partial C(u,v)}{\partial u}, \qquad u,v \in (0,1).$$

In this case the generator $\varphi$ is given by

$$\varphi(t) = \int_t^1 f(s)\,ds.$$

Table 14 presents some important one-parameter families of Archimedean copulas $C_\theta(u,v)$ with their generators and the domains of validity of their parameters.

Thus, to analyse our pairs of hedge fund strategies we fit to the bivariate data a member of the one-parameter Archimedean copula class. Recall that our data consist of 72 bivariate observations of hedge fund strategies $S^1$ and $S^2$, namely $(S^1_1, S^2_1),\dots,(S^1_{72}, S^2_{72})$, which have been generated from an unknown bivariate joint distribution we call $F(S^1, S^2)$ with continuous marginals $F_1(S^1)$ and $F_2(S^2)$ and Archimedean copula $C(F_1(s^1), F_2(s^2))$. To determine the copula parameter $\theta$ we first estimate the marginal distributions of the hedge fund strategies using an empirical distribution. Then the copula parameter is estimated using the maximum likelihood estimator.


Table 14: One-parameter Families of Archimedean Copulas

Family | $C_\theta(u,v)$ | Generator $\varphi_\theta(t)$ | Parameter domain
Clayton (Kimeldorf–Sampson) | $\max\bigl[u^{-\theta}+v^{-\theta}-1,\,0\bigr]^{-1/\theta}$ | $\frac{1}{\theta}(t^{-\theta}-1)$ | $\theta\in[-1,\infty)\setminus\{0\}$
Gumbel | $\exp\bigl(-[(-\ln u)^\theta+(-\ln v)^\theta]^{1/\theta}\bigr)$ | $(-\ln t)^\theta$ | $\theta\ge 1$
Frank | $-\frac{1}{\theta}\ln\Bigl(1+\frac{(e^{-\theta u}-1)(e^{-\theta v}-1)}{e^{-\theta}-1}\Bigr)$ | $-\ln\frac{e^{-\theta t}-1}{e^{-\theta}-1}$ | $\theta\in\mathbb{R}\setminus\{0\}$
Joe | $1-\bigl[(1-u)^\theta+(1-v)^\theta-(1-u)^\theta(1-v)^\theta\bigr]^{1/\theta}$ | $-\ln\bigl(1-(1-t)^\theta\bigr)$ | $\theta\ge 1$
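As an illustration of the generator mechanism, the Clayton (Kimeldorf–Sampson) copula can be assembled directly from its generator. This is a minimal Python sketch for θ > 0, where the max(·, 0) truncation in the table is inactive:

```python
import numpy as np

# C(u, v) = phi^{-1}(phi(u) + phi(v)) with the Clayton generator
def phi(t, theta):
    return (t ** (-theta) - 1.0) / theta

def phi_inv(z, theta):
    return (1.0 + theta * z) ** (-1.0 / theta)

def clayton(u, v, theta=2.0):
    return phi_inv(phi(u, theta) + phi(v, theta), theta)

# sanity checks: C(u, 1) = u (uniform margins), and C(u, v) <= min(u, v)
print(clayton(0.3, 1.0), clayton(0.3, 0.7))
```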

For the development of the likelihood equation we need continuous marginal distributions, but the empirical distribution is of course discrete. Indeed, the derivatives of some copulas used in the likelihood are not defined if $u,v \in \{1\}$, whereas the continuous empirical distributions do not allow an output probability of 1. The way we modify the empirical distribution functions is defined as follows.

Let $a \le \min(X_1,\dots,X_N) \le \max(X_1,\dots,X_N) \le b$ be given numbers. Let $Z_1,\dots,Z_N$ denote the values of $X_1,\dots,X_N$ in increasing numeric order, and define $Z_0 \equiv a$ and $Z_{N+1} \equiv b$. Then the continuous empirical d.f. is the d.f. $G_N(x : a, Z_1,\dots,Z_N, b)$ which is 0 if $x \le a$, 1 if $x \ge b$, and in between takes the value of the straight line segments that join the successive midpoints of the successive bars (intervals $[Z_i, Z_{i+1}]$) that constitute the empirical d.f.; the midpoint of the leftmost bar $[Z_1,Z_2]$ is joined to the point $(a,0)$, while the midpoint of the rightmost bar $[Z_{N-1},Z_N]$ is joined to the point $(b,1)$.
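One way to realise this construction (a sketch; the thesis's own S-Plus routine may differ in details such as the choice of a and b):

```python
import numpy as np

def continuous_ecdf(data, a, b):
    """Piecewise-linear 'continuous empirical' CDF through the midpoints of the
    empirical-d.f. bars, anchored at (a, 0) and (b, 1) as defined above."""
    z = np.sort(np.asarray(data))
    n = len(z)
    mids = (z[:-1] + z[1:]) / 2.0               # midpoints of intervals [Z_i, Z_{i+1}]
    xs = np.concatenate(([a], mids, [b]))
    ys = np.concatenate(([0.0], np.arange(1, n) / n, [1.0]))  # ECDF heights i/N on each bar
    return lambda x: np.interp(x, xs, ys)       # linear interpolation between the knots
```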

Following this definition we obtain continuous empirical marginals $F_1(s^1)$ and $F_2(s^2)$ and the corresponding density functions $f_1(s^1)$ and $f_2(s^2)$. According to Sklar's theorem we define

$$F(s^1,s^2) = C\bigl(F_1(s^1),\, F_2(s^2)\bigr)$$

$$f(s^1,s^2) = f_1(s^1)\, f_2(s^2)\, C_{12}\bigl(F_1(s^1),\, F_2(s^2)\bigr)$$

where

$$C_{12}(u,v) = \frac{\partial^2 C(u,v)}{\partial u\,\partial v}$$

is the copula density. The maximum likelihood estimator of the copula parameter is then

$$\hat\theta = \arg\max_{\theta \in R} \sum_{t=1}^{72} \log C_{12}\bigl(F_1(s^1_t),\, F_2(s^2_t);\, \theta\bigr)$$

where $R$ is the validity domain of the parameter $\theta$.

We note that the likelihood procedure assumes that the derivative $C_{12}(F_1(s^1), F_2(s^2))$ exists. This is the case only if the copula is absolutely continuous.

To examine the goodness of fit of our models we use the Akaike information criterion, which is defined by

$$\mathrm{AIC} = 2\,(\text{negative log-likelihood}) + 2k$$

where $k$ is the number of parameters of the model, in our case equal to 1. The best copula is determined by the lowest AIC value.
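A sketch of the whole estimation step for a single family, here Clayton, whose copula density $C_{12}$ has the closed form used below (Python; the optimiser bounds are our assumptions, and u, v are the data transformed by the continuous empirical marginals):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def clayton_density(u, v, theta):
    """Copula density C12 of the Clayton copula for theta > 0."""
    return ((1 + theta) * (u * v) ** (-theta - 1)
            * (u ** -theta + v ** -theta - 1) ** (-2 - 1 / theta))

def fit_clayton(u, v):
    """Maximum likelihood estimate of theta and the corresponding AIC."""
    nll = lambda theta: -np.sum(np.log(clayton_density(u, v, theta)))
    res = minimize_scalar(nll, bounds=(1e-4, 20.0), method="bounded")
    aic = 2 * res.fun + 2 * 1      # 2 * (negative log-likelihood) + 2k, with k = 1
    return res.x, aic
```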

3.4.4 Statistical Significance of the Copula Parameter

To determine the statistical significance of the selected copula parameters, we employ the bootstrap technique proposed by Efron and Tibshirani (1993). Thus, we estimate the standard error of the copula parameter by its bootstrap estimate.

The bootstrap estimate of the standard error of the copula parameter is a plug-in estimate that uses the empirical distribution function in place of the unknown distribution. It can be defined by

$$se_{\hat F}\bigl(\hat\theta^*\bigr)$$


and is called the ideal bootstrap estimate of the standard error of the copula parameter. To obtain an approximation to its numerical value we use the computational approach described next. The bootstrap algorithm for estimating standard errors consists of selecting $B$ independent bootstrap samples, denoted $s^{*1}, s^{*2}, \dots, s^{*B}$, each consisting of $N$ data values drawn with replacement from the data set. This step is realised with the help of a random number device which selects integers $i_1, i_2, \dots, i_N$, each of which equals any value between 1 and $N$ with probability $1/N$; the bootstrap sample consists of the corresponding members of the actual data set. The algorithm then evaluates the bootstrap replication $\hat\theta^*(b)$ corresponding to each bootstrap sample and estimates the standard error by

$$\widehat{se}_B = \left[\frac{1}{B-1}\sum_{b=1}^{B}\Bigl(\hat\theta^*(b) - \hat\theta^*(\cdot)\Bigr)^2\right]^{1/2}$$

where

$$\hat\theta^*(\cdot) = \frac{1}{B}\sum_{b=1}^{B}\hat\theta^*(b).$$

Concerning the number of bootstrap replications¹² $B$, we decided to fix it at 300. This choice is mainly dictated by the constraint of the computer time needed to evaluate the bootstrap replications.
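A sketch of this algorithm (Python; `fit` stands for any copula-parameter estimator, e.g. a wrapper around `fit_clayton` above that returns only the estimate of θ):

```python
import numpy as np

def bootstrap_se(u, v, fit, B=300, seed=0):
    """Bootstrap standard error of a copula parameter estimate."""
    rng = np.random.default_rng(seed)
    n = len(u)
    thetas = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)   # draw N indices with replacement
        thetas[b] = fit(u[idx], v[idx])    # bootstrap replication theta-hat*(b)
    return thetas.std(ddof=1)              # matches the 1/(B-1) formula above
```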

The Archimedean copulas fitted to pairs of hedge fund strategies are reported in Tables 18 to 26. We report the best fitted copulas according to the AIC criterion, the corresponding values of the AIC criterion, the estimates of the parameters $\theta$, and the bootstrap estimates of their standard errors. Furthermore, we indicate in bold font which fitted copula parameters are significant at the 95% level.

3.4.5 Tail Dependences

One advantageous characteristic of copulas is that we can model tail dependence. The concept of bivariate tail dependence relates to dependence in extreme values and depends mainly on the tails. Thus, to analyse dependence in the extreme values of hedge fund strategies, we consider the coefficients of tail dependence for copulas of the Archimedean class. The upper tail dependence coefficient is defined as follows

$$\lambda_U = \lim_{u \to 1^-} P\bigl(Y > F_2^{-1}(u) \mid X > F_1^{-1}(u)\bigr) = \lim_{u \to 1^-} \frac{1 - 2u + C(u,u)}{1-u}.$$
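For the families of Table 14 these limits are standard results: the Gumbel copula has $\lambda_U = 2 - 2^{1/\theta}$ (and no lower tail dependence), the Joe copula likewise has $\lambda_U = 2 - 2^{1/\theta}$, the Clayton copula has no upper tail dependence but lower tail dependence $\lambda_L = 2^{-1/\theta}$ for $\theta > 0$, and the Frank copula is independent in both tails.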
