Modelling and Forecasting High Frequency Financial Data
Stavros Degiannakis and Christos Floros
All rights reserved. No reproduction, copy or transmission of this publication may be made without written permission.
No portion of this publication may be reproduced, copied or transmitted save with written permission or in accordance with the provisions of the Copyright, Designs and Patents Act 1988, or under the terms of any licence permitting limited copying issued by the Copyright Licensing Agency,
Saffron House, 6–10 Kirby Street, London EC1N 8TS.
Any person who does any unauthorized act in relation to this publication may be liable to criminal prosecution and civil claims for damages.
The authors have asserted their rights to be identified as the authors of this work
in accordance with the Copyright, Designs and Patents Act 1988.
First published 2015 by
PALGRAVE MACMILLAN
Palgrave Macmillan in the UK is an imprint of Macmillan Publishers Limited, registered in England, company number 785998, of Houndmills, Basingstoke, Hampshire RG21 6XS.
Palgrave Macmillan in the US is a division of St Martin’s Press LLC,
175 Fifth Avenue, New York, NY 10010.
Palgrave Macmillan is the global academic imprint of the above companies and has companies and representatives throughout the world.
Palgrave® and Macmillan® are registered trademarks in the United States, the United Kingdom, Europe and other countries.
This book is printed on paper suitable for recycling and made from fully managed and sustained forest sources. Logging, pulping and manufacturing processes are expected to conform to the environmental regulations of the country of origin.
A catalogue record for this book is available from the British Library.
Library of Congress Cataloging-in-Publication Data
Degiannakis, Stavros, author.
Modelling and forecasting high frequency financial data /
Stavros Degiannakis, Christos Floros.
pages cm
1. Finance–Mathematical models. 2. Speculation–Mathematical models.
3. Technical analysis (Investment analysis)–Mathematical models.
I. Floros, C. (Christos), author. II. Title.
HG106.D44 2015
ISBN 978-1-349-56690-7 ISBN 978-1-137-39649-5 (eBook)
DOI 10.1057/9781137396495
To Ioanna, Vasilis-Spyridon, Konstantina-Artemis and Christina-Ioanna
Christos Floros
List of Figures xi
List of Tables xiv
Acknowledgments xvii
List of Symbols and Operators xviii
1 Introduction to High Frequency Financial Modelling 1
1 The role of high frequency trading 2
2 Modelling volatility 10
3 Realized volatility 11
4 Volatility forecasting using high frequency data 14
5 Volatility evidence 14
6 Market microstructure 15
2 Intraday Realized Volatility Measures 24
1 The theoretical framework behind the realized volatility 24
2 Theory of ultra-high frequency volatility modelling 27
3 Equidistant price observations 31
3.1 Linear interpolation method 31
3.2 Previous tick method 32
4 Methods of measuring realized volatility 32
4.1 Conditional – inter-day – variance 32
4.2 Realized variance 34
4.3 Price range 35
4.4 Model-based duration 37
4.5 Multiple grids 37
4.6 Scaled realized range 37
4.7 Price jumps 37
4.8 Microstructure frictions 37
4.9 Autocorrelation of intraday returns 38
4.10 Interday adjustments 38
5 Simulating the realized volatility 42
6 Optimal sampling frequency 47
3 Methods of Volatility Estimation and Forecasting 58
1 Daily volatility models – review 58
1.1 ARCH(q) model 59
1.2 GARCH(p, q) model 59
1.3 APARCH(p, q) model 60
1.4 FIGARCH(p, d, q) model 60
1.5 FIAPARCH(p, d, q) model 60
1.6 Other methods of interday volatility modelling 61
2 Intraday volatility models: review 61
2.1 ARFIMA(k, d, l) model 61
2.2 ARFIMA(k, d, l)-GARCH(p, q) model 62
2.3 HAR-RV model 62
2.4 HAR-sqRV model 63
2.5 HAR-GARCH(p, q) model 63
2.6 Other methods of intraday volatility modelling 64
3 Volatility forecasting 64
3.1 One-step-ahead volatility forecasting: interday volatility models 64
3.2 Daily volatility models: program construction 67
3.3 One-step-ahead volatility forecasting: intraday volatility models 67
3.4 Intraday volatility models: program construction 70
4 The construction of loss functions 70
4.1 Evaluation or loss functions 70
4.2 Information criteria 72
4.3 Loss functions depend on the aim of a specific application 73
4 Multiple Model Comparison and Hypothesis Framework Construction 110
1 Statistical methods of comparing the forecasting ability of models 110
1.1 Diebold and Mariano test of equal forecast accuracy 111
1.2 Reality check for data snooping 111
1.3 Superior Predictive Ability test 112
1.4 SPEC model selection method 112
2 Theoretical framework: distribution functions 113
3 A framework to compare the predictive ability of two competing models 115
4 A framework to compare the predictive ability of n competing models 119
4.1 Generic model 119
4.2 Regression model 121
4.3 Regression model with time varying conditional variance 121
4.4 Fractionally integrated ARMA model with time varying conditional variance 122
5 Intraday realized volatility application 123
6 Simulate the SPEC criterion 128
6.1 ARMA(1,0) simulation 128
6.2 Repeat the simulation 130
6.3 Intraday simulated process 133
5 Realized Volatility Forecasting: Applications 161
1 Measuring realized volatility 161
1.1 Volatility signature plot 162
1.2 Interday adjustment of the realized volatility 165
1.3 Distributional properties of realized volatility 174
2 Forecasting realized volatility 176
3 Programs construction 178
4 Realized volatility forecasts comparison: SPEC criterion 190
5 Logarithmic realized volatility forecasts comparison: SPA and DM Tests 200
5.1 SPA test 200
5.2 DM test 202
6 Recent Methods: A Review 217
1 Modelling jumps 217
1.1 Jump volatility measure and jump tests 218
1.2 Daily jump tests 219
1.3 Intraday jump tests 220
1.4 Using OxMetrics (Re@lized under G@RCH 6.1) 221
2 The RealGARCH model 230
2.1 Realized GARCH forecasting 232
2.2 Leverage effect 234
2.3 Realized EGARCH 234
3 Volatility forecasting with HAR-RV-J and HEAVY models 235
3.1 The HAR-RV-J model 235
3.2 The HEAVY model 236
4 Financial risk measurements 238
4.1 The method 238
7 Intraday Hedge Ratios and Option Pricing 243
1 Introduction to intraday hedge ratios 243
2 Definition of hedge ratios 246
2.1 BEKK model 248
2.2 Asymmetric BEKK model 248
2.3 Constant Conditional Correlation (CCC) model 249
2.4 Dynamic Conditional Correlation (DCC) model 250
2.5 Estimation of the models 251
3 Data 251
4 Estimated hedge ratios 253
5 Hedging effectiveness 256
6 Other models for intraday hedge ratios 259
7 Introduction to intraday option pricing 259
8 Price movement models 260
8.1 The approach of Merton 261
8.2 The approach of Scalas and Politi 261
8.3 Relation between the distributions of the epochs and durations 262
8.4 Price movement 263
9 Option pricing 265
9.1 The approach of Merton 265
9.2 The approach of Scalas and Politi 265
9.3 Time t is an epoch 266
9.4 Time t is not an epoch 267
9.5 Other models for intraday option pricing 269
Index 274
List of Figures

1.1 Flash crash of May 6, 2010 8
1.2 Knight Capital collapse (August 2012) 9
2.1 Determination of realized variance for day t, …
4.1 The cumulative density function of the trivariate minimum of a multivariate gamma distribution, F_X(1)(x; a, C123) = …
4.2 … √(252·RV_t^(τ)(HL*)) of EURONEXT 100 against the one-trading-day-ahead realized volatility forecasts, for the period from 25 January 2005 to 23 March 2006 127
4.4 ARMA(1,0) data generated process, with the number of points in time that the ARMA(1,0) and ARMA(0,1) models are selected by the SPEC algorithm, for various values of T, and in particular for T = 1, …, 70 130
5.1 Average daily squared log-returns, T^(−1) Σ_t y_t², …
5.3 … for 200 iterations, excluding at each iteration either the highest value of the closed-to-open interday volatility or the highest value of the intraday volatility 167
5.4 The annualized one-trading-day interday adjusted realized standard deviation, √(252·RV_t^(τ)(HL*)) 170
5.5 The estimated density of annualized one-trading-day interday adjusted realized daily variances, 252·RV_t^(τ)(HL*) 171
5.6 The estimated density of annualized one-trading-day interday adjusted realized daily standard deviations, √(252·RV_t^(τ)(HL*)) 172
5.7 The estimated density of annualized interday adjusted realized daily logarithmic standard deviations, log√(252·RV_t^(τ)(HL*)) 172
5.8 The estimated density of the log-return series, y_t 173
5.9 The estimated density of the standardized log-return series, standardized with the annualized one-trading-day interday adjusted realized standard deviation, y_t/√(252·RV_t^(τ)(HL*)) 173
5.10 The log√(252·RV_{t+1}^(τ)(HL*)) against the annualized interday adjusted realized daily logarithmic standard deviation forecasts, …
… the squared standardized prediction errors 194
5.13 The estimated density of the standardized one-step-ahead prediction errors, z_{t+1|t}, from the ARFIMA(1, d, 1)-TARCH(1,1) model 197
5.14 The estimated density of the standardized one-step-ahead prediction errors, z_{t+1|t}, from the HAR-TARCH(1,1) model 198
5.15 The estimated density of the standardized one-step-ahead prediction errors, z_{t+1|t}, from the AR(2) model 199
6.1 Simulated DAX30 instantaneous log-prices, 1-min log-prices and daily prices of a continuous-time GARCH diffusion model 222
6.2 Simulated DAX30 instantaneous returns, 1-min returns and daily returns of a continuous-time GARCH diffusion model 223
6.3 DAX30 volatility measures: simulated volatility σ²_{t+}, integrated volatility IV_t, conditional volatility from GARCH(1,1) on daily returns, and daily squared log-returns 223
6.4 DAX30 continuous-time GARCH diffusion process with jumps 224
6.5 DAX30 simulated and detected jumps 225
6.6 DAX30 integrated volatility and realized volatility for a continuous-time GARCH model with jumps 225
6.7 DAX30 integrated volatility and bi-power variation for a continuous-GARCH jump process 227
6.8 DAX30 integrated and realized jumps (using bi-power variation) 227
6.9 DAX30 integrated volatility and realized outlyingness weighted variance 228
6.10 DAX30 integrated and realized jumps (using realized outlyingness weighted variance) 229
6.11 DAX30 simulated and detected jumps (intraday jumps) 229
7.1 Evolution of the hourly hedge ratios of the DAX 30 index, from 3 May 2000 to 29 November 2013 (9:00 a.m. to 5:00 p.m.) 255
List of Tables

2.1 Values of the MSE loss functions. The data generating process is the continuous-time diffusion log…
2.2 Averages of the values of the MSE loss functions of the 200 simulations. The data generating process is the continuous-time diffusion log…
4.1(A) The probability 1 − p that the minimum X^(1) of a trivariate gamma vector is less than or equal to ω_{1−p}, for 2 ≤ ω_{1−p} ≤ 50, 5 ≤ a ≤ 50, and ρ_{1,2} = 30%, ρ_{1,3} = 60% and ρ_{2,3} = 95%, the non-diagonal elements of C123 116
4.1(B) The probability 1 − p that the minimum X^(1) of a trivariate gamma vector is less than or equal to ω_{1−p}, for 2 ≤ ω_{1−p} ≤ 50, 5 ≤ a ≤ 50, and ρ_{1,2} = 95%, ρ_{1,3} = 95% and ρ_{2,3} = 95%, the non-diagonal elements of C123 117
4.2 The half-sum of squared standardized one-day-ahead prediction errors of the three estimated realized volatility models, (1/2)Σ_t z²_{t+1|t}(m_i), for i = 1, 2, 3 128
4.3 Selected values of the cumulative density function at ω_{1−p}; a = 30, C123 128
4.4 ARMA(1,0) data generated process. The half-sum of the squared standardized one-step-ahead prediction errors: (1/2)Σ_{t=71}^{1000} z²_{t+1|t} 129
4.5 The ARMA(1,0) data generated process. The average values (100 iterations) of the loss functions and the percentages of times a model achieves the lowest value of the loss function 132
4.6 The HAR-RV data generated process, with the half-sum of the squared standardized one-step-ahead prediction errors: (1/2)Σ_{t=1}^{1000} z²_{t+1|t} 133
4.7 The HAR-RV data generated process, with the average values (100 iterations) of the loss functions and the percentages of times a model achieves the lowest value of the loss function 134
5.1 Information for the intraday data 162
5.2 Estimation of the interday adjusted realized volatility, RV_t^(τ)(HL*) 169
5.3 Descriptive statistics of annualized one-trading-day interday adjusted realized daily variances, 252·RV_t^(τ)(HL*) 175
5.4 Descriptive statistics of annualized one-trading-day interday adjusted realized daily standard deviations, √(252·RV_t^(τ)(HL*)) 175
5.5 Descriptive statistics of annualized interday adjusted realized daily logarithmic standard deviations, log√(252·RV_t^(τ)(HL*)) …
5.8 The half-sums of squared standardized one-trading-day-ahead prediction errors of the three models, X(m_i) ≡ 2^(−1)Σ_t z²_{t+1|t}(m_i) …
… The half-sum of z²_{ˆT+1|ˆT} is computed for the total of ˜T − ˘T − T trading days; each z²_{ˆT+1|ˆT} is computed from the model with the minimum (1/2)Σ_t z²_{t+1|t} at each trading day T. The first column presents the loss function from the strategy of selecting at each trading day the model proposed by the SPEC criterion; the last three columns present the loss function of each single model …
… the model with the minimum average (MSE) loss function is statistically superior to its competitors 201
5.17 The DM test statistic for testing the null hypothesis that the m1 model has equal predictive ability with the m2 model, or E[(MSE)_t(m1) − (MSE)_t(m2)] = 0, against the alternative that E[(MSE)_t(m1) − (MSE)_t(m2)] < 0 203
6.1 Z jump statistic on log(RV) 226
6.2 Z jump statistic on RV-ROWVar 226
6.3 Lee & Mykland test for jumps 228
7.1 Summary statistics for the hourly returns, from 3 May 2000 to 29 November 2013 (9:00 a.m. to 5:00 p.m.) 252
7.2 Estimates of the different models for the sample of hourly observations from 9:00 a.m. to 5:00 p.m. of the DAX 30 index and its corresponding futures contracts (3 May 2000 to 29 November 2013) 254
7.3 Summary statistics for the hourly hedge ratios 255
7.4 Effectiveness analysis of the strategies. The variance reduction, the percentage of VaR violations and the VaR loss function, for 1 − a at 5%, 1% and 0.1%, are computed for the hourly DAX 30 index for the period 21 June 2001 to 29 November 2013 258
Acknowledgments

Dr Stavros Degiannakis and Dr Christos Floros acknowledge the support from the European Community's Seventh Framework Programme (Marie Curie FP7-PEOPLE-IEF & FP7-PEOPLE-RG), funded under grant agreements no. PIEF-GA-2009-237022 & PERG08-GA-2010-276904. We would like to thank Dr Enrique Salvador and Dr Thomas Poufinas for their constructive contribution to the 7th chapter. We are also grateful to the many people (colleagues at the university, researchers, traders and financial market practitioners), who by occasional informal exchange of views have had an influence on these aspects as well. Most importantly, we wish to express our gratitude to our families for their support.
List of Symbols and Operators

d(.) indicator function; i.e. d(y_t > 0) = 1 if y_t > 0, and 0 otherwise.
‾(.) average of predictive loss/evaluation function.
ˆσ² …
F_{T_n}(t) cumulative distribution function of epoch T_n.
d exponent of the fractional differencing operator (1 − L)^d in ARFIMA models.
… (1 − L)^d in FIGARCH & FIAPARCH models.
ˆy_t^(m) forecasts of y_t from model m.
L_T(.) full sample log-likelihood function based on a sample of …
y_{t−i|t} in-sample fitted value of conditional mean at time t − i based on information available at time t.
log p(t) instantaneous logarithmic asset price.
…_[a,b] integrated quarticity over the interval [a, b].
…_[a,b] integrated variance over the interval [a, b].
…_t(.) loss/evaluation function that measures the distance between volatility and its forecast.
ε_{t_j} = log P_{t_j} − log p_{t_j} market microstructure noise.
…_t explanatory variables.
…_{t+1|t} one-step-ahead forecast … given the information available at time t.
…_{t+1|t} … given the information available at time t (unbiased estimator).
…_{t+1|t}(m_i) … given the information available at time t, of model m_i.
h²_{t+1|t} one-step-ahead estimate of integrated quarticity given the information available at time t.
…_{t+1|t} one-step-ahead forecast … at time t + 1 based on information available at time t.
P_{pre,t_j} previous tick price.
K … frequency.
RV^[2q]_[a,b] realized power variation of order 2q.
RV_[a,b] realized volatility for the time interval [a, b].
θ^(t) vector of estimated parameters for the conditional mean and variance at time t.
w vector of estimated parameters for the density function f.
…_{t−1} vector of explanatory variables of the m_i regression model.
β vector of parameters for estimation in regression model.
β^(m_i) vector of parameters for estimation of the m_i regression model.
υ_t vector of predetermined variables included in I_t.
…_t^(ES) loss/evaluation function for Expected Shortfall.
…_t Expected Shortfall of a portfolio at confidence level a.
…_t Value-at-Risk of a portfolio at confidence level a.
r_{h,t} log-return of the hedged portfolio at time t.
z_t vector of standardized error terms (residuals).
ρ_{ij} constant correlation of spot and future price returns.
σ²_{f,t} variance of future price returns at time t.
σ_{sf,t} covariance of spot and future price returns at time t.
σ²_{s,t} variance of spot price returns at time t.
…_t(.) conditional variance–covariance matrix.
ε_t vector of error terms (residuals).
μ_t(.) vector of conditional mean.
diag(.) diagonal matrix.
1 Introduction to High Frequency Financial Modelling
The chapter presents an introduction to High Frequency Trading (HFT) and focuses on the role of volatility using case studies. Further, we discuss recent empirical research on volatility forecasting and market microstructure.
Figlewski (2004) argues that a financial market is an institution set up by human beings, and the behaviour of security prices established in it depends on human activity. Since financial markets change continuously due to the uncertain behaviour of investors, accurate forecasting of market behaviour is possible only to the extent that the change of a financial instrument is relatively gradual most of the time; forecasting the financial market (and its products) is a challenge for financial modellers and economists. A basic financial instrument is referred to as equity, stock or shares. Further, a security is an instrument representing ownership (stocks), a debt agreement (bonds), or the rights to ownership (derivatives). Stock, for example, is the ownership of a small piece of a company or firm; i.e. it gives stockholders a share of ownership in a company. Its price is determined by the value of the company and by the expectations of the performance of the company. These expectations (i.e. the behaviour of bid and ask prices) give an uncertainty to the future price development of the stock. The stock value is either higher or lower than the expected value. Therefore, the amount by which the stock value can differ from the expected value is determined by the so-called volatility.
Volatility of returns is a key variable for researchers and financial analysts. Most financial institutions make volatility assessments by monitoring their risk exposure. Volatility defines the variability of an asset's price over time (measured in percentage terms). According to Figlewski (2004), volatility "is simply synonymous with risk: high volatility is thought of as a symptom of market disruption [it] means that securities are not being priced fairly and the capital market is not functioning as well as it should".
Volatility is a statistical measure of the tendency of a market or security price to rise or fall sharply within a period of time. It can be measured by using the variance of the price returns. Return is the gain or loss of a security in a particular period, quoted as a percentage. The return consists of the income and the capital gains relative to an investment. A highly volatile market means that prices have huge swings (moves) in very short periods of time (Tsay, 2005). Volatility dynamics have been modelled to account for several features (stylized facts): clustering, slowly decaying autocorrelation, and nonlinear responses to previous market information of a different type (Corsi et al., 2012). Moreover, according to Foresight (2011), price volatility is an indicator of financial instability in the market.
Financial volatility is time-varying, and therefore is a key term in asset pricing, portfolio allocation and market risk management. Financial analysts are concerned with modelling volatility, i.e. the covariance structure of asset returns. Further, the subject of financial econometrics pays high attention to the modelling and forecasting of time-varying volatility, i.e. the measurement and management of risk. According to Tsay (2005, pp. 97–98), modelling the volatility of a time series can improve the efficiency in parameter estimation and the accuracy of interval forecasts. The finance literature examined the so-called Autoregressive Conditional Heteroscedasticity1 (ARCH) class of models of volatility (see Engle, 1982; Bollerslev, 1986), while in recent years this literature has benefited from the availability of high-frequency data (1-second, 1-minute, 1-hour data, etc.). Since the seminal paper of Andersen and Bollerslev (1998), much of the literature deals with the development of the realized volatility as well as bi-power variation and jump tests. These techniques improved the measures of volatility, but also the efficiency of the financial markets (i.e. market prices reflect the true underlying value of the asset). In particular, the huge amount of intraday data provides important information regarding fluctuations of assets and their co-movements; this helps in understanding the dynamics of financial markets and volatility behaviour, while it may yield more accurate measurements of volatility. However, the use of high-frequency data (and its trading algorithms) may give several problems such as observation asynchronicity and/or market microstructure noise; i.e. academics now are interested in estimating consistently the variance of noise, which can be regarded as a measure of the liquidity2 of the market, or the quality of the trade execution in a given exchange or market structure (see Ait-Sahalia and Yu, 2009). In other words, new models (techniques) for describing high frequency strategies under several trading conditions are necessary.
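The realized volatility idea cited above reduces, in its simplest form, to summing squared intraday returns over a day. The sketch below illustrates this on simulated data; the 1-minute grid, the constant true volatility and the simulated returns are assumptions for illustration, with the formal framework developed in Chapter 2:

```python
import numpy as np

rng = np.random.default_rng(7)

# One trading day of 1-minute returns with constant "true" daily volatility
# (an illustrative assumption, not calibrated to any market)
n_intraday = 390                 # e.g. 6.5 hours of 1-minute intervals
sigma_daily = 0.01               # true daily return standard deviation
r = rng.normal(0.0, sigma_daily / np.sqrt(n_intraday), n_intraday)

# Realized variance: the sum of squared intraday returns
rv = np.sum(r ** 2)

# Realized volatility, its square root, estimates sigma_daily
print(f"true daily variance: {sigma_daily**2:.2e}")
print(f"realized variance:   {rv:.2e}")
print(f"realized volatility: {np.sqrt(rv):.4f}")
```

In practice, as the surrounding text notes, microstructure noise contaminates returns at very fine sampling grids, which is why the choice of sampling frequency (Chapter 2) matters.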
1 The role of high frequency trading
High frequency trading (henceforth HFT) strategies update orders very fast and have no overnight positions. HFT realizes profits per trade, and hence focuses mainly on highly liquid instruments. Therefore, HFT relies on high-speed access to markets and their data using advanced computing technology. In other words, dealing with high frequency strategies is quite complex due to the nature of high frequency data.
There is a debate about how to define HFT precisely; for instance, a European Commission (2010) study reports that HFT is typically not a strategy in itself but the use of very sophisticated technology to implement traditional trading strategies. Moreover, HFT is a subset of algorithmic trading (AT), but not all AT is HFT. AT3 is defined as the use of computer algorithms to automatically make trading decisions, submit orders, and manage those orders after submission (see Hendershott and Riordan, 2009; Brogaard, 2010). Studies report a possible impact of AT on market prices and volatility (Gsell, 2008).
Trading algorithms rely on high-speed market data, while HFT participants conduct a form of arbitrage based on fast access to market data. Using data on HFT, one may examine if HFT activity is correlated with bid-ask spreads, temporary and/or permanent volatility, trading volume, and other market activity and market quality measures (Jones, 2013).
Moreover, high-frequency data are observations taken daily or at a finer time scale; these data are important for a variety of issues related to the trading process and market microstructure.4 Due to the nature of high-frequency data, special characteristics should be considered by financial modellers, i.e. non-synchronous trading, the bid-ask spread, etc. High-frequency financial data help in solving a variety of issues related to the efficiency of the trading process. In other words, they can be used to compare the efficiency of different trading systems in price discovery (i.e. the market process whereby new information is impounded into asset prices) as well as to model the dynamics of bid and ask quotes of a particular stock. Tsay (2005) explains the idea of non-synchronous trading, which can induce erroneous negative serial correlations for a single stock.
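Tsay's point about non-synchronous trading can be illustrated with a small simulation in the spirit of the Lo–MacKinlay non-trading model. All parameter values below are assumptions; in this simple setting the induced negative lag-1 autocorrelation is driven by the nonzero mean return accumulating over non-trading periods:

```python
import numpy as np

rng = np.random.default_rng(42)
n, pi_nt = 200_000, 0.5          # periods, probability of no trade per period
mu, sigma = 0.5, 1.0             # "true" return mean and std (illustrative)

true_r = rng.normal(mu, sigma, n)
trades = rng.random(n) >= pi_nt  # True when the stock trades this period

obs_r = np.zeros(n)
carry = 0.0                      # return accumulated over non-trading periods
for t in range(n):
    if trades[t]:
        obs_r[t] = carry + true_r[t]   # pent-up return realized at once
        carry = 0.0
    else:
        obs_r[t] = 0.0                 # no trade -> no observed price change
        carry += true_r[t]

def autocorr1(x):
    x = x - x.mean()
    return (x[1:] @ x[:-1]) / (x @ x)

print(round(autocorr1(true_r), 3))   # near zero: true returns are iid
print(round(autocorr1(obs_r), 3))    # negative: induced by non-trading
```

The observed series shows spurious negative first-order autocorrelation even though the underlying returns are serially uncorrelated, which is exactly the erroneous pattern the text warns about.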
According to Jones (2013), many financial markets have abandoned human intermediation via floor trading or the telephone, replacing it with an electronic limit order book or another automated trading system. In response to an automated trading process, market participants began to develop trading algorithms. In fact, buy and sell automated orders are appearing and matching at a faster rate than ever before.
According to Iati et al. (2009) from TABB Group, "We define high frequency trading as fully automated trading strategies that seek to benefit from market liquidity imbalances or other short-term pricing inefficiencies. And that goes across asset classes, extending from equities and derivatives into currencies and little into fixed income". HFT is responsible for 75% of equity trading, according to research by the TABB Group cited by the Financial Times. They further define three types of firms that generally are high-frequency traders5 who execute trades in milliseconds on electronic order books and hold new equity positions possibly down to a sub-second (CESR, 2010).
First, there are the traditional broker-dealers undertaking high-frequency strategies on their proprietary trading desks, separate from their client business. Second, we have the high-frequency hedge funds. Third are proprietary trading firms that are mainly using private money.
Further, HFT refers to a type of ultra-fast electronic trading that seeks to take advantage of tiny discrepancies between prices in equities, foreign exchange and derivatives by trading rapidly across multiple trading venues, often using computer algorithms. O'Hara (2014, p. 26) argues that high frequency algorithms operate across markets, using the power of computing technology to predict price movements based on the behaviour of correlated assets.
According to the SEC (2010) report, high-frequency traders are "professional traders acting in a proprietary capacity that engage in strategies that generate a large number of trades on a daily basis" (SEC Concept Release on Equity Market Structure, 75 Fed. Reg. 3603, January 21, 2010). The SEC concept release goes on to report (p. 45) that HFT is often characterized by: (1) the use of extraordinarily high-speed and sophisticated computer programs for generating, routing, and executing orders; (2) use of co-location services and individual data feeds offered by exchanges and others to minimize network and other types of latencies; (3) very short time-frames for establishing and liquidating positions; (4) the submission of numerous orders that are cancelled shortly after submission; and (5) ending the trading day in as close to a flat position as possible (that is, not carrying significant, unhedged positions overnight).
It is important to note that high-frequency traders (as market makers) are able to provide liquidity during periods of market stress.
Hence, HFT is one of the latest major developments in financial markets. Several papers (Zhang, 2010; Brogaard, 2011a, 2011b) estimate that HFT accounts for about 70% of trading volume in the U.S. capital market as from 2009. According to Castura et al. (2010), HFT encompasses professional market participants that present some characteristics: high-speed algorithmic trading, the use of exchange co-location services along with individual data feeds, very short investment horizons and the submission of a large number of orders during the continuous trading session that are often cancelled shortly after submission.
The existing literature shows that HFT activity improves market quality. In other words, HFT activity helps in the reduction of the spread, liquidity improvement and reduction of intraday volatility (Castura et al., 2010; Angel et al., 2010; Hasbrouck and Saar, 2011). Specific characteristics of HFT include (Gomber et al., 2011):
1 Very high number of orders
2 Rapid order cancellation
3 Proprietary trading
4 Profit from buying and selling
5 No significant position at end of day
6 Very short holding periods
7 Extracting very low margins per trade
8 Low latency requirement
9 Use of co-location/proximity services and individual data feeds
10 Focus on highly liquid instruments
Hasbrouck and Saar (2011) argue that technology allows orders to be submitted (cancelled) instantaneously, and dynamic strategies use this functionality to implement complex trading strategies. The major feature of HFT is that it considers sophisticated and complicated algorithms (Clark, 2011; Scholtus and Van Dijk, 2012). Hence, HFT is one of the most significant financial market structure developments in the last ten years or so; however, it has come under increasing scrutiny especially after the crash of the US equity market on May 6, 2010 (Kirilenko et al., 2011). Even though we know most advantages of HFT, i.e. quick reactions to new information and reduction of monitoring and execution costs, empirical studies find negative effects on market liquidity and price volatility. Recent empirical literature on HFT shows that trading at a very high speed provides both benefits and risks.
This is due to the behaviour of automated traders, who employ strategies that can potentially overload exchanges with trade-messaging activity (Egginton et al., 2012). Further, HFT may generate systematic market disruptions and increase systematic risk (Barker and Pomeranets, 2011). Recent studies on HFT, such as Kirilenko et al. (2011) and Biais et al. (2013), find that HFT enables "fast" traders to process information before other traders; this strategy generates profits at their expense (Liu et al., 2013). Further, Baron et al. (2012) suggest that most of the profits from HFT are generated from the interaction with fundamental traders, small and other traders who are unlikely to access strategies that are carried out at a very high speed (Liu et al., 2013). Similarly, Jarrow and Protter (2011) show that "fast" traders may increase volatility. However, investors that are exposed to HFT strategies are likely to face a higher risk during high HFT activity in comparison with other investors who are not exposed to HFT; see Liu et al. (2013).
Over the last decade the easy access to high frequency financial data has provided much activity in financial modelling. In particular, our knowledge of financial volatility and its dynamic properties is an area that has been extensively examined due to the availability of large datasets of high frequency prices. The introduction of modern empirical techniques to measure and forecast the variation of asset prices is an example; this is a key research area in the subject of financial econometrics. There are now several volatility measures that are computed from high frequency data. High frequency data greatly improve the forecast accuracy because volatility is highly persistent; an accurate measure of volatility from high frequency data is valuable for forecasting future volatility. Next, we list six ways that high frequency data have improved volatility forecasting, as given by Hansen and Lunde (2011).
1 High frequency data improve the dynamic properties of volatility, necessary for forecasting.
2 Realized measures are valuable predictors of future volatility in reduced form models.
3 Realized measures have enabled the development of new volatility models that provide more accurate forecasts.
4 High frequency data have improved the evaluation of volatility forecasts in important ways.
5 Realized measures can facilitate and improve the estimation of complex volatility models (e.g. continuous time volatility models). This also improves predictions based on the development of such models.
6 High frequency data (i.e. large volumes of information) improve the analysis of the effect of news announcements on the financial markets.
HFT, an automated process, involves fast or ultra-fast trading into and out of positions to take advantage of what may be very small and short-term opportunities (see Lindenbergh et al., 2013). According to their report, critics of HFT provide evidence that it destabilizes markets, hinders price discovery and places conventional investors at a serious disadvantage. However, they argue that HFT promotes efficiency and increases market liquidity. Hence, it is possible that HFT may create significantly greater intraday volatility in the market. In other words, trades which automatically follow trends can lead to excessive market movements. Gomber et al. (2011) conclude the following about HFT:
1. HFT is a technical means to implement established trading strategies.
2. HFT is a natural evolution of the securities markets rather than a completely new phenomenon.
3. A lot of problems related to HFT are rooted in the US market structure.
4. The majority of HFT strategies contribute to market liquidity (market-making strategies) or to price discovery and market efficiency (arbitrage strategies).
5. The academic literature mostly shows positive effects of HFT strategies on market quality and price formation (most studies find positive effects on liquidity and short-term volatility).
6. HFT market-making6 strategies face relevant adverse selection costs, as they provide liquidity on lit7 markets without knowing their counterparties.
7. Any assessment of HFT-based strategies has to take a functional rather than an institutional approach.
8. The high penetration of HFT-based strategies underscores the dependency of players in today's financial markets on reliable and thoroughly supervised technology.
9. Any regulatory interventions in Europe should try to preserve the benefits of HFT while mitigating the risks as far as possible.
10. The market relevance of HFT requires supervision but also transparency and open communication to assure confidence and trust in securities markets.

Further, recent empirical studies show that financial economic research provides no direct evidence that HFT increases volatility. HFT came under several discussions because of the "Flash Crash" in US equity markets in 2010. For instance, O'Hara (2014, p. 31) concludes that "HFT is not just about speed, but instead reflects a fundamental change in how traders trade and how markets operate".

CASE STUDY 1: Flash Crash of May 6, 2010
On May 6, 2010, an unusually turbulent day for the markets mainly due to the European debt crisis, it took less than 30 minutes for the Dow Jones Industrial Average to fall by nearly 1,000 points. According to the U.S. Commodity Futures Trading Commission and the U.S. Securities and Exchange Commission Report (CFTC-SEC, 2010), "major equity indices in both the futures and securities markets, each already down over 4% from their prior-day close, suddenly plummeted a further 5–6% in a matter of minutes before rebounding almost as quickly". This is known as the "Flash Crash". The substantial, largely negative media coverage of HFTs and the "Flash Crash" raised significant interest and concerns about the fairness of markets and the role of HFTs in the stability and price efficiency of markets (Brogaard et al., 2014).
Moreover, according to the CFTC-SEC (2010) Report, the events of May 6 can be separated into 5 phases (shown in Figure 1.1):
• During the first phase, from the open through about 2:32 p.m., prices were broadly declining across markets, with stock market index products sustaining losses of about 3%.
• In the second phase, from about 2:32 p.m. through about 2:41 p.m., the broad markets began to lose more ground, declining another 1–2%.
• Between 2:41 p.m. and 2:45:28 p.m., in the third phase, lasting only about four minutes or so, volume spiked upwards and the broad markets plummeted a further 5–6% to reach intra-day lows of 9–10%.
• In the fourth phase, from 2:45:28 p.m. through about 3:00 p.m., the broad market indices recovered, while at the same time many individual securities and ETFs experienced extreme price fluctuations and traded in a disorderly fashion at prices as low as one penny or as high as $100,000.
• Finally, in the fifth phase, starting at about 3:00 p.m., prices of most individual securities significantly recovered and trading resumed in a more orderly fashion.

Further, around 1:00 p.m. the number of volatility pauses increased, while by 2:30 p.m. the S&P 500 VIX was up 22.5% from the opening level. At 2:32 p.m., a large fundamental trader initiated a sell program to sell a total of 75,000 E-Mini contracts as a hedge to an existing position. This sell pressure (via a "Sell Algorithm") was initially absorbed by (see CFTC-SEC, 2010, p. 3): "high frequency traders and other intermediaries in the futures market; fundamental buyers in the futures market; and cross-market arbitrageurs who transferred this sell pressure to the equities markets by opportunistically buying E-Mini contracts and simultaneously selling products like SPY,8 or selling individual equities in the S&P 500 Index".

Figure 1.1 Flash crash of May 6, 2010
Between 2:45:13 and 2:45:27, HFTs traded over 27,000 contracts, which accounted for 49% of the total trading volume; from 2:41 through 2:45:27 p.m., prices of the E-Mini (SPY) had fallen by more than 5% (6%). Between 2:32 p.m. and 2:45 p.m., the Sell Algorithm sold about 35,000 E-Mini contracts, while all fundamental sellers (buyers) combined sold more than 80,000 (50,000) contracts net. At 2:45:28 p.m., trading on the E-Mini was paused for five seconds by the CME; trading resumed at 2:45:33 p.m. (prices stabilized), and then the E-Mini began to recover, followed by the SPY. The Sell Algorithm continued to execute the sell program until about 2:51 p.m., as the E-Mini and SPY prices were rapidly rising. By the end of the day, major futures and equities indices "recovered" to close at losses of about 3% from the prior day.
What we learn from the above is the following (according to CFTC-SEC, 2010, p. 6):
(i) Under stressed market conditions, the automated execution of a large sell order can trigger extreme price movements, especially if the automated execution algorithm does not take prices into account. Moreover, the interaction between automated execution programs and algorithmic trading strategies can quickly erode liquidity and result in disorderly markets. As the events of May 6 demonstrate, especially in times of significant volatility, high trading volume is not necessarily a reliable indicator of market liquidity.
(ii) Many market participants employ their own versions of a trading pause – either generally or in particular products – based on different combinations of market signals. While the withdrawal of a single participant may not significantly impact the entire market, a liquidity crisis can develop if many market participants withdraw at the same time. This, in turn, can lead to the breakdown of a fair and orderly price-discovery process, and in the extreme case trades can be executed at stub-quotes used by market makers to fulfill their continuous two-sided quoting obligations.

Figure 1.2 Knight Capital collapse (August 2012)
Note: Knight shares dropped 63%.
Source: Reuters.
To sum up, the CFTC-SEC (2010) Report concludes that "unregulated or unconstrained HFT market makers exacerbated price volatility in the Flash Crash". Further, Cliff and Northrop, as presented in Foresight (2011), argue that the Flash Crash is a result of normalisation of deviance, a process they define as one where unexpected and risky events come to be seen as ever more normal (e.g. extremely rapid crashes), until a disaster occurs.
Kirilenko et al. (2011) point out that during the Flash Crash high frequency traders initially acted as liquidity providers, but as prices crashed some HFTs withdrew from the market while others turned into liquidity demanders.
CASE STUDY 2: Knight Capital collapse (August 2012)
Knight Capital Group (KCG) lost $440 million in 30 minutes on August 1, 2012 due to a software bug (a technology error). KCG, one of the biggest executors of stock trades in the US by volume, reported huge losses associated with a glitch (computer malfunction). Due to a faulty test of new trading software (a bad algorithm), Knight Capital's software paid the ask price and then sold at the bid price instantly. This increased the losses resulting from the trades, and Knight Capital lost $440 million from this new software. As can be seen from Figure 1.2, the Knight share price dropped from $10.33 to $2.58.
2 Modelling volatility
Financial markets behave differently depending, for example, on the economic situation, but also across quiet and turbulent periods. Due to this uncertain financial environment, volatility (as discussed previously) is a key variable in empirical finance (financial time series analysis and modelling). It refers to the price fluctuation over a period of time. Volatility is a central variable in financial econometrics as it represents the standard deviation in asset pricing and risk management techniques (e.g. Value-at-Risk; option pricing). Further, modelling and forecasting the return volatility of financial assets have drawn significant attention from both academia and the financial industry, especially after market crashes (e.g. the 2008 financial crisis).
Volatility was used as a constant parameter in finance textbooks and research papers in the 1970s, but these days it is widely accepted as a time-varying variable, which is not directly observed. Estimating volatility, an important variable in the financial risk management area, is not a straightforward task. Therefore, financial analysts have to discover ways to estimate volatility accurately (under specific assumptions). Its characteristics that are common in asset returns include: (1) volatility clusters, i.e. volatility may be high for certain time periods and low for other periods; (2) volatility jumps are rare; (3) volatility varies within some fixed range; and (4) the leverage effect, i.e. volatility reacts differently to a huge price increase or a drop.
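Stylized fact (1), volatility clustering, can be checked directly from data: returns themselves are nearly uncorrelated, but their squares are persistently autocorrelated. The sketch below (not from the chapter; parameter values are illustrative) simulates a clustered-volatility series and compares the two lag-1 autocorrelations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a GARCH(1,1) path: sigma2_t = w + a*r_{t-1}^2 + b*sigma2_{t-1}
w, a, b = 0.05, 0.10, 0.85            # illustrative parameter values
n = 20000
r = np.empty(n)
sigma2 = w / (1 - a - b)              # start at the unconditional variance
for t in range(n):
    r[t] = np.sqrt(sigma2) * rng.standard_normal()
    sigma2 = w + a * r[t] ** 2 + b * sigma2

def acf1(x):
    """Lag-1 sample autocorrelation."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

print(f"ACF(1) of returns:         {acf1(r):.3f}")       # close to zero
print(f"ACF(1) of squared returns: {acf1(r ** 2):.3f}")  # clearly positive
```

The positive autocorrelation of squared returns, alongside uncorrelated raw returns, is exactly the clustering pattern described above.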
Model assumptions for asset price dynamics and the choice of financial data (low- or high-frequency data) employed in the estimation of volatility are generally not independent of modelling decisions. Most papers employ low-frequency data, i.e. one data point per trading day; however, researchers have recently started to use high-frequency data, i.e. one data point per second, minute, hour etc., where there can be thousands of intra-day observations. This is a scientific challenge for the area of financial modelling.
Further, volatility modelling, under several assumptions, has received a lot of attention in the finance literature over the past five years, mainly because of the 2008 financial crisis. As volatility relates directly to the profits of traders, various models and methods have been developed (extended) for measuring and predicting volatility over turbulent periods; each one depends on data frequency and assumptions.
Looking at the basic volatility estimates, the five major types of volatility measures are as follows:
Historical volatility (HV) measures refer to procedures that are solely data driven (see Xekalaki and Degiannakis, 2010). Widely applied HV methods are RiskMetrics and the price range. HV techniques filter the historical prices in order to compute volatility through the past variation of the price process.
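As a concrete instance of an HV filter, the RiskMetrics estimator is an exponentially weighted moving average of squared returns; λ = 0.94 is the standard RiskMetrics value for daily data. The sketch below is illustrative (the simulated returns and the seeding choice are assumptions, not taken from the chapter).

```python
import numpy as np

def riskmetrics_variance(returns, lam=0.94):
    """RiskMetrics EWMA variance filter:
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2.
    Returns the one-step-ahead variance after the last observation."""
    sigma2 = np.var(returns[:20])       # seed with an initial sample variance
    for r in returns[20:]:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
    return sigma2

rng = np.random.default_rng(1)
rets = rng.normal(0.0, 0.01, 500)       # hypothetical daily returns, sd = 1%
est = riskmetrics_variance(rets)
print(est)                              # should be near the true variance 1e-4
```

The exponential weights mean recent squared returns dominate the estimate, which is what lets the filter track time-varying volatility.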
Implied volatility (IV) is the volatility implied by the observed option prices of the asset, based on a theoretical option pricing model, i.e. the original Black-Scholes-Merton model (Black and Scholes, 1973a; Merton, 1973) or its various extensions (see Day and Lewis, 1992; Noh et al., 1994; and Hull and White, 1987, among many others).
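Extracting IV means inverting the pricing formula numerically. A minimal sketch (inputs are hypothetical): bisection works because the Black-Scholes call price is monotonically increasing in volatility.

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Invert the call price for sigma by bisection (price increases in sigma)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check with a hypothetical option: price at sigma = 0.25, then invert
p = bs_call(100, 105, 0.5, 0.02, 0.25)
print(round(implied_vol(p, 100, 105, 0.5, 0.02), 4))   # recovers 0.25
```

In practice a quoted market price replaces the synthetic price `p`, and the recovered sigma is the market's implied volatility.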
The Autoregressive Conditional Heteroscedasticity framework (ARCH stochastic process) assesses the volatility process based on the return series of a financial asset. ARCH assumes a deterministic relationship between the current volatility and its past and other variables. The volatility estimate is conditional on the available information set (hence named conditional volatility). The ARCH model for volatility modelling provides a systematic framework for volatility clustering, under which large shocks tend to be followed by other large shocks.
The stochastic volatility (SV) model extends the ARCH model by including randomness in the intertemporal relationship of the volatility process; see Hull and White (1987), Bollerslev et al. (1994), Ghysels et al. (1995) and Shephard (1996), who provide reviews of the ARCH and SV models.
Realized volatility (RV) uses intradaily high frequency data to directly measure the volatility under a general semi-martingale model setting, using different sub-sampling methods (see Andersen and Bollerslev, 1998; Andersen et al., 2001; Barndorff-Nielsen and Shephard, 2002; Dacorogna et al., 2001; Zhang et al., 2005; Barndorff-Nielsen et al., 2008). RV is a popular measure of volatility; it yields a perfect estimate of volatility in the hypothetical situation where prices are observed in continuous time and without measurement error. RV directly sums the squared log-returns at a given sampling frequency, i.e. 1-second, 1-minute, 1-hour data, or even tick-by-tick data. Sampling at the highest frequencies, however, exposes the estimator to market microstructure noise, a well-known bias problem. The presence of market microstructure noise in high-frequency financial data complicates the estimation of financial volatility and makes standard estimators, such as the realized variance, less accurate.
According to Corsi et al. (2012), the stylized facts on realized volatility are as follows:
1. Long-range dependence (realized volatility displays significant autocorrelation even at very long lags),
2. Leverage effect (returns are negatively correlated with realized volatility), and
3. Jumps (jumps have a strong positive impact on future volatility, but they are unpredictable).
3 Realized volatility
RV refers to the volatility measure based on high-frequency intraday returns. The foundation of RV modelling is the theory of continuous time semi-martingale stochastic processes (volatility is measured by the sum of squared intradaily returns). Many empirical studies document the superiority of RV when discussing topics such as portfolio construction, Value-at-Risk (VaR), and derivatives pricing. For example, Fleming et al. (2003) find that RV has significant economic value in volatility timing in asset allocation decisions in the equity and bond markets. Further, Giot and Laurent (2004) study 1-day-ahead VaR based on daily realized volatility.
High-frequency financial data are now widely available and, therefore, the literature has recently introduced a number of realized measures of volatility. These are the realized variance, bipower variation, the realized kernel, etc. (see Andersen and Bollerslev, 1998; Andersen et al., 2001; Barndorff-Nielsen and Shephard, 2002, 2004; Andersen et al., 2008; Barndorff-Nielsen et al., 2008; Hansen and Horel, 2009; among others), some of which are useful for detecting jumps. These approaches make realized measures very useful for modelling and forecasting future volatility (see Andersen et al., 2004; Ghysels et al., 2006; Hansen et al., 2003).
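To illustrate how a jump-robust measure complements RV, the sketch below contrasts realized variance with bipower variation: RV picks up both the diffusive variance and the jump, while BV (scaled by the constant μ₁² = 2/π, as in Barndorff-Nielsen and Shephard, 2004) is largely insensitive to the jump, so RV − BV proxies the jump contribution. The return series and jump size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Intraday log-returns: a diffusive part plus one injected price jump
n = 390                                  # e.g. 1-minute returns in a 6.5-hour day
r = rng.normal(0, 0.001, n)
r[200] += 0.02                           # inject a jump (hypothetical size)

rv = np.sum(r ** 2)                      # realized variance: captures the jump
mu1_sq = 2 / np.pi                       # E[|Z|]^2 for a standard normal Z
bv = (1 / mu1_sq) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))  # bipower variation

print(f"RV = {rv:.6f}, BV = {bv:.6f}, jump part ~ {max(rv - bv, 0.0):.6f}")
```

The positive gap RV − BV is the usual starting point for the jump tests cited above.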
The main challenges in univariate volatility estimation are dealing with (i) jumps in the price level and (ii) microstructure noise. Multivariate volatility estimation is additionally challenging because of (i) the asynchronicity of observations between assets and (ii) the need for a positive semidefinite covariance matrix estimator (Boudt et al., 2012).
Many studies have documented the fact that daily realized volatility estimates based on intraday returns provide volatility forecasts that are superior to forecasts constructed from daily returns only.
Several papers study further the properties of realized volatility. Andersen et al. (2003) propose time series models for realized volatility in order to more accurately predict volatility. Examples include Forsberg and Bollerslev (2002) on joint models for returns and realized volatility ignoring the contribution of jumps, and Bollerslev et al. (2009), who consider jumps. Other studies on the extension of standard ARCH-class models by incorporating the information from realized volatility include Hansen et al. (2012), Shephard and Sheppard (2010), and Engle and Gallo (2006).
Moreover, Andersen and Bollerslev (1998), Andersen et al. (2001a, 2001b), Andreou and Ghysels (2002), Barndorff-Nielsen and Shephard (2002a, 2002b), and Meddahi (2002), among others, have further discussed the empirical properties of the estimation of quadratic variation by applying several stochastic processes in applied finance.
Further, empirical research has focused on the time-series properties of RV and the forecast improvements that it provides. Examples include Andersen et al. (2003, 2004, 2005), Ghysels and Sinko (2006), Ghysels et al. (2006), Koopman et al. (2005), Maheu and McCurdy (2002) and Taylor and Xu (1997). For instance, Andersen et al. (2003) and Giot and Laurent (2004) assume that RV is a sufficient statistic for the conditional variance of returns when forecasting RV and the VaR measure.
The importance of intraday returns for measuring RV is demonstrated for Foreign Exchange (FX) data by Taylor and Xu (1997), Andersen and Bollerslev (1998) and Andersen et al. (2000), and for equities by Andersen et al. (2000). Andersen and Bollerslev (1998) show that intraday FX returns can be used to construct an RV series that essentially eliminates the noise in measurements of daily volatility.
A key issue in modelling RV is the information relevant to trading and non-trading hours. Stock exchanges are open only for a limited number of hours during a trading day, whereas information available to investors accumulates around the clock. The overnight period is becoming important due to the integration of global financial markets, and many news releases are also timed to occur during non-trading hours (Ahoniemi and Lanne, 2011).
The existing literature gives attention to how prices evolve during non-trading hours and trading hours. French and Roll (1986) and Stoll and Whaley (1990) document that returns over trading hours are more volatile than non-trading hour returns. Tsiakas (2008) documents that the information accumulated overnight contains substantial predictive ability for both US and European stock markets.
A method for estimating volatility is based on the use of high-frequency data to calculate the sum of intraday squared returns. This applies directly to foreign exchange markets, where trading takes place around the clock (see e.g. Andersen and Bollerslev, 1998). However, in stock markets we have to account for the period when the market is closed. One option is to ignore the overnight period and simply sum the intraday squared returns (see Andersen et al., 2001; Corsi et al., 2008). Hansen and Lunde (2006) argue that such an estimator is not a proper proxy of the true volatility because it does not span a full 24-hour period. An alternative is to subtract each day's close value from the next day's open, and then add this squared return as one equally weighted term in the sum of intraday squared returns (see Ahoniemi and Lanne, 2011; Bollerslev et al., 2009, among others). Another method is to calculate RV by scaling the resulting value upward so that the volatility estimate covers an entire 24-hour day (see Angelidis and Degiannakis, 2008; Koopman et al., 2005). Finally, Fleming and Kirby (2011), Fuertes and Olmo (2013) and Hansen and Lunde (2005b) consider a weighting scheme for the overnight return and the sum of intraday returns. The aim in modelling and forecasting volatility using intraday data is to achieve better risk management, more accurate asset prices, and more efficient portfolio allocations. Good financial decision-making relies on accurate predictions of the underlying financial instrument given a reliable measurement and method. Therefore, financial analysts should make an extra effort to provide good and reliable real-time estimates and forecasts of current and future volatility using high frequency data.
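The overnight treatments just described can be sketched side by side. The return values and the scaling constant `c` below are hypothetical; in practice `c` would be calibrated, e.g. so that the scaled series matches the variance of close-to-close returns.

```python
import numpy as np

def rv_open_to_close(intraday_returns):
    """RV over trading hours only (ignores the overnight period)."""
    return np.sum(np.asarray(intraday_returns) ** 2)

def rv_with_overnight(intraday_returns, overnight_return):
    """Adds the squared close-to-open return as one more equally weighted term."""
    return rv_open_to_close(intraday_returns) + overnight_return ** 2

def rv_scaled(intraday_returns, c):
    """Scales open-to-close RV up by a constant c >= 1 so that the
    estimate covers the whole 24-hour day."""
    return c * rv_open_to_close(intraday_returns)

intraday = [0.001, -0.002, 0.0015, -0.0005]   # hypothetical intraday log-returns
overnight = 0.003                             # hypothetical close-to-open return
print(rv_open_to_close(intraday))             # 7.5e-06
print(rv_with_overnight(intraday, overnight)) # 1.65e-05
print(rv_scaled(intraday, c=1.2))             # 9.0e-06
```

The weighting schemes of Fleming and Kirby (2011) and others generalize the second variant by giving the overnight term its own, estimated weight rather than a weight of one.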
4 Volatility forecasting using high frequency data
Figlewski (2004) argues that "volatility forecasting is vital for derivatives trading, but it remains very much an art rather than a science, particularly among derivatives traders". As mentioned earlier, the predominant approach in modelling and forecasting the conditional distribution of returns was represented by the ARCH models proposed by Engle (1982) and Bollerslev (1986), followed by several other sophisticated extensions of the original model.
The ARCH model has been successful in explaining several empirical features of asset returns, such as fat tails and the slowly decaying autocorrelation in squared returns. The recent availability of high-frequency data has sparked a growing literature on predicting volatility estimators. Early studies (e.g. Andersen and Bollerslev, 1998; Andersen et al., 2001a, 2001b, among others) use high-frequency data to proxy for the volatility of lower frequency returns. Several recent studies consider a parametric volatility model for the dynamics of daily returns (see Shephard and Sheppard, 2010; Brownlees and Gallo, 2010; Maheu and McCurdy, 2011; Hansen et al., 2012).
Volatility forecasting using high frequency data is divided into two main approaches: (1) reduced form volatility forecasting, and (2) model based forecasting. The reduced-form approach refers to the construction of simple projections of volatility on past volatility measures. The model-based approach constructs efficient volatility forecasts that rely on a model for returns; see Sizova (2011). She compares model-based and reduced-form forecasts of financial volatility for 5-minute DM/USD exchange rates and finds that the reduced-form approach is generally better for long-horizon forecasting and for short-horizon forecasting in the presence of microstructure noise.
Various volatility modelling techniques are available to explain the stylized facts of financial return volatility, i.e. persistence, mean reversion, and the leverage effect, but they also provide good forecasts of the conditional volatility. Recent studies within the ARCH models area use intraday returns to explain the dynamic properties of intradaily volatility of financial markets under an ARFIMA framework. Empirical studies such as Andersen and Bollerslev (1998), Andersen et al. (1999), Fuertes et al. (2009) and Martens et al. (2009) have shown that the use of intraday high frequency data can substantially improve the measurement and forecastability of daily volatility.
5 Volatility evidence
Recent empirical research shows mixed results on the link of HFT with volatility. Some empirical studies provide evidence that high frequency algorithms (strategies) increase volatility. Martinez and Rosu (2011) and Foucault et al. (2012) focus on HFTs that demand liquidity. HFTs generate a large fraction of the trading volume and price volatility (Jones, 2013). According to Martinez and Rosu (2011), this volume and volatility is desirable, as HFT makes market prices extremely efficient by incorporating information as soon as it becomes available.
RV is unaffected by the entry of the high-frequency market-maker (Jovanovic and Menkveld, 2010).
Boehmer et al. (2012) show that co-location9 increases algorithmic trading and HFT, and improves liquidity and the informational efficiency of prices. They claim that it increases volatility.
Brogaard (2011a) finds that HFT participation rates are higher for stocks with high share prices, large market caps, narrow bid-ask spreads, or low stock-specific volatility. He argues that HFT contributes to price discovery and efficient stock prices. The results reported are very similar when days are separated into higher volatility and lower volatility days.
In addition, Hendershott and Riordan (2011) find that HFT has a beneficial role in the price discovery process. HFT reduces transitory pricing errors, and therefore stabilizes prices; they report evidence for low-volatility and high-volatility days. However, there is no clear evidence that HFT increases market volatility, as reported by Brogaard (2012).
HFT may not help to stabilize prices during unusually volatile periods. The flash crash of May 6, 2010, is an example. Hagströmer and Norden (2013) use data from NASDAQ-OMX Stockholm, and report that HFTs mitigate intraday price volatility. Linton and Atak, as presented in Foresight (2011), argue that HFT contributes to volatility, and therefore the ratio of intraday volatility to overnight volatility may be increased. This was true during the crisis period, but the opposite has happened since the end of 2009.
[…] of liquidity, and transaction costs and their implications for efficiency, welfare, and regulation of alternate trading mechanisms (Krishnamurti, 2009).
High frequency data bring microstructure effects, and hence the volatility calculated with short time intervals is no longer an unbiased and consistent estimator of the daily integrated volatility (see Goodhart and O'Hara, 1997). Therefore, several bias correction techniques have been proposed to solve this problem, such as the volatility signature plot (Fang, 1996), the moving average filter (Andersen et al., 2001), the autoregressive filter (Bollen and Inder, 2002), the subsample approach (Zhang et al., 2002), and the kernel-based approach (Hansen and Lunde, 2003).
Market microstructure effects are important, as intraday data contain too much noise to be useful for longer horizon forecasting. Hansen and Lunde (2006) report that volatility estimation in the presence of market microstructure noise is currently a very active area of research, because microstructure noise is an ugly fact. They report the following facts about Dow Jones Industrial Average (DJIA) stock market microstructure noise (Hansen and Lunde, 2006, p. 127):
1. The noise is correlated with the efficient price.
2. The noise is time-dependent.
3. The noise is quite small in the DJIA stocks.
4. The properties of the noise have changed substantially over time.
The RV, which is a sum of squared returns, should be based on returns that are sampled at the highest possible frequency (tick-by-tick data) (Hansen and Lunde, 2004). This may lead to a well-known bias problem due to market microstructure noise; see Andreou and Ghysels (2002) and Oomen (2002). The bias is particularly evident from volatility signature plots; see Andersen et al. (2000). The presence of noise has recently been examined by Hansen and Lunde (2006). They study market microstructure noise in high-frequency data and analyse its implications for the realized variance. Their empirical analysis of the DJIA stocks reveals that market microstructure noise is time-dependent and correlated with increments in the efficient price.
Hansen and Lunde (2004) argue that there is a trade-off between bias and variance when choosing the sampling frequency. To handle this problem, one could use bias correction techniques, such as filtering techniques (Andersen et al., 2001; Bollen and Inder, 2002). Other bias corrections assume time-independent noise (Zhang et al., 2002), or allow for time-dependence in the noise process using a kernel-based approach (Hansen and Lunde, 2003, 2005a, 2005b, 2006). For example, Hansen and Lunde (2006, p. 154) report that "kernel-based estimators revealed several important properties about market microstructure noise, and we have shown that kernel-based estimators are very useful in this context".
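The kernel idea is to add weighted realized autocovariances to the realized variance, since with i.i.d. noise the first autocovariance of observed returns is strongly negative and cancels most of the noise bias. Below is a deliberately simplified sketch with Bartlett weights; the published realized-kernel estimators (e.g. Barndorff-Nielsen et al., 2008) involve refinements such as flat-top weights, end-point treatment and bandwidth selection that are omitted here, and all simulation magnitudes are assumptions.

```python
import numpy as np

def realized_kernel(returns, H=30):
    """Simplified realized kernel with Bartlett weights:
    RK = gamma_0 + sum_{h=1}^{H} 2 * (1 - h/(H+1)) * gamma_h,
    where gamma_h is the h-th realized autocovariance of the returns."""
    r = np.asarray(returns)
    rk = np.sum(r ** 2)                    # gamma_0: plain realized variance
    for h in range(1, H + 1):
        gamma_h = np.sum(r[h:] * r[:-h])   # noise makes gamma_1 strongly negative
        rk += 2 * (1 - h / (H + 1)) * gamma_h
    return rk

rng = np.random.default_rng(11)
n = 23400                                  # one second-by-second trading day
true_iv = 1e-4                             # assumed integrated variance
p = np.cumsum(rng.normal(0, np.sqrt(true_iv / n), n + 1))
p += rng.normal(0, 1e-4, n + 1)            # i.i.d. microstructure noise (assumed)
r = np.diff(p)
print(np.sum(r ** 2))        # plain RV: biased upward by roughly 2*n*Var(noise)
print(realized_kernel(r))    # kernel estimate: much closer to true_iv = 1e-4
```

The negative first-order autocovariance term offsets the upward noise bias in gamma_0, which is the mechanism behind the quoted remark that kernel-based estimators are "very useful in this context".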
Further, according to Zhang et al. (2009), volatility estimation from high frequency data, i.e. realized volatility or realized variance, may be unreliable if the microstructure noise in the data is not explicitly taken into account. Market microstructure effects are surprisingly prevalent in high frequency financial data. Market microstructure noise refers to imperfections in the trading process of financial assets causing observed prices to deviate from the underlying 'true' price process (Bannouh et al., 2012). It implies that RV and realized range measures are inconsistent estimators of the integrated variance (IV), with the impact becoming more pronounced as the sampling frequency increases.
Trang 40Market microstructure noise has many sources, including the discreteness ofthe price (see Harris, 1990; 1991), and properties of the trading mechanism (seeO’Hara, 1995; Madhavan, 2000; Hasbrouck, 2004; Biais et al., 2005).
As the sampling frequency increases, the noise becomes progressively more dominant (Zhang et al., 2009). They argue that sampling a typical stock price every few seconds can lead to volatility estimates that deviate from the true volatility by a factor of two or more.
Notes
1 The Royal Swedish Academy of Sciences awarded the 2003 Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel to Robert Engle “for methods of analyzing economic time series with time-varying volatility (ARCH)”.
2 Liquidity refers to the ability to buy or sell an asset without greatly affecting its price.
3 For more details about algorithmic trading (including a literature overview), see Gomber et al. (2011). Hendershott et al. (2011) report that algorithmic trading contributes more to the discovery of the efficient price than human trading.
4 Market microstructure is of special interest to practitioners because of the rapid transformation of the market environment by technology, regulation and globalization (Krishnamurti, 2009).
5 According to an ASIC (2010) report, high-frequency traders follow different strategies (e.g. arbitrage, trading on prices which appear out of equilibrium, trading on perceived trading patterns, etc.) but they are generally geared towards extracting very small margins from trading financial instruments between different trading platforms at hyper-fast speed.
6 A market maker buys from sellers and sells to buyers; market making is providing liquidity to buyers and sellers by acting as a counterparty.
7 A lit market is one where orders are displayed on order books and are therefore pre-trade transparent.
8 SPY is an exchange-traded fund which represents the S&P 500 index.
9 Exchanges are building huge data centres where traders place computers with their trading algorithms next to the exchange's matching engine in order to avoid a delay of one millisecond to complete the trade. Co-location reduces latency and network complexity, while it provides proximity to the speed and liquidity of the markets.
Bibliography
Ahoniemi, K. and Lanne, M. (2011) Overnight Returns and Realized Volatility. SSRN Working Paper.
Ait-Sahalia, Y. and Yu, J. (2009) High frequency market microstructure noise estimates and liquidity measures. The Annals of Applied Statistics, 3(1), 422–457.
Andersen, T.G., Bollerslev, T., Diebold, F.X. and Ebens, H. (2001) The distribution of realized stock return volatility. Journal of Financial Economics, 61(1), 43–76.
Andersen, T.G. and Bollerslev, T. (1998) Answering the skeptics: Yes, standard volatility models do provide accurate forecasts. International Economic Review, 39, 885–905.