

Zurich Lectures in Advanced Mathematics

as contributions from researchers in residence at the mathematics research institute FIM-ETH. Moderately priced, concise and lively in style, the volumes of this series will appeal to researchers and students alike, who seek an informed introduction to important areas of current research.

Previously published in this series:

Yakov B. Pesin, Lectures on partial hyperbolicity and stable ergodicity
Sun-Yung Alice Chang, Non-linear Elliptic Equations in Conformal Geometry
Sergei B. Kuksin, Randomly forced nonlinear PDEs and statistical hydrodynamics in 2 space dimensions
Pavel Etingof, Calogero-Moser systems and representation theory

Published with the support of the Huber-Kudlich-Stiftung, Zürich


Guus Balkema Paul Embrechts

High Risk Scenarios and Extremes

A geometric approach


A. A. Balkema
Department of Mathematics
University of Amsterdam
Plantage Muidergracht 24
1018 TV Amsterdam
Netherlands
guus@science.uva.nl

The cover shows part of the edge and of the convex hull of a realization of the Gauss-exponential point process. This point process may be used to model extremes in, for instance, a bivariate Gaussian or hyperbolic distribution. The underlying theory is treated in Chapter III.

2000 Mathematics Subject Classification: 60G70, 60F99, 91B30, 91B70, 62G32, 60G55


© 2007 European Mathematical Society

Contact address:
European Mathematical Society Publishing House
Seminar for Applied Mathematics
ETH-Zentrum FLI C4
CH-8092 Zürich
Switzerland
Phone: +41 (0)44 632 34 36
Email: info@ems-ph.org
Homepage: www.ems-ph.org

Typeset using the authors' TEX files: I. Zimmermann, Freiburg
Printed on acid-free paper produced from chlorine-free pulp. TCF
Printed in Germany

9 8 7 6 5 4 3 2 1

P. Embrechts
Department of Mathematics
ETH Zurich
8092 Zurich
Switzerland
embrechts@math.ethz.ch


mixture of indulgence and respect. Thank you for your patience.

Guus

For Gerda, Krispijn, Eline and Frederik. Thank you ever so much for the wonderful love and support over the many years.

Paul


These lecture notes describe a way of looking at extremes in a multivariate setting. We shall introduce a continuous one-parameter family of multivariate generalized Pareto distributions that describe the asymptotic behaviour of exceedances over linear thresholds. The one-dimensional theory has proved to be important in insurance, finance and risk management. It has also been applied in quality control and meteorology. The multivariate limit theory presented here is developed with similar applications in mind. Apart from looking at the asymptotics of the conditional distributions given the exceedance over a linear threshold – the so-called high risk scenarios – one may look at the behaviour of the sample cloud in the given direction. The theory then presents a geometric description of the multivariate extremes in terms of limiting Poisson point processes.

Our terminology distinguishes between extreme value theory and the limit theory for coordinatewise maxima. Not all extreme values are coordinatewise extremes! In the univariate theory there is a simple relation between the asymptotics of extremes and of exceedances. One of the aims of this book is to elucidate the relation between maxima and exceedances in the multivariate setting. Both exceedances over linear and elliptic thresholds will be treated. A complete classification of the limit laws is given, and in certain instances a full description of the domains of attraction. Our approach will be geometrical. Symmetry will play an important role.
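The univariate mechanism behind this program can be illustrated numerically. The sketch below is our illustration, not from the book; the Gaussian example, the threshold t = 3 and the sample size are arbitrary choices. It conditions a sample cloud on exceeding a linear threshold – the high risk scenario – and checks that the excesses look roughly exponential, the generalized Pareto limit in the Gumbel domain:

```python
import random
import statistics

random.seed(7)

# Sample cloud from a standard Gaussian, and a high linear threshold t.
n, t = 500_000, 3.0
sample = [random.gauss(0.0, 1.0) for _ in range(n)]

# The high risk scenario: the conditional sample X | X > t, shifted to excesses.
excesses = [x - t for x in sample if x > t]

# In the Gumbel domain the excesses are asymptotically exponential; for the
# standard Gaussian the mean excess at level t is phi(t)/(1 - Phi(t)) - t,
# roughly 0.28 at t = 3.
mean_excess = statistics.mean(excesses)
median_excess = statistics.median(excesses)

# For an exponential law, median / mean = ln 2 (about 0.69); the empirical
# ratio should be close to that.
print(len(excesses), round(mean_excess, 3), round(median_excess / mean_excess, 2))
```

The same experiment in higher dimensions, with linear or elliptic thresholds, is what the book formalizes.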

The charm of the limit theory for coordinatewise maxima is its close relationship with multivariate distribution functions. The univariate marginals allow a quick check to see whether a multivariate limit is feasible and what its marginals will look like. Linear and even non-linear monotone transformations of the coordinates are easily accommodated in the theory. Multivariate distribution functions provide a simple characterization of the max-stable limit distributions and of their domains of attraction. Weak convergence to the max-stable distribution function has almost magical consequences. In the case of greatest practical interest, positive vectors with heavy tailed marginal distribution functions, it entails convergence of the normalized sample clouds and their convex hulls.

Distribution functions are absent in our approach. They are so closely linked to coordinatewise maxima that they do not accommodate any other interpretation of extremes. Moreover, distribution functions obscure an issue which is of paramount importance in the analysis of samples, the convergence of the normalized sample cloud to a limiting Poisson point process. Probability measures and their densities

and in handling applications. The theory presented here may be regarded as a useful complement to the multivariate theory of coordinatewise maxima.


These notes contain the text of the handouts, substantially revised, for a Nachdiplom course on point processes and extremes given at the ETH Zurich in the spring semester of 2005, with the twenty sections of the book roughly corresponding to weekly two-hour lectures.

Acknowledgements. Thanks to Matthias Degen, Andrea Höing and Silja Kinnebrock for taking care of the figures, to Marcel Visser for the figures on the AEX, and to Hicham Zmarrou for the figures on the DAX. We thank Johanna Nešlehová for her assistance with technical problems. We also thank her for her close reading of the extremal sections of the manuscript and her valuable comments. A special word of thanks to Nick Bingham for his encouraging words, his extensive commentary on an earlier version of the text, and his advice on matters of style and punctuation. The following persons helped in the important final stages of proofreading: Daniel Alai, Matthias Degen, Dominik Lambrigger, Natalia Lysenko, Parthanil Roy and Johanna Ziegel. Dietmar Salamon helped us to understand why discontinuities in the normalization are unavoidable in certain dimensions. We would also like to thank Erwin Bolthausen and Thomas Kappeler, who as editors of the series gave us useful input early on in the project. Thomas Hintermann, Manfred Karbe and Irene Zimmermann did an excellent job transforming the MS into a book. Guus Balkema would like to thank the Forschungsinstitut für Mathematik (FIM) of the ETH Zurich for financial support, and the Department of Mathematics of the ETH Zurich for its hospitality. He would also like to express his gratitude to the Korteweg–de Vries Instituut of the University of Amsterdam for the pleasant working conditions and the liberal use of their facilities.


Foreword vii

Introduction 1

Preview 13

A recipe 13

Contents 31

Notation 36

I Point Processes 41

1 An intuitive approach 41

1.1 A brief shower 41

1.2 Sample cloud mixtures 43

1.3 Random sets and random measures 44

1.4 The mean measure 45

1.5* Enumerating the points 46

1.6 Definitions 47

2 Poisson point processes 48

2.1 Poisson mixtures of sample clouds 48

2.2 The distribution of a point process 49

2.3 Definition of the Poisson point process 50

2.4 Variance and covariance 51

2.5* The bivariate mean measure 52

2.6 Lévy processes 54

2.7 Superpositions of zero-one point processes 56

2.8 Mappings 58

2.9* Inverse maps 58

2.10* Marked point processes 62

3 The distribution 63

3.1 Introduction 63

3.2* The Laplace transform 64

3.3 The distribution 65

3.4* The distribution of simple point processes 67

4 Convergence 69

4.1 Introduction 69

4.2 The state space 70

Starred sections may be skipped on a first reading


4.4 Radon measures and vague convergence 76

4.5 Convergence of point processes 78

5 Converging sample clouds 81

5.1 Introduction 81

5.2 Convergence of convex hulls, an example 83

5.3 Halfspaces, convex sets and cones 84

5.4 The intrusion cone 87

5.5 The convergence cone 89

5.6* The support function 92

5.7 Almost-sure convergence of the convex hulls 93

5.8 Convergence to the mean measure 96

II Maxima 100

6 The univariate theory: maxima and exceedances 100

6.1 Maxima 100

6.2 Exceedances 101

6.3 The domain of the exponential law 101

6.4 The Poisson point process associated with the limit law 102

6.5* Monotone transformations 104

6.6* The von Mises condition 105

6.7* Self-neglecting functions 108

7 Componentwise maxima 110

7.1 Max-id vectors 111

7.2 Max-stable vectors, the stability relations 112

7.3 Max-stable vectors, dependence 114

7.4 Max-stable distributions with exponential marginals on (−∞, 0) 117

7.5* Max-stable distributions under monotone transformations 119

7.6 Componentwise maxima and copulas 121

III High Risk Limit Laws 123

8 High risk scenarios 123

8.1 Introduction 123

8.2 The limit relation 125

8.3 The multivariate Gaussian distribution 126

8.4 The uniform distribution on a ball 128

8.5 Heavy tails, returns and volatility in the DAX 130

8.6 Some basic theory 131

9 The Gauss-exponential domain, rotund sets 135

9.1 Introduction 136


9.2 Rotund sets 138

9.3 Initial transformations 140

9.4 Convergence of the quotients 143

9.5 Global behaviour of the sample cloud 146

10 The Gauss-exponential domain, unimodal distributions 147

10.1 Unimodality 147

10.2* Caps 149

10.3* L1-convergence of densities 152

10.4 Conclusion 154

11 Flat functions and flat measures 156

11.1 Flat functions 156

11.2 Multivariate slow variation 157

11.3 Integrability 159

11.4* The geometry 160

11.5 Excess functions 166

11.6* Flat measures 167

12 Heavy tails and bounded vectors 170

12.1 Heavy tails 170

12.2 Bounded limit vectors 173

13 The multivariate GPDs 176

13.1 A continuous family of limit laws 176

13.2 Spherical distributions 178

13.3 The excess measures and their symmetries 179

13.4 Projection 180

13.5 Independence and spherical symmetry 180

IV Thresholds 182

14 Exceedances over horizontal thresholds 183

14.1 Introduction 183

14.2 Convergence of the vertical component 185

14.3* A functional relation for the limit law 186

14.4* Tail self-similar distributions 187

14.5* Domains of attraction 190

14.6 The Extension Theorem 192

14.7 Symmetries 193

14.8 The Representation Theorem 195

14.9 The generators in dimension d = 3 and densities 196

14.10 Projections 198

14.11 Sturdy measures and steady distributions 200

14.12 Spectral stability 203

14.13 Excess measures for horizontal thresholds 204


14.14 Normalizing curves and typical distributions 205

14.15 Approximation by typical distributions 209

15 Horizontal thresholds – examples 211

15.1 Domains for exceedances over horizontal thresholds 211

15.2 Vertical translations 211

15.3 Cones and vertices 218

15.4 Cones and heavy tails 222

15.5* Regular variation for matrices in Ah 227

16 Heavy tails and elliptic thresholds 230

16.1 Introduction 230

16.2 The excess measure 235

16.3 Domains of elliptic attraction 240

16.4 Convex hulls and convergence 243

16.5 Typical densities 245

16.6 Roughening and vague convergence 247

16.7 A characterization 251

16.8* Interpolation of ellipsoids, and twisting 256

16.9 Spectral decomposition, the basic result 258

17 Heavy tails – examples 263

17.1 Scalar normalization 264

17.2 Scalar symmetries 268

17.3* Coordinate boxes 273

17.4 Heavy and heavier tails 275

17.5* Maximal symmetry 278

17.6* Stable distributions and processes 282

17.7* Elliptic thresholds 285

18 Regular variation and excess measures 295

18.1 Regular variation 295

18.2 Discrete skeletons 299

18.3* Regular variation in AC 300

18.4 The Meerschaert spectral decomposition 304

18.5 Limit theory with regular variation 312

18.6 Symmetries 314

18.7* Invariant sets and hyperplanes 316

18.8 Excess measures on the plane 318

18.9 Orbits 320

18.10* Uniqueness of extensions 326

18.11* Local symmetries 329

18.12 Jordan form and spectral decompositions 333

18.13 Lie groups and Lie algebras 336

18.14 An example 344


V Open problems 348

19 The stochastic model 349

20 The statistical analysis 356

Bibliography 361

Index 369


Browsing quickly through the almost 400 pages that follow, it will become immediately clear that this book seems to have been written by mathematicians for mathematicians. And yet, the title has the catchy "High Risk Scenarios" in it. Is this once again a cheap way of introducing finance related words in a book title so as to sell more copies? The obvious answer from our, the authors', point of view must be no. This rather long introduction will present our case of defense: though the book is indeed written by mathematicians for a mathematically inclined readership, at the

is facing problems where new mathematical theory is increasingly called for. It will be difficult to force the final product you are holding in your hands into some specific corner or school. From a mathematical point of view, techniques and results from such diverse fields as stochastics (probability and statistics), analysis, geometry and algebra appear side by side with concepts from modern mathematical finance and insurance, especially through the language of portfolio theory. At the same time, risk is such a broad concept that it is very much hoped that our work will eventually have applications well beyond the financial industry to areas such as reliability engineering, biostatistics, environmental modelling, to name just a few.

The key ingredients in most of the theory we present relate to the concepts of risk, extremes, loss modelling and scenarios. These concepts are to be situated within a complex random environment where we typically interpret complexity as high-dimensional. The theory we present is essentially a one-period theory, as so often encountered in QRM. Dynamic models, where time as a parameter is explicitly present, are not really to be found in the pages that follow. This does not mean that such a link cannot be made; we put ourselves however in the situation where a risk manager is judging the riskiness of a complex system over a given, fixed time horizon. Under various assumptions of the random factors that influence the performance of the system the risk manager has to judge today how the system will perform by the end of the given period. At this point, this no doubt sounds somewhat vague, but later in this introduction we give some more precise examples where we feel that the theory as presented may eventually find natural applications.

A first question we would like to address is

“Why we two?”

There are several reasons, some of which we would briefly like to mention, especially as they reflect not only our collaboration but also the way QRM as a field of research and applications is developing. Both being born in towns slightly below or above sea level, Amsterdam and Antwerp, risk was always a natural aspect of our lives. For the


second author this became very explicit as his date of birth, February 3, 1953, was only two days after the disastrous flooding in Holland. In the night of January 31 to February 1, 1953, several hundred kilometres of dykes along the Dutch coast were breached in a severe storm. The resulting flooding killed 1836 people, 72 000 people needed to be evacuated, and nearly 50 000 houses and farms and over 200 000 ha of land were flooded.

The words of the Dutch writer Marsman from 1938 came back to mind: "En in alle gewesten, wordt de stem van het water, met zijn eeuwige rampen, gevreesd en gehoord."

build up a long-lasting coastal protection through an elaborate system of dykes and sluices. Though these defense systems could never guarantee 100% safety for the population at risk, a safety margin of 1 in 10 000 years for the so-called Randstad (the larger area of land around Amsterdam and Rotterdam) was agreed upon. Given these safety requirements, dyke heights were calculated, e.g. 5.14 m above NAP (Normaal Amsterdams Peil). A combination of environmental, socioeconomic, engineering and statistical considerations led to the final decision taken for the dyke and sluice constructions. For the Dutch population, the words of Andries Vierlingh from the book Tractaet van Dyckagie (1578): "De meeste salicheyt hangt aen de hooghte van

The Delta Project is very much related to the analysis of extremes. Several research projects related to the modelling of extremal events emerged, examples of which include our PhD theses Balkema [1973] and Embrechts [1979]. Indirectly, events and discussions involving risk and extremes have brought us together over many years.

Extreme Value Theory (EVT) has become a most important field of research, with numerous key contributors all over the world. Excellent textbooks on the subject of EVT exist or are currently being written. Moreover, a specialized journal solely

Whereas the first author (Balkema) continued working on fundamental results in the realm of heavy-tailed phenomena, the second author (Embrechts) became involved more in areas related to finance, banking, insurance and risk management. Banking and finance have their own tales of extremes. So much so that Alan Greenspan in a presentation to the Joint Central Bank Research Conference in Washington D.C. in 1995 stated (see Greenspan [1996]):

"From the point of view of the risk manager, inappropriate use of the normal distribution can lead to an understatement of risk, which must be

1 “Spring tide and hurricane cause a national disaster The Netherlands in severe water peril.”

2 “And in every direction, one hears and fears the voice of the water with its eternal perils.”

3 “Most of the happiness depends on the height of a dyke.”


balanced against the significant advantage of simplification. From the central bank's corner, the consequences are even more serious because we often need to concentrate on the left tail of the distribution in formulating lender-of-last-resort policies. Improving the characterization of the distribution of extreme values is of paramount concern."

in the wake of the LTCM hedge fund crisis:

"Extreme, synchronized rises and falls in financial markets occur infrequently but they do occur. The problem with the models is that they did not assign a high enough chance of occurrence to the scenario in which many things go wrong at the same time – the 'perfect storm' scenario."

Around the late nineties, we started discussions on issues in QRM for which further methodological work was needed. One aim was to develop tools which could be used to model markets under extreme stress scenarios. The more mathematical consequence of these discussions you are holding in your hands.

It soon became clear to us that the combination of extremes and high dimensions, in the context of scenario testing, would become increasingly important. So let us turn to the question

“Why in the first part of the title High Risk Scenarios and Extremes?”

The above mentioned Delta Project and QRM have some obvious methodological similarities. Indeed, protecting the coastal region of a country from sea surges through a system of dykes and sluices can be compared with protecting the financial system (or bank customers, insurance policy holders) from adverse market movements through the setting of a sufficiently high level of regulatory risk capital or reserve. In the case of banking, this is done through the guidelines of the Basel Committee on Banking Supervision. For the insurance industry, a combination of international guidelines currently under discussion around Solvency 2 and numerous so-called local statutory guidelines have been set up. The concept of dyke height in the Delta Project translates into the notion of risk measure, in particular into the widely used Value-at-Risk (VaR). A 99% two-week VaR of one million euro means that the probability of incurring a portfolio loss of one million euro or more by the end of a two-week (10 trading days) period is 1%. The 10 000 year return period in the dyke case is to be compared with the 99% confidence level in the VaR case. Both sea surges and market movements are complicated functions of numerous interdependent random variables and stochastic processes. Equally important are the differences. The prime one is the fact that the construction of a dyke concerns the modelling of natural (physical, environmental) processes, whereas finance (banking) is very much about the modelling of social phenomena. Natural events may enter as triggering events for extreme market movements but are seldom


a key modelling ingredient. An example where a natural event caused more than just a stir for the bank involved was the Kobe earthquake and its implications for the downfall of Barings Bank; see Boyle & Boyle [2001]. For the life insurance industry, stress events with major consequences are pandemics, for instance. Also relevant are considerations concerning longevity and of course market movements, especially related to interest rates. Moving to the non-life insurance and reinsurance industry, we encounter increasingly the relevance of the modelling of extreme natural phenomena like storms, floods and earthquakes. In between we have for instance acts of terrorism like the September 11 attack. The "perfect storm scenario" where many things go wrong at the same time is exemplified through the stock market decline after the New Economy hype, followed by a longer period of low interest rates which caused considerable problems for the European life insurance industry. This period of economic stress was further confounded by increasing energy prices and accounting scandals.

In order to highlight more precisely the reasons behind writing these lectures, we will restrict our attention below to the case of banking. Through the Basel guidelines, very specific QRM needs face that branch of the financial industry. For a broad discussion of concepts, techniques and tools from QRM, see McNeil, Frey & Embrechts [2005] and the references therein. Besides the regulatory side of banking supervision, we will also refer to the example of portfolio theory. Here relevant refer-

Under the Basel Committee guidelines (www.bis.org/bcbs), for market risk, banks calculate VaR; this involves a holding period of 10 days at the 99% confidence level for regulatory (risk) capital purposes and a 1-day 95% VaR for setting the bank's internal trading limits. Banks and regulators are well aware of the limitations of the models and data used so that, besides the inclusion of a so-called multiplier in the capital charge formula, banks complement their VaR reporting with so-called stress scenarios. These may include larger jumps in key market factors like interest rates, volatility, exchange rates, etc. The resulting

such an extreme move occurs. Other stress scenarios may include running the bank's trading book through some important historical events like the 1987 crash, the 1998 LTCM case or September 11. Reduced to their simplest, but still relevant form, the above stress scenarios can be formalized as follows. Suppose that the market (to be interpreted within the CAPM framework, say; see Cochrane [2001]) moves strongly against the holder of a particular portfolio. Given that information, what can be said about risk measurement features of that portfolio? Another relevant question in the same vein is as follows. Suppose that a given (smaller) portfolio moves against the holder's interest and breaches a given risk management (VaR) limit. How can one correct some (say as few as possible) positions in that portfolio so that the limit is not breached anymore? For us, motivating publications dealing with this type of problem are for instance Lüthi & Studer [1997], Studer [1997] and Studer & Lüthi [1997].
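To fix ideas, the risk measure in question can be written down in a few lines. The sketch below is our illustration, not from the book: the Gaussian P&L model (exactly the assumption the regulators are warned about), the function name and the sample size are all ours. It estimates a 99% VaR as an empirical quantile of simulated portfolio losses:

```python
import random

random.seed(42)

def empirical_var(pnl, level):
    """Empirical Value-at-Risk: the loss exceeded with probability 1 - level
    (a loss is negative P&L, reported as a positive number)."""
    losses = sorted(-x for x in pnl)       # ascending losses
    k = int(level * len(losses))           # index of the level-quantile
    return losses[k]

# Hypothetical 10-day P&L scenarios, in millions, Gaussian purely for illustration.
pnl = [random.gauss(0.0, 1.0) for _ in range(100_000)]
var99 = empirical_var(pnl, 0.99)

# The theoretical 99% quantile of a standard Gaussian is about 2.33.
print(round(var99, 2))
```

Replacing the Gaussian generator with a heavy-tailed one is precisely where the one-dimensional EVT mentioned in the foreword enters.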


The high-dimensionality within our theory is related to the number of assets in the portfolio under consideration. Of course, in many applications in finance, dimension reduction techniques can be used in order to reduce the number of assets to an effective dimensionality which often is much lower and indeed more tractable. The decision to be made by the risk manager is to what extent important information may have been lost in that process. But even after a successful dimension reduction, an effective dimensionality between five and ten, say, still poses considerable problems for the application of standard EVT techniques. By the nature of the problem extreme observations are rare. The curse of dimensionality very quickly further complicates the issue.

In recent years, several researchers have come up with high-dimensional (market) models which aim at a stochastic description of macro-economic phenomena. When we restrict ourselves to the continuous case, the multivariate normal distribution sticks out as the benchmark model par excellence. Besides the computational advantages for the calculation of various relevant QRM quantities such as risk measures and capital allocation weights, it also serves as an input to the construction of

as a random mixture of multivariate normals. Various other examples of this type can be worked out, leading to the class of elliptical distributions as variance mixtures of normals, or beyond in the case of mean-variance mixture models. Chapter 3 in McNeil, Frey & Embrechts [2005] contains a detailed discussion of elliptical distributions; a nice summary with emphasis on applications to finance is Bingham & Kiesel [2002].

A useful set of results going back to the early development of QRM leads to the statement that, within the class of elliptical models, questions concerning risk measurement and capital allocation are well understood and behave much as in the exact multivariate normal case. For a concrete statement of these results, see Embrechts, McNeil & Straumann [2002]. A meta-theorem, however, says that as soon as one deviates from this class of elliptical models, QRM becomes much more complicated. It also quickly becomes context- and application-dependent. For instance, in the elliptical world, VaR as a risk measure is subadditive, meaning that the VaR of a sum of risks is bounded above by the sum of the individual VaRs. This property is often compared to the notion of diversification, and has a lot to do with some of the issues we discuss in our book. As an example we briefly touch upon the Advanced Measurement Approach (AMA), which is based on the Loss Distribution Approach (LDA); once more, for detailed references and further particulars on the background, we refer to McNeil, Frey & Embrechts [2005]. For our purposes it suffices to realize that, beyond the well-known risk categories for market and credit risk, under the new Basel Committee guidelines (so-called Basel II), banks also have to reserve (i.e. allocate risk capital for) operational risk. Operational Risk is defined as the risk of loss resulting from inadequate or failed internal


processes, people or systems or from external events. This definition includes legal risk, but excludes strategic and reputational risk; for details on the regulatory framework, see www.bis.org/bcbs/. Under the LDA, banks are typically structured into eight business lines and seven risk categories based on the type of operational loss. An example is corporate finance (business line) and internal fraud (risk type). Depending on the approach followed, one has either a 7-, 8-, or 56-dimensional problem to model. Moreover, an operational risk capital charge is calculated on a yearly basis using VaR at the 99.9% level. Hence one has to model a 1 in 1000 year event. This

higher, is obvious. The subadditivity question stated above is highly relevant; indeed a bank can add up VaRs business line-wise, risk type-wise or across any relevant subdivision of the 8 × 7 loss matrix. A final crucial point concerns the reduction of these sums of VaRs taking "diversification effects" into account. This may (and typically does) result in a rather intricate analysis where concepts like risk measure coherence (see Artzner et al. [1999]), EVT and copulas (non-linear dependence) enter in a fundamental way. Does the multivariate extreme value theory as it is presented on the pages that follow yield solutions to the AMA-LDA discussion above? The reader will not find ready-made models for this discussion. However, the operational risk issue briefly outlined above makes it clear that higher dimensional models are called for, within which questions on extremal events are of paramount importance.

We definitely provide a novel approach for handling such questions in the future. Admittedly, as the theory is written down so far, it still needs a considerable amount of work before concrete practical consequences emerge. This situation is of course familiar from many (if not all) methodological developments. Besides the references above, the reader who is in particular interested in the operational risk example may consult Chavez-Demoulin, Embrechts & Nešlehová [2006] and Nešlehová, Embrechts & Chavez-Demoulin [2006]. From information on operational risk losses available so far, one faces models that are skew and (very) heavy-tailed. Indeed, it is the non-repetitive (low-frequency) but high-severity losses that are of main concern. This immediately rules out the class of elliptical distributions. Some of the models discussed in our book will come closer to relevant alternatives. We are not claiming that the theory presented will, in a not too distant future, come up with a useful 56-dimensional model for operational risk. What we are saying, however, is that the theory will yield a better understanding of quantitative questions asked concerning extremal events for high-dimensional loss portfolios.
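The failure of subadditivity for very heavy tails, alluded to above, is easy to reproduce numerically. The sketch below is our illustration, not from the book; the tail index 0.8 (infinite mean, as for some of the operational loss data mentioned) and the sample size are arbitrary choices. Merging two independent Pareto-type positions produces an empirical 99% VaR that exceeds the sum of the stand-alone VaRs:

```python
import random

random.seed(1)

alpha, n = 0.8, 200_000   # tail index < 1: infinite-mean, very heavy-tailed losses

def pareto_loss():
    # Inverse-transform sampling for P(X > x) = x**(-alpha), x >= 1.
    return (1.0 - random.random()) ** (-1.0 / alpha)

def empirical_var(losses, level=0.99):
    # Empirical quantile of the loss distribution at the given confidence level.
    ordered = sorted(losses)
    return ordered[int(level * len(ordered))]

xs = [pareto_loss() for _ in range(n)]
ys = [pareto_loss() for _ in range(n)]
merged = [x + y for x, y in zip(xs, ys)]

standalone = empirical_var(xs) + empirical_var(ys)
diversified = empirical_var(merged)

# With alpha < 1, VaR is superadditive: merging the positions *raises* the
# reported quantile, so naive "diversification benefits" are illusory here.
print(diversified > standalone)
```

Asymptotically the ratio of the merged VaR to the sum of stand-alone VaRs tends to 2^(1/alpha - 1), about 1.19 for alpha = 0.8, which the simulation reflects.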

Mathematicians are well advised to show humbleness when it comes to model formulation involving uncertainty, especially in the field of economics. In a speech entitled "Monetary Policy Under Uncertainty" delivered in August 2003 in Jackson Hole, Wyoming, Alan Greenspan started with the following important sentence: "Uncertainty is not just an important feature of the monetary policy landscape; it is the defining characteristic of that landscape." He then continued with some sentences


which are occasionally referred to, for instance by John Mauldin, as The Greenspan Uncertainty Principle:

"Despite the extensive efforts to capture and quantify these key macro-economic relationships, our knowledge about many of the important linkages is far from complete and in all likelihood will always remain so. Every model, no matter how detailed and how well designed conceptually and empirically, is a vastly simplified representation of the world that we experience with all its intricacies on a day-to-day basis. Consequently, even with large advances in computational capabilities and greater comprehension of economic linkages, our knowledge base is barely able to keep pace with the ever-increasing complexity of our global economy."

in our global economy, in my judgment, compel such a conclusion.”

For many questions in practice, and in particular for questions related to the economy, one can say that so often the road towards finding a model is far more important than the resulting model itself. We hope that the reader studying the theory presented in this book will enjoy the trip more than the goals reached so far. We have already discussed some of the places we will visit on the way.

One of the advantages of modern technology is the ease with which all sorts of information on a particular word or concept can be found. We could not resist googling

which we obtained in 0.34 seconds. It is somewhat disturbing, or one should perhaps say sobering, that our book will add just one extra entry to the above list. The more correct search, keeping the three words linked as in the title of our book, yielded a massive reduction to an almost manageable 717. Besides the obvious connections with the economic and QRM literature, other fields entering included terrorism, complex real-time systems, environmental and meteorological disasters, biosecurity, medicine, public health, chemistry, ecology, fire and aviation, Petri nets or software development. Looking at some of these applications it becomes clear that there is no common understanding of the terminology. From a linguistic point of view, one could perhaps query the difference between "High-Risk Scenario" and "High Risk-Scenario". Rather than doing so, we have opted for the non-hyphenated version. In its full length "High Risk Scenarios and Extremes" presents a novel mathematical


theory for the analysis of extremes in multi-dimensional space. Especially the econometric literature is full of attempts to describe such models. Relevant for our purposes are papers like Pesaran, Schuermann & Weiner [2004], Pesaran & Zaffaroni [2004], Dees et al. [2007], and in particular the synthesis paper Pesaran & Smith [2006] on the so-called global modelling approach. From a more mathematical finance point of view, Platen [2001] and Platen [2006], and Fergusson & Platen [2006] describe models to which we hope our theory will eventually be applicable. Further relevant publications in this context are Banner, Fernholz & Karatzas [2006] and Fernholz [2002].

In the preceding paragraphs, we explained some of our motivations behind the first part of the title: “High Risk Scenarios and Extremes”. The next, more mathematical question is

“Why the second part of the title, A Geometric Approach?”

A full answer to this question will become clear as the reader progresses through the pages that follow. There are various approaches possible towards a multivariate theory of extremes, most of these being coordinatewise theories. This means that, starting from a univariate EVT, a multivariate version is developed which looks at coordinate maxima and their weak limit laws under appropriate scaling. Then the key question to address concerns the dependence between the components of the nondegenerate limit. In the pages that follow, we will explain that, from a mathematical point of view, a more geometrical, coordinate-free approach towards the stochastic modelling is not only mathematically attractive, but also very natural from an applied point of view. For this, first recall the portfolio link stated above. A portfolio is merely a linear

combination of risk factors X₁, …, X_d. The hopefully rare event that the value of the portfolio falls below some low level q, i.e. the event {∑_{i=1}^d wᵢXᵢ ≤ q}, has an immediate interpretation in terms of risk management considerations. A value below q should only happen with a very small probability. If the weights wᵢ change sign then, in portfolio language, one moves from a long to a short position. Further, the world of financial derivatives allows for the construction of portfolios, the possible values of which depend on the risk factors in more involved ways. Hence it is important to have a broad theory that yields the description of rare events over a wide range


of portfolio positions. The geometry enters naturally through the description of this rare event as a halfspace far out. What can one say about the stochastic behaviour of (X₁, …, X_d) given that such a rare event has occurred? Thus a theory is needed which yields results on the conditional distribution of a random vector (the risk factors) given that a linear combination of these factors (a portfolio position or market index) surpasses a high (rare) value.

The interpretation of a high or rare value depends on the kind of position taken; hence in the first instance, the theory should allow for the resulting halfspaces to drift to infinity in a general (non-preferred) direction. This kind of isotropic limit nevertheless yields a rich theory covering many of the standard examples in finance and insurance.

At the same time, however, one also needs to consider theories for multivariate extremes where the rare event or high risk scenario corresponds to a “drifting off” to infinity in one specific direction. This of course is the case when one is interested in one particular portfolio with fixed weights over the holding period (investment horizon) of the portfolio. Another example concerns the operational risk problem discussed above. Here the one-year losses correspond to random variables indexed by, say, eight business lines, seven loss types or fifty-six combinations of these. A natural question to ask is the limiting behaviour of the conditional distribution of the vector of losses given that the total loss is large; the relevant direction is then given by the vector (1, …, 1). The mathematics entering the theory of multivariate extremes differs between the situations mentioned above and translates into different invariance properties of classes of limit laws under appropriate transformations. Examples of research papers where the interplay between geometrical thinking and the discussion of multivariate rare events are to be found include for instance Hult & Lindskog [2002] and Lindskog [2004]. The latter PhD thesis also contains a nice summary of the various approaches available to the multivariate theory of regular variation and its applications to multivariate extreme value theory. Besides the various references on this topic presented later in the text, we also like to mention Fougères [2004] and Coles & Tawn [1991]. The necessary statistical theory is nicely summarized in Coles [2001].

Perhaps an extra remark on the use of geometric arguments, mainly linked to invariance properties and symmetry arguments, is in order. There is no doubt that one of the great achievements of 19th century and early 20th century mathematics is the introduction of abstract tools which contribute in an essential way to the solution of applied problems. Key examples include the development of Galois theory for the solution of polynomial equations or Lie groups for the study of differential equations. By now both theories have become fundamental for our understanding of natural phenomena like symmetry in crystals, structures of complex molecules or quantum


behaviour in physics. For a very readable, non-technical account, see for instance Ronan [2006]. We strongly believe that geometric concepts will have an important role to play in future applications to Quantitative Risk Management.

By now, we have made it clearer why we have written this text; the motivation comes definitely from the corner of QRM in the realm of mainly banking and to some extent insurance. An alternative title could have been “Stress testing methodology for multivariate portfolios”, though such a title would have needed a more concrete set of tools for immediate use in the hands of the (financial) portfolio manager. We are not yet at that level. On the other hand, the book presents a theory which can contribute to the discussion of stress testing methodology as requested, for instance, in statements of the type

“Banks that use the internal models approach for meeting market risk capital requirements must have in place a rigorous and comprehensive stress testing program. Stress testing to identify events or influences that could greatly impact banks is a key component of a bank’s assessment of its capital position.”

taken from Basel Committee on Banking Supervision [2005]. Over the years, numerous applications of EVT methodology to this question of stress testing within QRM have been worked out. Several examples are presented in McNeil, Frey & Embrechts [2005] and the references therein; further references beyond these include Bensalah [2002], Kupiec [1998] and Longin [2000]. There is an enormous literature on this topic, and we very much hope that academics and practitioners contributing to and interested in this ever-growing field will value our contribution and indeed help in bringing the theory presented in this book to full fruition through real applications. One of the first tasks needed would be to come up with a set of QRM questions which can be cast in our geometric approach to high risk stress scenarios. Our experience so far has shown that such real applications can only be achieved through a close collaboration between academics and practitioners. The former have to be willing (and more importantly, capable) to reformulate new mathematical theory into a language which makes such a discussion possible. The latter have to be convinced that several of the current quantitative questions asked in QRM do require new methodological tools. In that spirit, the question

“For whom have we written this book?”

should in the first instance be answered by: “For researchers interested in understanding the mathematics of multivariate extremes.” The ultimate answer should be: “For researchers and practitioners in QRM who have a keen interest in understanding the extreme behaviour of multivariate stochastic systems under stress.” A key example of such a system would be a financial market. At the same time, the theory presented here is not only coordinate-free, but also application-free. As a consequence,


we expect that the book may appeal to a wider audience of “extreme value adepts”. One of the consequences of modern society with its increasing technological skills and information technology possibilities is that throughout all parts of science, large amounts of data are increasingly becoming available. This implies that also more information on rare events is being gathered. At the same time, society cares (or at least worries) about the potential impact of such events and the necessary steps to be taken in order to prevent the negative consequences. Also at this wider level, our book offers a contribution to the furthering of our understanding of the underlying methodological problems and issues.

It definitely was our initial intention to write a text where (new) theory and (existing) practice would go more hand in hand. A quick browse through the pages that follow clearly shows that theory has won and applications are yet to come. This of course is not new to scientific development and its percolation through the porous sponge of real applications. The more mathematically oriented reader will hopefully find the results interesting; it is also hoped that she will take up some of the scientific challenges and carry them to the next stage of solution. The more applied reader, we very much hope, will be able to sail the rough seas of mathematical results like a surfer who wants to stay near the crest of the wave and not be pulled down into the depths of the turbulent water below. That reader will ideally guide the former into areas relevant for real applications. We are looking forward to discussing with both.


Typically the region is a halfspace, and one is concerned about the eventuality of future data points lying far out in the region. We shall use the terminology of financial mathematics and speak of loss and risk. The data cloud could just as well contain data of insurance claims, or data from quality control, biomedical research, or meteorology. In all cases one is interested in the extremal behaviour at the edge of the sample cloud, and one may use the concepts of risk and loss. In a multivariate setting the risk increases as a point lies further out into the halfspace.

In the first instance the answer to the question above is: “Nothing”. There are too few points to perform a statistical analysis. However, some reflection suggests that one could use the whole sample to fit a distribution, say a Gaussian density, and use the tails of this density to determine the conditional distribution on the given halfspace. In financial mathematics nowadays one is very much aware of the dangers of this approach. The Gaussian distribution gives a good fit for the daily log returns, but not in the tails. So the proper recipe should be: Fit a distribution to the data, and check that the tails fit too. If one can find a distribution, Gaussian say, or elliptic Student, that satisfies these criteria, then this solves the problem, and we are done. In that case there is no need to read further.

What happens if the data cloud looks as if it may derive from a normal distribution, but has heavy tails? There is a convex central black region surrounded by a halo of isolated points. The cloud does not exhibit any striking directional irregularities. We shall call such clouds bland. A good theory of multivariate extremes should be able to elicit information from bland clouds.

Rather than fitting a distribution to the whole cloud, we shall concentrate on the tails. We assume some regularity at infinity. In finite points regularity is expressed


by the existence and continuity of a positive density at those points. Locally the sample cloud then becomes increasingly homogeneous as the number of points in the sample increases. We want to perform a similar analysis at infinity. Of course, in a multivariate setting there are many ways in which halfspaces may diverge. This problem is inherent to multivariate extremes. In order to obtain useful results, we have to introduce some regularity in the model setup.

Ansatz. Conditional distributions on halfspaces with relatively large overlap asymptotically have the same shape.

Let us make the content of the Ansatz more precise.

Definition. Two probability distributions (or random vectors Z and W) have the same shape, or are of the same type, if they are non-degenerate, and if there exists an affine transformation α such that Z is distributed like α(W). A random vector Z is degenerate if there exist a linear functional ξ ≠ 0 and a real constant c such that ξZ = c a.s.

For instance, all Gaussian densities have the same shape. Shape (or type) is a coordinate-free concept: for any non-degenerate Gaussian vector one may choose coordinates such that in these coordinates the distribution is standard Gaussian with density e^{−‖z‖²/2}/(2π)^{d/2}.

Theorem 1 (Convergence of Types). If Zₙ ⇒ Z and Wₙ ⇒ W, where Wₙ and Zₙ are of the same type for each n, then the limit vectors, if non-degenerate, are of the same type.

Proof. See Fisz [1954] or Billingsley [1966]. □

At first sight the CTT may look rather innocuous. In many applied probability questions involving limit theorems it works like a magic hat from which new models may be pulled: in the univariate setting the Central Limit Problem for partial sums yields the stable distributions; the Extreme Value Problem for partial maxima yields the extreme value distributions. See Embrechts, Klüppelberg & Mikosch [1997], Chapters 2 and 3. In the multivariate setting, in Chapter II below, the CTT yields the well-known multivariate max-stable laws; in Chapter III the CTT yields a continuous


one-parameter family of limit laws; and in Chapter IV the CTT yields two semi-parametric families of high risk limit laws, one for exceedances over (horizontal) linear thresholds, one for exceedances over elliptic thresholds.

Let us try to give the intuition behind the CTT in the case of a Gaussian limit: suppose

αₙ⁻¹(Zₙ) ⇒ W,

where W is standard normal. The validity of the term asymptotic normality would seem to derive from geometric insight. In geometric terms one may try to associate ellipsoids with the sample clouds: sample clouds from distributions that are asymptotically Gaussian are asymptotically elliptic, and affine transformations that map the elliptic sample clouds into spherical sample clouds may be used to normalize the distributions. The normalizations are thus determined geometrically. The same geometric intuition forms the background to these lectures. Instead of convergence of the whole sample cloud we now assume convergence at the edge. Since we want to keep sight of individual sample points, we assume convergence to a point process.

Affine transformations are needed to pull back the distributions as the halfspaces drift off to infinity. An affine transformation z ↦ Az + b on ℝ^d may be represented by a matrix of size d + 1 acting on the vector (z, 1):

(A b; 0 1)(z; 1) = (Az + b; 1).

This representation makes it possible to apply standard results from linear algebra when working with affine transformations. We usually write Z for the random vector. Two sequences of affine transformations αₙ and βₙ are asymptotically equal if βₙ⁻¹αₙ → id, where id stands for the identity transformation. Asymptotic equality is an equivalence relation.
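The matrix representation above can be sketched numerically; the matrices, vectors and dimension below are illustrative choices, not taken from the text. The point is that composition of affine maps becomes ordinary matrix multiplication in homogeneous coordinates:

```python
import numpy as np

def affine(A, b):
    """Represent the affine map z -> A z + b as a (d+1)x(d+1) matrix acting on (z, 1)."""
    d = len(b)
    M = np.eye(d + 1)
    M[:d, :d] = A
    M[:d, d] = b
    return M

A1, b1 = np.array([[2.0, 0.0], [0.0, 3.0]]), np.array([1.0, -1.0])
A2, b2 = np.array([[0.0, 1.0], [1.0, 0.0]]), np.array([0.5, 0.0])
z = np.array([1.0, 2.0])

# Composition of the two affine maps is matrix multiplication in this representation.
composed = affine(A2, b2) @ affine(A1, b1) @ np.append(z, 1.0)
direct = np.append(A2 @ (A1 @ z + b1) + b2, 1.0)
assert np.allclose(composed, direct)
```

In particular, inverses and group properties of affine normalizations can then be handled with standard linear algebra routines.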

Warning. If αₙ → id it does not follow that αₙ(Zₙ) and Zₙ are asymptotically equal in distribution, not even in dimension d = 1. Here is a simple counterexample:

Example 2. Let Xₙ be uniformly distributed on the interval (1, n + 1).

After this digression on shape, geometry and affine transformations, let us returnnow to the basic question of determining the distribution on a halfspace containingonly a few (or no) points of the sample, and to our Ansatz that high risk scenarios

[Figure: Exceedances over linear thresholds with varying direction, for 40 000 points. Boxed are the numbers of sample points in the halfplanes, and in the intersection. In the Cauchy sample many points lie outside the figure; the highest is (27 000, 125 000).]


on halfspaces with relatively large overlap have distributions with approximately the same shape. The high risk scenario Z^H is just the vector Z conditioned to lie in H. For halfspaces far out this corresponds to our interpretation of a rare or extreme event. The reader may wonder whether the Ansatz implies that all high risk scenarios asymptotically have the same shape. Note that the condition of a relatively large overlap is different for light tails and for heavy tails. For a Gaussian distribution the directions of two halfspaces far out have to be nearly equal for the overlap to be relatively large; for heavy tails there may be considerable overlap even if the directions of the halfspaces are orthogonal. See the figure above.

In the univariate case the condition that high risk scenarios, properly normalized, converge leads to the generalized Pareto distributions. These GPDs may be standardized to form a continuous one-parameter family. The resulting theory of univariate exceedances has been applied in many fields. It is our aim to develop a corresponding theory in the multivariate setting.
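As a minimal numerical sketch of this univariate stability (the tail index α = 2 and the thresholds are illustrative choices, not taken from the text): for a standard Pareto variable X, the rescaled exceedance X/u given X > u follows the same Pareto law for every threshold u, which is the GPD stability under exceedances in its heavy-tailed form:

```python
import numpy as np

alpha = 2.0                        # illustrative Pareto tail index
sf = lambda x: x ** (-alpha)       # survival function of X on [1, oo)

t = np.linspace(1.0, 5.0, 9)
for u in (10.0, 100.0, 1000.0):
    # P(X/u > t | X > u) = sf(u t)/sf(u) = t**(-alpha): the same Pareto type.
    cond_sf = sf(u * t) / sf(u)
    assert np.allclose(cond_sf, sf(t))
```

The computation is exact rather than simulated: only the survival function is needed to exhibit the invariance of the conditional law under the threshold.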

It presupposes a high degree of directional homogeneity in the halo of the sample cloud. In order to understand multivariate tail behaviour, a thorough analysis of the consequences of this strong assumption seems like a good starting point. This analysis is given in Chapter III, the heart of the book. As an illustration we exhibit below bivariate sample clouds of daily log-returns of stocks on the Dutch stock exchange AEX over the period from 2-2-04 until 31-12-05. The data were kindly made available by Newtrade Research.

One may also start with the weaker assumption that the high risk scenarios converge for halfspaces which diverge in a certain direction. This is done in Chapter IV for horizontal halfspaces. The Ansatz now holds only for horizontal halfspaces. Write the vectors as (u, v) with horizontal component u ∈ ℝ^h and vertical component v ∈ ℝ; the horizontal halfspaces H_y = ℝ^h × [y, ∞) correspond to exceedances over horizontal


[Figure: Bland sample clouds: bivariate marginals of daily log-returns ING – Allianz – ASML.]

thresholds. Let α(y) be affine transformations mapping H⁺ = {y ≥ 0}, the upper halfspace, onto H_y. The vectors W_y = α(y)⁻¹(Z^{H_y}) live on H⁺. Now suppose that the α(y) yield a limit vector:

W_y ⇒ W, y → ∞.

Assume the limit is non-degenerate. What can one say about its distribution? It is not difficult to see that α(y) maps horizontal halfspaces into horizontal halfspaces, and that the vertical coordinates, under the corresponding normalization, converge to the vertical coordinate V of the limit vector, W = (U, V). By the univariate theory the vertical coordinate of the limit vector has a generalized Pareto distribution. Under the same normalizations the sample clouds converge in distribution to a Poisson point process whose mean measure on each horizontal halfspace is finite.

The limit relation poses questions such as:

1) What limit laws are possible?

2) For a given limit law, what conditions on the distribution of Z will yield convergence?


The second relation is more geometric. Here one may ask:

1) Will the normalized thresholds become asymptotically horizontal?

2) Will the convex hull of the normalized sample cloud converge to the convex hull of the limiting point process?

For the novice, the application of point process methodology to extreme value theory may seem rather abstract; its applications to more involved problems in risk management, however, need this level of abstraction. See McNeil, Frey & Embrechts [2005] for a good discussion of these issues. In the one-dimensional case one already needs such a theory for a proper treatment of exceedances; these follow the general scheme. The tail property of the limit laws derives from the trivial fact that a high risk scenario of a high risk scenario is again a high risk scenario, at least for horizontal halfspaces. In the univariate setting, the exponential and generalized Pareto distributions have a remarkable stability property: any tail of the distribution is of the same type as the whole distribution. In fact this tail property characterizes the class of GPDs. It may also explain why these distributions play such an important role in applications in insurance and risk theory.
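A minimal check of this tail property for the exponential law (the thresholds are arbitrary illustrative values): the tail beyond u, shifted back to the origin, is the whole distribution again, which is the memoryless property:

```python
import math

# Memorylessness: P(X > u + t | X > u) = P(X > t) for exponential X --
# every tail of the distribution is of the same type as the whole law.
sf = lambda x: math.exp(-x)
for u in (1.0, 5.0, 20.0):
    for t in (0.5, 2.0, 7.0):
        assert math.isclose(sf(u + t) / sf(u), sf(t))
```

For the other GPDs the same stability holds after an affine rescaling of the exceedance, as in the Pareto sketch earlier.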

In the multivariate setting the tail property is best formulated in terms of the infinite excess measures.

Definition. A measure on an open set in ℝ^d is a Radon measure if it is finite on compact subsets.

Excess measures play a central role in this book. They are infinite, but have a simple probabilistic interpretation in terms of point processes. The significance of point processes for extreme value theory has been clear since the appearance of

⁴ One-parameter groups of matrices should not frighten a reader who has had some experience with finite state Markov chains in continuous time or with linear differential equations in ℝ^d of the form ẋ = Ax.


the book Resnick [1987]. In our more geometric theory the excess measure is the mean measure of the Poisson point process which describes the behaviour of the sample cloud, properly normalized, at its edge, as the number of data points tends to infinity. An example should make clear how excess measures may be used to tackle our problem of too few sample points.

Example 3. Suppose φ_t, t ∈ ℝ, is the group of vertical translations, φ_t : (u, v) ↦ (u, v + t). The product measure ρ = σ(du) ⊗ e^{−v}dv on ℝ^h × ℝ then satisfies ρ(φ_t(E)) = e^{−t}ρ(E); it is an excess measure. Conversely one may show that any excess measure with these symmetries is of this product form.

The product form of the excess measure in the example makes it possible to estimate the spectral measure even if the upper halfspace contains few points. One simply chooses a larger horizontal halfspace, containing more points. Something similar may be done for any excess measure for exceedances over horizontal thresholds. We shall not go into details here. Suffice it to say that such an excess measure is the basic ingredient of the Representation Theorem for the limit vector:

Recipe. Replace Z^H by α_H(W) and compute P{α_H(W) ∈ E} = P{W ∈ α_H⁻¹(E)}, and the integral 𝔼φ(α_H(W)) = ∫ φ(α_H(w)) dP_W(w). Given the symmetry group and the normalization α_H one only needs to know the spectral measure σ to compute these quantities. The spectral measure may be estimated from data points lower down in the sample cloud.
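The idea behind the recipe can be illustrated with a toy simulation; the distributions (standard normal horizontal part, standard exponential vertical part, as in the Gauss-exponential example) and the thresholds are illustrative assumptions, not taken from the text. Because of the product structure, the horizontal law of the points above a threshold does not depend on the threshold, so it may be estimated lower down, where sample points abound:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.standard_normal(n)      # horizontal coordinate (spectral part)
v = rng.exponential(size=n)     # vertical coordinate, standard exponential

# Far out (v > 4) there are few points; lower down (v > 0.5) there are many.
low, high = v > 0.5, v > 4.0
assert high.sum() < low.sum() // 10
# Yet both subsamples carry the same horizontal law, here checked via the mean.
assert abs(u[low].mean() - u[high].mean()) < 0.2
```

In practice one would estimate the whole horizontal distribution, not just its mean, from the lower threshold, and then transport it upward by the symmetry.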


These symmetries allow us to replace a halfspace containing few observations by a halfspace containing many. Given the recipe, it is clear what one should do to develop the underlying theory: study the regular variation of the normalizations. Much of the necessary machinery has been developed by Meerschaert and Scheffler in their book Meerschaert & Scheffler [2001] on limit laws for sums of independent random vectors. Let us give a summary of the theory in MS.

One-parameter groups of matrices may be defined via the matrix exponential power series. There is a one-to-one correspondence between matrices C of size d and the one-parameter groups δ_t = t^C = e^{C log t}, t > 0.

Example 4. Lebesgue measure on ℝ^d satisfies (9) with δ_t = diag(a₁ᵗ, …, a_dᵗ) for positive constants a₁, …, a_d.

One may normalize an excess measure so that certain halfspaces have measure one. There are, if one restricts the measure to an orthant, or a halfspace, two typical situations. In the first case the normalizations are expansions, and the image of the open unit ball B = {‖w‖ < 1} contains the closed ball B̄. In the second case there are many halfspaces of finite mass: ρ(J) is finite for any halfspace J that does not contain the origin. Constructing excess measures is not difficult!

5 Coles and Tawn, in their response to the discussion of their paper Coles & Tawn [1994] write: “Anderson points out that our point process model is simply a mechanism for relating the probabilistic structure within the range of the observed data to regions of greater extremity This, of course, is true, and is a principle which, in one guise or another, forms the foundation of all extreme value theory.”


The symmetry property for halfspaces J may be reformulated as

β(ts)β(t)⁻¹ → s^C, t → ∞, for all s > 0.

In a slightly different terminology this states that β varies regularly with index C. This is a first introduction to multivariate regular variation. Regular variation of linear transformations is treated in more detail in Chapter 4 of Meerschaert & Scheffler [2001]. The central result, the Meerschaert Spectral Decomposition Theorem, states that one may split the space into subspaces on which the normalizations have different growth rates.
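The limit s^C above is a one-parameter group of matrices; a quick numerical sketch of the group property (ts)^C = t^C s^C follows, with an illustrative diagonalizable exponent matrix C (the matrix power is computed by eigendecomposition, an assumption that fails for non-diagonalizable C):

```python
import numpy as np

def mat_power(C, t):
    """t**C = exp((log t) C) for a diagonalizable matrix C and t > 0."""
    lam, V = np.linalg.eig(C)
    return (V @ np.diag(t ** lam) @ np.linalg.inv(V)).real

C = np.array([[1.0, 1.0], [0.0, 2.0]])    # illustrative exponent matrix
for t, s in [(2.0, 3.0), (0.5, 8.0)]:
    # One-parameter group property behind regular variation with index C.
    assert np.allclose(mat_power(C, t * s), mat_power(C, t) @ mat_power(C, s))
```

For a general (possibly non-diagonalizable) C one would use a full matrix exponential routine instead of the eigendecomposition shortcut.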

Does one really need the theory of multivariate regular variation to handle high risk scenarios? There are good arguments for the effort. We list four:

1) One gets back the original curve β, or a curve asymptotic to the original curve, from the limit relation.

2) The theory contains a number of deep results that clarify important issues in applications. We give two examples of questions that may be resolved by the Meerschaert Spectral Decomposition Theorem and the theory of regular variation:

i) Is it possible to choose new coordinates such that the normalizations become linear in the new coordinates?

ii) Is it possible to choose coordinates in z-space and normalizations that are diagonal?

For i), see the results around 16.13 below. This result explains why univariate extreme value theory is so much simpler for heavy tails than for distributions in the domain of the Gumbel law. Univariate linear normalizations are non-zero scalars! The answer to ii) is “Yes” if the diagonal entries are distinct.


3) Regular variation enables us to construct simple continuous densities in the domain of attraction of excess measures with continuous densities by appropriate transformations.

Example 5. The Gauss-exponential density e^{−uᵀu/2}e^{−v}/(2π)^{h/2} on ℝ^h × [0, ∞) determines an excess measure with a continuous density.
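As a sanity check, one may verify numerically that the Gauss-exponential density integrates to one over ℝ^h × [0, ∞); the sketch below treats h = 1 on a truncated grid (grid sizes and truncation points are arbitrary choices), exploiting the product form of the density:

```python
import numpy as np

def trap(y, x):
    """Trapezoid rule, kept explicit to stay self-contained."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# Gauss-exponential density for h = 1: exp(-u^2/2) exp(-v) / sqrt(2*pi),
# with u in R and v >= 0.  It factorizes, so the integral is a product.
u = np.linspace(-8.0, 8.0, 20_001)
v = np.linspace(0.0, 30.0, 30_001)
total = trap(np.exp(-u**2 / 2) / np.sqrt(2 * np.pi), u) * trap(np.exp(-v), v)
assert abs(total - 1.0) < 1e-4
```

The truncation error is negligible here because both factors decay at least exponentially fast.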

4) Multivariate regular variation has a strong geometric component. This is apparent for a Euclidean Pareto measure on ℝ^d \ {0}. These measures are spherically symmetric.

Definition. Bounded convex sets Fₙ and Eₙ of positive volume are asymptotic if |Eₙ Δ Fₙ|/|Eₙ| → 0, n → ∞. Here |A| denotes the volume of the set A.

Exercise 6. The reader is invited to investigate what a sequence of centered ellipses Eₙ asymptotic to the unit disk may look like.
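The definition can be explored numerically; the sketch below estimates |E Δ B|/|B| by Monte Carlo for centred ellipses E with semi-axes (a, 1/a), which have the same area as the unit disc B (the sample size and semi-axes are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(-2.0, 2.0, size=(200_000, 2))   # box covering both sets

def sym_diff_ratio(a):
    """Monte Carlo estimate of |E ∆ B| / |B| for the centred ellipse E
    with semi-axes (a, 1/a) and the unit disc B."""
    in_B = (pts ** 2).sum(axis=1) <= 1.0
    in_E = (pts[:, 0] / a) ** 2 + (a * pts[:, 1]) ** 2 <= 1.0
    return (in_B ^ in_E).mean() * 16.0 / np.pi     # box area 16, |B| = pi

assert sym_diff_ratio(1.0) < 0.01   # E = B: the ratio vanishes
assert sym_diff_ratio(2.0) > 0.6    # equal areas, yet far from asymptotic
```

The example shows that equality of volumes alone does not make the sets asymptotic; the shapes must match as well.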


Example 7. Start with a sequence of open centered ellipsoids E₀, E₁, … such that … This yields a Pareto excess measure ρ, where c = 2^{d+1}/…. One may choose linear transformations β(t), depending continuously on t ≥ 0, such that β(t)(B) = E(t), and such that β varies regularly.

The reader will notice that densities occupy a central position in our discussions. In the multivariate situation densities are simple to handle. Densities are geometric: sample clouds tend to evoke densities rather than distribution functions. If the underlying distribution has a singular part, this will be reflected in irregularities in the sample cloud. Such irregularities, if they persist towards the boundary, call for a different statistical analysis. In the theory of coordinatewise maxima dfs play an all-important role. Densities have been considered too, see de Haan & Omey [1984] or de Haan & Resnick [1987], but on the whole they have been treated as stepchildren. In our more geometric approach densities are a basic ingredient for understanding asymptotic behaviour. From our point of view the general element in the domain of attraction of an excess measure with a continuous density is a perturbation of a probability distribution with a typical density. From a naive point of view we just zoom in on the part of the sample cloud where the vertical coordinate is maximal, adapting our focus as the number of sample points increases. Under this changing focus the density with which we drape the sample cloud should converge to the density of the limiting excess measure.

Proper normalization is essential for handling asymptotic behaviour and limit laws in probability theory. The geometric approach allows us to ignore numerical details, and concentrate on the main issues. Let us recapitulate: In order to estimate the distribution on a halfspace containing few sample points one needs some form of regularity, for instance the Ansatz that high risk scenarios for halfspaces diverging in a given direction have the same shape. If one assumes a limit law, then there is an excess measure. The symmetries of the excess measure make it possible to estimate the distribution on halfspaces far out by our recipe above. The symmetries also impose conditions on the normalizations. These conditions have a simple formulation in terms of regular variation. One may choose the normalizing curve β in (11) to vary like δ_t. Thus the symmetries of the excess measure impose regular variation on the normalizations.

In these notes we take an informal approach to regular variation, dictated by its applications to extremes. Attention is focussed on three situations:

– coordinatewise affine transformations (CATs);

– exceedances over horizontal halfspaces;

– exceedances over elliptic thresholds.

The theory of coordinatewise extremes is well known, and there exist many good expositions. Our treatment in Chapter II is limited to essentials. Exceedances are treated in Chapter IV. Exceedances over horizontal thresholds describe high risk scenarios associated with a given direction; exceedances over elliptic thresholds may be handled by linear expansions. The theory developed in MS is particularly well suited to exceedances over elliptic thresholds. Arguments for using elliptic thresholds will be given there. For the symmetry group of scalar expansions the limit relation now reads in a simpler form: if one chooses the normalizations β(t) to be scalar, then the ellipsoids β(t)(B) are balls. The limit relation for the high risk scenarios simplifies:

In this situation it is natural to use polar coordinates and write Z = RΘ with R = ‖Z‖. The distribution


of the limit vector is then a product measure on ∂B × [1, ∞), where σ is the spectral measure on the unit sphere ∂B, and G a Pareto distribution on [1, ∞). The exponent of G describes the heaviness of the tails.

Here we have another example of the close relation between symmetry and independence! In this model it is again obvious how to estimate the distribution of the limit vector: use the sample points in the complement of the ball rB. Our approach relies on concepts like scale invariance, self-similarity and symmetry. It is geometric and local. Independence is a global analytic assumption. It allows one to draw far-reaching conclusions about extremes, but the techniques are different from those developed here.
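A small simulation sketches this product structure (the radial law, tail index and sample size are illustrative assumptions, not taken from the text): for a spherically symmetric vector Z = RΘ the direction Θ of an exceedance over ‖Z‖ > s has the same uniform law for every s, so the angular part may again be estimated from moderate radii:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50_000, 2
theta = rng.standard_normal((n, d))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)   # uniform directions
r = (1.0 - rng.random(n)) ** (-1.0 / 2.0)               # Pareto radius, index 2

# Independence of radius and direction: exceedances over |Z| > s have the
# same uniform directional law for every s.
for s in (1.0, 5.0):
    dirs = theta[r > s]
    assert len(dirs) > 1_000
    assert abs(dirs[:, 0].mean()) < 0.1
```

The same construction, with the radial tail index as a free parameter, produces the Euclidean Pareto measures mentioned above.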

So far we have assumed convergence of a one-parameter family of high risk scenarios. For simplicity assume Z has a density. Assume convergence of the normalized high risk scenarios. The limit relations, in a simple form and without norming constants, are
