
EXTREME VALUES AND PROBABILITY DISTRIBUTION FUNCTIONS ON FINITE DIMENSIONAL SPACES



VIETNAM NATIONAL UNIVERSITY

UNIVERSITY OF SCIENCE

FACULTY OF MATHEMATICS, MECHANICS AND INFORMATICS

Advanced Undergraduate Program in Mathematics

Thesis advisor: Assoc. Prof. Dr. Ho Dang Phuc

Hanoi - 2012

Acknowledgments

It would not have been possible to write this undergraduate thesis without the help and support of the kind people around me, to only some of whom it is possible to give particular mention here.

This thesis would not have been possible without the help, support and patience of my advisor, Assoc. Prof. Dr. Ho Dang Phuc, not to mention his advice and unsurpassed knowledge of probability and statistics. His advice, support and friendship have been invaluable on both an academic and a personal level, for which I am extremely grateful.

I would like to show my gratitude to my teachers at the Faculty of Mathematics, Mechanics and Informatics, University of Science, Vietnam National University, who equipped me with important mathematical knowledge during my first four years at the university.

I would like to thank my parents for their personal support and great patience at all times. My parents have given me their unequivocal support throughout, as always, for which my mere expression of thanks likewise does not suffice.

Last, but by no means least, I thank my friends in K53-Advanced Math for their support and encouragement throughout.


List of abbreviations and symbols

Here is a glossary of miscellaneous symbols, in case you need a reference guide

∼   f(x) ∼ g(x) as x → x_0 means that lim_{x→x_0} f(x)/g(x) = 1

=d   X =d Y: X and Y have the same distribution

o(1)   f(x) = o(g(x)) as x → x_0 means that lim_{x→x_0} f(x)/g(x) = 0

f←   The generalized inverse of a monotone function f, defined by f←(x) = inf{y : f(y) ≥ x}

[F > 0]   The set {x : F(x) > 0}

M+(E)   The space of nonnegative Radon measures on E

C(f)   The points at which the function f is continuous

d.f.   Distribution function

r.v.   Random variable

DOA   Domain of attraction


Acknowledgments i

List of abbreviations and symbols ii

Introduction v

Chapter 1 Univariate Extreme Value Theory 1

1.1 Introduction 1

1.1.1 Limit Probabilities for Maxima 2

1.2 Maximum Domains of Attraction 4

1.2.1 Max-Stable Distributions 12

1.3 Extremal Value Distributions 13

1.3.1 Extremal Types Theorem 13

1.3.2 Generalized Extreme Value Distributions 19

1.4 Domain of Attraction Condition 20

1.4.1 General Theory of Domains of Attraction 23

1.5 Condition for belonging to Extreme Value Domain 26

Chapter 2 Multivariate Extreme Value Theory 30

2.1 Introduction 30

2.2 Limit Distributions of Multivariate Maxima 32

2.2.1 Max-infinitely Divisible Distributions 33

2.2.2 Characterizing Max-id Distributions 36

2.3 Multivariate Domain of Attraction 39

2.3.1 Max-stability 39

2.4 Basic Properties of Multivariate Extreme Value Distributions 41

2.5 Standardization 45

Conclusion 49


Chapter A Appendix 50

A.1 Modes of Convergence 50

A.2 Inverses of Monotone Functions 53

A.3 Some Convergence Theorems 54

Bibliography 55

Introduction

Extreme value theory developed from an interest in studying the behavior of the maximum or minimum (extremes) of independent and identically distributed random variables. Historically, the study of extremes can be dated back to Nicholas Bernoulli, who studied the mean largest distance from the origin to n points scattered randomly on a straight line of some fixed length (Gumbel, 1958 [15]). Extreme value theory provides important applications in finance, risk management, telecommunication, environmental and pollution studies and other fields. In this thesis, we study the probabilistic approach to extreme value theory. The thesis is divided into two chapters, namely,

Chapter 1: Univariate Extreme Value Theory

Chapter 2: Multivariate Extreme Value Theory

Chapter 1 introduces the basic concepts related to Univariate Extreme Value Theory. This chapter concerns the limit problem of determining the possible limits of sample extremes and the domain of attraction problem.

Chapter 2 provides basic results in Multivariate Extreme Value Theory. We deal with the probabilistic aspects of multivariate extreme value theory by including the possible limits and their domains of attraction.

The main materials of the thesis were taken from the books by M. R. Leadbetter, G. Lindgren, and H. Rootzén [16], Resnick [18], Embrechts [12] and de Haan and Ana Ferreira [11]. We have also borrowed extensively from lecture notes of Bikramjit Dass [9].


CHAPTER 1

Univariate Extreme Value Theory

This chapter is primarily concerned with the central result of classical extreme value theory, the Extremal Types Theorem, which specifies the possible forms for the limiting distribution of maxima in sequences of independent and identically distributed (i.i.d.) random variables (r.v.s). In the derivation, the possible limiting distributions are identified with a class having a certain stability property, the so-called max-stable distributions. It is further shown that this class consists precisely of the three families known (loosely) as the three extreme value distributions.

1.1 Introduction

We consider some basic theory for sums of independent random variables. This includes classical results such as the strong law of large numbers and the Central Limit Theorem. Throughout this chapter X_1, X_2, ... is a sequence of i.i.d. non-degenerate real random variables defined on a probability space (Ω, F, P) with common distribution function (d.f.) F. We consider the partial sums

S_n = X_1 + · · · + X_n,  n ≥ 1,

and the sample means

X̄_n = n^{−1} S_n = S_n/n,  n ≥ 1.

Let X be a random variable and denote the expectation and the variance of X by E(X) = µ and Var(X) = σ². Firstly, we assume that E(X) = µ < ∞. From the strong law of large numbers, we get

S_n/n → µ  almost surely as n → ∞.

1.1.1 Limit Probabilities for Maxima

Whereas above we introduced ideas on partial sums, in this section we investigate the fluctuations of the sample maxima

M_n = max(X_1, ..., X_n),  n ≥ 1.

Remark 1.1. Corresponding results for minima can easily be obtained from those for maxima by using the identity

min(X_1, ..., X_n) = −max(−X_1, ..., −X_n).

Denote by x_F the right endpoint of F,

x_F = sup{x ∈ R : F(x) < 1}.   (1.2)

That is, F(x) < 1 for all x < x_F and F(x) = 1 for all x > x_F. We immediately obtain

P(M_n ≤ x) = F^n(x) → 0,  n → ∞,  for all x < x_F,
P(M_n ≤ x) = F^n(x) → 1,  n → ∞,  for all x > x_F (in the case x_F < ∞).


Therefore the limit distribution lim_{n→∞} F^n(x) is degenerate. Thus M_n →^P x_F as n → ∞. Since the sequence (M_n) is non-decreasing in n, it converges almost surely (a.s.), no matter whether the limit is finite or infinite, and hence we conclude that

M_n → x_F  almost surely as n → ∞.
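This degeneracy is easy to observe numerically. A minimal Python sketch (an illustration added here, with Uniform(0,1) chosen as an example, so that x_F = 1):

```python
import random

# Sample maxima of n i.i.d. Uniform(0,1) variables; here x_F = 1.
random.seed(0)
for n in (10, 100, 1000, 10000):
    m_n = max(random.random() for _ in range(n))
    print(f"n = {n:6d}   M_n = {m_n:.5f}   x_F - M_n = {1 - m_n:.5f}")
# M_n creeps up to the right endpoint x_F = 1, so the un-normalized limit
# is degenerate; a non-trivial limit needs the scaling of Definition 1.1.
```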

Definition 1.1. A univariate distribution function F belongs to the maximum domain of attraction of a distribution function G if

1. G is a non-degenerate distribution;

2. there exist real sequences a_n > 0, b_n ∈ R such that

lim_{n→∞} P((M_n − b_n)/a_n ≤ x) = lim_{n→∞} F^n(a_n x + b_n) = G(x)   (1.3)

for every continuity point x of G.

Finding the limit distribution G(x) is called the Extremal Limit Problem. Finding the F(x) that have sequences of constants as described above leading to G(x) is called the Domain of Attraction Problem.

For large n, we can approximate P(M_n ≤ x) ≈ G((x − b_n)/a_n); a small simulation sketch of this approximation follows the questions below. We denote F ∈ D(G). We often ignore the term 'maximum' and abbreviate domain of attraction as DOA. Now we are faced with certain questions:

1. Given any F, does there exist G such that F ∈ D(G)?

2. Given any F, if G exists, is it unique?

3. Can we characterize the class of all possible limits G according to Definition 1.1?

4. Given a limit G, what properties should F have so that F ∈ D(G)?

5. How can we compute a_n, b_n?

The goal of the next section is to answer the above questions.
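As an illustration of the approximation P(M_n ≤ a_n x + b_n) ≈ G(x), here is a minimal Python sketch assuming F is standard exponential, for which a_n = 1, b_n = log n and the Gumbel limit Λ(x) = exp(−e^{−x}) are the classical choice (the sample sizes are arbitrary):

```python
import math
import random

# Empirical check that (M_n - log n) for Exp(1) samples is approximately
# Gumbel distributed: compare empirical probabilities with Lambda(x).
random.seed(1)
n, reps = 500, 2000
a_n, b_n = 1.0, math.log(n)          # assumed normalizing constants for Exp(1)

samples = []
for _ in range(reps):
    m = max(random.expovariate(1.0) for _ in range(n))
    samples.append((m - b_n) / a_n)

for x in (-1.0, 0.0, 1.0, 2.0):
    empirical = sum(s <= x for s in samples) / reps
    gumbel = math.exp(-math.exp(-x))
    print(f"x = {x:+.1f}   empirical = {empirical:.3f}   Lambda(x) = {gumbel:.3f}")
```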


1.2 Maximum Domains of Attraction

Let us consider probabilities of the form P(M_n ≤ u_n) for a sequence of real numbers (u_n). We are interested in:

1. conditions on F that ensure the existence of the limit of P(M_n ≤ u_n) as n → ∞ for appropriate constants u_n;

2. possible limit laws for the (centered and normalized) maxima M_n (comparable to the Central Limit Theorem).

Example 1.1. Let X have a standard exponential distribution. Then the distribution

Theorem 1.1 (Poisson approximation). For given τ ∈ [0, ∞] and a sequence {u_n} of real numbers, the following two conditions are equivalent:

n F̄(u_n) → τ  as n → ∞,   (1.4)

P(M_n ≤ u_n) → e^{−τ}  as n → ∞,   (1.5)

where F̄ = 1 − F.


Proof. Suppose first that 0 ≤ τ < ∞. If (1.4) holds, then

P(M_n ≤ u_n) = F^n(u_n) = (1 − F̄(u_n))^n = (1 − τ/n + o(1/n))^n → e^{−τ},

so that (1.5) follows at once.

Conversely, if (1.5) holds (0 ≤ τ < ∞), we must have F̄(u_n) = 1 − F(u_n) → 0 (otherwise, F̄(u_{n_k}) would be bounded away from 0 for some subsequence (n_k), and P(M_{n_k} ≤ u_{n_k}) = (1 − F̄(u_{n_k}))^{n_k} would imply P(M_{n_k} ≤ u_{n_k}) → 0). By taking logarithms in (1.5), we have

−n ln(1 − F̄(u_n)) → τ.

Since −ln(1 − x) ∼ x as x → 0, this implies n F̄(u_n) = τ + o(1), giving (1.4).

If τ = ∞ and (1.4) holds but (1.5) does not, there must be a subsequence (n_k) such that

P(M_{n_k} ≤ u_{n_k}) → exp{−τ_0}

as k → ∞ for some τ_0 < ∞. But then (1.5) implies (1.4), so that n_k F̄(u_{n_k}) → τ_0 < ∞, contradicting (1.4) with τ = ∞.

Similarly, (1.5) implies (1.4) for τ = ∞.
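A quick numerical check of Theorem 1.1 (a sketch assuming F is standard exponential, so that the choice u_n = log(n/τ) gives n F̄(u_n) = τ exactly):

```python
import math

# For F(x) = 1 - exp(-x), the choice u_n = log(n / tau) gives
# n * (1 - F(u_n)) = tau exactly, so Theorem 1.1 predicts
# P(M_n <= u_n) = F(u_n)^n -> exp(-tau).
tau = 2.0
for n in (10, 100, 1000, 10**5):
    u_n = math.log(n / tau)
    p = (1.0 - math.exp(-u_n)) ** n      # P(M_n <= u_n) = F(u_n)^n
    print(f"n = {n:7d}   P(M_n <= u_n) = {p:.6f}   e^-tau = {math.exp(-tau):.6f}")
```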

Example 1.2. We consider a distribution function F for which

P{(log n)^2 [M_n + 1/(log n)] ≤ x + o(1)} → exp(−e^{−x}),

giving the Gumbel distribution with

a_n = (log n)^{−2},  b_n = −(log n)^{−1}.


We denote f(x−) = lim_{y↑x} f(y) and p(x) = F(x) − F(x−).

Theorem 1.2. Let F be a d.f. with right endpoint x_F ≤ ∞ and let τ ∈ (0, ∞). There exists a sequence (u_n) satisfying n F̄(u_n) → τ if and only if

F̄(x)/F̄(x−) → 1  as x ↑ x_F,   (1.6)

equivalently

p(x)/F̄(x−) → 0  as x ↑ x_F.   (1.7)

Hence, by Theorem 1.1, if 0 < ρ < 1, there is a sequence {u_n} such that P(M_n ≤ u_n) → ρ if and only if (1.6) (or (1.7)) holds. For ρ = 0 or 1, such a sequence may always be found.

Proof. We suppose that (1.4) holds for some 0 < τ < ∞ but that, say, (1.7) does not. Then there exist ε > 0 and a sequence {x_n} such that x_n → x_F and

p(x_n) ≥ 2ε F̄(x_n−).   (1.8)

Now choose a sequence of integers {n_j} so that 1 − τ/n_j is "close" to the midpoint of the jump of F at x_j. Then either

(i) u_{n_j} < x_j for infinitely many values of j, or

(ii) u_{n_j} ≥ x_j for infinitely many values of j.

If alternative (i) holds, then for such j,

n_j F̄(u_{n_j}) ≥ n_j F̄(x_j−).   (1.9)

Now, clearly, by (1.8),

(1 − ε) n_j F̄(x_j−) ≥ τ n_j/(n_j + 1).

Since clearly n_j → ∞ and τ ∈ (0, ∞) by assumption, this is incompatible with (1.4).

Conversely, suppose that (1.6) holds and let {u_n} be any sequence such that F(u_n−) ≤ 1 − τ/n ≤ F(u_n), from which (1.4) follows, since clearly u_n → x_F as n → ∞.

The result applies in particular to discrete distributions with infinite right endpoint. If the jump heights of the d.f. do not decay sufficiently fast, then a non-degenerate limit distribution for maxima does not exist.

Example 1.3 (Poisson distribution). Let X be a Poisson r.v. with expectation λ > 0, i.e. P(X = k) = e^{−λ} λ^k / k!, k = 0, 1, 2, .... One can show that

F̄(k)/F̄(k − 1) → 0  as k → ∞.

Hence, by virtue of Theorem 1.2, we see that no non-degenerate distribution can be the limit of normalized maxima taken from a sequence of random variables identically distributed as X.
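The decay of the Poisson tail ratio can be checked numerically; the following sketch computes the tail by summing the probability mass function (λ = 1 is an arbitrary choice):

```python
import math

# Tail F(k) = P(X > k) for X ~ Poisson(lam); the ratio P(X > k)/P(X > k-1)
# tends to 0, which is exactly what rules out a non-degenerate limit.
def poisson_tail(k, lam, terms=200):
    # P(X > k) = sum_{j >= k+1} e^{-lam} lam^j / j!, built iteratively
    # to avoid huge factorials and cancellation.
    term = math.exp(-lam)
    for j in range(1, k + 2):
        term *= lam / j            # term = e^{-lam} lam^{k+1} / (k+1)!
    total = 0.0
    for j in range(k + 1, k + 1 + terms):
        total += term
        term *= lam / (j + 1)
    return total

lam = 1.0
for k in (5, 10, 20, 40):
    ratio = poisson_tail(k, lam) / poisson_tail(k - 1, lam)
    print(f"k = {k:2d}   P(X > k)/P(X > k-1) = {ratio:.5f}")
```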

Example 1.4 (Geometric distribution). We consider a random variable X with a geometric distribution. By the same argument as above, no limit P(M_n ≤ u_n) → ρ exists except for ρ = 0 or 1, which implies that there is no non-degenerate limit distribution for the maxima in the geometric case.

Example 1.5 (Negative binomial distribution). Let X be a random variable with a negative binomial distribution.

Definition 1.2. Suppose that H : R → R is a non-decreasing function. The generalized inverse of H is given by

H←(x) = inf{y : H(y) ≥ x}.

Properties of the generalized inverse are given in Appendix A.2.
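For illustration, the generalized inverse can be evaluated numerically by bisection; the following sketch (with an arbitrary bracketing interval and tolerance, chosen here for the example) assumes H is non-decreasing and that the target value is bracketed:

```python
import math

def generalized_inverse(H, x, lo=-1e6, hi=1e6, tol=1e-9):
    """Approximate H^{<-}(x) = inf{y : H(y) >= x} for a non-decreasing H
    by bisection on [lo, hi]; assumes H(lo) < x <= H(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if H(mid) >= x:
            hi = mid          # mid is a candidate, keep searching left
        else:
            lo = mid          # the infimum must lie to the right of mid
    return hi

# Example: the Exp(1) d.f. F(y) = 1 - e^{-y} for y >= 0 (and 0 otherwise).
F = lambda y: 1.0 - math.exp(-y) if y >= 0 else 0.0
print(generalized_inverse(F, 0.5))   # approximately log 2 = 0.6931...
```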

Lemma 1.1. (i) For H as above, if a > 0 and b are constants and T(x) = H(ax + b), then T←(y) = a^{−1}(H←(y) − b).

For any function H denote

C(H) = {x ∈ R : H is finite and continuous at x}.

If two r.v.s X and Y have the same distribution, we write X =d Y.

Two distribution functions U(x) and V(x) are said to be of the same type if there exist constants a > 0 and b ∈ R such that V(x) = U(ax + b) for all x ∈ R. For example, all normal d.f.'s are of the same type, called the normal type: if X_{0,1} has the N(0, 1) distribution and X_{µ,σ} has the N(µ, σ²) distribution, then X_{µ,σ} =d σX_{0,1} + µ.

Now we state the theorem developed by Gnedenko and Khintchin.

Theorem 1.3 (Convergence to types theorem). (a) Suppose U(x) and V(x) are two non-degenerate distribution functions. Suppose that for n ≥ 1, F_n is a distribution function, a_n > 0, b_n ∈ R, α_n > 0, β_n ∈ R and

F_n(a_n x + b_n) →d U(x),   F_n(α_n x + β_n) →d V(x).   (1.10)

Then there exist constants A > 0 and B ∈ R such that, as n → ∞,

α_n/a_n → A,   (β_n − b_n)/a_n → B,   (1.11)

and

V(x) = U(Ax + B).   (1.12)

(b) Conversely, if (1.11) holds, then either of the two relations in (1.10) implies the other and (1.12) holds.


An equivalent formulation in terms of random variables:

(a') Let X_n, n ≥ 1, be random variables with distribution functions F_n, and let U, V be random variables with distribution functions U(x), V(x). If

(X_n − b_n)/a_n →d U  and  (X_n − β_n)/α_n →d V,

then (1.11) holds and V =d (U − B)/A.

Proof. By Skorohod's theorem (see Appendix A.3), there exist Ỹ_n, Ũ, n ≥ 1, defined on ([0, 1], B[0, 1], m) (the Lebesgue probability space, m being Lebesgue measure), such that Ỹ_n =d (X_n − b_n)/a_n, Ũ =d U and Ỹ_n → Ũ almost surely. If (1.11) holds, then almost surely

(Ỹ_n − (β_n − b_n)/a_n)(a_n/α_n) → (Ũ − B)/A =d (U − B)/A,

that is, (X_n − β_n)/α_n →d (U − B)/A.

(a) Using Proposition A.2 (see Appendix A.2), namely that if G_n →d G then also G_n← →d G←, the relations in (1.10) can be inverted to give

(F_n←(y) − b_n)/a_n → U←(y)  and  (F_n←(y) − β_n)/α_n → V←(y)

weakly. Since neither U(x) nor V(x) concentrates at one point, we can find points y_1 < y_2 with y_i ∈ C(U←) ∩ C(V←), i = 1, 2, satisfying

−∞ < U←(y_1) < U←(y_2) < ∞  and  −∞ < V←(y_1) < V←(y_2) < ∞.

Therefore

α_n/a_n → (U←(y_2) − U←(y_1))/(V←(y_2) − V←(y_1)) =: A

and

(β_n − b_n)/a_n → U←(y_1) − A V←(y_1) =: B.

This gives (1.11), and (1.12) follows from (b).

Remark 1.2. (a) The answer to question 2 is quite clear from Theorem 1.3: namely, if F ∈ D(G_1) and F ∈ D(G_2), then G_1 and G_2 must be of the same type.

(b) The theorem shows that when (X_n − b_n)/a_n →d U and U is non-constant, a suitable choice of the normalizing constants is

a_n = F_n←(y_2) − F_n←(y_1),   b_n = F_n←(y_1).
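In the maxima setting F_n = F^n, so F_n←(y) = F←(y^{1/n}), and the recipe of Remark 1.2(b) can be evaluated explicitly. The following sketch does this for the standard exponential with the arbitrary choice y_1 = 0.25, y_2 = 0.75; the resulting constants agree with (1, log n) up to type, in the sense of Theorem 1.3:

```python
import math

# Normalizing constants from quantiles, as in Remark 1.2(b), with
# F_n = F^n, so F_n^{<-}(y) = F^{<-}(y^{1/n}).
# Illustration: F standard exponential, F^{<-}(p) = -log(1 - p).
F_inv = lambda p: -math.log(1.0 - p)
y1, y2 = 0.25, 0.75          # arbitrary points in (0, 1)

for n in (10, 100, 1000, 10**6):
    b_n = F_inv(y1 ** (1.0 / n))
    a_n = F_inv(y2 ** (1.0 / n)) - b_n
    # a_n tends to a positive constant and b_n - log n to a constant,
    # i.e. (a_n, b_n) is of the same type as (1, log n).
    print(f"n = {n:7d}   a_n = {a_n:.4f}   b_n - log n = {b_n - math.log(n):.4f}")
```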


1.2.1 Max-Stable Distributions

In this section we answer the question: what are the possible (non-degenerate) limit laws for the maxima M_n when properly normalised and centred?

Definition 1.4. A non-degenerate distribution function F is max-stable if for X_1, X_2, ..., X_n i.i.d. F there exist a_n > 0, b_n ∈ R such that

M_n =d a_n X_1 + b_n.

Example 1.7. If X_1, X_2, ... is a sequence of independent standard exponential Exp(1) variables, F(x) = 1 − e^{−x} for x > 0. Taking a_n = 1 and b_n = log n, we have

F^n(x + log n) = (1 − e^{−x}/n)^n → exp(−e^{−x}),

the Gumbel distribution.

Example 1.8. If X_1, X_2, ... is a sequence of independent standard Fréchet variables, F(x) = exp(−1/x) for x > 0. For a_n = n and b_n = 0,

F^n(nx) = exp(−n/(nx)) = exp(−1/x) = F(x),

so F is max-stable.
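The max-stability of the standard Fréchet distribution in Example 1.8 can be verified numerically; a minimal sketch:

```python
import math

# Standard Frechet d.f. F(x) = exp(-1/x), x > 0, is max-stable with
# a_k = k, b_k = 0: F(kx)^k = exp(-k/(kx)) = F(x) exactly.
F = lambda x: math.exp(-1.0 / x)

for k in (2, 10, 100):
    for x in (0.5, 1.0, 3.0):
        lhs = F(k * x) ** k
        print(f"k = {k:3d}  x = {x:.1f}   F(kx)^k = {lhs:.10f}   F(x) = {F(x):.10f}")
```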

Example 1.9. If X_1, X_2, ... is a sequence of independent uniform U(0, 1) variables, F(x) = x for 0 ≤ x ≤ 1. For fixed x < 0, suppose n > −x and let a_n = 1/n and b_n = 1. Then

F^n(x/n + 1) = (1 + x/n)^n → e^x,  x < 0.

Theorem 1.4 (Limit property of max-stable laws). The class of all max-stable distribution functions coincides with the class of all limit laws G for (properly normalised) maxima of i.i.d. r.v.s (as given in (1.3)).


Proof. 1. If X_1, X_2, ... are i.i.d. G and G is max-stable, then M_n = max(X_1, ..., X_n) satisfies (M_n − b_n)/a_n =d X_1 for suitable a_n > 0, b_n ∈ R, so that G itself appears as a limit law of the form (1.3).

2. Conversely, if H is a limit law as in (1.3), one obtains, for every k ∈ N, constants a*_k > 0, b*_k ∈ R with

H(x) = H^k(a*_k x + b*_k).

Therefore, if Y_1, ..., Y_k are i.i.d. from H, then for all k ∈ N,

Y_1 =d (max(Y_1, ..., Y_k) − b*_k)/a*_k,

which implies

max(Y_1, ..., Y_k) =d a*_k Y_1 + b*_k,

i.e. H is max-stable.

1.3 Extremal Value Distributions

1.3.1 Extremal Types Theorem

The extremal types theorem plays a central role in the study of extreme value theory. In the literature, Fisher and Tippett (1928) were the first to discover the extremal types theorem, and later these results were proved in complete generality by Gnedenko (1943). Later, Galambos (1987), Leadbetter, Lindgren and Rootzén (1983), and Resnick (1987) gave excellent reference books on the probabilistic aspects.


Theorem 1.5 (Fisher-Tippett (1928), Gnedenko (1943)). Suppose there exist sequences {a_n > 0} and {b_n ∈ R}, n ≥ 1, such that

(M_n − b_n)/a_n →d G,

where G is non-degenerate. Then G is of one of the following three types:

1. Type I, Gumbel: Λ(x) = exp{−e^{−x}}, x ∈ R.

2. Type II, Fréchet: Φ_α(x) = 0 if x < 0, and Φ_α(x) = exp{−x^{−α}} if x ≥ 0, for some α > 0.

3. Type III, Weibull: Ψ_α(x) = exp{−(−x)^α} if x < 0, and Ψ_α(x) = 1 if x ≥ 0, for some α > 0.

Proof. For t ∈ R let [t] denote the greatest integer less than or equal to t. We proceed in a sequence of steps.

Step (i). One first establishes the existence of two functions α(t) > 0, β(t) ∈ R, t > 0, such that for all t > 0,

G^t(x) = G(α(t)x + β(t)).   (1.20)

Step (ii). We observe that the functions α(t) and β(t) are Lebesgue measurable. For instance, to prove that α(·) is measurable, it suffices (since limits of measurable functions are measurable) to show that the approximating function is measurable. Since this function has a countable range {a_j, j ≥ 1}, it suffices to show that the set on which it takes each value a_j is measurable, which, being a union of intervals, is certainly a measurable set.

Step (iii). Facts about the Hamel equation (see [20]). We need to use facts about possible solutions of the functional equations called Hamel's equation and Cauchy's equation. If f(x), x > 0, is finite, measurable and real valued and satisfies the Cauchy equation

f(x + y) = f(x) + f(y),  x > 0, y > 0,

then f is necessarily of the form

f(x) = cx,  x > 0,

for some c ∈ R. A variant of this is Hamel's equation: if φ(x), x > 0, is finite, measurable, real valued and satisfies Hamel's equation

φ(xy) = φ(x)φ(y),  x > 0, y > 0,

then φ(x) = x^θ, x > 0, for some θ ∈ R.

Step (iv). If F is a non-degenerate d.f. and F(ax + b) = F(cx + d) for all x, for some a > 0, c > 0 and constants b, d, then a = c and b = d.

Choose y_1 < y_2 and −∞ < x_1 < x_2 < ∞ by (ii) of Lemma 1.1 so that x_1 = F←(y_1), x_2 = F←(y_2). Taking inverses of F(ax + b) and F(cx + d) by (i) of Lemma 1.1, we have

a^{−1}(F←(y) − b) = c^{−1}(F←(y) − d)

for all y. Applying this to y_1 and y_2 in turn, we obtain

a^{−1}(x_1 − b) = c^{−1}(x_1 − d)  and  a^{−1}(x_2 − b) = c^{−1}(x_2 − d),

from which it follows simply that a = c and b = d.

Step (v). Return to (1.20). For t > 0, s > 0 we have, on the one hand, G^{ts}(x) = G(α(ts)x + β(ts)) and, on the other hand, G^{ts}(x) = (G^t)^s(x) = G(α(s)α(t)x + α(s)β(t) + β(s)); by Step (iv),

α(ts) = α(t)α(s),   (1.21)

β(ts) = α(t)β(s) + β(t) = α(s)β(t) + β(s),   (1.22)

the last step following by symmetry. We recognize (1.21) as the famous Hamel functional equation. The only finite, measurable, nonnegative solution is of the form

α(t) = t^θ,  t > 0,  for some θ ∈ R.

If θ = 0, then α(t) = 1 and β(t) satisfies

β(ts) = β(t) + β(s).

So exp{β(·)} satisfies the Hamel equation, which implies that

exp{β(t)} = t^c

for some c ∈ R, and thus β(t) = c log t.

If θ ≠ 0, then

β(ts) = α(t)β(s) + β(t) = α(s)β(t) + β(s).

Fix s_0 ≠ 1 and we get

α(t)β(s_0) + β(t) = α(s_0)β(t) + β(s_0),

and solving for β(t) we get

β(t)(1 − α(s_0)) = β(s_0)(1 − α(t)).

Note that 1 − α(s_0) ≠ 0. Thus we conclude

β(t) = [β(s_0)/(1 − α(s_0))](1 − α(t)) =: c(1 − t^θ).

Step (vii). We conclude that

G^t(x) = G(x + c log t)  if θ = 0,   (a)

G^t(x) = G(t^θ x + c(1 − t^θ))  if θ ≠ 0.   (b)

Now we show that θ = 0 corresponds to a limit distribution of type Λ(x), that the case θ > 0 corresponds to a limit distribution of type Φ_α, and that θ < 0 corresponds to Ψ_α.

Consider the case θ = 0 and examine the equation in (a). For fixed x, the function G^t(x) is non-increasing in t, so c < 0, since otherwise the right side of (a) would not be decreasing. If x_0 ∈ R is such that G(x_0) = 1, then

1 = G^t(x_0) = G(x_0 + c log t),  ∀ t > 0,

which implies G(y) = 1 for all y ∈ R, and this contradicts G being non-degenerate. If x_0 ∈ R is such that G(x_0) = 0, then

0 = G^t(x_0) = G(x_0 + c log t),  ∀ t > 0,

which implies G(x) = 0 for all x ∈ R, again giving a contradiction. We conclude that 0 < G(y) < 1 for all y ∈ R.

In (a), set x = 0 and write G(0) = e^{−κ}. Then

e^{−κt} = G^t(0) = G(c log t).

Set y = c log t, and we get

G(y) = exp{−κ e^{y/c}} = exp{−e^{−(y/|c| − log κ)}},

which is of the type of Λ(x).

We consider the case θ > 0 and examine the equation in (b):

G^t(x) = G(t^θ x + c(1 − t^θ)) = G(t^θ(x − c) + c),

i.e., changing variables,

G^t(x + c) = G(t^θ x + c).

Set H(x) = G(x + c). Then G and H are of the same type, so it suffices to solve for H. The function H satisfies

H^t(x) = H(t^θ x),   (1.23)

and H is non-degenerate. Set x = 0 and we get from (1.23)

t log H(0) = log H(0)

for all t > 0. So either log H(0) = 0 or −∞; i.e., either H(0) = 0 or 1. However, H(0) = 1 is impossible, since it would imply the existence of x < 0 such that the left side of (1.23) is decreasing in t while the right side of (1.23) is increasing in t. Therefore we conclude that H(0) = 0. Again from (1.23) we obtain

H^t(1) = H(t^θ).

If H(1) = 0 then H ≡ 0, and if H(1) = 1 then H ≡ 1, both statements contradicting the non-degeneracy of H. Therefore H(1) ∈ (0, 1). Set α = θ^{−1}, H(1) = exp{−ρ^α} and u = t^θ, so that u^α = t. From (1.23) with x = 1 we get, for u > 0,

H(u) = exp{−ρ^α t} = exp{−(ρu)^α} = Ψ_α(−ρu).

The other case, θ < 0, is handled similarly.

In words, the extremal types theorem says that for a sequence of i.i.d. random variables with suitable normalizing constants, the limiting distribution of the maximum, if it exists, follows one of three types of extreme value distributions, labeled I, II and III. Collectively, these three classes of distributions are termed the extreme value distributions, with types I, II and III widely known as the Gumbel, Fréchet and Weibull families respectively. Each family has a location and a scale parameter, b and a respectively; additionally, the Fréchet and Weibull families have a shape parameter α.

Remark 1.3. (a) Though for modelling purposes the types Λ, Φ_α and Ψ_α are very different, from a mathematical point of view they are closely linked. Indeed, one immediately verifies the following properties (see the simulation sketch after this remark). Suppose X > 0; then

X ∼ Φ_α ⇔ −1/X ∼ Ψ_α ⇔ log X^α ∼ Λ.


(b) We have shown that:

Class of extreme value distributions = Max-stable distributions = Distributions appearing as limits in Definition 1.1.

Thus we have a characterization of the limit distributions appearing in Definition 1.1, which answers question 3.
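The links in Remark 1.3(a) can be illustrated by simulation. The sketch below samples from Φ_α by inversion (X = (−log U)^{−1/α} for U uniform) and compares the empirical distribution functions of −1/X and α log X with Ψ_α and Λ at one point each; the parameter values are arbitrary choices for the illustration:

```python
import math
import random

# If X ~ Frechet(alpha), then -1/X ~ Weibull(alpha) and alpha*log X ~ Gumbel.
random.seed(2)
alpha, reps = 2.0, 50000
X = [(-math.log(random.random())) ** (-1.0 / alpha) for _ in range(reps)]

ecdf = lambda data, t: sum(v <= t for v in data) / len(data)

# -1/X against Psi_alpha(y) = exp(-(-y)^alpha), y < 0
y = -0.5
print(ecdf([-1.0 / x for x in X], y), math.exp(-((-y) ** alpha)))
# alpha * log X against Lambda(z) = exp(-e^{-z})
z = 1.0
print(ecdf([alpha * math.log(x) for x in X], z), math.exp(-math.exp(-z)))
```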

1.3.2 Generalized Extreme Value Distributions

Definition 1.5 (Generalized extreme value distributions). For any γ ∈ R, the distribution

G_γ(x) := exp(−(1 + γx)^{−1/γ}),  1 + γx > 0,

is an extreme value distribution, abbreviated by EVD. The parameter γ is called the extreme value index.

Since (1 + γx)^{−1/γ} → e^{−x} as γ → 0, we interpret the case γ = 0 as the limit, G_0(x) = exp{−e^{−x}}. The family of distributions G_γ((x − µ)/σ), for µ ∈ R, γ ∈ R, σ > 0, is called the family of generalized extreme value distributions, under the von Mises or von Mises-Jenkinson parametrization. It shows that the limit distribution functions form a simple explicit one-parameter family apart from the scale and location parameters.
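A minimal Python sketch of the GEV distribution function, with the γ = 0 case read as the Gumbel limit; the handling of points outside {1 + γx > 0} reflects the endpoint discussion in the subclasses below:

```python
import math

# Generalized extreme value d.f. G_gamma(x) = exp(-(1 + gamma*x)^(-1/gamma))
# on {1 + gamma*x > 0}, with gamma = 0 read as exp(-e^{-x}).
def gev_cdf(x, gamma):
    if gamma == 0.0:
        return math.exp(-math.exp(-x))
    t = 1.0 + gamma * x
    if t <= 0.0:
        return 0.0 if gamma > 0 else 1.0   # left of / right of the support
    return math.exp(-t ** (-1.0 / gamma))

# gamma -> 0 recovers the Gumbel value:
for g in (0.1, 0.01, 0.001, 0.0):
    print(g, round(gev_cdf(1.0, g), 6))
```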

Let us consider the subclasses with γ>0, γ=0, and γ <0 separately:

(a) For γ > 0, clearly G_γ(x) < 1 for all x, i.e., the right endpoint of the distribution is infinity. Moreover, as x → ∞, 1 − G_γ(x) ∼ γ^{−1/γ} x^{−1/γ}, i.e., the distribution has a rather heavy right tail. We use G_γ((x − 1)/γ) and get, with α = 1/γ,

Φ_α(x) = exp(−x^{−α}),  x > 0.

(b) For γ = 0, the distribution

G_0(x) = exp(−e^{−x}),  x ∈ R,

is called the double-exponential or Gumbel distribution. Observe that the right endpoint of the distribution equals infinity. The distribution, however, is rather light-tailed:

1 − G_0(x) = 1 − exp{−e^{−x}} ∼ e^{−x}

as x → ∞, and all moments exist.


(c) For γ < 0, the right endpoint of the distribution is −1/γ, so it has a short tail, verifying 1 − G_γ(−γ^{−1} − x) ∼ (−γx)^{−1/γ} as x ↓ 0. We use G_γ(−(1 + x)/γ) and get, with α = −1/γ > 0,

Ψ_α(x) = exp(−(−x)^α),  x < 0,  and  Ψ_α(x) = 1,  x ≥ 0.

This class is sometimes called the reverse-Weibull class of distributions.

1.4 Domain of Attraction Condition

Recall that we defined the generalized inverse of a non-decreasing function f. We have the following lemma:

Lemma 1.2. Suppose f_n is a sequence of non-decreasing functions and g is a non-decreasing function. Suppose that for each x in some open interval (a, b) that is a continuity point of g,

lim_{n→∞} f_n(x) = g(x).

Then, for each x in the interval (g(a), g(b)) that is a continuity point of g←, we have

lim_{n→∞} f_n←(x) = g←(x).

Proof. Let x be a continuity point of g←. Fix ε > 0. We have to prove that for

Theorem 1.6. The following statements are equivalent, for all x such that 0 < G(x) < 1:

1. There exist a_n > 0, b_n ∈ R and a non-degenerate distribution function G such that

F^n(a_n x + b_n) →d G(x),  as n → ∞.


2. There exist a_n > 0, b_n ∈ R and a non-degenerate distribution function G such that, for each

5. There exist a(t) > 0, b(t) ∈ R and a non-degenerate distribution function G such that

F^t(a(t)x + b(t)) →d G(x),  as t → ∞.

Proof. Fix a continuity point x of G with 0 < G(x) < 1.

(1 ⇔ 2) Clearly F(a_n x + b_n) → 1 as n → ∞, and we use the expansion log(1 + ε) = ε + O(ε²) as ε → 0. By taking logarithms, as n → ∞,

F^n(a_n x + b_n) → G(x)
⇔ n log F(a_n x + b_n) → log G(x)
⇔ n log(1 − (1 − F(a_n x + b_n))) → log G(x)
⇔ n(1 − F(a_n x + b_n)) → −log G(x).   (1.24)

Similarly, we can show that 3 ⇔ 5.

(2 ⇔ 3) To show that 2 ⇒ 3, let a(t) = a_{[t]}, b(t) = b_{[t]} (with [t] the integer part of t). Then

lim_{t→∞} t(1 − F(a(t)x + b(t))) ≥ −log G(x).

Hence we have 2 ⇒ 3. We see that 3 ⇒ 2 is obvious.

(3 ⇔ 4) Firstly, we have to show 3 ⇒ 4.

Example 1.10 (Normal distribution). Let F be the standard normal distribution. We are going to show that, for all x > 0,

lim_{n→∞} n(1 − F(a_n x + b_n)) = e^{−x},   (1.26)

with

b_n := (2 log n − log log n − log(4π))^{1/2}   (1.27)

and a_n as defined in (1.28). Since

a'_n/a_n → 1  and  (b'_n − b_n)/a_n → 0,

we can replace b_n, a_n from (1.27) and (1.28) by, e.g.,

b'_n = (2 log n)^{1/2} − (log log n + log(4π))/(2(2 log n)^{1/2})

and

a'_n = (2 log n)^{−1/2}.
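A numerical check of (1.26) for the standard normal is straightforward; in the sketch below, the completion log(4π) of the constant in (1.27) and the choice a_n = 1/b_n for (1.28) are assumptions made for the illustration:

```python
import math

# Check (1.26): with the assumed constants, n*(1 - Phi(a_n x + b_n)) -> e^{-x}.
normal_tail = lambda u: 0.5 * math.erfc(u / math.sqrt(2.0))   # 1 - Phi(u)

x = 1.0
for n in (10**3, 10**5, 10**7, 10**9):
    b_n = math.sqrt(2.0 * math.log(n) - math.log(math.log(n)) - math.log(4.0 * math.pi))
    a_n = 1.0 / b_n          # assumed form of (1.28)
    val = n * normal_tail(a_n * x + b_n)
    print(f"n = {n:.0e}   n(1 - Phi(a_n x + b_n)) = {val:.4f}   e^-x = {math.exp(-x):.4f}")
```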

1.4.1 General Theory of Domains of Attraction

It is important to know which (if any) of the three types of limit law applies when the r.v.s {X_n} have a given d.f. F. Various necessary and sufficient conditions are known, involving the "tail behaviour" of 1 − F(x) as x increases, for each type of limit. We shall state these and prove their sufficiency, omitting the proofs of necessity.

Theorem 1.7. Necessary and sufficient conditions for the d.f. F of the r.v.'s of the i.i.d. sequence {X_n} to belong to each of the three types are:

Type I: there exists a strictly positive function g(t) such that

lim_{t↑x_F} (1 − F(t + xg(t)))/(1 − F(t)) = e^{−x}  for all real x.

Type II: x_F = ∞ and

lim_{t→∞} (1 − F(tx))/(1 − F(t)) = x^{−α},  α > 0,  for each x > 0.

Type III: x_F < ∞ and

lim_{h↓0} (1 − F(x_F − xh))/(1 − F(x_F − h)) = x^α,  α > 0,  for each x > 0.
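For instance, a Pareto tail 1 − F(x) = x^{−α}, x ≥ 1 (an illustrative choice, not taken from the thesis), satisfies the Type II criterion exactly, as the following sketch checks:

```python
# Type II criterion for a Pareto d.f.: 1 - F(x) = x^(-alpha) for x >= 1,
# so (1 - F(t*x)) / (1 - F(t)) = x^(-alpha) for all t >= 1, x > 0.
alpha = 1.5
F_bar = lambda x: x ** (-alpha) if x >= 1.0 else 1.0

for t in (10.0, 100.0, 1000.0):
    for x in (2.0, 5.0):
        ratio = F_bar(t * x) / F_bar(t)
        print(f"t = {t:6.0f}  x = {x:.0f}   ratio = {ratio:.6f}   x^-alpha = {x ** -alpha:.6f}")
```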

Proof. We assume first, in each case, the existence of a sequence {u_n} (which may be taken non-decreasing in n) such that n(1 − F(u_n)) → 1. The u_n constants will, of course, differ for the differing types. Clearly u_n → x_F and u_n < x_F for all sufficiently large n.

If F satisfies the Type II criterion we have, writing u_n for t, for each x > 0,

n(1 − F(u_n x)) ∼ n(1 − F(u_n)) x^{−α} → x^{−α},

so that Theorem 1.1 yields, for x > 0,

P(M_n ≤ u_n x) → exp(−x^{−α}).

Since u_n > 0 (when n is large, at least) and the right-hand side tends to zero as x ↓ 0, it also follows that P(M_n ≤ 0) → 0 and, for x < 0, that P(M_n ≤ u_n x) → 0, where a_n = u_n and b_n = 0, so that the Type II limit follows.

The Type III limit follows in a closely similar way by writing h_n = x_F − u_n (↓ 0), so that, for x > 0,

lim_{n→∞} n(1 − F{x_F − x(x_F − u_n)}) = x^α,

and hence (replacing x by −x), for x < 0,

lim_{n→∞} n(1 − F{x_F + x(x_F − u_n)}) = (−x)^α.

Using Theorem 1.1 again, this shows at once that the Type III limit applies with constants in (1.29) given by

a_n = x_F − u_n,  b_n = x_F.

The Type I limit also follows along the same lines since, when F satisfies that criterion, we have, for all x, writing t = u_n ↑ x_F (≤ ∞),

lim_{n→∞} n(1 − F{u_n + x g(u_n)}) = e^{−x}.
