

L1 bounds for some martingale central limit theorems

Le Van Dung^{a,1}, Ta Cong Son^{b,2}, and Nguyen Duy Tien^{b,1}

^a Faculty of Mathematics, Da Nang University of Education, 459 Ton Duc Thang, Da Nang, Viet Nam

^b Faculty of Mathematics, Hanoi University of Science, 334 Nguyen Trai, Hanoi, Viet Nam

(e-mail: lvdung@ud.edu.vn; congson82@hus.edu.vn; nduytien2006@gmail.com)

Received September 24, 2013; revised January 7, 2014

Abstract. The aim of this paper is to extend the results in [E. Bolthausen, Exact convergence rates in some martingale central limit theorems, Ann. Probab., 10(3):672–688, 1982] and [J.-C. Mourrat, On the rate of convergence in the martingale central limit theorem, Bernoulli, 19(2):633–645, 2013] to the $L^1$-distance between the distributions of normalized partial sums of martingale-difference sequences and the standard normal distribution.

MSC: 60F05, 60G42

Keywords: mean central limit theorems, rates of convergence, martingale

1 Introduction and statements of results

Let $X_1, X_2, \ldots, X_n$ be a sequence of real-valued random variables with mean zero and finite variance $\sigma^2$. Put $S := X_1 + X_2 + \cdots + X_n$. Denote by $F_n$ the distribution function of $S/(\sigma\sqrt{n})$, and let $\Phi$ be the standard normal distribution function. The classical central limit theorem states that if $X_1, X_2, \ldots, X_n$ are independent and identically distributed, then $F_n(x)$ converges to $\Phi(x)$ as $n \to \infty$ for all $x \in \mathbb{R}$. In 1954, Agnew [1] showed that the convergence also holds in $L^p$ for $p > 1/2$. The convergence in the case $p = 1$ is called the mean central limit theorem. The rate of convergence in the mean central limit theorem was studied by Esseen [7], who showed that
\[ \|F_n - \Phi\|_1 = O\big(n^{-1/2}\big) \quad \text{as } n \to \infty. \]
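Esseen's $n^{-1/2}$ rate can be observed numerically. The following sketch (not part of the paper; the $\pm 1$ Bernoulli summands, grid, and sample sizes are illustrative choices) estimates $\|F_n - \Phi\|_1$ by Monte Carlo:

```python
import bisect, math, random

def l1_distance_to_normal(samples, lo=-5.0, hi=5.0, step=0.01):
    """Grid estimate of ||F - Phi||_1, where F is the empirical CDF of `samples`."""
    xs = sorted(samples)
    n = len(xs)
    total, x = 0.0, lo
    while x < hi:
        F = bisect.bisect_right(xs, x) / n
        Phi = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
        total += abs(F - Phi) * step
        x += step
    return total

random.seed(0)

def estimate(n, reps=20000):
    # S / sqrt(n) for S a sum of n i.i.d. centered +-1 variables
    vals = []
    for _ in range(reps):
        k = bin(random.getrandbits(n)).count("1")   # number of +1 steps
        vals.append((2 * k - n) / math.sqrt(n))
    return l1_distance_to_normal(vals)

d16, d256 = estimate(16), estimate(256)
print(d16, d256)   # the distance shrinks as n grows
```

Up to Monte Carlo noise, the estimate for $n = 256$ should be noticeably smaller than for $n = 16$.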

Recently, Sunklodas [12, 13] extended this result to independent nonidentically distributed random variables and to $\varphi$-mixing random variables, using the Bentkus approach [2].

Let $X = (X_1, \ldots, X_n)$ be a square-integrable martingale-difference sequence of real-valued random variables with respect to the $\sigma$-fields $\mathcal{F}_j = \sigma(X_1, \ldots, X_{j-1})$, $j = 2, 3, \ldots, n+1$; $\mathcal{F}_1 = \{\emptyset, \Omega\}$. Let $\mathcal{M}_n$ denote the class of all such sequences of length $n$. If $X \in \mathcal{M}_n$, we write $\sigma_j^2 = E(X_j^2 \mid \mathcal{F}_{j-1})$, $\bar\sigma_j^2 = E(X_j^2)$, $S = S(X) = \sum_{j=1}^n X_j$, $s^2 = s^2(X) = \sum_{j=1}^n \bar\sigma_j^2$, $V^2 = V^2(X) = \sum_{j=1}^n \sigma_j^2 / s^2(X)$, and

^1 The research of the author has been partially supported by the Viet Nam National Foundation for Science and Technology Development (NAFOSTED), grant No. 101.03-2012.17.

^2 The research of the author has been partially supported by project TN-13-01.

0363-1672/14/5401-0048 © 2014 Springer Science+Business Media New York


$\|X\|_p = \max_{1 \le j \le n} \|X_j\|_p$ for $1 \le p \le \infty$. We denote by $N$ a standard normal random variable; the distribution function and the density function of $N$ are denoted by $\Phi(x)$ and $\varphi(x)$, respectively.

If $X \in \mathcal{M}_n$, $V^2(X) \to 1$ in probability, and some Lindeberg-type condition is satisfied, then
\[ \lim_{n\to\infty} P\left( \frac{S(X)}{s(X)} \le x \right) = \Phi(x) \quad \text{for all } x \in \mathbb{R}. \]
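For concreteness, a martingale-difference sequence with the above ingredients can be simulated. The construction below is our illustration, not an example from the paper: $X_j = \varepsilon_j c_j$ with i.i.d. signs $\varepsilon_j = \pm 1$ and an $\mathcal{F}_{j-1}$-measurable scale $c_j$. Because $X_{j-1}^2 = c_{j-1}^2$ regardless of the sign, the conditional variances here are nonrandom and $V^2(X) = 1$ a.s.:

```python
import math, random

random.seed(1)

def martingale_path(n):
    """X_j = eps_j * c_j with c_j^2 = 1 + 0.5 * X_{j-1}^2 (F_{j-1}-measurable),
    so E(X_j | F_{j-1}) = 0 and sigma_j^2 = E(X_j^2 | F_{j-1}) = c_j^2."""
    xs, sig2 = [], []
    prev = 0.0
    for _ in range(n):
        c2 = 1.0 + 0.5 * prev * prev
        x = random.choice((-1.0, 1.0)) * math.sqrt(c2)
        xs.append(x)
        sig2.append(c2)
        prev = x
    return xs, sig2

n = 1000
xs, sig2 = martingale_path(n)

# s^2 = sum of the unconditional variances, via E X_j^2 = 1 + 0.5 * E X_{j-1}^2
ex2, s2 = 0.0, 0.0
for _ in range(n):
    ex2 = 1.0 + 0.5 * ex2
    s2 += ex2
V2 = sum(sig2) / s2
print(V2, sum(xs) / math.sqrt(s2))   # V2 equals 1 up to rounding
```

The normalized sum $S/s$ is then approximately standard normal, in line with the martingale central limit theorem above.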

For bounds on the convergence rate in this central limit theorem, the following results were shown by Bolthausen [4].

Theorem 1 (See [4]). Let $0 < \alpha \le \beta < \infty$, $0 < \gamma < \infty$. There exists a constant $0 < C_{\alpha,\beta,\gamma} < \infty$ such that, for any $n \ge 2$ and any $X \in \mathcal{M}_n$ satisfying $\sigma_j^2 = \bar\sigma_j^2$ a.s., $\alpha \le \bar\sigma_j^2 \le \beta$ for $1 \le j \le n$, and $\|X\|_3 \le \gamma$,
\[ \sup_{x\in\mathbb{R}} \left| P\left( \frac{S}{s} \le x \right) - \Phi(x) \right| \le C_{\alpha,\beta,\gamma}\, n^{-1/4}. \]

Theorem 2 (See [4]). Let $\gamma \in (0, \infty)$. There exists a constant $0 < C_\gamma < \infty$, depending only on $\gamma$, such that, for any $n \ge 2$ and any $X \in \mathcal{M}_n$ satisfying $\|X\|_\infty \le \gamma$ and $V^2(X) = 1$ a.s.,
\[ \sup_{x\in\mathbb{R}} \left| P\left( \frac{S}{s} \le x \right) - \Phi(x) \right| \le C_\gamma\, \frac{n \log n}{s^3}. \]

Relaxing the condition that $V^2 = 1$ a.s., Bolthausen [4] also showed the following result.

Corollary 1 (See [4]). Let $\gamma \in (0, \infty)$. There exists a constant $C_\gamma > 0$ such that, for any $n \ge 2$ and any $X \in \mathcal{M}_n$ satisfying $\|X\|_\infty \le \gamma$,
\[ \sup_{x\in\mathbb{R}} \left| P\left( \frac{S}{s} \le x \right) - \Phi(x) \right| \le C_\gamma \left( \frac{n \log n}{s^3} + \min\left\{ \left\| V^2 - 1 \right\|_1^{1/3},\; \left\| V^2 - 1 \right\|_\infty^{1/2} \right\} \right). \]

Mourrat [11] generalized Corollary 1 and established the optimality of the result for any $p \in [1, \infty)$.

Theorem 3 (See [11]). Let $p \in [1, \infty)$ and $\gamma \in (0, \infty)$. There exists a constant $C_{p,\gamma} > 0$ such that, for any $n \ge 2$ and any $X \in \mathcal{M}_n$ satisfying $\|X\|_\infty \le \gamma$,
\[ \sup_{x\in\mathbb{R}} \left| P\left( \frac{S}{s} \le x \right) - \Phi(x) \right| \le C_{p,\gamma} \left( \frac{n \log n}{s^3} + \left( \left\| V^2 - 1 \right\|_p^p + s^{-2p} \right)^{1/(2p+1)} \right). \]

The aim of this article is to extend these results to $L^1$-bounds in the mean central limit theorem for martingale-difference sequences.

Theorem 4. Let $0 < \alpha \le \beta < \infty$, $0 < \gamma < \infty$. If $\|X\|_3 \le \gamma$, $\sigma_j^2 = \bar\sigma_j^2$ a.s., and $\alpha \le \bar\sigma_j^2 \le \beta$ for $1 \le j \le n$, then there exists a constant $C = C(\alpha, \beta, \gamma) \in (0, \infty)$ such that
\[ \|F_{S/s} - \Phi\|_1 \le C n^{-1/4}. \]

Theorem 5. Let $0 < \gamma < \infty$. If $\|X\|_\infty \le \gamma$ and $V^2(X) = 1$ a.s., then there exists a constant $0 < C < \infty$ such that
\[ \|F_{S/s} - \Phi\|_1 \le C\, \frac{\gamma^3 n \log n}{s^3}. \]
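Theorems 4 and 5 can be probed numerically. The sketch below (an illustrative bounded martingale-difference model of our choosing, not an example from the paper) estimates $\|F_{S/s} - \Phi\|_1$ by Monte Carlo and checks that it decreases with $n$; here the conditional variances are deterministic, so the assumptions of Theorem 4 hold with $\|X\|_\infty \le \sqrt{2}$:

```python
import bisect, math, random

def l1_distance_to_normal(samples, lo=-5.0, hi=5.0, step=0.01):
    """Grid estimate of ||F - Phi||_1 for the empirical CDF of `samples`."""
    xs = sorted(samples)
    n = len(xs)
    total, x = 0.0, lo
    while x < hi:
        F = bisect.bisect_right(xs, x) / n
        Phi = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
        total += abs(F - Phi) * step
        x += step
    return total

random.seed(3)

def normalized_sum(n):
    # martingale differences X_j = eps_j * sqrt(1 + 0.5 * X_{j-1}^2);
    # the conditional variances are deterministic, so sigma_j^2 = E X_j^2 here
    s, prev, s2, ex2 = 0.0, 0.0, 0.0, 0.0
    for _ in range(n):
        c2 = 1.0 + 0.5 * prev * prev
        x = random.choice((-1.0, 1.0)) * math.sqrt(c2)
        s += x
        prev = x
        ex2 = 1.0 + 0.5 * ex2
        s2 += ex2
    return s / math.sqrt(s2)

d16 = l1_distance_to_normal([normalized_sum(16) for _ in range(4000)])
d128 = l1_distance_to_normal([normalized_sum(128) for _ in range(4000)])
print(d16, d128)   # the L1 distance decreases as n grows
```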

We have the following corollary, similar to Corollary 1.

Corollary 2. Let $0 < \gamma < \infty$ and $p > 1/2$. If $\|X\|_\infty \le \gamma$, then there exists a positive constant $C = C(p)$, depending only on $p$, such that
\[ \|F_{S/s} - \Phi\|_1 \le C \left( \frac{\gamma^3 n \log n}{s^3} + \min\left\{ \left\| V^2 - 1 \right\|_\infty^{1/2},\; \left( E\left| V^2 - 1 \right|^p \right)^{1/(2p)} \right\} \right). \]

The following corollary is an $L^1$-version of Theorem 3.

Corollary 3. Let $0 < \gamma < \infty$ and $p > 1/2$. If $\|X\|_\infty \le \gamma$, then there exists a positive constant $C = C(p)$, depending only on $p$, such that
\[ \|F_{S/s} - \Phi\|_1 \le C \left( \frac{\gamma^3 n \log n}{s^3} + \left( E\left| V^2 - 1 \right|^p + s^{-2p} \right)^{1/(2p)} \right). \]

Note that the term $\|V^2 - 1\|_1^{1/3}$ appearing in Corollary 1 is replaced by the smaller term $(E|V^2 - 1|^p)^{1/(2p)}$ in Corollary 2, and the term $(\|V^2 - 1\|_p^p + s^{-2p})^{1/(2p+1)}$ appearing in Theorem 3 is replaced by the smaller term $(E|V^2 - 1|^p + s^{-2p})^{1/(2p)}$ in Corollary 3.

2 Auxiliary lemmas

For two random variables $X$ and $Y$ with distribution functions $F_X$ and $G_Y$, respectively, the Kantorovich–Rubinstein theorem (see, e.g., [6, Thm. 11.8.2]) gives
\[ \|F_X - G_Y\|_1 = \int_{-\infty}^{\infty} \left| F_X(x) - G_Y(x) \right| dx = \sup_{f \in \Lambda_1} \left| Ef(X) - Ef(Y) \right|, \]
where $\Lambda_1$ is the set of 1-Lipschitz functions from $\mathbb{R}$ to $\mathbb{R}$. For more details, we refer the reader to [8] and [5].
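This identity can be checked numerically: for empirical distributions of equal size, the Wasserstein-1 distance on the line equals the mean absolute difference of matched order statistics, so it should agree with the integral of $|F_X - G_Y|$. A sketch with illustrative sample distributions:

```python
import bisect, random

random.seed(2)
n = 500
xs = sorted(random.gauss(0.0, 1.0) for _ in range(n))
ys = sorted(random.gauss(0.5, 1.2) for _ in range(n))

# W1 between two equal-size empirical distributions:
# matching order statistics attains the Kantorovich infimum on the line.
w1_sorted = sum(abs(a - b) for a, b in zip(xs, ys)) / n

# Direct grid integral of |F_X - G_Y|.
lo, hi, step = -6.0, 6.0, 0.002
t, integral = lo, 0.0
while t < hi:
    Fx = bisect.bisect_right(xs, t) / n
    Gy = bisect.bisect_right(ys, t) / n
    integral += abs(Fx - Gy) * step
    t += step
print(w1_sorted, integral)   # the two values agree up to discretization
```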

For functions $f, g : \mathbb{R} \to \mathbb{R}$, their convolution $f * g$ is defined by
\[ f * g(x) = \int_{-\infty}^{\infty} f(x - y) g(y)\, dy. \]
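Lemma 1 below (Young's convolution inequality) can be checked on a grid; the discrete analogue of the inequality holds exactly, so the sketch below (the two functions are arbitrary illustrative choices) must satisfy the bound with $p = 1$, $q = r = 2$:

```python
import math

# Grid on [-8, 8]; both functions decay fast enough that truncation is mild.
dx = 0.05
N = 160
grid = [i * dx for i in range(-N, N + 1)]
f = [math.exp(-abs(x)) for x in grid]              # f in L^1(R)
g = [1.0 if abs(x) <= 1.0 else 0.0 for x in grid]  # g = indicator of [-1, 1], in L^2(R)

def lp_norm(vals, p):
    return (sum(abs(v) ** p for v in vals) * dx) ** (1.0 / p)

# Discrete convolution (f*g)(x_i) ~ dx * sum_j f(x_j) g(x_i - x_j)
conv = []
for i in range(-N, N + 1):
    s = 0.0
    for j in range(-N, N + 1):
        k = i - j
        if -N <= k <= N:
            s += f[j + N] * g[k + N]
    conv.append(s * dx)

# Young's inequality with p = 1, q = 2, r = 2 (1/1 + 1/2 = 1 + 1/2)
lhs = lp_norm(conv, 2)
rhs = lp_norm(f, 1) * lp_norm(g, 2)
print(lhs, rhs)   # lhs <= rhs
```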

We have the following lemmas.

Lemma 1 (See [3, p. 205]). If $1 \le p, q, r \le \infty$, $1/p + 1/q = 1 + 1/r$, $f \in L^p(\mathbb{R})$, and $g \in L^q(\mathbb{R})$, then
\[ \|f * g\|_r \le \|f\|_p \|g\|_q. \]

Lemma 2. Let $X$ and $\eta$ be real random variables. Then, for any $p > 1/2$, we have
\[ \|F_X - \Phi\|_1 \le \|F_{X+\eta} - \Phi\|_1 + 2(2p+1) \left\| E\left( \eta^{2p} \mid X \right) \right\|_\infty^{1/(2p)}. \quad (2.1) \]

Proof. The conclusion is trivial in the case $\|E(\eta^{2p} \mid X)\|_\infty = \infty$. So, we assume that $\|E(\eta^{2p} \mid X)\|_\infty = \gamma < \infty$. For any $a > 0$, we have
\[ \|F_X - \Phi\|_1 = \int_{-\infty}^{\infty} \left| P(X \le t - a) - \Phi(t - a) \right| dt \]
\[ \le \int_{-\infty}^{\infty} \left| P(X \le t - a) - P(X + \eta \le t) \right| dt + \|F_{X+\eta} - \Phi\|_1 + \int_{-\infty}^{\infty} \left| \Phi(t) - \Phi(t - a) \right| dt. \quad (2.2) \]
First, we consider the first term on the right-hand side of (2.2). We have
\[ P(X + \eta \le t) = E\, P(\eta \le t - X \mid X) \ge E\, I(X \le t - a)\, P(\eta \le t - X \mid X) = P(X \le t - a) - E\, I(X \le t - a)\, P(\eta > t - X \mid X), \]
where, by the conditional Markov inequality (on $\{X \le t - a\}$ we have $t - X \ge a > 0$),
\[ E\, I(X \le t - a)\, P(\eta > t - X \mid X) \le \gamma E (t - X)^{-2p} I(X \le t - a). \]
Therefore,
\[ \left( P(X \le t - a) - P(X + \eta \le t) \right)^+ \le \gamma E (t - X)^{-2p} I(X \le t - a), \]
which implies
\[ \int_{-\infty}^{\infty} \left( P(X \le t - a) - P(X + \eta \le t) \right)^+ dt \le \int_{-\infty}^{\infty} \gamma E (t - X)^{-2p} I(X \le t - a)\, dt \]
\[ = \gamma E \int_{-\infty}^{\infty} (t - X)^{-2p} I(X \le t - a)\, dt = \gamma E \int_{X+a}^{\infty} (t - X)^{-2p}\, dt = \frac{\gamma}{(2p-1)\, a^{2p-1}}. \quad (2.3) \]

On the other hand,
\[ P(X + \eta \le t) = E\, I(X \le t - a)\, P(\eta \le t - X \mid X) + E\, I(t - a < X \le t + a)\, P(\eta \le t - X \mid X) + E\, I(X > t + a)\, P(\eta \le t - X \mid X) \]
\[ \le P(X \le t - a) + P(t - a < X \le t + a) + \gamma E\, |t - X|^{-2p} I(X > t + a), \]
which implies that
\[ P(X + \eta \le t) - P(X \le t - a) \le P(t - a < X \le t + a) + \gamma E\, |t - X|^{-2p} I(X > t + a). \]
Hence,
\[ \int_{-\infty}^{\infty} \left( P(X \le t - a) - P(X + \eta \le t) \right)^- dt \le \int_{-\infty}^{\infty} P(t - a < X \le t + a)\, dt + \int_{-\infty}^{\infty} \gamma E\, |t - X|^{-2p} I(X > t + a)\, dt \]
\[ = 2a + \gamma E \int_{-\infty}^{X-a} |t - X|^{-2p}\, dt = 2a + \frac{\gamma}{(2p-1)\, a^{2p-1}}. \quad (2.4) \]

Combining (2.3) and (2.4) yields
\[ \int_{-\infty}^{\infty} \left| P(X \le t - a) - P(X + \eta \le t) \right| dt = \int_{-\infty}^{\infty} \left( P(X \le t - a) - P(X + \eta \le t) \right)^- dt + \int_{-\infty}^{\infty} \left( P(X \le t - a) - P(X + \eta \le t) \right)^+ dt \]
\[ \le 2a + \frac{2\gamma}{(2p-1)\, a^{2p-1}}. \quad (2.5) \]

Next, we consider the second term on the right-hand side of (2.2) We have that

−∞



Φ(t − a) − Φ(t)dt =

0

−∞



Φ(t − a) − Φ(t)dt +

0



Φ(t − a) − Φ(t)dt



0

−∞

aϕ(t) dt +

0

Combining (2.2), (2.5), and (2.6) yields
\[ \|F_X - \Phi\|_1 \le \|F_{X+\eta} - \Phi\|_1 + 2 \left( \frac{\gamma}{(2p-1)\, a^{2p-1}} + 2a \right). \]
Taking $a = \gamma^{1/(2p)}$ gives conclusion (2.1) of Lemma 2. $\square$
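Lemma 2 can be illustrated with exactly computable Gaussian CDFs: take $X \sim N(\mu, 1)$, so that $\|F_X - \Phi\|_1 = \mu$, and an independent $\eta \sim N(0, \kappa^2)$, so that $\|E(\eta^{2p} \mid X)\|_\infty = E\eta^{2p}$. The sketch below uses $p = 1$ and illustrative values of $\mu$ and $\kappa$; the constant $2(2p+1)$ is taken from (2.1) as stated:

```python
import math

def Phi(x, mu=0.0, sd=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def l1_cdf_dist(mu1, sd1, mu2, sd2, lo=-10.0, hi=10.0, step=0.001):
    # grid approximation of the integral of |F - G|
    t, total = lo, 0.0
    while t < hi:
        total += abs(Phi(t, mu1, sd1) - Phi(t, mu2, sd2)) * step
        t += step
    return total

mu, kappa, p = 0.3, 0.5, 1
d_X = l1_cdf_dist(mu, 1.0, 0.0, 1.0)                   # ||F_X - Phi||_1 = mu exactly
d_X_eta = l1_cdf_dist(mu, math.sqrt(1.0 + kappa ** 2), 0.0, 1.0)  # X + eta ~ N(mu, 1 + kappa^2)
smoothing = 2 * (2 * p + 1) * (kappa ** (2 * p)) ** (1.0 / (2 * p))  # = 6 * kappa here
print(d_X, d_X_eta + smoothing)   # the left side sits well below the bound
```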

Lemma 3. Let $\psi$ be a function $\mathbb{R} \to \mathbb{R}$ with $\|\psi\|_\infty < \infty$ and $\|\psi'\|_\infty < \infty$. If $X$ is a random variable, then
\[ \left| E \psi(X) \right| \le \|\psi'\|_\infty\, \|F_X - \Phi\|_1 + \|\psi\|_\infty. \]
Proof. It is clear that
\[ \left| E\psi(X) - E\psi(N) \right| = \left| \int_{-\infty}^{\infty} \psi(x)\, dF_X(x) - \int_{-\infty}^{\infty} \psi(x)\, d\Phi(x) \right| = \left| \int_{-\infty}^{\infty} \left( F_X(x) - \Phi(x) \right) \psi'(x)\, dx \right| \]
\[ \le \|\psi'\|_\infty \int_{-\infty}^{\infty} \left| F_X(x) - \Phi(x) \right| dx = \|\psi'\|_\infty\, \|F_X - \Phi\|_1, \]
and $|E\psi(N)| \le \|\psi\|_\infty$. $\square$

3 Proof of Theorem 4

Let $Z_1, Z_2, \ldots, Z_n, \eta$ be independent normally distributed random variables with mean $0$, $E(Z_j^2) = \bar\sigma_j^2$, and $E(\eta^2) = n^{1/2}$. Let $U_m = \sum_{j=1}^{m-1} X_j / s$ and $Z = \sum_{j=1}^{n} Z_j$. According to Lemma 2 with $p = 1$, we have
\[ \|F_{S/s} - \Phi\|_1 \le \|F_{(S+\eta)/s} - \Phi\|_1 + \frac{C}{s} \left( E\eta^2 \right)^{1/2} \le \|F_{(S+\eta)/s} - F_{(Z+\eta)/s}\|_1 + \frac{C_1}{s} \left( E\eta^2 \right)^{1/2}. \quad (3.1) \]
On the other hand, by a proof similar to that of Theorem 1 in [4], we get
\[ \left| P\left( \frac{S+\eta}{s} \le t \right) - P\left( \frac{Z+\eta}{s} \le t \right) \right| \le \sum_{m=1}^{n} E \left[ \frac{|X_m|^3}{\lambda_m^3 s^3} \left| \varphi''\left( \frac{t - U_m}{\lambda_m} - \theta_m \frac{X_m}{\lambda_m s} \right) \right| \right] + \sum_{m=1}^{n} E \left[ \frac{|Z_m|^3}{\lambda_m^3 s^3} \left| \varphi''\left( \frac{t - U_m}{\lambda_m} - \theta'_m \frac{Z_m}{\lambda_m s} \right) \right| \right], \]
where $0 \le \theta_m, \theta'_m \le 1$ and $\lambda_m^2 = \big( \sum_{j=m+1}^{n} \bar\sigma_j^2 + n^{1/2} \big) / s^2$.

Applying the Fubini theorem and noting that $\int_{-\infty}^{\infty} |\varphi''(t)|\, dt \le 2$, we have
\[ \int_{-\infty}^{\infty} \left| P\left( \frac{S+\eta}{s} \le t \right) - P\left( \frac{Z+\eta}{s} \le t \right) \right| dt \]
\[ \le \sum_{m=1}^{n} \int_{-\infty}^{\infty} E \left[ \frac{|X_m|^3}{\lambda_m^3 s^3} \left| \varphi''\left( \frac{t - U_m}{\lambda_m} - \theta_m \frac{X_m}{\lambda_m s} \right) \right| \right] dt + \sum_{m=1}^{n} \int_{-\infty}^{\infty} E \left[ \frac{|Z_m|^3}{\lambda_m^3 s^3} \left| \varphi''\left( \frac{t - U_m}{\lambda_m} - \theta'_m \frac{Z_m}{\lambda_m s} \right) \right| \right] dt \]
\[ = \sum_{m=1}^{n} E \left[ \frac{|X_m|^3}{\lambda_m^3 s^3} \int_{-\infty}^{\infty} \left| \varphi''\left( \frac{t - U_m}{\lambda_m} - \theta_m \frac{X_m}{\lambda_m s} \right) \right| dt \right] + \sum_{m=1}^{n} E \left[ \frac{|Z_m|^3}{\lambda_m^3 s^3} \int_{-\infty}^{\infty} \left| \varphi''\left( \frac{t - U_m}{\lambda_m} - \theta'_m \frac{Z_m}{\lambda_m s} \right) \right| dt \right] \]
\[ \le \sum_{m=1}^{n} 2 E \left[ \frac{|X_m|^3}{\lambda_m^2 s^3} \right] + \sum_{m=1}^{n} 2 E \left[ \frac{|Z_m|^3}{\lambda_m^2 s^3} \right]. \quad (3.2) \]
Combining (3.1) and (3.2) yields
\[ \|F_{S/s} - \Phi\|_1 \le C n^{-1/4}. \]
The theorem is proved. $\square$


4 Proof of Theorem 5

For $n \in \mathbb{N}$, $s > 0$, and $\gamma > 0$, let
\[ G(s, \gamma) = \left\{ X \in \mathcal{M}_n : s(X) = s,\; \|X\|_\infty \le \gamma,\; V^2(X) = 1 \text{ a.s.} \right\} \]
and
\[ \Delta(n, s, \gamma) = \sup\left\{ \sup_{f \in \Lambda_1} \left| Ef\left( \frac{S(X)}{s} \right) - Ef(N) \right| : X \in G(s, \gamma) \right\}. \]
It is clear that $\Delta(n, s, \gamma) \le \Delta(n-1, s, 2\gamma)$.

For a fixed element $X \in G(s, \gamma)$, where we assume that $\gamma \ge 1$, let $Z_1, Z_2, \ldots, Z_n$ be i.i.d. standard normal variables, and let $\eta$ be a centered normal random variable with variance $\kappa^2$ such that $\eta$ is independent of everything else. The variance $\kappa^2$ will be specified later, but in any case, $\kappa^2 > 2\gamma^2$.

Let
\[ U_m = \frac{1}{s} \sum_{j=1}^{m-1} X_j, \qquad W_m = \frac{1}{s} \left( \sum_{j=m+1}^{n} \sigma_j Z_j + \eta \right), \qquad \lambda_m^2 = \frac{1}{s^2} \left( \sum_{j=m+1}^{n} \sigma_j^2 + \kappa^2 \right). \]
Conditioned on $\sigma(\mathcal{F}_{n+1}, Z_m)$, $W_m$ is normally distributed with mean $0$ and variance $\lambda_m^2$, and $Z = \sum_{j=1}^{n} \sigma_j Z_j / s$ is a standard normal variable. Hence, by Lemma 2 we have
\[ \|F_{S/s} - \Phi\|_1 \le \left\| F_{(S+\eta)/s} - F_{(\sum_{j=1}^n \sigma_j Z_j + \eta)/s} \right\|_1 + \frac{C\kappa}{s}. \quad (4.1) \]

Now we consider the first term on the right-hand side of (4.1). Let $\varphi_{\lambda_m}(x)$ be the density function of $W_m$. For any 1-Lipschitz $f$, according to an idea that goes back to Lindeberg [10], we write
\[ E f\left( \frac{S+\eta}{s} \right) - E f\left( \frac{\sum_{j=1}^n \sigma_j Z_j + \eta}{s} \right) = \sum_{m=1}^{n} \left[ E f\left( W_m + U_m + \frac{X_m}{s} \right) - E f\left( W_m + U_m + \frac{\sigma_m Z_m}{s} \right) \right] \]
\[ = \sum_{m=1}^{n} \left[ E\, f * \varphi_{\lambda_m}\left( U_m + \frac{X_m}{s} \right) - E\, f * \varphi_{\lambda_m}\left( U_m + \frac{\sigma_m Z_m}{s} \right) \right] \]
\[ = \sum_{m=1}^{n} \left[ E\, g_m\left( U_m + \frac{X_m}{s} \right) - E\, g_m\left( U_m + \frac{\sigma_m Z_m}{s} \right) \right] \qquad (\text{where } g_m = f * \varphi_{\lambda_m}) \]
\[ = \sum_{m=1}^{n} E \left[ \frac{\sigma_m Z_m - X_m}{s}\, g'_m(U_m) + \frac{\sigma_m^2 Z_m^2 - X_m^2}{2 s^2}\, g''_m(U_m) + \frac{(\sigma_m Z_m)^3}{6 s^3}\, g'''_m\left( U_m + \theta_m \frac{\sigma_m Z_m}{s} \right) - \frac{X_m^3}{6 s^3}\, g'''_m\left( U_m + \theta'_m \frac{X_m}{s} \right) \right]. \]
Since $U_m$ and $\lambda_m$ are $\bar{\mathcal{F}}_{m-1}$-measurable, where $\bar{\mathcal{F}}_{m-1}$ is the completion of $\mathcal{F}_{m-1}$, from $E(X_m \mid \mathcal{F}_{m-1}) = E(\sigma_m Z_m \mid \mathcal{F}_{m-1}) = 0$ a.s. and $E(\sigma_m^2 Z_m^2 \mid \mathcal{F}_{m-1}) = E(X_m^2 \mid \mathcal{F}_{m-1}) = \sigma_m^2$ a.s. it follows that the first two sums in the above expression vanish. Moreover, since $g'''_m(x) = f' * \varphi''_{\lambda_m}(x)$, we get
\[ \left| E f\left( \frac{S+\eta}{s} \right) - E f\left( \frac{\sum_{j=1}^n \sigma_j Z_j + \eta}{s} \right) \right| \le \frac{1}{6 s^3} \sum_{m=1}^{n} E \left[ |X_m|^3 \left| f' * \varphi''_{\lambda_m}\left( U_m + \theta'_m \frac{X_m}{s} \right) \right| \right] + \frac{1}{6 s^3} \sum_{m=1}^{n} E \left[ |\sigma_m Z_m|^3 \left| f' * \varphi''_{\lambda_m}\left( U_m + \theta_m \frac{\sigma_m Z_m}{s} \right) \right| \right] =: \mathrm{I} + \mathrm{II}. \quad (4.2) \]

We define the sequence of stopping times $\tau_j$ ($1 \le j \le n$) by
\[ \tau_0 = 0, \qquad \tau_j = \inf\left\{ k : \sum_{i=1}^{k} \sigma_i^2 \ge \frac{j s^2}{n} \right\} \text{ for } 1 \le j \le n-1, \qquad \tau_n = n. \]
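Reading the definition as $\tau_j = \inf\{k : \sum_{i \le k} \sigma_i^2 \ge j s^2/n\}$ (the $s^2/n$ threshold is an assumption of this sketch), the stopping times can be computed by a single scan over the conditional variances; the toy values below are illustrative:

```python
def stopping_times(sig2):
    """tau_0 = 0, tau_j = first k with sum_{i<=k} sigma_i^2 >= j*s^2/n, tau_n = n."""
    n = len(sig2)
    s2 = sum(sig2)
    taus = [0]
    csum, k = 0.0, 0
    for j in range(1, n):
        # thresholds are increasing in j, so the scan can continue where it left off
        while csum < j * s2 / n and k < n:
            csum += sig2[k]
            k += 1
        taus.append(k)
    taus.append(n)
    return taus

sig2 = [1.0, 3.0, 0.5, 0.5, 2.0, 1.0, 1.5, 0.5]   # toy sigma_j^2 values, s^2 = 10
taus = stopping_times(sig2)
print(taus)   # -> [0, 2, 2, 2, 4, 5, 6, 7, 8]
```

Each block $(\tau_{j-1}, \tau_j]$ then carries roughly $s^2/n$ of conditional variance, overshooting by at most $\gamma^2$.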

Then
\[ \sum_{m=1}^{n} E \left[ |X_m|^3 \left| f' * \varphi''_{\lambda_m}\left( U_m + \theta'_m \frac{X_m}{s} \right) \right| \right] = \sum_{j=1}^{n} E \left[ \sum_{m=\tau_{j-1}+1}^{\tau_j} |X_m|^3 \left| f' * \varphi''_{\lambda_m}\left( U_m + \theta'_m \frac{X_m}{s} \right) \right| \right]. \]
If $\tau_{j-1} < m \le \tau_j$, then
\[ \lambda_m^2 \ge \frac{1}{s^2} \left( \sum_{i=\tau_j+1}^{n} \sigma_i^2 + \kappa^2 \right) \ge \frac{s^2 - j s^2/n - \gamma^2 + \kappa^2}{s^2} =: \underline{\lambda}_j^2, \]
\[ \lambda_m^2 \le \frac{1}{s^2} \left( \sum_{i=\tau_{j-1}+1}^{n} \sigma_i^2 + \kappa^2 \right) \le \frac{s^2 - (j-1) s^2/n + \kappa^2}{s^2} =: \bar{\lambda}_j^2. \]

We denote $R_m = \sum_{i=\tau_{j-1}+1}^{m-1} X_i$ and $A_{mt} = \{ |R_m| \le |U_{\tau_{j-1}+1} - t|/2 \}$ for $t \in \mathbb{R}$. Let the function $\psi : \mathbb{R} \to \mathbb{R}$ be defined by
\[ \psi(x) = \sup\left\{ \left| \varphi''(y) \right| : |y| \ge \frac{|x|}{2} - 1 \right\}. \]
We conclude that, for every $t$,
\[ \left| \varphi''\left( \frac{U_m - t}{\lambda_m} + \theta'_m \frac{X_m}{\lambda_m s} \right) \right| \le \psi\left( \frac{U_{\tau_{j-1}+1} - t}{\bar{\lambda}_j} \right) \]
holds on $A_{mt} \cap \{ \tau_{j-1} < m \le \tau_j \}$. Then,

\[ E \sum_{m=\tau_{j-1}+1}^{\tau_j} |X_m|^3 \left| f' * \varphi''_{\lambda_m}\left( U_m + \theta'_m \frac{X_m}{s} \right) \right| \le \gamma E \sum_{m=\tau_{j-1}+1}^{\tau_j} X_m^2 \left| f' * \varphi''_{\lambda_m}\left( U_m + \theta'_m \frac{X_m}{s} \right) \right| \]
\[ \le \gamma E \sum_{m=\tau_{j-1}+1}^{\tau_j} X_m^2 \left| \int_{-\infty}^{\infty} \varphi''_{\lambda_m}\left( U_m + \theta'_m \frac{X_m}{s} - t \right) f'(t)\, dt \right| \]
\[ \le \gamma E \sum_{m=\tau_{j-1}+1}^{\tau_j} X_m^2 \int_{-\infty}^{\infty} \lambda_m^{-3} \left| \varphi''\left( \frac{U_m - t}{\lambda_m} + \theta'_m \frac{X_m}{\lambda_m s} \right) f'(t) \right| I_{A_{mt}}\, dt + \gamma E \sum_{m=\tau_{j-1}+1}^{\tau_j} X_m^2 \int_{-\infty}^{\infty} \lambda_m^{-3} \left| \varphi''\left( \frac{U_m - t}{\lambda_m} + \theta'_m \frac{X_m}{\lambda_m s} \right) f'(t) \right| I_{A^c_{mt}}\, dt \]
\[ \le \gamma \underline{\lambda}_j^{-3} E \sum_{m=\tau_{j-1}+1}^{\tau_j} X_m^2 \int_{-\infty}^{\infty} \psi\left( \frac{U_{\tau_{j-1}+1} - t}{\bar{\lambda}_j} \right) \left| f'(t) \right| dt + \gamma \underline{\lambda}_j^{-3} E \sum_{m=\tau_{j-1}+1}^{\tau_j} X_m^2 \int_{-\infty}^{\infty} I_{A^c_{mt}}\, dt =: \gamma \underline{\lambda}_j^{-3} \left( M_j + N_j \right). \quad (4.3) \]

We first consider $M_j$. Since $U_{\tau_{j-1}+1}$ is $\mathcal{F}_{\tau_{j-1}}$-measurable, we obtain
\[ M_j = \int_{-\infty}^{\infty} E \left[ \sum_{m=\tau_{j-1}+1}^{\tau_j} X_m^2\, \psi\left( \frac{U_{\tau_{j-1}+1} - t}{\bar{\lambda}_j} \right) \right] \left| f'(t) \right| dt = \int_{-\infty}^{\infty} E \left[ \psi\left( \frac{U_{\tau_{j-1}+1} - t}{\bar{\lambda}_j} \right) \sum_{m=\tau_{j-1}+1}^{\tau_j} E\left( X_m^2 \mid \mathcal{F}_{\tau_{j-1}} \right) \right] \left| f'(t) \right| dt \]
\[ \le 2\gamma^2 \int_{-\infty}^{\infty} E\, \psi\left( \frac{U_{\tau_{j-1}+1} - t}{\bar{\lambda}_j} \right) \left| f'(t) \right| dt = 2\gamma^2\, E g_j(U_{\tau_{j-1}+1}), \]
where
\[ g_j(x) = \int_{-\infty}^{\infty} \psi\left( \frac{x - t}{\bar{\lambda}_j} \right) \left| f'(t) \right| dt. \]
Now, since
\[ E\left( \sum_{i=\tau_{j-1}+1}^{n} X_i^2 \;\middle|\; \mathcal{F}_{\tau_{j-1}} \right) = E\left( \sum_{i=\tau_{j-1}+1}^{n} \sigma_i^2 \;\middle|\; \mathcal{F}_{\tau_{j-1}} \right) \le s^2 \left( 1 - \frac{j-1}{n} \right) \quad \text{a.s.}, \]
from Lemma 3 we obtain
\[ E g_j(U_{\tau_{j-1}+1}) \le C \left( \|F_{S/s} - \Phi\|_1 + \sqrt{1 - \frac{j-1}{n}} + \bar{\lambda}_j \right). \]
Hence,
\[ M_j \le C\gamma^2 \left( \|F_{S/s} - \Phi\|_1 + \sqrt{1 - \frac{j-1}{n}} + \bar{\lambda}_j \right). \quad (4.4) \]

Next, we consider $N_j$. Let
\[ B_{jt} = \left\{ \max_{\tau_{j-1} < k \le \tau_j} \left| \sum_{i=\tau_{j-1}+1}^{k} X_i \right| > \frac{|U_{\tau_{j-1}+1} - t|}{2} \right\}. \]
Since $A_{mt}$ is $(\mathcal{F}_{m-1} \vee \mathcal{F}_{\tau_{j-1}})$-measurable, we have
\[ N_j = \int_{-\infty}^{\infty} E \left[ \sum_{m=\tau_{j-1}+1}^{\tau_j} \sigma_m^2\, I_{A^c_{mt}} \right] dt \le 2\gamma^2 \int_{-\infty}^{\infty} P(B_{jt})\, dt \]
\[ \le 2\gamma^2 \int_{-\infty}^{\infty} E \left[ \min\left\{ 1, \left( \frac{|U_{\tau_{j-1}+1} - t|}{2\bar{\lambda}_j} \right)^{-2} E\left( \max_{\tau_{j-1} < k \le \tau_j} \left| \sum_{i=\tau_{j-1}+1}^{k} X_i \right|^2 \;\middle|\; \mathcal{F}_{\tau_{j-1}} \right) \right\} \right] dt \]
\[ \le 2\gamma^2 \int_{-\infty}^{\infty} E \left[ \min\left\{ 1, \left( \frac{|U_{\tau_{j-1}+1} - t|}{2\bar{\lambda}_j} \right)^{-2} E\left( \sum_{i=\tau_{j-1}+1}^{\tau_j} X_i^2 \;\middle|\; \mathcal{F}_{\tau_{j-1}} \right) \right\} \right] dt \]
by Doob's maximal inequality, so that
\[ N_j \le C\gamma^2 \int_{-\infty}^{\infty} E \min\left\{ 1, \left( \frac{|U_{\tau_{j-1}+1} - t|}{2\bar{\lambda}_j} \right)^{-2} \right\} dt = C\gamma^2\, E h_j(U_{\tau_{j-1}+1}), \]
where
\[ h_j(x) = \int_{-\infty}^{\infty} \min\left\{ 1, \left( \frac{x - t}{2\bar{\lambda}_j} \right)^{-2} \right\} dt. \]
By Lemma 3 we obtain
\[ N_j \le C\gamma^2 \left( \|F_{S/s} - \Phi\|_1 + \sqrt{1 - \frac{j-1}{n}} + \bar{\lambda}_j \right). \quad (4.5) \]

Combining (4.3), (4.4), and (4.5) yields
\[ \mathrm{I} \le C\gamma^3 n \left( \kappa^2 - 2\gamma^2 \right)^{-1/2} \Delta(n, s, \gamma) + C\, \frac{\gamma^3 n \log n}{s^3}. \]

Next, we need to derive a bound for $\mathrm{II}$ on the right-hand side of (4.2). For $t \in \mathbb{R}$, put
\[ \tilde{A}_{mt} = \left\{ |R_m| \le \frac{|U_{\tau_{j-1}+1} - t|}{4} \right\}, \qquad B_{mt} = \left\{ \sigma_m |Z_m| \le \frac{|U_{\tau_{j-1}+1} - t|}{8} \right\}. \]

Then
\[ E \sum_{m=\tau_{j-1}+1}^{\tau_j} \sigma_m^3 |Z_m|^3 \left| f' * \varphi''_{\lambda_m}\left( U_m + \theta_m \frac{\sigma_m Z_m}{s} \right) \right| \le E \sum_{m=\tau_{j-1}+1}^{\tau_j} \sigma_m^3 |Z_m|^3 \left| \int_{-\infty}^{\infty} \varphi''_{\lambda_m}\left( U_m + \theta_m \frac{\sigma_m Z_m}{s} - t \right) f'(t)\, dt \right| \]
\[ \le E \sum_{m=\tau_{j-1}+1}^{\tau_j} \frac{\sigma_m^3 |Z_m|^3}{\lambda_m^3} \int_{-\infty}^{\infty} \left| \varphi''\left( \frac{U_m - t}{\lambda_m} + \theta_m \frac{\sigma_m Z_m}{\lambda_m s} \right) f'(t) \right| I_{\tilde{A}_{mt} \cap B_{mt}}\, dt + E \sum_{m=\tau_{j-1}+1}^{\tau_j} \frac{\sigma_m^3 |Z_m|^3}{\lambda_m^3} \int_{-\infty}^{\infty} I_{\tilde{A}^c_{mt}}\, dt + E \sum_{m=\tau_{j-1}+1}^{\tau_j} \frac{\sigma_m^3 |Z_m|^3}{\lambda_m^3} \int_{-\infty}^{\infty} I_{B^c_{mt}}\, dt. \]
Making use of the independence of the random variables $\{Z_m\}$, the first and second sums can be estimated as above. As for the third sum, note that
\[ E \sum_{m=\tau_{j-1}+1}^{\tau_j} \frac{\sigma_m^3 |Z_m|^3}{\lambda_m^3} \int_{-\infty}^{\infty} I_{B^c_{mt}}\, dt \le \gamma \underline{\lambda}_j^{-3}\, E \sum_{m=\tau_{j-1}+1}^{\tau_j} \sigma_m^2 |Z_m|^3 \int_{-\infty}^{\infty} I_{\{ 8|Z_m| > \underline{\lambda}_j^{-1} |U_{\tau_{j-1}+1} - t| \}}\, dt \le c\gamma^3 \underline{\lambda}_j^{-3}\, E \tilde{\psi}(U_{\tau_{j-1}+1}), \]
where
\[ \tilde{\psi}(x) = \int_{-\infty}^{\infty} g\left( \frac{x - t}{\bar{\lambda}_j} \right) dt \]
