Volume 2011, Article ID 181409, 13 pages
doi:10.1155/2011/181409
Research Article
A Limit Theorem for Random Products of Trimmed Sums of i.i.d. Random Variables
Fa-mei Zheng
School of Mathematical Science, Huaiyin Normal University, Huaian 223300, China
Correspondence should be addressed to Fa-mei Zheng, 16032@hytc.edu.cn
Received 13 May 2011; Revised 25 July 2011; Accepted 11 August 2011
Academic Editor: Man Lai Tang
Copyright © 2011 Fa-mei Zheng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Let $\{X, X_i;\ i\ge1\}$ be a sequence of independent and identically distributed positive random variables with a continuous distribution function $F$, and suppose $F$ has a medium tail. Denote $S_n=\sum_{i=1}^{n}X_i$, $S_n(a)=\sum_{i=1}^{n}X_iI(M_n-a<X_i\le M_n)$, and $V_n^2=\sum_{i=1}^{n}(X_i-\bar X)^2$, where $M_n=\max_{1\le i\le n}X_i$, $\bar X=(1/n)\sum_{i=1}^{n}X_i$, $\mu=EX$, and $a>0$ is a fixed constant. Under some suitable conditions, we show that
\[
\left(\prod_{k=1}^{[nt]}\frac{T_k(a)}{\mu k}\right)^{\mu/V_n}\xrightarrow{d}\exp\left\{\int_0^{t}\frac{W(x)}{x}\,dx\right\}\quad\text{in }D[0,1],\ \text{as }n\to\infty,
\]
where $T_k(a)=S_k-S_k(a)$ is the trimmed sum and $\{W(t);\ t\ge0\}$ is a standard Wiener process.
1 Introduction
Let $\{X_n;\ n\ge1\}$ be a sequence of random variables and define the partial sum $S_n=\sum_{i=1}^{n}X_i$ and $V_n^2=\sum_{i=1}^{n}(X_i-\bar X)^2$ for $n\ge1$, where $\bar X=(1/n)\sum_{i=1}^{n}X_i$. In recent years, the asymptotic behavior of products of various random variables has been widely studied. Arnold and Villaseñor [1] considered sums of records and obtained the following form of the central limit theorem (CLT) for independent and identically distributed (i.i.d.) exponential random variables with mean equal to one:
\[
\frac{\sum_{k=1}^{n}\log S_k-n\log n+n}{\sqrt{2n}}\xrightarrow{d}N\quad\text{as }n\to\infty. \tag{1.1}
\]
Here and in the sequel, $N$ is a standard normal random variable, and $\xrightarrow{d}$ ($\xrightarrow{p}$, $\xrightarrow{a.s.}$) stands for convergence in distribution (in probability, almost surely). Observe that, via the Stirling formula, the relation (1.1) can be equivalently stated as
\[
\left(\prod_{k=1}^{n}\frac{S_k}{k}\right)^{1/\sqrt{n}}\xrightarrow{d}e^{\sqrt{2}\,N}. \tag{1.2}
\]
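Indeed, the equivalence can be checked by taking logarithms in (1.2) and using Stirling's formula $\log n!=n\log n-n+O(\log n)$:
\[
\frac{1}{\sqrt n}\sum_{k=1}^{n}\log\frac{S_k}{k}
=\frac{\sum_{k=1}^{n}\log S_k-\log n!}{\sqrt n}
=\frac{\sum_{k=1}^{n}\log S_k-n\log n+n}{\sqrt n}+O\!\left(\frac{\log n}{\sqrt n}\right)
\xrightarrow{d}\sqrt{2}\,N,
\]
and exponentiating gives (1.2).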
In particular, Rempała and Wesołowski [2] removed the condition that the distribution is exponential and showed that the asymptotic behavior of products of partial sums holds for any sequence of i.i.d. positive random variables. Namely, they proved the following theorem.

Theorem A. Let $\{X_n;\ n\ge1\}$ be a sequence of i.i.d. positive random variables with $EX_1=\mu$, $\operatorname{Var}X_1=\sigma^2>0$, and the coefficient of variation $\gamma=\sigma/\mu$. Then, one has
\[
\left(\frac{\prod_{k=1}^{n}S_k}{n!\,\mu^{n}}\right)^{1/(\gamma\sqrt{n})}\xrightarrow{d}e^{\sqrt{2}\,N}. \tag{1.3}
\]
Recently, the above result was extended by Qi [3], who showed that whenever $\{X_n;\ n\ge1\}$ is in the domain of attraction of a stable law $\mathcal L$ with index $\alpha\in(1,2]$, there exists a numerical sequence $A_n$ (for $\alpha=2$, it can be taken as $\sigma\sqrt{n}$) such that
\[
\left(\frac{\prod_{k=1}^{n}S_k}{n!\,\mu^{n}}\right)^{\mu/A_n}\xrightarrow{d}e^{(\Gamma(\alpha+1))^{1/\alpha}\mathcal L}, \tag{1.4}
\]
as $n\to\infty$, where $\Gamma(\alpha+1)=\int_0^{\infty}x^{\alpha}e^{-x}\,dx$. Furthermore, Zhang and Huang [4] extended Theorem A to the invariance principle.
In this paper, we aim to study the weak invariance principle for self-normalized products of trimmed sums of i.i.d. sequences. Before stating our main results, we need to introduce some necessary notions. Let $\{X, X_n;\ n\ge1\}$ be a sequence of i.i.d. random variables with a continuous distribution function $F$. Assume that the right extremity of $F$ satisfies
\[
\gamma_F=\sup\{x:\ F(x)<1\}=\infty, \tag{1.5}
\]
and the limiting tail quotient
\[
\lim_{x\to\infty}\frac{\overline F(x+a)}{\overline F(x)} \tag{1.6}
\]
exists, where $\overline F(x)=1-F(x)$. Then, the above limit is $e^{-ca}$ for some $c\in[0,\infty]$, and $F$ (or $X$) is said to have a thick tail if $c=0$, a medium tail if $0<c<\infty$, and a thin tail if $c=\infty$. Denote $M_n=\max_{1\le j\le n}X_j$. For a fixed constant $a>0$, we say $X_j$ is a near-maximum if and only if $X_j\in(M_n-a,M_n]$, and the number of near-maxima is
\[
K_n(a):=\operatorname{Card}\{j\le n:\ X_j\in(M_n-a,M_n]\}. \tag{1.7}
\]
These concepts were first introduced by Pakes and Steutel [5], and their limit properties have been widely studied by Pakes and Steutel [5], Pakes and Li [6], Li [7], Pakes [8], and Hu and Su [9]. Now, set
\[
S_n(a):=\sum_{i=1}^{n}X_i\,I\{M_n-a<X_i\le M_n\}, \tag{1.8}
\]
where
\[
I\{A\}=\begin{cases}1, & \omega\in A,\\ 0, & \omega\notin A,\end{cases}
\qquad
T_n(a):=S_n-S_n(a), \tag{1.9}
\]
which are the sum of near-maxima and the trimmed sum, respectively. From Remark 1 of Hu and Su [9], we have that if $F$ has a medium tail and $EX\neq0$, then $T_n(a)/n\xrightarrow{a.s.}EX$, which implies that, with probability one, $\operatorname{Card}\{k\ge1:\ T_k(a)=0\}$ is finite. Thus, we can redefine $T_k(a)=1$ if $T_k(a)=0$.
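To fix ideas, the following small Python sketch (the function name, the Exp(1) sample, and the choice $a=0.5$ are illustrative choices, not taken from the paper) computes $M_n$, $K_n(a)$, $S_n(a)$, and $T_n(a)$ from a simulated sample; for Exp(1) one has $\overline F(x+a)/\overline F(x)=e^{-a}$, so it is a medium-tail example with $c=1$.

import numpy as np

def near_max_statistics(x, a):
    """Return (M_n, K_n(a), S_n(a), T_n(a)) for the sample x and threshold width a > 0."""
    x = np.asarray(x, dtype=float)
    m_n = x.max()                                 # M_n = max_{1<=i<=n} X_i
    near = (x > m_n - a) & (x <= m_n)             # indicator of the interval (M_n - a, M_n]
    k_n = int(near.sum())                         # K_n(a): number of near-maxima
    s_n_a = x[near].sum()                         # S_n(a): sum of near-maxima
    t_n_a = x.sum() - s_n_a                       # T_n(a) = S_n - S_n(a): trimmed sum
    return m_n, k_n, s_n_a, t_n_a

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=10_000)  # Exp(1) has a medium tail (c = 1)
print(near_max_statistics(sample, a=0.5))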
2 Main Result
Now we are ready to state our main results.

Theorem 2.1. Let $\{X, X_n;\ n\ge1\}$ be a sequence of i.i.d. positive random variables with a continuous distribution function $F$, and let $EX=\mu$, $\operatorname{Var}X=\sigma^2$. Assume that $F$ has a medium tail. Then, one has
\[
\left(\prod_{k=1}^{[nt]}\frac{T_k(a)}{\mu k}\right)^{\mu/V_n}\xrightarrow{d}\exp\left\{\int_0^{t}\frac{W(x)}{x}\,dx\right\}\quad\text{in }D[0,1],\ \text{as }n\to\infty, \tag{2.1}
\]
where $\{W(t);\ t\ge0\}$ is a standard Wiener process.
In particular, when we take $t=1$, it yields the following corollary.

Corollary 2.2. Under the assumptions of Theorem 2.1, one has
\[
\left(\prod_{k=1}^{n}\frac{T_k(a)}{\mu k}\right)^{\mu/V_n}\xrightarrow{d}e^{\sqrt{2}\,N}, \tag{2.2}
\]
as $n\to\infty$, where $N$ is a standard normal random variable.
Remark 2.3. Since $\int_0^1 W(x)/x\,dx$ is a normal random variable with
\[
E\int_0^{1}\frac{W(x)}{x}\,dx=\int_0^{1}\frac{EW(x)}{x}\,dx=0,\qquad
E\left(\int_0^{1}\frac{W(x)}{x}\,dx\right)^{2}=\int_0^{1}\!\int_0^{1}\frac{EW(x)W(y)}{xy}\,dx\,dy
=\int_0^{1}\!\int_0^{1}\frac{\min(x,y)}{xy}\,dx\,dy
=2\int_0^{1}\frac{1}{y}\int_0^{y}dx\,dy=2, \tag{2.3}
\]
Corollary 2.2 follows from Theorem 2.1 immediately.
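As a quick illustrative check of Corollary 2.2 (a simulation sketch under assumptions of our own choosing: Exp(1) data, so $\mu=1$ and the tail is medium, together with the convention $T_k(a)=1$ when $T_k(a)=0$), the logarithm of the left-hand side of (2.2) should be approximately $N(0,2)$ for large $n$.

import numpy as np

def log_product_statistic(x, a):
    """Compute (mu / V_n) * sum_{k=1}^{n} log( T_k(a) / (mu * k) ) for one sample path x."""
    n = len(x)
    mu = 1.0                                    # EX for Exp(1) data
    v_n = np.sqrt(np.sum((x - x.mean()) ** 2))  # V_n
    log_terms = np.empty(n)
    for k in range(1, n + 1):
        xk = x[:k]
        m_k = xk.max()
        s_k_a = xk[(xk > m_k - a) & (xk <= m_k)].sum()
        t_k_a = xk.sum() - s_k_a
        if t_k_a <= 0:                          # convention: T_k(a) = 1 if T_k(a) = 0
            t_k_a = 1.0
        log_terms[k - 1] = np.log(t_k_a / (mu * k))
    return (mu / v_n) * log_terms.sum()

rng = np.random.default_rng(1)
n, a, reps = 1000, 0.5, 100
stats = [log_product_statistic(rng.exponential(size=n), a) for _ in range(reps)]
# Corollary 2.2 suggests these values should look roughly N(0, 2) for large n.
print(np.mean(stats), np.var(stats))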
3 Proof of Theorem 2.1

In this section, we will give the proof of Theorem 2.1. In the sequel, let $C$ denote a positive constant which may take different values in different appearances, and let $[x]$ denote the largest integer $\le x$.
Note that via Remark 1 of Hu and Su [9], we have $C_k:=T_k(a)/(\mu k)\xrightarrow{a.s.}1$. It follows that for any $\delta>0$, there exists a positive integer $R$ such that
\[
P\left(\sup_{k\ge R}|C_k-1|>\delta\right)<\delta. \tag{3.1}
\]
Consequently, there exist two sequences $\delta_m\downarrow0$ ($\delta_1=1/2$) and $R_m^*\uparrow\infty$ such that
\[
P\left(\sup_{k\ge R_m^*}|C_k-1|>\delta_m\right)<\delta_m. \tag{3.2}
\]
The strong law of large numbers also implies that there exists a sequence $\overline R_m\uparrow\infty$ such that
\[
\sup_{k\ge \overline R_m}|C_k-1|\overset{a.s.}{\le}\frac1m. \tag{3.3}
\]
Here and in the sequel, we take $R_m=\max\{R_m^*,\overline R_m\}$, and it yields
\[
P\left(\sup_{k\ge R_m}|C_k-1|>\delta_m\right)<\delta_m,\qquad
\sup_{k\ge R_m}|C_k-1|\overset{a.s.}{\le}\frac1m. \tag{3.4}
\]
Then, it leads to
\[
\begin{aligned}
P\left(\frac{\mu}{V_n}\sum_{k=1}^{[nt]}\log C_k\le x\right)
&=P\left(\frac{\mu}{V_n}\sum_{k=1}^{[nt]}\log C_k\le x,\ \sup_{k\ge R_m}|C_k-1|>\delta_m\right)\\
&\quad+P\left(\frac{\mu}{V_n}\sum_{k=1}^{[nt]}\log C_k\le x,\ \sup_{k\ge R_m}|C_k-1|\le\delta_m\right)\\
&:=A_{m,n}+B_{m,n},
\end{aligned} \tag{3.5}
\]
and $A_{m,n}<\delta_m$. By using the expansion of the logarithm $\log(1+x)=x-x^2/\bigl(2(1+\theta x)^2\bigr)$, where $\theta\in(0,1)$ depends on $x$ with $|x|<1$, we have that
\[
\begin{aligned}
B_{m,n}&=P\left(\frac{\mu}{V_n}\sum_{k=1}^{[nt]}\log C_k\le x,\ \sup_{k\ge R_m}|C_k-1|\le\delta_m\right)\\
&=P\left(\frac{\mu}{V_n}\sum_{k=1}^{R_m\wedge([nt]-1)}\log C_k+\frac{\mu}{V_n}\sum_{k=R_m\wedge([nt]-1)+1}^{[nt]}\log\bigl(1+(C_k-1)\bigr)\le x,\ \sup_{k\ge R_m}|C_k-1|\le\delta_m\right)\\
&=P\left(\frac{\mu}{V_n}\sum_{k=1}^{R_m\wedge([nt]-1)}\log C_k+\frac{\mu}{V_n}\sum_{k=R_m\wedge([nt]-1)+1}^{[nt]}(C_k-1)
-\frac{\mu}{V_n}\sum_{k=R_m\wedge([nt]-1)+1}^{[nt]}\frac{(C_k-1)^2}{2\bigl(1+\theta_k(C_k-1)\bigr)^2}\le x,\ \sup_{k\ge R_m}|C_k-1|\le\delta_m\right)\\
&=P\left(\frac{\mu}{V_n}\sum_{k=1}^{R_m\wedge([nt]-1)}\log C_k+\frac{\mu}{V_n}\sum_{k=R_m\wedge([nt]-1)+1}^{[nt]}(C_k-1)
-\frac{\mu}{V_n}\sum_{k=R_m\wedge([nt]-1)+1}^{[nt]}\frac{(C_k-1)^2}{2\bigl(1+\theta_k(C_k-1)\bigr)^2}\,I\Bigl(\sup_{k\ge R_m}|C_k-1|\le\delta_m\Bigr)\le x\right)\\
&\quad-P\left(\frac{\mu}{V_n}\sum_{k=1}^{R_m\wedge([nt]-1)}\log C_k+\frac{\mu}{V_n}\sum_{k=R_m\wedge([nt]-1)+1}^{[nt]}(C_k-1)\le x,\ \sup_{k\ge R_m}|C_k-1|>\delta_m\right)\\
&:=D_{m,n}-E_{m,n},
\end{aligned} \tag{3.6}
\]
where $\theta_k$ $(k=1,\dots,[nt])$ are $(0,1)$-valued and $E_{m,n}<\delta_m$. Also, we can rewrite $D_{m,n}$ as
\[
D_{m,n}=P\left(\frac{\mu}{V_n}\sum_{k=1}^{R_m\wedge([nt]-1)}\bigl(\log C_k-(C_k-1)\bigr)+\frac{\mu}{V_n}\sum_{k=1}^{[nt]}(C_k-1)
-\frac{\mu}{V_n}\sum_{k=R_m\wedge([nt]-1)+1}^{[nt]}\frac{(C_k-1)^2}{2\bigl(1+\theta_k(C_k-1)\bigr)^2}\,I\Bigl(\sup_{k\ge R_m}|C_k-1|\le\delta_m\Bigr)\le x\right). \tag{3.7}
\]
Observe that, for any fixed m, it is easy to obtain
\[
\frac{\mu}{V_n}\sum_{k=1}^{R_m\wedge([nt]-1)}\bigl(\log C_k-(C_k-1)\bigr)\xrightarrow{p}0\quad\text{as }n\to\infty, \tag{3.8}
\]
by noting that $V_n^2\xrightarrow{p}\infty$.
And if $R_m\ge[nt]-1$, then we have
\[
\frac{\mu}{V_n}\cdot\frac{(C_{[nt]}-1)^2}{2\bigl(1+(C_{[nt]}-1)\theta_{[nt]}\bigr)^2}\overset{a.s.}{\le}\frac{C}{V_n}\xrightarrow{p}0, \tag{3.9}
\]
as $n\to\infty$. If $R_m<[nt]-1$, then $R_m+1<[nt]$. Denote
\[
F_{m,n}:=\frac{\mu}{V_n}\sum_{k=R_m+1}^{[nt]}\frac{(C_k-1)^2}{2\bigl(1+\theta_k(C_k-1)\bigr)^2}\,I\Bigl(\sup_{k\ge R_m}|C_k-1|\le\delta_m\Bigr), \tag{3.10}
\]
and, by observing that $x^2/(1+\theta x)^2\le4x^2$ for $|x|\le1/2$, we can obtain
\[
\begin{aligned}
F_{m,n}&\le\frac{C}{V_n}\sum_{k=R_m+1}^{[nt]}(C_k-1)^2
=\frac{C}{V_n}\sum_{k=R_m+1}^{[nt]}\left(\frac{S_k-S_k(a)}{\mu k}-1\right)^2\\
&\le\frac{C}{V_n}\sum_{k=R_m+1}^{[nt]}\left(\frac{S_k}{\mu k}-1\right)^2+\frac{C}{V_n}\sum_{k=R_m+1}^{[nt]}\left(\frac{S_k(a)}{\mu k}\right)^2\\
&:=H_{m,n}+L_{m,n}.
\end{aligned} \tag{3.11}
\]
For any $\varepsilon>0$, by Markov's inequality, we have
\[
\begin{aligned}
P\left(\frac{1}{\sqrt n}\sum_{k=R_m+1}^{[nt]}\left(\frac{S_k}{\mu k}-1\right)^2>\varepsilon\right)
&\le\frac{C}{\varepsilon\sqrt n}\,E\sum_{k=R_m+1}^{[nt]}\left(\frac{S_k}{\mu k}-1\right)^2
=\frac{C}{\varepsilon\sqrt n}\sum_{k=R_m+1}^{[nt]}\operatorname{Var}\left(\frac{S_k}{\mu k}\right)\\
&=\frac{C\sigma^2}{\varepsilon\mu^2\sqrt n}\sum_{k=R_m+1}^{[nt]}\frac1k\longrightarrow0,\quad\text{as }n\to\infty.
\end{aligned} \tag{3.12}
\]
Then, $H_{m,n}\xrightarrow{p}0$. To obtain this result, we need the following fact:
\[
\frac{V_n^2}{n}\xrightarrow{a.s.}\sigma^2,\qquad
\frac{\sum_{i=1}^{n}(X_i-\bar X)^2}{\sum_{i=1}^{n}(X_i-\mu)^2}\xrightarrow{a.s.}1,\quad\text{as }n\to\infty. \tag{3.13}
\]
Indeed,
\[
\frac{\sum_{i=1}^{n}(X_i-\bar X)^2}{\sum_{i=1}^{n}(X_i-\mu)^2}
=\frac{\sum_{i=1}^{n}(X_i-\mu)^2-n(\mu-\bar X)^2}{\sum_{i=1}^{n}(X_i-\mu)^2}
=1-\frac{(\mu-\bar X)^2}{\sum_{i=1}^{n}(X_i-\mu)^2/n}. \tag{3.14}
\]
Now, we choose two constants $N>0$ and $0<\delta<1$ such that $P(|X-\mu|>N)>\delta$. Hence, in view of the strong law of large numbers, we have for $n$ large enough
\[
\begin{aligned}
\frac{(\mu-\bar X)^2}{\sum_{i=1}^{n}(X_i-\mu)^2/n}
&\le\frac{(\mu-\bar X)^2}{\sum_{i=1}^{n}(X_i-\mu)^2\,I(|X_i-\mu|>N)/n}\\
&\le\frac{(\mu-\bar X)^2}{N^2\sum_{i=1}^{n}I(|X_i-\mu|>N)/n}
=\frac{o(1)}{N^2\bigl(P(|X-\mu|>N)+o(1)\bigr)}\overset{a.s.}{=}o(1),
\end{aligned} \tag{3.15}
\]
which together with (3.14) implies that
\[
\frac{\sum_{i=1}^{n}(X_i-\bar X)^2}{\sum_{i=1}^{n}(X_i-\mu)^2}
=\frac{V_n^2}{\sum_{i=1}^{n}(X_i-\mu)^2}\xrightarrow{a.s.}1, \tag{3.16}
\]
as $n\to\infty$. Furthermore, in view of the strong law of large numbers again, we obtain
\[
\frac{V_n^2}{n}=\frac{\sum_{i=1}^{n}(X_i-\bar X)^2}{\sum_{i=1}^{n}(X_i-\mu)^2}\cdot\frac{\sum_{i=1}^{n}(X_i-\mu)^2}{n}\xrightarrow{a.s.}\sigma^2, \tag{3.17}
\]
as $n\to\infty$, where $\sigma^2=\operatorname{Var}X>0$. For $L_{m,n}$, by noting that $S_n(a)/S_n\xrightarrow{a.s.}0$ as $n\to\infty$ (see Hu and Su [9]), we can easily get
\[
\frac{S_n(a)}{n}=\frac{S_n(a)}{S_n}\cdot\frac{S_n}{n}\xrightarrow{a.s.}0, \tag{3.18}
\]
as $n\to\infty$. Then, for any $0<\delta'<1$, there exists a positive integer $R'$ such that
\[
P\left(\sup_{k\ge R'}\frac{S_k(a)}{k}\ge\delta'\right)<\delta'. \tag{3.19}
\]
Consequently, coupled with (3.18), we have
\[
\begin{aligned}
P\bigl(L_{m,n}>\delta'\bigr)
&\le P\left(\frac{C}{V_n}\sum_{k=1}^{n}\left(\frac{S_k(a)}{\mu k}\right)^2>\delta',\ \sup_{k\ge R'}\frac{S_k(a)}{k}<\delta'\right)
+P\left(\sup_{k\ge R'}\frac{S_k(a)}{k}\ge\delta'\right)\\
&\le P\left(\frac{C}{V_n}\sum_{k=1}^{n}\frac{S_k(a)}{k}>\delta'\right)+\delta'.
\end{aligned} \tag{3.20}
\]
Clearly, to show $L_{m,n}\xrightarrow{p}0$ as $n\to\infty$, it is sufficient to prove
\[
\frac{1}{V_n}\sum_{k=1}^{n}\frac{S_k(a)}{k}\xrightarrow{p}0. \tag{3.21}
\]
Indeed, combined with (3.17), we only need to show
\[
\frac{1}{\sqrt n}\sum_{k=1}^{n}\frac{S_k(a)}{k}\xrightarrow{p}0. \tag{3.22}
\]
As a matter of fact, by the definitions of $S_n(a)$ and $K_n(a)$, we have
\[
(M_n-a)K_n(a)<S_n(a)\le M_nK_n(a). \tag{3.23}
\]
In view of the fact $M_n\uparrow\infty$ a.s., we can get from Hu and Su [9] that
\[
\frac{S_n(a)}{M_n}\overset{a.s.}{\sim}K_n(a), \tag{3.24}
\]
and thus it suffices to prove
\[
\frac{1}{\sqrt n}\sum_{k=1}^{n}\frac{M_kK_k(a)}{k}\xrightarrow{p}0. \tag{3.25}
\]
Actually, for all $\varepsilon,\delta>0$ and $N_1$ large enough, we have that
\[
\begin{aligned}
P\left(\frac{1}{\sqrt n}\sum_{k=1}^{n}\frac{K_k(a)M_k}{k}>\varepsilon\right)
&=P\left(\frac{1}{\sqrt n}\sum_{k=1}^{n}\frac{K_k(a)}{\sqrt k}\cdot\frac{M_k}{\sqrt k}>\varepsilon\right)\\
&\le P\left(\frac{1}{\sqrt n}\sum_{k=1}^{n}\frac{K_k(a)}{\sqrt k}\cdot\delta>\varepsilon,\ \sup_{k\ge N_1}\frac{M_k}{\sqrt k}<\delta\right)
+P\left(\sup_{k\ge N_1}\frac{M_k}{\sqrt k}\ge\delta\right).
\end{aligned} \tag{3.26}
\]
Observe that if $F$ has a medium tail, then we have $M_n/\sqrt n=(M_n/\log n)(\log n/\sqrt n)\xrightarrow{a.s.}0$ by noting that $M_n/\log n\xrightarrow{a.s.}1/c$ [9], where $c$ is the limit defined in Section 1. Thus it follows that
\[
P\left(\sup_{k\ge N_1}\frac{M_k}{\sqrt k}\ge\delta\right)\longrightarrow0, \tag{3.27}
\]
as $N_1\to\infty$. Further, by Markov's inequality and the boundedness of $EK_k(a)$ from Hu and Su [9], we have
\[
\begin{aligned}
P\left(\frac{1}{\sqrt n}\sum_{k=1}^{n}\frac{K_k(a)}{\sqrt k}\cdot\delta>\varepsilon,\ \sup_{k\ge N_1}\frac{M_k}{\sqrt k}<\delta\right)
&\le P\left(\frac{\delta}{\sqrt n}\sum_{k=1}^{n}\frac{K_k(a)}{\sqrt k}>\varepsilon\right)
\le\frac{C\delta}{\varepsilon\sqrt n}\sum_{k=1}^{n}\frac{EK_k(a)}{\sqrt k}\\
&\le\frac{C\delta}{\varepsilon\sqrt n}\sum_{k=1}^{n}\frac{1}{\sqrt k}\le\frac{C\delta}{\varepsilon},
\end{aligned} \tag{3.28}
\]
and, hence, the proof of (3.22) is complete. Thus $L_{m,n}\xrightarrow{p}0$ follows. Finally, in order to complete the proof, it is sufficient to show that
\[
Y_n(t):=\frac{\mu}{V_n}\sum_{k=1}^{[nt]}(C_k-1)\xrightarrow{d}\int_0^{t}\frac{W(x)}{x}\,dx, \tag{3.29}
\]
and, coupled with (3.21), we only need to prove
\[
\widetilde Y_n(t):=\frac{\mu}{V_n}\sum_{k=1}^{[nt]}\left(\frac{S_k}{\mu k}-1\right)\xrightarrow{d}\int_0^{t}\frac{W(x)}{x}\,dx. \tag{3.30}
\]
Let
\[
H_\varepsilon(f)(t)=\begin{cases}\displaystyle\int_\varepsilon^{t}\frac{f(x)}{x}\,dx, & t>\varepsilon,\\[2mm] 0, & 0\le t\le\varepsilon,\end{cases}
\qquad
Y_{n,\varepsilon}(t)=\begin{cases}\displaystyle\frac{1}{V_n}\sum_{k=[n\varepsilon]+1}^{[nt]}\frac{S_k-\mu k}{k}, & t>\varepsilon,\\[2mm] 0, & 0\le t\le\varepsilon.\end{cases} \tag{3.31}
\]
It is obvious that
\[
\max_{0\le t\le1}\left|\int_0^{t}\frac{W(x)}{x}\,dx-H_\varepsilon(W)(t)\right|
=\sup_{0\le t\le\varepsilon}\left|\int_0^{t}\frac{W(x)}{x}\,dx\right|\xrightarrow{a.s.}0,\quad\text{as }\varepsilon\to0. \tag{3.32}
\]
Note that
\[
\max_{0\le t\le\varepsilon}\bigl|\widetilde Y_n(t)-Y_{n,\varepsilon}(t)\bigr|
=\max_{0\le t\le\varepsilon}\left|\frac{1}{V_n}\sum_{k=1}^{[nt]}\frac{S_k-\mu k}{k}\right|
\le\frac{1}{V_n}\sum_{k=1}^{[n\varepsilon]}\frac{|S_k-\mu k|}{k}, \tag{3.33}
\]
and then, for any $\varepsilon_1>0$, by the Cauchy-Schwarz inequality and (3.17), it follows that
\[
\begin{aligned}
\lim_{\varepsilon\to0}\limsup_{n\to\infty}
P\left(\max_{0\le t\le\varepsilon}\bigl|\widetilde Y_n(t)-Y_{n,\varepsilon}(t)\bigr|\ge\varepsilon_1\right)
&\le\lim_{\varepsilon\to0}\limsup_{n\to\infty}
P\left(\frac{1}{V_n}\sum_{k=1}^{[n\varepsilon]}\frac{|S_k-\mu k|}{k}\ge\varepsilon_1\right)\\
&\le\lim_{\varepsilon\to0}\limsup_{n\to\infty}
\frac{C}{\sqrt n}\sum_{k=1}^{[n\varepsilon]}\frac{E|S_k-\mu k|}{k}\\
&\le\lim_{\varepsilon\to0}\limsup_{n\to\infty}
\frac{C}{\sqrt n}\sum_{k=1}^{[n\varepsilon]}\frac{1}{\sqrt k}\left(\operatorname{Var}\frac{S_k-\mu k}{\sqrt k}\right)^{1/2}\\
&=\lim_{\varepsilon\to0}\limsup_{n\to\infty}
\frac{C}{\sqrt n}\sum_{k=1}^{[n\varepsilon]}\frac{1}{\sqrt k}
\le\lim_{\varepsilon\to0}\limsup_{n\to\infty}\frac{C\sqrt{[n\varepsilon]}}{\sqrt n}=0.
\end{aligned} \tag{3.34}
\]
Furthermore, we can obtain
\[
\begin{aligned}
&\sup_{\varepsilon\le t\le1}\left|\frac{1}{V_n}\sum_{k=[n\varepsilon]+1}^{[nt]}\frac{S_k-\mu k}{k}
-\frac{1}{V_n}\int_{[n\varepsilon]}^{[nt]}\frac{S_x-x\mu}{x}\,dx\right|\\
&\quad\le\sup_{\varepsilon\le t\le1}\left|\frac{1}{V_n}\int_{[n\varepsilon]+1}^{[nt]+1}\frac{S_x-x\mu}{x}\,dx
-\frac{1}{V_n}\int_{[n\varepsilon]}^{[nt]}\frac{S_x-x\mu}{x}\,dx\right|
+\sup_{\varepsilon\le t\le1}\frac{1}{V_n}\int_{[n\varepsilon]+1}^{[nt]+1}\bigl|S_x-x\mu\bigr|\left|\frac{1}{[x]}-\frac{1}{x}\right|dx\\
&\quad\le\frac{1}{V_n}\int_{[n\varepsilon]}^{[n\varepsilon]+1}\frac{|S_x-x\mu|}{x}\,dx
+\sup_{\varepsilon\le t\le1}\frac{1}{V_n}\int_{[nt]}^{[nt]+1}\frac{|S_x-x\mu|}{x}\,dx
+\sup_{\varepsilon\le t\le1}\frac{1}{V_n}\int_{[n\varepsilon]+1}^{[nt]+1}\bigl|S_x-x\mu\bigr|\left|\frac{1}{[x]}-\frac{1}{x}\right|dx\\
&\quad\le\frac{\max_{k\le n}|S_k-\mu k|}{V_n}\sup_{\varepsilon\le t\le1}\left(\frac{2}{[n\varepsilon]}+\frac{2}{[nt]+1}\right)
\le C\,\frac{\max_{k\le n}|S_k-\mu k|}{nV_n}
\le\frac{C}{V_n}\cdot\frac{\sum_{i=1}^{n}|X_i-\mu|}{n}\xrightarrow{a.s.}0.
\end{aligned} \tag{3.35}
\]
Therefore, uniformly for $t\in[\varepsilon,1]$, we have
\[
\frac{1}{V_n}\sum_{k=[n\varepsilon]+1}^{[nt]}\frac{S_k-\mu k}{k}
=\frac{1}{V_n}\int_{[n\varepsilon]}^{[nt]}\frac{S_x-x\mu}{x}\,dx+o_P(1)
=\int_\varepsilon^{t}\frac{W_n(x)}{x}\,dx+o_P(1), \tag{3.36}
\]
where $W_n(t):=(S_{[nt]}-[nt]\mu)/V_n$. Notice that $H_\varepsilon(\cdot)$ is a continuous mapping on the space $D[0,1]$. Thus, using the continuous mapping theorem (cf. Theorem 2.7 of Billingsley [10]), it follows that
\[
Y_{n,\varepsilon}(t)=H_\varepsilon(W_n)(t)+o_P(1)\xrightarrow{d}H_\varepsilon(W)(t),\quad\text{in }D[0,1],\ \text{as }n\to\infty. \tag{3.37}
\]
Hence, (3.32), (3.34), and (3.37) coupled with Theorem 3.2 of Billingsley [10] lead to (3.30). The proof is now completed.
4 An Extension to U-Statistics

A useful notion of a U-statistic was introduced by Hoeffding [11]. Let a U-statistic be defined as
\[
U_n=\binom{n}{m}^{-1}\sum_{1\le i_1<\cdots<i_m\le n}h(X_{i_1},\dots,X_{i_m}), \tag{4.1}
\]
where $h$ is a symmetric real function of $m$ arguments and $\{X_i;\ i\ge1\}$ is a sequence of i.i.d. random variables. If we take $m=1$ and $h(x)=x$, then $U_n$ reduces to $S_n/n$. Assume that $Eh(X_1,\dots,X_m)^2<\infty$, and let
\[
h_1(x)=Eh(x,X_2,\dots,X_m),\qquad
\widehat U_n=\frac{m}{n}\sum_{i=1}^{n}\bigl(h_1(X_i)-Eh\bigr)+Eh. \tag{4.2}
\]
Thus, we may write
\[
U_n=\widehat U_n+R_n, \tag{4.3}
\]
where
\[
R_n=\binom{n}{m}^{-1}\sum_{1\le i_1<\cdots<i_m\le n}H(X_{i_1},\dots,X_{i_m}),\qquad
H(x_1,\dots,x_m)=h(x_1,\dots,x_m)-\sum_{i=1}^{m}\bigl(h_1(x_i)-Eh\bigr)-Eh. \tag{4.4}
\]
It is well known (cf. Resnick [12]) that
\[
\operatorname{Cov}\bigl(\widehat U_n,R_n\bigr)=0,\qquad
n\operatorname{Var}(R_n)\longrightarrow0,\quad\text{as }n\to\infty. \tag{4.5}
\]
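As a concrete illustration of the decomposition (4.1)-(4.4), here is a small Python sketch (the kernel $h(x,y)=xy$, the Exp(1) sample, and the function names are illustrative choices, not from the paper); for this kernel $h_1(x)=x\,EX$ and $Eh=(EX)^2$, and the remainder $R_n=U_n-\widehat U_n$ is typically much smaller than the fluctuation of $\widehat U_n$ itself.

import numpy as np
from itertools import combinations

def u_statistic(x, h, m):
    """U_n = average of h over all m-subsets of the sample x, as in (4.1)."""
    x = np.asarray(x, dtype=float)
    vals = [h(*x[list(idx)]) for idx in combinations(range(len(x)), m)]
    return float(np.mean(vals))

# Kernel h(x, y) = x*y with Exp(1) data: E h = 1 and h_1(x) = x (so Var h_1(X_1) = 1).
rng = np.random.default_rng(2)
x = rng.exponential(size=200)
m, eh = 2, 1.0
u_n = u_statistic(x, lambda a, b: a * b, m)
h1 = x * 1.0                                   # h_1(X_i) = X_i * EX with EX = 1
u_hat = (m / len(x)) * np.sum(h1 - eh) + eh    # projection \hat U_n, as in (4.2)
print(u_n, u_hat, u_n - u_hat)                 # R_n = U_n - \hat U_n should be small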
Theorem 2.1 is now extended to U-statistics as follows.

Theorem 4.1. Let $U_n$ be a U-statistic defined as above. Assume that $Eh^2<\infty$ and $P(h(X_1,\dots,X_m)>0)=1$. Denote $\mu=Eh>0$ and $\sigma^2=\operatorname{Var}h_1(X_1)\neq0$. Then,
\[
\left(\prod_{k=m}^{[nt]}\frac{U_k}{\mu}\right)^{\mu/(mV_n)}\xrightarrow{d}\exp\left\{\int_0^{t}\frac{W(x)}{x}\,dx\right\},\quad\text{in }D[0,1],\ \text{as }n\to\infty, \tag{4.6}
\]
where $W(x)$ is a standard Wiener process and $V_n^2=\sum_{i=1}^{n}(X_i-\bar X)^2$.
In order to prove this theorem, by (3.17), we only need to prove
\[
\frac{\mu}{m\sqrt n}\sum_{k=m}^{[nt]}\left(\frac{U_k}{\mu}-1\right)\xrightarrow{d}\sigma\int_0^{t}\frac{W(x)}{x}\,dx,\quad\text{in }D[0,1],\ \text{as }n\to\infty. \tag{4.7}
\]
If this result is true, then with the fact that $U_n\xrightarrow{a.s.}Eh=\mu$ (deduced from $E|h|<\infty$; see Resnick [12]) and (4.3), Theorem 4.1 follows immediately from the method used in the proof of Theorem 2.1 with $S_k/k$ replaced by $U_k$. Now, we begin to show (4.7). By (4.3), we have
\[
\frac{\mu}{m\sqrt n}\sum_{k=m}^{[nt]}\left(\frac{U_k}{\mu}-1\right)
=\frac{\mu}{m\sqrt n}\sum_{k=m}^{[nt]}\left(\frac{\widehat U_k}{\mu}-1\right)
+\frac{\mu}{m\sqrt n}\sum_{k=m}^{[nt]}\frac{R_k}{\mu}. \tag{4.8}
\]
By applying (3.30) to the random variables $mh_1(X_i)$, $i\ge1$, we have
\[
\frac{\mu}{m\sqrt n}\sum_{k=m}^{[nt]}\left(\frac{\widehat U_k}{\mu}-1\right)
=\frac{\mu}{\sqrt n}\left[\sum_{k=1}^{[nt]}\left(\frac{\sum_{i=1}^{k}h_1(X_i)}{\mu k}-1\right)
-\sum_{k=1}^{m-1}\left(\frac{\sum_{i=1}^{k}h_1(X_i)}{\mu k}-1\right)\right]
\xrightarrow{d}\sigma\int_0^{t}\frac{W(x)}{x}\,dx \tag{4.9}
\]
in $D[0,1]$, as $n\to\infty$, since the second expression converges to zero a.s. as $n\to\infty$. Therefore,
for proving (4.7), we only need to prove
\[
\frac{\mu}{m\sqrt n}\sum_{k=m}^{[nt]}\frac{R_k}{\mu}\xrightarrow{p}0,\quad\text{as }n\to\infty, \tag{4.10}
\]
and it is sufficient to demonstrate
\[
\widetilde R_n:=\frac{\mu}{m\sqrt n}\sum_{k=m}^{n}\frac{|R_k|}{\mu}\xrightarrow{p}0,\quad\text{as }n\to\infty. \tag{4.11}
\]
Indeed, we can easily obtain $E\widetilde R_n^{\,2}\to0$ as $n\to\infty$ from Hoeffding [11]. Thus, we complete the proof of (4.7), and, hence, Theorem 4.1 holds.