
Negative association and negative dependence for random upper semicontinuous functions, with applications

Nguyen Tran Thuan^{a,1} and Nguyen Van Quang^{a,2}

^{a} Department of Mathematics, Vinh University, Nghe An Province, Viet Nam

Abstract. The aim of this paper is to discuss the notions of negative association and negative dependence for random upper semicontinuous functions. Besides giving some properties of these notions, we obtain inequalities which take the form of a maximal inequality and a Hájek–Rényi type inequality. Also, some laws of large numbers are established under various settings, and they extend corresponding results in the literature.

Mathematics Subject Classifications (2010): 60B99, 60F15, 28B20

Key words: Random upper semicontinuous functions, fuzzy random sets, level-wise negatively associated, level-wise negatively dependent, law of large numbers

1 Introduction

Let (Ω, F, P) be a complete probability space and {Yn, n ≥ 1} a collection of random variables defined on it. The consideration of independence or dependence relations for {Yn, n ≥ 1} plays an important role in probability and statistics. Independence is a strong property, and in practice many phenomena depend on each other. Therefore, besides independent random variables, many kinds of dependence have been considered, such as martingales, m-dependence, ρ-mixing, ϕ-mixing, negative dependence, positive dependence, etc. One of the dependence structures for collections of real-valued random variables that has attracted the interest of many authors is negative association. The definition of negative association was first introduced by Alam and Saxena [1] and carefully studied by Joag-Dev and Proschan [10]. A finite family {Yi, 1 ≤ i ≤ n} of real-valued random variables is said to be negatively associated (NA) if for any disjoint subsets A, B of {1, 2, ..., n} and any real coordinatewise nondecreasing functions f on R^|A|, g on R^|B|,

    Cov( f(Yi, i ∈ A), g(Yj, j ∈ B) ) ≤ 0

whenever the covariance exists, where |A| denotes the cardinality of A. An infinite family of random variables is NA if every finite subfamily is NA.
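As a numerical illustration (a sketch, not from the paper): the pair {Y, −Y} is a standard example of an NA family, since for nondecreasing f and g, f(Y) is nondecreasing and g(−Y) is nonincreasing in Y, so their covariance is nonpositive.

```python
import random

# Illustrative sketch (not from the paper): the pair (Y, -Y) is a classic
# NA family. For nondecreasing f, g, Cov(f(Y), g(-Y)) <= 0 because f(Y)
# is nondecreasing and g(-Y) is nonincreasing in Y.
random.seed(0)
ys = [random.gauss(0.0, 1.0) for _ in range(200_000)]

def cov(xs, zs):
    mx = sum(xs) / len(xs)
    mz = sum(zs) / len(zs)
    return sum((x - mx) * (z - mz) for x, z in zip(xs, zs)) / len(xs)

f = lambda t: max(t, 0.0)      # nondecreasing
g = lambda t: min(t, 1.0)      # nondecreasing
c = cov([f(y) for y in ys], [g(-y) for y in ys])
print(c)  # empirically negative
```

Replacing f and g by other nondecreasing functions leaves the sign of the empirical covariance unchanged.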

The next dependence notion is extended negative dependence, introduced by Liu [16] as follows. A finite family of real-valued random variables {Yi, 1 ≤ i ≤ n} is said to be extended negatively dependent (END) if there is some M > 0 such that the following two inequalities hold:

    P(Y1 > x1, ..., Yn > xn) ≤ M ∏_{i=1}^{n} P(Yi > xi),   (1.1)

    P(Y1 ≤ x1, ..., Yn ≤ xn) ≤ M ∏_{i=1}^{n} P(Yi ≤ xi),   (1.2)

for all xi, i = 1, ..., n. An infinite family of random variables is END if every finite subfamily is END.

^1 Email: tranthuandhv@gmail.com
^2 Email: nvquang@hotmail.com
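A small sketch (again illustrative, not part of the paper) checks the product inequalities of type (1.1)-(1.2) with M = 1 for the countermonotone pair (Y, −Y), using an empirical CDF:

```python
import bisect
import random

# Sketch (assumption for illustration): check the "<=" product inequality
# with M = 1 for the pair (Y, -Y), Y uniform on [0, 1]. For this pair,
# P(Y <= x, -Y <= t) = max(0, F(x) + G(t) - 1) <= F(x) G(t) always holds.
random.seed(1)
n = 50_000
ys = sorted(random.random() for _ in range(n))

def p_le(x):
    # empirical P(Y <= x)
    return bisect.bisect_right(ys, x) / n

ok = True
grid = [i / 10 for i in range(-10, 11)]
for x in grid:
    for t in grid:
        joint = max(0.0, p_le(x) - p_le(-t))      # P(Y <= x, -Y <= t)
        prod = p_le(x) * (1 - p_le(-t))           # P(Y <= x) P(-Y <= t)
        if joint > prod + 1e-9:
            ok = False
print(ok)  # True: the product bound holds on the whole grid
```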

If the two inequalities (1.1) and (1.2) hold with M = 1, then the sequence {Yi, 1 ≤ i ≤ n} is called negatively dependent (ND), a notion introduced by Lehmann [15]. Thus, END is an extension of ND. A family {Yn, n ≥ 1} is pairwise negatively dependent (PND) if P(Yi > x, Yj > y) ≤ P(Yi > x)P(Yj > y) (or, equivalently, P(Yi ≤ x, Yj ≤ y) ≤ P(Yi ≤ x)P(Yj ≤ y)) for all i ≠ j and all x, y ∈ R. It follows that PND is also an extension of ND. Note that NA implies ND, but ND does not imply NA; for details, the reader can refer to [10, 23]. These notions have been extended to more abstract spaces, such as NA in R^d [2] and, recently, NA in Hilbert space [14]. In this paper, we consider the above dependence notions in a space of upper semicontinuous functions.

Upper semicontinuous (u.s.c.) functions are very useful in many contexts, such as optimization theory, image processing, and spatial statistics. In various settings, u.s.c. functions appear under different names: for instance, they are also called fuzzy sets [5, 6, 9, 11, 13, 22] or grey-scale images [19, 25]. Random u.s.c. functions were introduced to model random elements whose values are u.s.c. functions. Up to now, many authors have been concerned with limit theorems for the class of random u.s.c. functions; in particular, many laws of large numbers have been established in various settings (see, for example, [5, 6, 7, 11, 13, 20, 21, 22, 26]).

However, to the best of our knowledge, most laws of large numbers have been obtained for independent random u.s.c. functions (or independent fuzzy random sets). Only a few are known for the dependent case: Terán [26] gave a strong law of large numbers for random u.s.c. functions under exchangeability conditions; Fu [7] and Quang and Giap [21] obtained some strong laws of large numbers for ϕ(ϕ*)-mixing dependent fuzzy random sets; recently, Quang and Thuan [22] obtained some strong limit theorems for adapted arrays of fuzzy random sets. The aim of this paper is to propose some new kinds of dependence for random u.s.c. functions which rely on the NA and ND notions mentioned above, and then to establish several laws of large numbers. We also show that our results generalize corresponding ones in the literature. The layout of this paper is as follows: in Section 2, we summarize some basic definitions and related properties; Section 3 discusses the notion of NA, and Section 4 presents the notions of END, PND and ND in the space of u.s.c. functions. In addition, we give some inequalities of Hájek–Rényi type for these notions, and some laws of large numbers are established.

2 Preliminaries

Let K be the set of nonempty convex and compact subsets of R. If a is an element of K then it is an interval of R, denoted by a = [a^(1); a^(2)] (where a^(1), a^(2) are the two end points). The Hausdorff distance dH on K is defined by

    dH(a, b) = max{ |a^(1) − b^(1)| ; |a^(2) − b^(2)| },   a, b ∈ K.

It is known that (K, dH) is a separable and complete metric space. A linear structure on K is defined as follows: for a, b ∈ K and λ ∈ R,

    a + b = {x = y + z : y ∈ a, z ∈ b} = [a^(1) + b^(1); a^(2) + b^(2)],

    λa = {λx : x ∈ a} = [λa^(1); λa^(2)] if λ ≥ 0, and λa = [λa^(2); λa^(1)] if λ < 0.

For a function u : R → [0; 1] and each α ∈ (0; 1], the α-level set of u is defined by [u]α = {x ∈ R : u(x) ≥ α}. It is easy to see that [u]α = ∩_{β<α} [u]β. For each α ∈ [0; 1), [u]α+ denotes the closure of {x ∈ R : u(x) > α}, equivalently [u]α+ = cl{∪_{1≥β>α, β↓α} [u]β}. In particular, [u]0+ is called the support of u and denoted by supp u. For convenience, we also use [u]0 to indicate supp u, i.e., [u]0 = [u]0+ = supp u. Note that all level sets [u]α, α ∈ (0; 1], of u are closed if and only if u is u.s.c. Recall that a u.s.c. function u : R → [0; 1] is called quasiconcave if u(λx + (1 − λ)y) ≥ min{u(x), u(y)} for all x, y ∈ R and λ ∈ [0; 1]; an equivalent condition is that [u]α is a convex subset of R for every α ∈ (0; 1]. Let U denote the family of all u.s.c. functions u : R → [0; 1] satisfying the following conditions:
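The interval arithmetic and Hausdorff distance above are straightforward to code. The following sketch (illustrative helper names, not from the paper) represents an element of K as a pair (lo, hi):

```python
# Sketch (not the paper's code): intervals of K as (lo, hi) pairs, with the
# Hausdorff distance and the linear structure defined above.
def d_H(a, b):
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def scale(lam, a):
    # for negative lambda the end points swap, as in the definition above
    lo, hi = lam * a[0], lam * a[1]
    return (lo, hi) if lam >= 0 else (hi, lo)

a, b = (1.0, 3.0), (0.0, 5.0)
print(d_H(a, b))          # max(|1-0|, |3-5|) = 2.0
print(add(a, b))          # (1.0, 8.0)
print(scale(-2.0, a))     # (-6.0, -2.0)
```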


(i) supp u is compact;

(ii) [u]1 ≠ ∅;

(iii) u is quasiconcave.

Therefore, if u ∈ U then for each α ∈ (0; 1], [u]α is an interval of R, denoted by [u]α = [ [u]α^(1); [u]α^(2) ], where [u]α^(1) and [u]α^(2) are the two end points of the interval. Moreover, it is clear that for each α ∈ [0; 1), [u]α+ is also an interval of R. We denote [u]α+ = [ [u]α+^(1); [u]α+^(2) ], where [u]α+^(1) = lim_{1≥β>α, β↓α} [u]β^(1) and [u]α+^(2) = lim_{1≥β>α, β↓α} [u]β^(2).

Note that in other settings the range of u is not always equal to [0; 1], and condition (ii) above does not necessarily hold. The following proposition summarizes the properties of an element u ∈ U.

Proposition 2.1 (Goetschel and Voxman [9]) For u ∈ U, denote u^(1)(α) = [u]α^(1) and u^(2)(α) = [u]α^(2), considered as functions of α ∈ [0; 1]. Then the following hold:

(1) u^(1) is a bounded nondecreasing function on [0; 1];

(2) u^(2) is a bounded nonincreasing function on [0; 1];

(3) u^(1)(1) ≤ u^(2)(1);

(4) u^(1)(α) and u^(2)(α) are left continuous on (0; 1] and right continuous at 0;

(5) if v^(1) and v^(2) satisfy (1)-(4) above, then there exists a unique v ∈ U such that [v]α = [ v^(1)(α); v^(2)(α) ] for all α ∈ [0; 1].

The above proposition shows that an element u ∈ U is completely determined by its whole family of α-level sets.
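Proposition 2.1(5) says an element of U can be built from a pair of endpoint functions. The sketch below (a hypothetical triangular fuzzy number, not from the paper) recovers the membership value u(x) = sup{α : x ∈ [u]α} by a grid search:

```python
# Sketch (illustrative, not from the paper): a triangular fuzzy number
# u with supp u = [a; c] and [u]_1 = {b}, represented by its endpoint
# functions u1(alpha), u2(alpha) as in Proposition 2.1.
a, b, c = 0.0, 1.0, 3.0
u1 = lambda alpha: a + alpha * (b - a)   # nondecreasing, left continuous
u2 = lambda alpha: c - alpha * (c - b)   # nonincreasing, left continuous

def membership(x, steps=10_000):
    # u(x) = sup{alpha in (0, 1] : u1(alpha) <= x <= u2(alpha)}
    best = 0.0
    for i in range(1, steps + 1):
        alpha = i / steps
        if u1(alpha) <= x <= u2(alpha):
            best = alpha
    return best

print(membership(1.0))   # at the peak b: 1.0
print(membership(2.0))   # on the right slope: 0.5
```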

The addition and scalar multiplication on U are defined by

    (u + v)(x) = sup_{y+z=x} min{u(y), v(z)},

    (λu)(x) = u(λ⁻¹x) if λ ≠ 0, and (λu)(x) = e0(x) if λ = 0,

where u, v ∈ U, λ ∈ R, and e0 = I{0} is the indicator function of {0}. Then for u, v ∈ U and λ ∈ R we have [u + v]α = [u]α + [v]α and [λu]α = λ[u]α for each α ∈ (0; 1].

The following metrics on U are often used: for u, v ∈ U,

    D∞(u, v) = sup_{α∈(0;1]} dH([u]α, [v]α) = sup_{α∈[0;1]} dH([u]α, [v]α),

    Dp(u, v) = ( ∫₀¹ dH^p([u]α, [v]α) dα )^{1/p},   1 ≤ p < ∞.

It is clear that

    Dp(u, v) ≤ Dq(u, v) ≤ D∞(u, v) for all u, v ∈ U and 1 ≤ p ≤ q < ∞.

It is known that the metric space (U, D∞) is complete but not separable, and (U, Dp) is separable but not complete. Moreover, when considering u.s.c. functions on R, the following metric is also used:

    D♯(u, v) = dH( ∫₀¹ [u]α dα, ∫₀¹ [v]α dα ),

where ∫₀¹ [u]α dα = [ ∫₀¹ [u]α^(1) dα ; ∫₀¹ [u]α^(2) dα ] is an element of K. By simple estimations, we can see that

    D♯(u, v) ≤ Dp(u, v) for all u, v ∈ U and all p ≥ 1.

For u ∈ U, denote ‖u‖∞ = D∞(u, e0), ‖u‖p = Dp(u, e0) and ‖u‖♯ = D♯(u, e0).

We define the mapping ⟨·, ·⟩ : K × K → R by the equation

    ⟨a, b⟩ = (1/2)( a^(1)b^(1) + a^(2)b^(2) ),   where a = [a^(1); a^(2)], b = [b^(1); b^(2)].

For a, b ∈ K, we define

    d*(a, b) = ( ⟨a, a⟩ − 2⟨a, b⟩ + ⟨b, b⟩ )^{1/2} = ( (1/2)[ (a^(1) − b^(1))² + (a^(2) − b^(2))² ] )^{1/2},


and this implies that d* is a metric on K. On the other hand, we have the estimation

    dH²(a, b) ≤ 2d*²(a, b) ≤ 2dH²(a, b) for all a, b ∈ K,

and hence the two metrics dH and d* are equivalent. The metric space (K, d*) is complete and separable, which follows from the completeness and separability of (K, dH).
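The inequality dH² ≤ 2d*² ≤ 2dH² can be checked numerically. The following sketch (illustrative, with assumed helper names) samples random intervals:

```python
import random

# Sketch (not from the paper): the inner product <a,b> and metric d_* on
# intervals, with a numerical check of d_H^2 <= 2 d_*^2 <= 2 d_H^2.
def inner(a, b):
    return 0.5 * (a[0] * b[0] + a[1] * b[1])

def d_star(a, b):
    return (inner(a, a) - 2 * inner(a, b) + inner(b, b)) ** 0.5

def d_H(a, b):
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

random.seed(2)
ok = True
for _ in range(1000):
    lo1, lo2 = random.uniform(-5, 5), random.uniform(-5, 5)
    a = (lo1, lo1 + random.uniform(0, 5))
    b = (lo2, lo2 + random.uniform(0, 5))
    h2, s2 = d_H(a, b) ** 2, d_star(a, b) ** 2
    ok = ok and h2 <= 2 * s2 + 1e-9 and 2 * s2 <= 2 * h2 + 1e-9
print(ok)  # True
```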

Define ⟨·, ·⟩ : U × U → R by the equation

    ⟨u, v⟩ = ∫₀¹ ⟨[u]α, [v]α⟩ dα.

It is easy to see that the mapping ⟨·, ·⟩ has the following properties: (i) ⟨u, u⟩ ≥ 0 and ⟨u, u⟩ = 0 ⇔ u = e0; (ii) ⟨u, v⟩ = ⟨v, u⟩; (iii) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩; (iv) ⟨λu, v⟩ = λ⟨u, v⟩; (v) |⟨u, v⟩| ≤ (⟨u, u⟩⟨v, v⟩)^{1/2}, where u, v, w ∈ U and λ ≥ 0.

For u, v ∈ U, define

    D*(u, v) = ( ⟨u, u⟩ − 2⟨u, v⟩ + ⟨v, v⟩ )^{1/2} = ( ∫₀¹ d*²([u]α, [v]α) dα )^{1/2}.

It is clear that D* is a metric on U; moreover, it follows from the relation of dH and d* that

    D2²(u, v) ≤ 2D*²(u, v) ≤ 2D2²(u, v).

So D2 and D* are two equivalent metrics. We also deduce that (U, D*) is a separable metric space but not complete. For u ∈ U, denote ‖u‖* = D*(u, e0).

A mapping X : Ω → K is called a K-valued random variable if X⁻¹(B) ∈ F for all B ∈ B(K), where B(K) is the Borel σ-algebra on (K, dH). A mapping X : Ω → U is called a U-valued random variable (or random u.s.c. function) if [X]α is a K-valued random variable for all α ∈ (0; 1]. Note that this condition is equivalent to X being a (U, Dp)-valued random variable for any p ≥ 1, i.e., X⁻¹(B) ∈ F for all B ∈ B(U, Dp), where B(U, Dp) is the Borel σ-algebra on (U, Dp). For q > 0, denote by L^q_◊(U) the class of U-valued random variables X satisfying E‖X‖◊^q < ∞ (where the symbol ◊ stands for one of ∞, p, *, ♯). It is easy to see that

    L^q_∞(U) ⊂ L^q_p(U) ⊂ L^q_2(U) = L^q_*(U) ⊂ L^q_r(U) ⊂ L^q_♯(U)

for all q > 0 and 1 ≤ r ≤ 2 ≤ p < ∞. If X ∈ L¹_∞(U) then X is called D∞-integrable; this implies that [X]α^(1) and [X]α^(2) are integrable real-valued random variables for all α ∈ (0; 1].

Let X, Y be two U-valued random variables. X and Y are called level-wise independent if for each α ∈ (0; 1] the σ-algebras σ([X]α) and σ([Y]α) are independent; X and Y are called independent if the σ-algebras σ([X]α : 0 < α ≤ 1) and σ([Y]α : 0 < α ≤ 1) are independent. Independence and level-wise independence for an arbitrary collection of U-valued random variables are defined as usual. X and Y are called level-wise identically distributed if for each α ∈ (0; 1], [X]α and [Y]α are identically distributed; this is equivalent to ([X]α^(1); [X]α^(2)) and ([Y]α^(1); [Y]α^(2)) being identically distributed random vectors. It is clear that independence implies level-wise independence; however, the example below shows that the converse is not true.

Example 2.2 Let X, Y be two independent random variables such that X has uniform distribution on [−1; 0] and Y has uniform distribution on [0; 1] (i.e., X ∼ U[−1;0], Y ∼ U[0;1]). We construct two U-valued random variables F1, F2 as follows:

    F1(ω)(x) =
      (1/2)(x + 1)/(X(ω) + 1)                 if −1 ≤ x < X(ω),
      1/2                                     if X(ω) ≤ x < Y(ω),
      1/2 + (1/2)(x − Y(ω))/(1 − Y(ω))        if Y(ω) ≤ x ≤ 1,
      0                                       if x < −1 or x > 1,

and

    F2(ω)(x) =
      (1/2)(x + 1)/(1 − Y(ω))                 if −1 ≤ x < −Y(ω),
      1/2                                     if −Y(ω) ≤ x < −X(ω),
      1/2 + (1/2)(x + X(ω))/(1 + X(ω))        if −X(ω) ≤ x ≤ 1,
      0                                       if x < −1 or x > 1.


By simple calculations, we have

    [F1]α(ω) = [F1(ω)]α =
      [ 2α(X(ω) + 1) − 1 ; 1 ]                if 0 < α ≤ 1/2,
      [ (2α − 1)(1 − Y(ω)) + Y(ω) ; 1 ]       if 1/2 < α ≤ 1,

and

    [F2]α(ω) = [F2(ω)]α =
      [ 2α(−Y(ω) + 1) − 1 ; 1 ]               if 0 < α ≤ 1/2,
      [ (2α − 1)(1 + X(ω)) − X(ω) ; 1 ]       if 1/2 < α ≤ 1.

It is easy to see that [F1]α and [F2]α are independent for each α ∈ (0; 1] (by the independence of X, Y). Thus, F1, F2 are level-wise independent. However, σ{[F1]α, 0 < α ≤ 1} = σ{[F2]α, 0 < α ≤ 1} = σ{X, Y}, so F1 and F2 are not independent. Moreover, it follows from X ∼ U[−1;0], Y ∼ U[0;1] that −X and Y are identically distributed. Thus, [F1]α and [F2]α are identically distributed for each α ∈ (0; 1].
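A short simulation of Example 2.2 (an illustrative sketch, not from the paper): for a fixed α ≤ 1/2, the left endpoint of [F1]α depends only on X and that of [F2]α only on Y, so the two are independent; we check that their empirical correlation is near zero.

```python
import random

# Sketch (illustration of Example 2.2, assumed helper names): for a fixed
# alpha <= 1/2 the left endpoints of [F1]_alpha and [F2]_alpha depend only
# on X and Y respectively, hence are independent; for alpha > 1/2 the
# roles swap. We check the empirical correlation at alpha = 0.3.
random.seed(3)
alpha = 0.3
xs = [random.uniform(-1, 0) for _ in range(100_000)]
ys = [random.uniform(0, 1) for _ in range(100_000)]
f1_left = [2 * alpha * (x + 1) - 1 for x in xs]      # depends on X only
f2_left = [2 * alpha * (1 - y) - 1 for y in ys]      # depends on Y only

def corr(us, vs):
    n = len(us)
    mu, mv = sum(us) / n, sum(vs) / n
    cuv = sum((u - mu) * (v - mv) for u, v in zip(us, vs)) / n
    vu = sum((u - mu) ** 2 for u in us) / n
    vv = sum((v - mv) ** 2 for v in vs) / n
    return cuv / (vu * vv) ** 0.5

print(abs(corr(f1_left, f2_left)) < 0.02)  # True: empirically uncorrelated
```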

Let X be a D∞-integrable U-valued random variable. Then the expectation of X, denoted by EX, is defined as the u.s.c. function whose α-level set [EX]α is given by

    [EX]α = [ [EX]α^(1) ; [EX]α^(2) ] = [ E[X]α^(1) ; E[X]α^(2) ] for each α ∈ (0; 1].

For X, Y ∈ L¹_∞(U) ∩ L²_*(U), the notions of variance of X and covariance of X, Y were introduced in [6] (see also [5]) as follows:

    Cov(X, Y) = E⟨X, Y⟩ − ⟨EX, EY⟩ and Var X = Cov(X, X) = E⟨X, X⟩ − ⟨EX, EX⟩.

Now we recall some properties of the notions of variance and covariance.

Proposition 2.3 ([6]) Let X, Y be random u.s.c. functions in L¹_∞(U) ∩ L²_*(U). Then:

(1) Cov(X, Y) = (1/2) ∫₀¹ [ Cov([X]α^(1), [Y]α^(1)) + Cov([X]α^(2), [Y]α^(2)) ] dα, and consequently Var(X) = (1/2) ∫₀¹ [ Var([X]α^(1)) + Var([X]α^(2)) ] dα = E D*²(X, EX);

(2) Var(λX + u) = λ² Var X for λ ≥ 0 and u a (non-random) element of U;

(3) Cov(λX + u, µY + v) = λµ Cov(X, Y) for λ, µ ≥ 0 and u, v (non-random) elements of U;

(4) Var(X + Y) = Var X + Var Y + 2Cov(X, Y);

(5) Cov(X, Y) = 0 if X and Y are level-wise independent;

(6) Var(ξX) = (Var ξ)E⟨X, X⟩ + (Eξ)² Var X if ξ is a nonnegative real-valued random variable and ξ, X are independent.

3 Negatively associated sequence in U

Definition 3.1 Let {Xi, 1 ≤ i ≤ n} be a finite collection of K-valued random variables. Then {Xi, 1 ≤ i ≤ n} is said to be negatively associated if {Xi^(1), 1 ≤ i ≤ n} and {Xi^(2), 1 ≤ i ≤ n} are sequences of NA real-valued random variables. An infinite collection of K-valued random variables is NA if every finite subfamily is NA. Let {Xn, n ≥ 1} be a sequence of U-valued random variables. Then {Xn, n ≥ 1} is said to be level-wise negatively associated (level-wise NA) if {[Xn]α, n ≥ 1} is a sequence of NA K-valued random variables for every α ∈ (0; 1].

Example 3.2 (1) Let {Xn, n ≥ 1} be a collection of level-wise independent U-valued random variables. Then {[Xn]α^(1), n ≥ 1} and {[Xn]α^(2), n ≥ 1} are collections of independent real-valued random variables for every α ∈ (0; 1], and hence they are collections of NA real-valued random variables. Therefore, {Xn, n ≥ 1} is a sequence of level-wise NA U-valued random variables. On the other hand, it is not hard to show that there exist sequences which are level-wise NA but not level-wise independent via their end points. Thus, the class of level-wise NA U-valued random variables is strictly larger than the class of level-wise independent U-valued random variables.


(2) We consider a family of functions as follows: for i = 1, 2,

    fn^(i) : [0; 1] × R → R,   (α, x) ↦ fn^(i)(α, x).

- For each α ∈ [0; 1], the functions fn^(1)(α, ·) are simultaneously nondecreasing (or simultaneously nonincreasing) for all n ≥ 1, and the functions fn^(2)(α, ·) are simultaneously nondecreasing (or simultaneously nonincreasing) for all n ≥ 1; the monotonicity type of fn^(1)(α, ·) and that of fn^(2)(α, ·) may be chosen independently of each other.

- For each x ∈ R and each n ≥ 1, the functions fn^(1)(·, x) and fn^(2)(·, x) satisfy conditions (1)-(4) of Proposition 2.1 (where fn^(1)(·, x) plays the role of u^(1) and fn^(2)(·, x) the role of u^(2)).

We can point out many functions satisfying the above conditions, for instance fn^(1)(α, x) = α·e^{x/n}, fn^(2)(α, x) = (2 − α)·e^{(x+1)/n}; or fn^(1)(α, x) = arctan((α + nx)/(1 + nα²)), fn^(2)(α, x) = π/2 + arccot(α√n + nx), etc. Define the mappings f̃n : R → U as follows: for t ∈ R,

    [f̃n(t)]α = [ fn^(1)(α, t) ; fn^(2)(α, t) ] for all α ∈ [0; 1].

If {Xn, n ≥ 1} is a sequence of NA real-valued random variables, then {f̃n(Xn), n ≥ 1} is a sequence of level-wise NA U-valued random variables. In the special case when f̃n(t) is the indicator function of {t} (i.e., f̃n(t) = I{t} ∈ U for t ∈ R and all n ≥ 1), we have f̃n(Xn) = I{Xn} and [f̃n(Xn)]α = {Xn} = [Xn; Xn] for all α ∈ [0; 1]. Hence a sequence {Xn, n ≥ 1} of NA real-valued random variables can be regarded as a sequence of level-wise NA U-valued random variables.
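The lifting f̃n of Example 3.2(2) is easy to code. The sketch below (illustrative, using the first sample pair fn^(1)(α, x) = α·e^{x/n}, fn^(2)(α, x) = (2 − α)·e^{(x+1)/n}) maps a real number t to an α-level set of f̃n(t):

```python
import math

# Sketch (illustrating Example 3.2(2), assumed helper name): lift a real
# number t to the alpha-level set of f~_n(t) in U.
def level_set(n, t, alpha):
    f1 = alpha * math.exp(t / n)              # nondecreasing in t and alpha
    f2 = (2 - alpha) * math.exp((t + 1) / n)  # nondecreasing in t, nonincreasing in alpha
    return (f1, f2)  # always f1 <= f2, so this is a valid interval

lo, hi = level_set(1, 0.0, 0.5)
print(lo, hi)   # 0.5 and 1.5 * e
```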

Proposition 3.3 Let X, Y ∈ L¹_∞(U) ∩ L²_*(U) be two level-wise NA U-valued random variables. Then we have Cov(X, Y) ≤ 0, or in other words E⟨X, Y⟩ ≤ ⟨EX, EY⟩.

Proof. It follows from the property of NA real-valued random variables that

    Cov([X]α^(i), [Y]α^(i)) ≤ 0 for all α ∈ (0; 1] and i = 1, 2.

Combining this with Proposition 2.3(1) implies Cov(X, Y) ≤ 0. □

The following theorem establishes a Hájek–Rényi type inequality for level-wise NA U-valued random variables; it plays the key role in establishing the laws of large numbers. A version for real-valued random variables was given in [17]. Note that this result is obtained in the setting of the metric D*.

Theorem 3.4 Let {bn, n ≥ 1} be a nondecreasing sequence of positive real numbers. Assume that {Xn, n ≥ 1} is a sequence of D∞-integrable, level-wise NA U-valued random variables with E‖Xn‖*² < ∞, n ≥ 1. Then, for ε > 0 and any 1 ≤ m ≤ n we have

    P( max_{m≤k≤n} (1/b_k) D*(S_k, ES_k) > ε ) ≤ (4/(ε²b_m²)) ∑_{i=1}^{m} Var X_i + (32/ε²) ∑_{j=m+1}^{n} Var X_j / b_j²,

where S_n = ∑_{i=1}^{n} X_i.
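Theorem 3.4 can be probed by simulation in the singleton-level case, where D*(S_k, ES_k) reduces to |S_k − ES_k| and Var X_i is the usual variance; since independent variables are NA, i.i.d. uniforms qualify. A hedged Monte Carlo sketch (illustrative parameters):

```python
import random

# Sketch (Monte Carlo check of Theorem 3.4, assumed reduction): embed NA
# real random variables as U-valued ones with singleton level sets, so
# D*(S_k, ES_k) = |S_k - ES_k| and Var X_i is the usual variance.
random.seed(5)
n, m, eps, reps = 20, 5, 2.0, 20_000
b = [float(k) for k in range(1, n + 1)]           # b_k = k, nondecreasing

hits = 0
for _ in range(reps):
    xs = [random.random() for _ in range(n)]
    s, worst = 0.0, 0.0
    for k in range(n):
        s += xs[k] - 0.5                          # S_k - ES_k
        if k + 1 >= m:
            worst = max(worst, abs(s) / b[k])
    hits += worst > eps

lhs = hits / reps
var = 1 / 12                                      # Var of U[0, 1]
rhs = 4 / (eps**2 * b[m - 1] ** 2) * m * var + 32 / eps**2 * sum(
    var / b[j] ** 2 for j in range(m, n))
print(lhs <= rhs)  # True: the bound holds
```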

Proof. We have

    P( max_{m≤k≤n} (1/b_k) D*(S_k, ES_k) > ε )
      ≤ P( max_{m≤k≤n} (1/b_k) D*(S_m, ES_m) > ε/2 ) + P( max_{m≤k≤n} (1/b_k) D*( ∑_{j=m+1}^{k} X_j, ∑_{j=m+1}^{k} EX_j ) > ε/2 )
      = P( (1/b_m) D*(S_m, ES_m) > ε/2 ) + P( max_{m≤k≤n} (1/b_k) D*( ∑_{j=m+1}^{k} X_j, ∑_{j=m+1}^{k} EX_j ) > ε/2 )
      := (I1) + (I2).


For (I1), by Markov's inequality and Proposition 2.3(1), we have

    (I1) ≤ (4/(ε²b_m²)) E D*²(S_m, ES_m) = (2/(ε²b_m²)) ∫₀¹ [ Var([S_m]α^(1)) + Var([S_m]α^(2)) ] dα.

Since for each α ∈ (0; 1], {[X_n]α^(1), n ≥ 1} and {[X_n]α^(2), n ≥ 1} are sequences of NA real-valued random variables,

    (I1) ≤ (2/(ε²b_m²)) ∫₀¹ ∑_{i=1}^{m} [ Var([X_i]α^(1)) + Var([X_i]α^(2)) ] dα = (4/(ε²b_m²)) ∑_{i=1}^{m} E D*²(X_i, EX_i).

For (I2), putting S'_k = ∑_{j=1}^{k} X_{j+m}, we have

    (1/b²_{k+m}) D*²(S'_k, ES'_k) = (1/b²_{k+m}) ∫₀¹ d*²([S'_k]α, [ES'_k]α) dα
      = ∫₀¹ (1/(2b²_{k+m})) [ ( [S'_k]α^(1) − [ES'_k]α^(1) )² + ( [S'_k]α^(2) − [ES'_k]α^(2) )² ] dα
      = (1/2) ∫₀¹ [ ( ∑_{j=1}^{k} ([X_{j+m}]α^(1) − E[X_{j+m}]α^(1)) / b_{k+m} )² + ( ∑_{j=1}^{k} ([X_{j+m}]α^(2) − E[X_{j+m}]α^(2)) / b_{k+m} )² ] dα.   (3.2)

Denote Y_j⁻ = [X_{j+m}]α^(1), Y_j⁺ = [X_{j+m}]α^(2). We obtain

    | ∑_{j=1}^{k} (Y_j^± − EY_j^±) | / b_{k+m}
      = (1/b_{k+m}) | ∑_{j=1}^{k} b_{j+m} · (Y_j^± − EY_j^±)/b_{j+m} |
      = (1/b_{k+m}) | ∑_{j=1}^{k} [ ∑_{i=1}^{j} (b_{i+m} − b_{i+m−1}) ] (Y_j^± − EY_j^±)/b_{j+m} + b_m ∑_{j=1}^{k} (Y_j^± − EY_j^±)/b_{j+m} |
      = (1/b_{k+m}) | ∑_{i=1}^{k} (b_{i+m} − b_{i+m−1}) ∑_{i≤j≤k} (Y_j^± − EY_j^±)/b_{j+m} + b_m ∑_{j=1}^{k} (Y_j^± − EY_j^±)/b_{j+m} |
      ≤ max_{1≤i≤k} | ∑_{i≤j≤k} (Y_j^± − EY_j^±)/b_{j+m} |
      = max_{1≤i≤k} | ∑_{1≤j≤k} (Y_j^± − EY_j^±)/b_{j+m} − ∑_{1≤j<i} (Y_j^± − EY_j^±)/b_{j+m} |
      ≤ 2 max_{1≤i≤k} | ∑_{j=1}^{i} (Y_j^± − EY_j^±)/b_{j+m} |.   (3.3)

It follows from (3.2) and (3.3) that

    (1/b²_{k+m}) D*²(S'_k, ES'_k) ≤ 2 ∫₀¹ [ ( max_{1≤i≤k} | ∑_{j=1}^{i} (Y_j⁻ − EY_j⁻)/b_{j+m} | )² + ( max_{1≤i≤k} | ∑_{j=1}^{i} (Y_j⁺ − EY_j⁺)/b_{j+m} | )² ] dα.

Therefore,

max

m6k6n

1

b2 k

D∗2

X

j=m+1

Xj,

k

X

j=m+1

EXj



16k6n−m

1

b2 k+m

D2∗(Sk0, ESk0)

6 2

Z 1

0

 max

16i6n−m

i

X

j=1

Yj−− EYj−

bj+m

2

16i6n−m

i

X

j=1

Yj+− EY+

j

bj+m

2


This implies that

    (I2) = P( max_{m≤k≤n} (1/b_k²) D*²( ∑_{j=m+1}^{k} X_j, ∑_{j=m+1}^{k} EX_j ) > ε²/4 )
      ≤ P( 2 ∫₀¹ [ ( max_{1≤i≤n−m} | ∑_{j=1}^{i} (Y_j⁻ − EY_j⁻)/b_{j+m} | )² + ( max_{1≤i≤n−m} | ∑_{j=1}^{i} (Y_j⁺ − EY_j⁺)/b_{j+m} | )² ] dα > ε²/4 )
      ≤ P( ∫₀¹ ( max_{1≤i≤n−m} | ∑_{j=1}^{i} (Y_j⁻ − EY_j⁻)/b_{j+m} | )² dα > ε²/16 ) + P( ∫₀¹ ( max_{1≤i≤n−m} | ∑_{j=1}^{i} (Y_j⁺ − EY_j⁺)/b_{j+m} | )² dα > ε²/16 )
      ≤ (16/ε²) ∫_Ω ∫₀¹ [ ( max_{1≤i≤n−m} | ∑_{j=1}^{i} (Y_j⁻ − EY_j⁻)/b_{j+m} | )² + ( max_{1≤i≤n−m} | ∑_{j=1}^{i} (Y_j⁺ − EY_j⁺)/b_{j+m} | )² ] dα dP
      = (16/ε²) ∫₀¹ [ E max_{1≤i≤n−m} | ∑_{j=1}^{i} (Y_j⁻ − EY_j⁻)/b_{j+m} |² + E max_{1≤i≤n−m} | ∑_{j=1}^{i} (Y_j⁺ − EY_j⁺)/b_{j+m} |² ] dα (by the Fubini theorem).

Again, for each α ∈ (0; 1], {[X_n]α^(1), n ≥ 1} and {[X_n]α^(2), n ≥ 1} are sequences of NA real-valued random variables, and so are {b⁻¹_{j+m} Y_j⁻, 1 ≤ j ≤ n − m} and {b⁻¹_{j+m} Y_j⁺, 1 ≤ j ≤ n − m}. It follows from Theorem 2 of [24] that

    (I2) ≤ (16/ε²) ∫₀¹ ∑_{j=1}^{n−m} [ E(Y_j⁻ − EY_j⁻)² + E(Y_j⁺ − EY_j⁺)² ] / b²_{j+m} dα
      = (32/ε²) ∫₀¹ ∑_{j=1}^{n−m} E d*²([X_{j+m}]α, [EX_{j+m}]α) / b²_{j+m} dα
      = (32/ε²) ∑_{j=m+1}^{n} E D*²(X_j, EX_j) / b_j². □

Theorem 3.5 Let 0 < b_k ↑ ∞ and let {Xn, n ≥ 1} be a sequence of D∞-integrable, level-wise NA U-valued random variables with E‖Xn‖*² < ∞, n ≥ 1, such that ∑_{k=1}^{∞} b_k⁻² Var X_k < ∞. Put S_n = ∑_{i=1}^{n} X_i. Then:

(a) b_n⁻¹ D*(S_n, ES_n) → 0 a.s. as n → ∞, i.e., the strong law of large numbers holds;

(b) E sup_{n≥1} ( b_n⁻¹ D*(S_n, ES_n) )^r < ∞ for all r ∈ (0; 2).

Proof. (a) For ε > 0, by Theorem 3.4 we obtain

    P( sup_{k≥m} b_k⁻¹ D*(S_k, ES_k) > ε ) = lim_{n→∞} P( max_{m≤k≤n} b_k⁻¹ D*(S_k, ES_k) > ε )
      ≤ lim_{n→∞} [ (4/(ε²b_m²)) ∑_{i=1}^{m} Var X_i + (32/ε²) ∑_{k=m+1}^{n} Var X_k / b_k² ]
      ≤ (4/(ε²b_m²)) ∑_{i=1}^{m} Var X_i + (32/ε²) ∑_{k=m+1}^{∞} Var X_k / b_k².

Letting m → ∞ in the above estimation, the first term on the right-hand side tends to zero by Kronecker's lemma, and the second term tends to zero by the hypothesis. So we get conclusion (a).
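As a quick numerical illustration of part (a) (a sketch, not from the paper), take b_n = n and i.i.d. (hence level-wise NA) U[0; 1] summands with singleton level sets, so that ∑ Var X_k / k² < ∞ and D*(S_n, ES_n)/n = |S_n − n/2|/n should be small for large n:

```python
import random

# Sketch (illustration of Theorem 3.5(a) in the singleton-level case,
# assumed reduction): with b_n = n, the normalized deviation of the sum
# of n i.i.d. U[0, 1] variables from its mean should be near zero.
random.seed(6)
n = 200_000
s = 0.0
for _ in range(n):
    s += random.random()
dev = abs(s - n / 2) / n
print(dev < 0.01)  # True with overwhelming probability
```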

(b) We have

    E sup_{n≥1} ( b_n⁻¹ D*(S_n, ES_n) )^r ≤ 1 + ∑_{k=1}^{∞} P( sup_{n≥1} b_n⁻¹ D*(S_n, ES_n) > k^{1/r} )
      ≤ 1 + 32 ∑_{n=1}^{∞} (Var X_n / b_n²) ∑_{k=1}^{∞} k^{−2/r} < ∞.


The conclusion (b) is proved. □

From Theorem 3.5, we immediately deduce the following corollary with b_n = n^{1/t}, 0 < t < 2.

Corollary 3.6 Let {Xn, n ≥ 1} be a sequence of D∞-integrable, level-wise NA U-valued random variables with E‖Xn‖*² < ∞, n ≥ 1, such that sup_n Var X_n < ∞. Put S_n = ∑_{i=1}^{n} X_i. Then for 0 < t < 2,

    D*(S_n, ES_n) / n^{1/t} → 0 a.s. as n → ∞, and E sup_{n≥1} ( D*(S_n, ES_n) / n^{1/t} )^r < ∞ for all r ∈ (0; 2).

For the weak law of large numbers, we obtain the following theorem.

Theorem 3.7 Let {Xn, n ≥ 1} be a sequence of D∞-integrable, level-wise NA U-valued random variables with E‖Xn‖*² < ∞, n ≥ 1, and let 0 < b_n ↑ ∞. Put S_n = ∑_{i=1}^{n} X_i. If b_n⁻² ∑_{k=1}^{n} Var X_k → 0 as n → ∞, then we have the weak law of large numbers b_n⁻¹ max_{1≤k≤n} D*(S_k, ES_k) → 0 in probability as n → ∞; in particular, b_n⁻¹ D*(S_n, ES_n) → 0 in probability as n → ∞.

Proof. For arbitrary ε > 0, by Markov's inequality we have

    P( b_n⁻¹ max_{1≤k≤n} D*(S_k, ES_k) > ε ) ≤ (1/(ε²b_n²)) E ( max_{1≤k≤n} D*(S_k, ES_k) )².

So the weak law of large numbers holds if the inequality

    E ( max_{1≤k≤n} D*(S_k, ES_k) )² ≤ C ∑_{k=1}^{n} Var X_k

holds, where C is some constant which does not depend on n. Indeed,

    E ( max_{1≤k≤n} D*(S_k, ES_k) )² = E max_{1≤k≤n} D*²(S_k, ES_k)
      = ∫_Ω max_{1≤k≤n} ∫₀¹ d*²([S_k]α, [ES_k]α) dα dP ≤ ∫_Ω ∫₀¹ max_{1≤k≤n} d*²([S_k]α, [ES_k]α) dα dP
      = (1/2) ∫₀¹ E max_{1≤k≤n} ( ∑_{i=1}^{k} ([X_i]α^(1) − E[X_i]α^(1)) )² dα + (1/2) ∫₀¹ E max_{1≤k≤n} ( ∑_{i=1}^{k} ([X_i]α^(2) − E[X_i]α^(2)) )² dα
      ≤ (C/2) ∫₀¹ ∑_{k=1}^{n} E( [X_k]α^(1) − E[X_k]α^(1) )² dα + (C/2) ∫₀¹ ∑_{k=1}^{n} E( [X_k]α^(2) − E[X_k]α^(2) )² dα (by Theorem 2 of Shao [24])
      = C ∑_{k=1}^{n} E D*²(X_k, EX_k) = C ∑_{k=1}^{n} Var X_k (by Proposition 2.3(1)). □

4 Negatively dependent sequence in U

In this section, we discuss the notions of negative dependence, extended negative dependence and pairwise negative dependence for random variables taking values in U.

Definition 4.1 Let {Xi, 1 ≤ i ≤ n} be a finite collection of K-valued random variables. Then {Xi, 1 ≤ i ≤ n} is said to be negatively dependent (extended negatively dependent, pairwise negatively dependent) if {Xi^(1), 1 ≤ i ≤ n} and {Xi^(2), 1 ≤ i ≤ n} are sequences of ND (resp. END, PND) real-valued random variables. An infinite collection of K-valued random variables is ND (END, PND) if every finite subfamily is ND (resp. END, PND). Let {Xn, n ≥ 1} be a sequence of U-valued random variables. Then {Xn, n ≥ 1} is said to be level-wise ND (level-wise END, level-wise PND) if {[Xn]α, n ≥ 1} is a sequence of ND (resp. END, PND) K-valued random variables for every α ∈ (0; 1].


Illustrative examples for these notions can be constructed analogously to Example 3.2. Now we consider partial order relations ≼k (≼u) on K (resp. U) defined as follows:

- For a, b ∈ K with a = [a^(1); a^(2)] and b = [b^(1); b^(2)], a ≼k b if and only if a^(1) ≤ b^(1) and a^(2) ≤ b^(2).

- For v, w ∈ U, v ≼u w if and only if [v]α ≼k [w]α for all α ∈ (0; 1].

It follows from Proposition 2.1(1)-(2) that if v ≼u w then [v]α+ ≼k [w]α+ for all α ∈ [0; 1).
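The partial orders ≼k and ≼u are simple coordinatewise comparisons; a sketch (illustrative helper names, not from the paper):

```python
# Sketch (not from the paper): the partial orders of Section 4 on intervals
# and their level-wise lift to u.s.c. functions.
def leq_k(a, b):
    return a[0] <= b[0] and a[1] <= b[1]

def leq_u(levels_v, levels_w):
    # levels_*: dict alpha -> interval; compare level-wise
    return all(leq_k(levels_v[al], levels_w[al]) for al in levels_v)

print(leq_k((0.0, 2.0), (1.0, 2.5)))   # True
print(leq_k((0.0, 3.0), (1.0, 2.5)))   # False: right endpoints disagree

vs = {0.5: (0.0, 1.0), 1.0: (0.2, 0.8)}
ws = {0.5: (0.1, 1.1), 1.0: (0.3, 0.9)}
print(leq_u(vs, ws))                   # True: holds at every level
```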

Proposition 4.2 (1) Let X, Y ∈ L¹_∞(U) ∩ L²_*(U) be two level-wise PND U-valued random variables. Then we have Cov(X, Y) ≤ 0, or in other words E⟨X, Y⟩ ≤ ⟨EX, EY⟩.

(2) If X, Y are two U-valued random variables satisfying, for each α ∈ (0; 1],

    P( [X]α ≼k a, [Y]α ≼k b ) ≤ P( [X]α ≼k a ) P( [Y]α ≼k b ) for all a, b ∈ K,

then X, Y are level-wise PND.

Proof. (1) From Proposition 3.3, this conclusion is clear.

(2) For each α ∈ (0; 1] and any x, y ∈ R, we have

    P( [X]α^(1) ≤ x, [Y]α^(1) ≤ y ) = P( ∪_{n=1}^{∞} { [X]α ≼k [x; n], [Y]α ≼k [y; n] } )
      = lim_{n→∞} P( [X]α ≼k [x; n], [Y]α ≼k [y; n] )
      ≤ lim_{n→∞} P( [X]α ≼k [x; n] ) P( [Y]α ≼k [y; n] )
      = P( [X]α^(1) ≤ x ) P( [Y]α^(1) ≤ y ).

Similarly,

    P( [X]α^(2) ≤ x, [Y]α^(2) ≤ y ) = P( [X]α ≼k [x; x], [Y]α ≼k [y; y] )
      ≤ P( [X]α ≼k [x; x] ) P( [Y]α ≼k [y; y] ) = P( [X]α^(2) ≤ x ) P( [Y]α^(2) ≤ y ). □

By applying the corresponding results for sequences of ND real-valued random variables (Theorem 3.1 and Corollary 3.2 of [12]) and revising the steps of the proof of Theorem 3.4 above, we get the following theorem.

Theorem 4.3 Assume that {Xn, n ≥ 1} ⊂ L¹_∞(U) ∩ L²_*(U) is a sequence of level-wise ND U-valued random variables. Put S_n = ∑_{i=1}^{n} X_i. Then there exists a constant C which does not depend on n such that

    E D*²(S_n, ES_n) ≤ C ∑_{i=1}^{n} Var X_i,

    E max_{1≤k≤n} D*²(S_k, ES_k) ≤ ( log n / log 3 + 2 )² ∑_{i=1}^{n} Var X_i ≤ C log² n ∑_{i=1}^{n} Var X_i,

    P( max_{m≤k≤n} (1/b_k) D*(S_k, ES_k) > ε ) ≤ (32/ε²) ( log n / log 3 + 2 )² [ (1/b_m²) ∑_{i=1}^{m} Var X_i + ∑_{j=m+1}^{n} Var X_j / b_j² ]
      ≤ (C/ε²) log² n [ (1/b_m²) ∑_{i=1}^{m} Var X_i + ∑_{j=m+1}^{n} Var X_j / b_j² ].

Theorem 4.3 provides some inequalities in the form of a maximal inequality and a Hájek–Rényi type inequality for U-valued random variables with respect to D*. Therefore, in the same way as in Theorems 3.5 and 3.7, we deduce the corresponding laws of large numbers.
