
NON-AUTONOMOUS STOCHASTIC DIFFERENTIAL EQUATIONS WITH MARKOVIAN SWITCHING

NGUYEN THANH DIEU

Abstract. This paper studies both non-autonomous stochastic differential equations and stochastic differential delay equations with Markovian switching. A new result on almost sure stability of stochastic differential equations is given. Moreover, we provide new conditions for tightness and almost sure stability of stochastic differential delay equations.

1991 Mathematics Subject Classification. 34K50, 34K20, 65C30, 60J10.

Key words and phrases. Stochastic differential delay equations; stability in distribution; Itô's formula; Markov switching.

1. SDDEs with Markovian switching

Due to increasing demands from real systems and phenomena in which both continuous dynamics and discrete events are involved, hybrid models have been considered increasingly for decades. If random factors in terms of white noise and Markov chains are taken into account, stochastic differential equations with regime switching are used for modeling. These equations have numerous applications in many branches of science and industry such as manufacturing systems, financial engineering, genetic technologies, and ecology; see [2, 4, 7, 14, 15, 21, 22, 24, 25, 31, 32] among others. If we suppose that the discrete component is a finite Markov process that does not depend on the state of the continuous components, we have a stochastic differential equation (SDE) with Markovian switching. This kind of differential equation has received a lot of attention (see e.g. [1, 5, 6, 10, 13, 18, 19, 26, 27]). However, it has been pointed out that many systems arising from science and technology depend not only on the present but also on the past. This fact results in the need for thorough research on stochastic differential delay equations (SDDEs) with Markovian switching. For the past two decades, SDDEs with Markovian switching have been studied very frequently in the literature, focusing mainly on types of stability and boundedness (see e.g. [3, 9, 8, 11, 13, 20, 23, 28]). Continuing these studies, this paper considers both non-autonomous SDEs and SDDEs with Markovian switching.

2. Almost sure stability of SDEs with Markovian switching

In this section, we deal with sufficient conditions for the almost sure stability of SDEs with Markovian switching of the form

$$dX(t) = f(X(t), r(t), t)\,dt + g(X(t), r(t), t)\,dB(t) \tag{2.1}$$

on a probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, P)$ satisfying the usual conditions, where $f: \mathbb{R}^n \times S \times \mathbb{R}_+ \to \mathbb{R}^n$, $g: \mathbb{R}^n \times S \times \mathbb{R}_+ \to \mathbb{R}^{n\times m}$, $B(t) = (B_1(t), \dots, B_m(t))^T$ is an $m$-dimensional Brownian motion, and $r(t)$ is a Markov chain taking values in a finite state space $S = \{1, 2, \dots, N\}$ with generator $\Gamma = (\gamma_{ij})_{N\times N}$, $\gamma_{ij} > 0$ if $i \ne j$, such that $r(\cdot)$ is independent of $B(\cdot)$. Moreover, $B(t)$ and $r(t)$ are $\mathcal{F}_t$-adapted. Let $C^{2,1}(\mathbb{R}^n \times S \times \mathbb{R}_+; \mathbb{R}_+)$ denote the family of nonnegative functions $V$ on $\mathbb{R}^n \times S \times \mathbb{R}_+$ which are twice continuously differentiable in $x$ and once continuously differentiable in $t$. For $V \in C^{2,1}(\mathbb{R}^n \times S \times \mathbb{R}_+; \mathbb{R}_+)$, we define

$$LV(x, i, t) = V_t(x, i, t) + \sum_{j=1}^{N}\gamma_{ij}V(x, j, t) + V_x(x, i, t)f(x, i, t) + \frac{1}{2}\operatorname{trace}\bigl[g^T(x, i, t)V_{xx}(x, i, t)g(x, i, t)\bigr],$$

where

$$V_t(x, i, t) = \frac{\partial V(x, i, t)}{\partial t}, \quad V_x(x, i, t) = \Bigl(\frac{\partial V(x, i, t)}{\partial x_1}, \dots, \frac{\partial V(x, i, t)}{\partial x_n}\Bigr), \quad V_{xx}(x, i, t) = \Bigl(\frac{\partial^2 V(x, i, t)}{\partial x_k\,\partial x_j}\Bigr)_{n\times n}. \tag{2.2}$$

Denote by $X^{x_0,i}(t)$ the solution to Equation (2.1) with initial data $X(0) = x_0$ and $r(0) = i$. For any two stopping times $0 \le \tau_1 \le \tau_2 < \infty$, it follows from the generalized Itô formula that

$$EV(X^{x_0,i}(\tau_2), r(\tau_2), \tau_2) = EV(X^{x_0,i}(\tau_1), r(\tau_1), \tau_1) + E\int_{\tau_1}^{\tau_2} LV(X^{x_0,i}(s), r(s), s)\,ds,$$

provided that the integrations involved exist and are finite.

In [12], the authors provided a criterion for stochastic asymptotic stability in the large of Equation (2.1), which is cited as the following theorem.

Theorem 2.1 ([12, Theorem 5.37, p. 205]). Assume that there are functions $V \in C^{2,1}(\mathbb{R}^n \times S \times \mathbb{R}_+; \mathbb{R}_+)$, $\mu_1, \mu_2 \in \mathcal{K}_\infty$ and $\mu_3 \in \mathcal{K}$ such that

$$\mu_1(|x|) \le V(x, i, t) \le \mu_2(|x|) \quad \text{and} \quad LV(x, i, t) \le -\mu_3(|x|) \tag{2.3}$$

for all $(x, i, t) \in \mathbb{R}^n \times S \times \mathbb{R}_+$. Then the trivial solution of Equation (2.1) is stochastically asymptotically stable in the large.

Although this theorem can be applied to many stochastic differential equations with Markovian switching, as demonstrated in [12], the condition (2.3), in which $LV(x, i, t)$ is required to be uniformly bounded above by a function of $|x|$, seems to be restrictive and may not be satisfied for many equations. Motivated by this comment, the main goal of this section is to weaken the aforesaid hypotheses. We always impose the following assumption.

Assumption 2.1. Suppose that Equation (2.1) has a unique global solution for any initial value $(x_0, i, t_0) \in \mathbb{R}^n \times S \times \mathbb{R}_+$ and that $f(0, i, t) = g(0, i, t) = 0$ for all $i \in S$, $t \in \mathbb{R}_+$. Furthermore, for each integer $k \ge 1$ there is a positive constant $h_k$ such that

$$|f(x, i, t) - f(y, i, t)| + |g(x, i, t) - g(y, i, t)| \le h_k|x - y| \tag{2.4}$$

for all $|x| \vee |y| \le k$, $i \in S$, $t \in \mathbb{R}_+$.

Assumption 2.2. There are two functions $V \in C^{2,1}(\mathbb{R}^n \times S \times \mathbb{R}_+; \mathbb{R}_+)$ and $w \in C(\mathbb{R}^n; \mathbb{R}_+)$ such that $w$ vanishes only at $0$ and, for any $(x, i, t) \in \mathbb{R}^n \times S \times \mathbb{R}_+$,

$$LV(x, i, t) \le \gamma(t) - \alpha(t)w(x), \tag{2.5}$$

where $\alpha(\cdot), \gamma(\cdot)$ are non-negative, continuous, bounded functions satisfying

$$\alpha_T = \inf_{t\in\mathbb{R}_+}\int_t^{t+T}\alpha(s)\,ds > 0 \ \text{ for some } T > 0, \qquad \text{and} \qquad \int_0^\infty\gamma(t)\,dt < \infty.$$
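For concreteness, one admissible pair (an illustrative choice, not one used later in the paper) is $\alpha(t) = \sin^+ t$ and $\gamma(t) = e^{-t}$: although $\alpha$ vanishes on half of every period, its average over any window of length $T = 2\pi$ is bounded away from zero,

$$\alpha_{2\pi} = \inf_{t\in\mathbb{R}_+}\int_t^{t+2\pi}\sin^+ s\,ds = \int_0^{\pi}\sin s\,ds = 2 > 0, \qquad \int_0^\infty e^{-t}\,dt = 1 < \infty,$$

so Assumption 2.2 does not require $LV$ to be negative definite at every instant, only on average.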

Lemma 2.2. Under Assumption 2.1, for any $\sigma > 0$, $\varepsilon > 0$, $T > 0$, there exists a $\delta > 0$ such that, for any $t > 0$,

$$\inf_{t\le s\le t+T} P\{|X^{x_0,i}(s)| \ge \delta\} > 1 - \varepsilon$$

provided that $|X^{x_0,i}(t)| \ge \sigma$.

Proof. Since $f(0, i, t) = g(0, i, t) = 0$, we have from (2.4) that $|f(x, i, t)| + |g(x, i, t)| \le h_1|x|$ for all $|x| \le 1$. Note that we can always construct a function $\varphi \in C^2((0, \infty); \mathbb{R}_+)$ satisfying $\varphi(r) = \frac{1}{r}$ if $0 < r \le 1$ and $\varphi(r) = 0$ for all sufficiently large $r$. For $0 < |x| \le 1$,

$$L\varphi(x) = -|x|^{-3}x^T f(x, i, t) + \frac{1}{2}\bigl(-|x|^{-3}|g(x, i, t)|^2 + 3|x|^{-5}|x^T g(x, i, t)|^2\bigr) \le |x|^{-2}|f(x, i, t)| + |x|^{-3}|g(x, i, t)|^2 \le (h_1 + h_1^2)\varphi(x).$$

It is easy to see that for $|x| \ge 1$ we can find a constant $H > 0$ (we choose $H > h_1 + h_1^2$) such that $L\varphi(x) \le H\varphi(x)$. So, for all $x \ne 0$, $L\varphi(x) \le H\varphi(x)$. For each $k \in \mathbb{N}$, define the stopping time

$$\varsigma^t_k = \inf\{s \ge t : \varphi(X^{x_0,i}(s)) > k\}.$$

Applying Itô's formula to $e^{-H(s-t)}\varphi(X^{x_0,i}(s))$, we have

$$E\,e^{-H((t+T)\wedge\varsigma^t_k - t)}\varphi\bigl(X^{x_0,i}((t+T)\wedge\varsigma^t_k)\bigr) = E\varphi(X^{x_0,i}(t)) + E\int_t^{(t+T)\wedge\varsigma^t_k} e^{-H(s-t)}\bigl(-H\varphi(X^{x_0,i}(s)) + L\varphi(X^{x_0,i}(s))\bigr)\,ds \le E\varphi(X^{x_0,i}(t)) \le \frac{1}{\sigma}.$$

This inequality implies that $\lim_{k\to\infty}\varsigma^t_k \ge t + T$ almost surely. Moreover, letting $k \to \infty$ we have $E\varphi(X^{x_0,i}(t+T)) \le \frac{e^{HT}}{\sigma}$. The conclusion of the lemma follows directly from this estimate. $\square$

Theorem 2.3. Let Assumptions 2.1 and 2.2 be satisfied. Then, for any initial value $(x_0, i)$, we have

$$P\bigl\{\lim_{t\to\infty}X^{x_0,i}(t) = 0\bigr\} = 1.$$


Proof. It suffices to show that for any $\sigma > 0$, $P\{\limsup_{t\to\infty}|X^{x_0,i}(t)| \le \sigma\} = 1$. Let $\hbar > 0$ and set

$$A^{\sigma,\hbar}_t = \{\sigma \le |X^{x_0,i}(t)| \le \hbar\}, \qquad b_{\sigma,\hbar} = \inf\{w(x) : \sigma \le |x| \le \hbar\}.$$

For each $n \in \mathbb{N}$, define the stopping time

$$T_n = \inf\{s > 0 : |X^{x_0,i}(s)| > n\} \wedge t.$$

In view of the Itô formula,

$$EV(X^{x_0,i}(T_n), r(T_n), T_n) \le E\int_0^{T_n}\gamma(s)\,ds + V(x_0, i, 0) - E\int_0^{T_n}\alpha(s)w(X^{x_0,i}(s))\,ds \le \int_0^{t}\gamma(s)\,ds + V(x_0, i, 0) - E\int_0^{T_n}\mathbf{1}_{A^{\sigma,\hbar}_s}\,\alpha(s)w(X^{x_0,i}(s))\,ds. \tag{2.6}$$

Consequently,

$$\int_0^t\alpha(s)P(A^{\sigma,\hbar}_s)\,ds \le \frac{1}{b_{\sigma,\hbar}}E\int_0^t\mathbf{1}_{A^{\sigma,\hbar}_s}\,\alpha(s)w(X^{x_0,i}(s))\,ds \le \lim_{k\to\infty}\frac{1}{b_{\sigma,\hbar}}E\int_0^{T_k}\mathbf{1}_{A^{\sigma,\hbar}_s}\,\alpha(s)w(X^{x_0,i}(s))\,ds \le \frac{1}{b_{\sigma,\hbar}}\Bigl(\int_0^t\gamma(s)\,ds + V(x_0, i, 0)\Bigr). \tag{2.7}$$

Letting $t \to \infty$, we have

$$\int_0^\infty\alpha(s)P(A^{\sigma,\hbar}_s)\,ds < \infty. \tag{2.8}$$

Suppose that $P(A^{\sigma,\hbar}_s)$ does not converge to $0$. Then there exist a constant $\ell > 0$ and a sequence $t_n \uparrow \infty$ such that $P\{A^{\sigma,\hbar}_{t_n}\} > \ell$ for all $n \in \mathbb{N}$. We can suppose without loss of generality that $t_{n+1} > t_n + T$, where $T$ is the constant satisfying $\alpha_T = \inf_{t\in\mathbb{R}_+}\int_t^{t+T}\alpha(s)\,ds > 0$. Using Lemma 2.2 and the Markov property of $X^{x_0,i}(t)$, we can find $\delta > 0$ such that $P(A^{\delta,\hbar}_s) > \frac{\ell}{2}$ for all $t_n \le s \le t_n + T$. Hence

$$\int_{t_n}^{t_n+T}\alpha(s)P(A^{\delta,\hbar}_s)\,ds \ge \frac{\ell}{2}\int_{t_n}^{t_n+T}\alpha(s)\,ds \ge \frac{\ell}{2}\alpha_T \quad \forall n \in \mathbb{N}.$$

Consequently,

$$\int_0^\infty\alpha(s)P(A^{\delta,\hbar}_s)\,ds = \infty,$$

which is a contradiction since the inequality (2.8) holds for any $\sigma, \hbar > 0$; in particular, it must hold for the pair $(\delta, \hbar)$. We therefore conclude that

$$\lim_{t\to\infty}P\{A^{\sigma,\hbar}_t\} = 0.$$

That means that for any $\sigma > 0$, $P\{\limsup_{t\to\infty}|X^{x_0,i}(t)| \le \sigma\} = 1$. The proof is complete. $\square$

Example 2.1. Let us consider Equation (2.1) with $S = \{1, 2\}$ and $r(t)$ with generator $\Gamma = (\gamma_{ij})_{2\times 2}$, $\gamma_{21} = 2$, $\gamma_{22} = -2$, while

$$f(x, 1, t) = -(0.5 + \sin^+ t)x, \qquad g(x, 1, t) = x,$$

and

$$f(x, 2, t) = -\frac{5x}{8}, \qquad g(x, 2, t) = \frac{x}{2}\cos t,$$

where $\sin^+ t = 0 \vee \sin t$. It is easy to see that $f$ and $g$ satisfy Assumption 2.1. Define

$$V(x, 1, t) = x^2, \qquad V(x, 2, t) = 4x^2.$$

By computation,

$$LV(x, 1, t) = -2x^2(0.5 + \sin^+ t) + x^2 = -2x^2\sin^+ t$$

and

$$LV(x, 2, t) = -5x^2 + 2x^2\cos^2 t = -3x^2 - 2x^2\sin^2 t.$$

It is easy to see that there is no nonnegative function $w(x)$ vanishing only at $0$ such that $LV(x, i, t) \le -w(x)$; that is, the condition of Theorem 2.1 is not satisfied. However, we have $LV(x, i, t) \le -2x^2\sin^+ t$, which means that Assumption 2.2 holds. In view of Theorem 2.3, Equation (2.1) is almost surely asymptotically stable. Approximating stochastic differential equations with Markovian switching by the Euler-Maruyama method as in [29], we can simulate a sample path of the solution to (2.1) for this example; it is illustrated by Figure 1.

Figure 1. Trajectories of $X(t)$ in Example 2.1.
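A sample path like the one in Figure 1 can be generated with a basic Euler-Maruyama discretisation of (2.1) together with a first-order approximation of the switching of $r(t)$. The sketch below is only an illustration: the step size, horizon and initial data are our own choices, and since only the entries $\gamma_{21} = 2$, $\gamma_{22} = -2$ of the generator are given above, the first row of $\Gamma$ used here is an assumption.

```python
import numpy as np

# Euler-Maruyama sketch for Example 2.1 (illustrative parameters only).
rng = np.random.default_rng(0)

Gamma = np.array([[-2.0, 2.0],    # first row assumed; not given in the example
                  [ 2.0, -2.0]])  # second row as in the example

def f(x, i, t):
    # drift in state 1 and state 2
    return -(0.5 + max(np.sin(t), 0.0)) * x if i == 0 else -5.0 * x / 8.0

def g(x, i, t):
    # diffusion in state 1 and state 2
    return x if i == 0 else 0.5 * x * np.cos(t)

dt, T = 1e-3, 20.0
x, i = 1.0, 0                     # X(0) = 1, r(0) = 1
path = [x]
for k in range(int(T / dt)):
    t = k * dt
    if rng.random() < -Gamma[i, i] * dt:   # switch with probability ~ gamma_i * dt
        i = 1 - i
    x += f(x, i, t) * dt + g(x, i, t) * np.sqrt(dt) * rng.normal()
    path.append(x)

print(path[-1])  # should be close to 0, in line with Theorem 2.3
```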

3. Tightness and almost sure stability of SDDEs with Markovian switching

In this section we consider sufficient conditions for the almost sure stability of SDDEs with Markovian switching of the form

$$dX(t) = f(X(t), X(t-\tau), r(t), t)\,dt + g(X(t), X(t-\tau), r(t), t)\,dB(t), \tag{3.1}$$

where $f: \mathbb{R}^n\times\mathbb{R}^n\times S\times\mathbb{R}_+\to\mathbb{R}^n$ and $g: \mathbb{R}^n\times\mathbb{R}^n\times S\times\mathbb{R}_+\to\mathbb{R}^{n\times m}$.
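Numerically, the only new ingredient compared with (2.1) is the delayed argument $X(t-\tau)$, which an Euler-Maruyama scheme reads from a buffer of past values. The sketch below illustrates this bookkeeping for a scalar equation; the drift, diffusion, switching rate, delay and step size are placeholders of ours, not data from the text.

```python
import numpy as np

# Euler-Maruyama sketch for a scalar SDDE with Markovian switching, cf. (3.1).
# All concrete choices below (f, g, q, tau, dt, the initial segment) are
# placeholders used only to show how X(t - tau) is taken from a history buffer.
rng = np.random.default_rng(1)

tau, dt, T = 1.0, 1e-3, 10.0
lag = int(tau / dt)                    # number of steps spanned by the delay

def f(x, y, i, t):                     # placeholder drift, y stands for X(t - tau)
    return -x + 0.5 * y if i == 0 else -2.0 * x

def g(x, y, i, t):                     # placeholder diffusion
    return 0.3 * x if i == 0 else 0.2 * y

q = 2.0                                # assumed switching rate out of each state
hist = [1.0] * (lag + 1)               # initial segment xi on [-tau, 0], here xi = 1
i = 0                                  # r(0) = 1
for k in range(int(T / dt)):
    t = k * dt
    x, y = hist[-1], hist[-1 - lag]    # X(t) and X(t - tau)
    if rng.random() < q * dt:
        i = 1 - i
    hist.append(x + f(x, y, i, t) * dt + g(x, y, i, t) * np.sqrt(dt) * rng.normal())

print(hist[-1])
```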

In order to prevent possible confusion, the notations in this paper are the same as in [30]; they are reintroduced below for convenience. Denote by $C([-\tau, 0]; \mathbb{R}^n)$ the family of continuous functions $\varphi(\cdot)$ from $[-\tau, 0]$ to $\mathbb{R}^n$ with norm $\|\varphi\| = \sup_{-\tau\le\theta\le 0}|\varphi(\theta)|$. For a continuous $\mathbb{R}^n$-valued stochastic process $X(t)$ on $t \in [-\tau, \infty)$, denote $X_t = \{X(t+\theta) : -\tau\le\theta\le 0\}$ for $t \ge 0$, so that $X_t$ is a stochastic process with state space $C([-\tau, 0]; \mathbb{R}^n)$. Moreover, for a subset $A \subset \Omega$, we denote by $\mathbf{1}_A$ the indicator function of $A$, i.e., $\mathbf{1}_A = 1$ if $\omega \in A$ and $\mathbf{1}_A = 0$ otherwise. For $V \in C^{2,1}(\mathbb{R}^n \times S \times \mathbb{R}_+; \mathbb{R}_+)$, we define

$$LV(x, y, i, t) = V_t(x, i, t) + \sum_{j=1}^{N}\gamma_{ij}V(x, j, t) + V_x(x, i, t)f(x, y, i, t) + \frac{1}{2}\operatorname{trace}\bigl[g^T(x, y, i, t)V_{xx}(x, i, t)g(x, y, i, t)\bigr],$$

where $V_t(x, i, t)$, $V_x(x, i, t)$, $V_{xx}(x, i, t)$ are defined by (2.2). Denote by $X^{\xi,i}(t)$ the solution to Equation (3.1) with initial data $X_0 = \xi \in C([-\tau, 0]; \mathbb{R}^n)$ and $r(0) = i$. We also denote by $r^i(t)$ the Markov chain starting in $i$. For any two stopping times $0 \le \tau_1 \le \tau_2 < \infty$, it follows from the generalized Itô formula that

$$EV(X^{\xi,i}(\tau_2), r^i(\tau_2), \tau_2) = EV(X^{\xi,i}(\tau_1), r^i(\tau_1), \tau_1) + E\int_{\tau_1}^{\tau_2} LV(X^{\xi,i}(s), X^{\xi,i}(s-\tau), r^i(s), s)\,ds,$$

provided that the integrations involved exist and are finite.

In [28], the authors provided a criterion for stochastic asymptotic stability in the large of Equation (3.1), which is cited as the following theorem.

Theorem 3.1 ([28, Theorem 2.1, p. 344]). Let Assumption 3.1 hold. Assume that there are functions $V \in C^{2,1}(\mathbb{R}^n\times S\times\mathbb{R}_+; \mathbb{R}_+)$, $\gamma \in L^1(\mathbb{R}_+; \mathbb{R}_+)$, $w_1, w_2 \in C(\mathbb{R}^n; \mathbb{R}_+)$ such that

$$LV(x, y, i, t) \le \gamma(t) - w_1(x) + w_2(y) \tag{3.2}$$

for any $(x, y, i, t) \in \mathbb{R}^n\times\mathbb{R}^n\times S\times\mathbb{R}_+$,

$$w_1(0) - w_2(0) = 0, \qquad w_1(x) > w_2(x) \quad \forall x \ne 0, \tag{3.3}$$

and

$$\lim_{y\to\infty}\Bigl(\inf_{|x|\ge y,\, i\in S,\, t\in\mathbb{R}_+}V(x, i, t)\Bigr) = \infty. \tag{3.4}$$

Then the trivial solution of Equation (3.1) is stochastically asymptotically stable in the large.

Although this theorem can be applied to many stochastic differential delay equations with Markovian switching, as demonstrated in [28], the condition (3.2) is restrictive because the last two terms $w_1(x)$ and $w_2(y)$ are required to be independent of the variable $t$. It is in fact not easy to construct a suitable Lyapunov function satisfying this condition when we consider non-autonomous SDDEs. Motivated by this comment, the main goal of this section is to weaken the hypotheses of Theorem 3.1.

Assumption 3.1 (The local Lipschitz condition). For each integer $k \ge 1$ there is a positive constant $h_k$ such that

$$|f(x, y, i, t) - f(\bar{x}, \bar{y}, i, t)|^2 + |g(x, y, i, t) - g(\bar{x}, \bar{y}, i, t)|^2 \le h_k\bigl(|x - \bar{x}|^2 + |y - \bar{y}|^2\bigr) \tag{3.5}$$

for all $|x| \vee |y| \vee |\bar{x}| \vee |\bar{y}| \le k$, $i \in S$, $t \in \mathbb{R}_+$.


Assumption 3.2. There are a positive number $c_1$ and functions $V \in C^{2,1}(\mathbb{R}^n\times S\times\mathbb{R}_+; \mathbb{R}_+)$, $w_1 \in C(\mathbb{R}^n\times[0, +\infty); \mathbb{R}_+)$, $w_2 \in C(\mathbb{R}^n\times[-\tau, +\infty); \mathbb{R}_+)$ and $w \in C(\mathbb{R}^n; \mathbb{R}_+)$ such that

$$\lim_{y\to\infty}\Bigl(\inf_{|x|\ge y,\, i\in S,\, t\in\mathbb{R}_+}V(x, i, t)\Bigr) = \infty, \tag{3.6}$$

$$w_1(x, t) - w_2(x, t) \ge \alpha(t)w(x), \qquad w_2(x, t) \le c_1\bigl(V(x, i, t) + \alpha(t)w(x)\bigr), \tag{3.7}$$

and

$$LV(x, y, i, t) \le \gamma(t) - w_1(x, t) + w_2(y, t-\tau) \tag{3.8}$$

for any $(x, y, i, t) \in \mathbb{R}^n\times\mathbb{R}^n\times S\times\mathbb{R}_+$, where $w(x) \ne 0$ for all $x \ne 0$ and $\alpha(\cdot), \gamma(\cdot)$ are non-negative, continuous functions satisfying

$$\alpha(\delta) = \inf_{t\in\mathbb{R}_+}\int_t^{t+\delta}\alpha(s)\,ds > 0 \quad \forall\delta > 0, \qquad \text{and} \qquad \int_0^\infty\gamma(t)\,dt < \infty.$$

To establish new sufficient conditions for the almost sure stability of Equation (3.1), we first give the following lemma.

Lemma 3.2. Let Assumptions 3.1 and 3.2 hold. For any $\xi \in C([-\tau, 0]; \mathbb{R}^n)$ and $i \in S$, there exists a unique global solution $X^{\xi,i}(t)$ to Equation (3.1) on $[0, \infty)$. Moreover,

a) there is $M > 0$ such that

$$EV(X^{\xi,i}(t), r^i(t), t) \le M \quad \forall t \ge 0;$$

b) for any $T > 0$, $\varepsilon > 0$, there exists a positive integer $H = H(T, \varepsilon)$ such that

$$P\bigl\{\|X^{\xi,i}_s\| \le H \ \ \forall s \in [t, t+T]\bigr\} \ge 1 - \varepsilon, \quad \forall t \ge 0.$$

Proof. Under Assumptions 3.1 and 3.2, the existence of a unique global solution follows from [13, Theorem 7.13, p. 280]. For each $k \in \mathbb{N}$, define the stopping time

$$\sigma_k = \inf\{t \ge 0 : |X^{\xi,i}(t)| > k\}.$$


Applying the generalized Itô formula to $V(X^{\xi,i}(t), r^i(t), t)$ and then using Assumption 3.2 yields

$$\begin{aligned}
EV(X^{\xi,i}(t\wedge\sigma_k),\, &r^i(t\wedge\sigma_k),\, t\wedge\sigma_k)\\
&= EV(X^{\xi,i}(0), r^i(0), 0) + E\int_0^{t\wedge\sigma_k} LV(X^{\xi,i}(s), X^{\xi,i}(s-\tau), r^i(s), s)\,ds\\
&\le \int_0^{t}\gamma(s)\,ds + V(\xi(0), i, 0) - E\int_0^{t\wedge\sigma_k}\alpha(s)w(X^{\xi,i}(s))\,ds + E\int_0^{t\wedge\sigma_k}\bigl(-w_2(X^{\xi,i}(s), s) + w_2(X^{\xi,i}(s-\tau), s-\tau)\bigr)\,ds\\
&\le \int_0^{t}\gamma(s)\,ds + V(\xi(0), i, 0) - E\int_0^{t\wedge\sigma_k}\alpha(s)w(X^{\xi,i}(s))\,ds + E\int_0^{\tau}w_2(X^{\xi,i}(s-\tau), s-\tau)\,ds\\
&\le \int_0^{\infty}\gamma(s)\,ds + V(\xi(0), i, 0) + \int_{-\tau}^{0}w_2(\xi(s), s)\,ds := M < \infty. \tag{3.9}
\end{aligned}$$

Letting $k \to \infty$, we obtain item a). Now we move on to item b). Note that (3.9) implies that

$$E\int_0^t\alpha(s)w(X^{\xi,i}(s))\,ds \le M \quad \forall t \ge 0. \tag{3.10}$$

Since $X^{\xi,i}(s) = \xi(s)$ if $s \le 0$, it follows from (3.10) and (3.7) that

$$E\int_{t-\tau}^{t}w_2(X^{\xi,i}(s), s)\,ds \le c_1(1+\tau)M. \tag{3.11}$$

Let $X^{\xi,i}(s)$ be the solution with initial value $(\xi, i)$ and define

$$\sigma^t_k = \inf\{s \ge t : \|X_s\| > k\}.$$

Employing the generalized Itô formula and Assumption 3.2 again, we have

$$\begin{aligned}
EV(X^{\xi,i}((T+t)\wedge\sigma^t_k),\, &r^i((T+t)\wedge\sigma^t_k),\, (T+t)\wedge\sigma^t_k)\\
&\le \int_t^{t+T}\gamma(s)\,ds + EV(X^{\xi,i}(t), r^i(t), t) - E\int_t^{(T+t)\wedge\sigma^t_k}\alpha(s)w(X^{\xi,i}(s))\,ds\\
&\qquad + E\int_t^{(T+t)\wedge\sigma^t_k}\bigl(-w_2(X^{\xi,i}(s), s) + w_2(X^{\xi,i}(s-\tau), s-\tau)\bigr)\,ds\\
&\le \int_0^{\infty}\gamma(s)\,ds + EV(X^{\xi,i}(t), r^i(t), t) + E\int_{t-\tau}^{t}w_2(X^{\xi,i}(s), s)\,ds. \tag{3.12}
\end{aligned}$$

Applying item a) and (3.11) to (3.12), we obtain

$$EV(X^{\xi,i}((T+t)\wedge\sigma^t_k), r^i((T+t)\wedge\sigma^t_k), (T+t)\wedge\sigma^t_k) \le (1 + c_1(1+\tau))M. \tag{3.13}$$

Let $H = H(K, T, \varepsilon) \in \mathbb{N}$ satisfy

$$\inf_{|y|\ge H,\, j\in S,\, t\in\mathbb{R}_+}V(y, j, t) \ge \frac{1}{\varepsilon}(1 + c_1 + c_1\tau)M. \tag{3.14}$$


Employing (3.14) and (3.13) yields

$$\frac{1}{\varepsilon}(1 + c_1 + c_1\tau)M\, P\{\sigma^t_H < t+T\} \le \Bigl(\inf_{|y|\ge H,\, j\in S,\, t\in\mathbb{R}_+}V(y, j, t)\Bigr)\cdot P\{\sigma^t_H < t+T\} \le EV\bigl(X^{\xi,i}((t+T)\wedge\sigma^t_H), r^i((t+T)\wedge\sigma^t_H), (t+T)\wedge\sigma^t_H\bigr). \tag{3.15}$$

This implies that $P(\sigma^t_H < t+T) \le \varepsilon$. The proof is complete. $\square$

Using this lemma, we are able to show that for any $\varepsilon_1, \varepsilon_2 > 0$ there exists a $\delta_0 = \delta_0(\varepsilon_1, \varepsilon_2, K) > 0$ such that

$$P\Bigl\{\sup_{\substack{t\le s_1\le s_2\le t+\tau\\ s_2 - s_1 < \delta_0}}|X^{\xi,i}(s_2) - X^{\xi,i}(s_1)| \ge \varepsilon_1\Bigr\} \le \varepsilon_2, \quad \forall (\xi, i) \in K\times S,\ t \ge 0. \tag{3.17}$$

This claim is proved by the arguments in the proof of Lemma 2.3 in [3]. This also means that the transition probability $p(t, \xi, i, d\zeta\times\{j\})$ of the homogeneous Markov process $(X^{\xi,i}_t, r(t))$ is tight.

Theorem 3.3. Let Assumptions 3.1 and 3.2 be satisfied. Then, for any $\xi \in C([-\tau, 0]; \mathbb{R}^n)$,

$$P\bigl\{\lim_{t\to\infty}X^{\xi,i}(t) = 0\bigr\} = 1, \quad \forall i \in S.$$

Proof. We will show that for any $\sigma, \hbar > 0$,

$$\lim_{t\to\infty}P\{A^{\sigma,\hbar}_t\} = 0,$$

where $A^{\sigma,\hbar}_t = \{\omega : \|X^{\xi,i}_t\| \le \hbar,\ |X^{\xi,i}(t)| \ge \sigma\}$. For each $n \in \mathbb{N}$, define the stopping time

$$T_n = \inf\{s > 0 : \|X^{\xi,i}_s\| > n\} \wedge t.$$

To simplify the notation, denote

$$c^{\sigma,\hbar}_2 = \min\{w(x) : \sigma \le |x| \le \hbar\} > 0.$$

It follows from (3.10) that

$$\int_0^\infty\alpha(s)P\{A^{\sigma,\hbar}_s\}\,ds \le \frac{1}{c^{\sigma,\hbar}_2}E\int_0^\infty\alpha(s)\,\mathbf{1}_{A^{\sigma,\hbar}_s}\,w(X^{\xi,i}(s))\,ds < \infty. \tag{3.18}$$

Suppose that

$$\limsup_{t\to\infty}P\{A^{\sigma,\hbar}_t\} > 0.$$

Then there exist a constant $\ell > 0$ and a sequence $\{t_n\}_{n=1}^\infty$ with $t_n < t_{n+1} - \tau$ such that

$$P\{A^{\sigma,\hbar}_{t_n}\} = P\bigl\{\|X^{\xi,i}_{t_n}\| \le \hbar,\ |X^{\xi,i}(t_n)| \ge \sigma\bigr\} > \ell, \quad \forall n \in \mathbb{N}. \tag{3.19}$$

Due to the tightness, there is $0 < \delta_0 < \tau$ such that

$$P\Bigl\{\sup_{t_n\le s< t_n+\delta_0}|X^{\xi,i}(s) - X^{\xi,i}(t_n)| \ge \frac{\sigma}{2}\Bigr\} \le \frac{\ell}{2}, \quad \forall (\xi, i) \in K\times S,\ n \in \mathbb{N}. \tag{3.20}$$


It follows from (3.19) and (3.20) that

$$P\Bigl\{|X^{\xi,i}(s)| \ge \frac{\sigma}{2}\ \ \forall\, t_n\le s< t_n+\delta_0\Bigr\} > \ell - \frac{\ell}{2} = \frac{\ell}{2}, \quad \forall n \in \mathbb{N}. \tag{3.21}$$

In view of the tightness of the family $\{p(t, \xi, i, d\zeta\times\{j\}) : (t, \xi, i) \in \mathbb{R}_+\times K\times S\}$, we can find an $H_1 = H_1(K, \ell)$ satisfying

$$P\{\|X^{\xi,i}_t\| \le H_1\} \ge 1 - \frac{\ell}{4}, \quad \forall t \ge 0,\ (\xi, i) \in K\times S. \tag{3.22}$$

Combining (3.21) and (3.22), we deduce that for $t_n \le s < t_n + \delta_0$,

$$P\Bigl\{\|X^{\xi,i}_s\| \le H_1,\ |X^{\xi,i}(s)| \ge \frac{\sigma}{2}\Bigr\} \ge \frac{\ell}{2} - \frac{\ell}{4} = \frac{\ell}{4}.$$

This means that

$$\int_{t_n}^{t_n+\delta_0}\alpha(s)P\{A^{\sigma/3,\,H_1}_s\}\,ds \ge \frac{\ell}{4}\int_{t_n}^{t_n+\delta_0}\alpha(s)\,ds \ge \frac{\ell\,\alpha(\delta_0)}{4} > 0, \quad \forall n \in \mathbb{N}.$$

Consequently,

$$\int_0^\infty\alpha(s)P\{A^{\sigma/3,\,H_1}_s\}\,ds = \infty,$$

which is a contradiction since the inequality (3.18) holds for any $\sigma, \hbar > 0$; in particular, it must hold for the pair $(\sigma/3, H_1)$. We therefore conclude that

$$\lim_{t\to\infty}P\{A^{\sigma,\hbar}_t\} = 0. \tag{3.24}$$

For any $\varepsilon, \sigma > 0$, let $\hbar$ be so large that $P\{\|X_t\| \le \hbar\} \ge 1 - \frac{\varepsilon}{2}$ for all $t$. In view of (3.24), we can find $T = T(\sigma, \varepsilon)$ such that $P\{A^{\sigma,\hbar}_t\} \le \frac{\varepsilon}{2}$ for all $t > T$, which implies that $P\{|X^{\xi,i}(t)| > \sigma\} \le \varepsilon$ for all $t > T$. Since we can choose $\sigma$ arbitrarily, we conclude that

$$P\bigl\{\lim_{t\to\infty}X^{\xi,i}(t) = 0\bigr\} = 1. \qquad \square$$

Example 3.1. Let us consider Equation (3.1) with $S = \{1, 2\}$ and $r(t)$ having generator $\Gamma = (\gamma_{ij})_{2\times 2}$, $\gamma_{21} = 4$, $\gamma_{22} = -4$, while

$$f(x, y, 1, t) = -(0.5x + x\cos^2 t - 0.5y), \qquad g(x, y, 1, t) = xy\cos t,$$

and

$$f(x, y, 2, t) = \frac{1}{6}\bigl[y\cos(t-\tau) - 2x\bigr], \qquad g(x, y, 2, t) = \frac{1}{3}x\sin t.$$

It is easy to see that $f$ and $g$ satisfy Assumption 3.1. Define

$$V(x, 1, t) = x^2, \qquad V(x, 2, t) = 3x^2.$$

By computation,

$$LV(x, y, 1, t) = -x^2 - 2x^2\cos^2 t + xy + x^2\cos^2 t = -x^2 - x^2\cos^2 t + xy,$$
