PURE AND APPLIED ANALYSIS

ASYMPTOTIC BEHAVIOR OF KOLMOGOROV SYSTEMS WITH PREDATOR-PREY TYPE IN RANDOM ENVIRONMENT

Nguyen Huu Du

Department of Mathematics, Mechanics and Informatics, Hanoi National University,

334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam

Nguyen Hai Dang

Department of Mathematics, Wayne State University,

Detroit, MI 48202, USA

(Communicated by Wei Feng)

Abstract. The paper is concerned with the asymptotic behavior of two-species populations whose densities are described by Kolmogorov systems of predator-prey type in random environment. We study the omega-limit set and find conditions ensuring the existence and attractivity of a stationary density. Some applications to the predator-prey model with Beddington-DeAngelis functional response are considered to illustrate our results.

1. Introduction. In ecology, the development of a species is affected by intrinsic factors as well as by interactions with other species. One of the most important types of interaction between populations is the predator-prey one, which includes such common interactions as plant-herbivore, host-parasitoid, herbivore-carnivore and host-pathogen. To understand the dynamics of populations of predator-prey type, these are often modeled by Kolmogorov systems; that is, the prey density x and the predator density y satisfy the differential equations

ẋ = x f(x, y),
ẏ = y g(x, y),   (1.1)

where f(x, y), g(x, y) are the growth rates of the two species (see [3, 20]). Moreover, in order to indicate the prey-predator relationship between these species, the following assumptions are given:

f(0, 0) > 0;  g(0, 0) < 0;  ∂f(x, y)/∂y < 0;  ∂g(x, y)/∂x > 0  ∀(x, y) ∈ R2+.   (1.2)

It has been well recognized that these traditional models are in general not adequate to describe reality, because the growth rates are subject to environmental noise and other random factors. This fact has stimulated studies of population models perturbed by some kind of random noise.

2000 Mathematics Subject Classification. Primary: 34C12, 60H10; Secondary: 92D25.
Key words and phrases. Kolmogorov systems of predator-prey type, telegraph noise, stationary distribution, piecewise deterministic Markov process.

The first author is supported by NAFOSTED No. 101.02-2011.21. The second author is supported in part by the Army Research Office under grant W911NF-12-1-0223.


Although over the past decade there have been many papers studying stochastic Kolmogorov systems with white noise and/or colored noise (e.g. [6, 13, 14, 15, 16, 18, 23] and the references therein), these usually focus on special cases such as Lotka-Volterra models and other predator-prey models with special functional responses. Meanwhile, not much attention has been paid to general random Kolmogorov systems. For this reason, this paper considers a Kolmogorov system perturbed by colored noise. The colored noise arises from the assumption that there is a random switching between several regimes, such as a hot regime and a cold one, rainy and dry seasons, or good and poor protection for prey. These regimes differ in nutrition, food resources and other factors. Consequently, the switching causes changes in growth rates, carrying capacities, and inter-specific or intra-specific interactions which cannot be described by a traditional deterministic model. The random switching can be represented by a continuous-time Markov chain, so we need to consider a hybrid system subject to such a Markov chain in lieu of the Kolmogorov system (1.1). To simplify, we suppose that the Markov chain ξt takes values in a two-element set E = {+, −}, representing telegraph noise. Recently, the papers [2, 4, 8, 15, 23], among others, have dealt with such models.
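The switching mechanism described above can be made concrete with a small simulation: a two-state chain ξt selects, at each instant, which of two deterministic vector fields drives the densities. The regime-dependent coefficients below are hypothetical, chosen only to make the sketch runnable; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 1.0, 1.5                 # intensities of the jumps + -> - and - -> +

# Hypothetical regime-dependent growth rates (illustration only):
# classical predator-prey coefficients that differ between the two regimes.
def a(state, x, y):
    return (2.0 - 0.5 * x - 0.4 * y) if state == '+' else (1.2 - 0.3 * x - 0.6 * y)

def b(state, x, y):
    return (-1.0 + 0.5 * x) if state == '+' else (-0.8 + 0.3 * x)

def simulate(x0, y0, T=50.0, dt=1e-3):
    """Euler integration of the hybrid system, switching fields at the jump
    times of the telegraph chain."""
    x, y, state = x0, y0, '+'
    next_jump = rng.exponential(1.0 / alpha)
    t = 0.0
    while t < T:
        if t >= next_jump:                     # regime switch of the chain xi_t
            state = '-' if state == '+' else '+'
            rate = alpha if state == '+' else beta
            next_jump = t + rng.exponential(1.0 / rate)
        x += dt * x * a(state, x, y)           # Euler step of the prey equation
        y += dt * y * b(state, x, y)           # Euler step of the predator equation
        t += dt
    return x, y

xT, yT = simulate(1.0, 1.0)
print(xT, yT)
```

Both coordinates remain positive, as they must for population densities: each Euler step multiplies the current value by a factor close to 1.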

In [2], the authors dealt with the classical prey-predator system with telegraph noise

ẋ(t) = x(t)(a(ξt) − b(ξt)x(t) − c(ξt)y(t)),
ẏ(t) = y(t)(−d(ξt) + f(ξt)x(t)),   (1.3)

where a, b, c, d, f are positive functions defined on E. The main findings in [2] are to point out some subsets of the ω-limit set of the solutions and to give criteria for the permanence of the system (1.3). The purpose of this paper is to generalize and improve these results for the general Kolmogorov system



ẋ(t) = x a(ξt, x, y),
ẏ(t) = y b(ξt, x, y),   (1.4)

where a(i, x, y), b(i, x, y) are continuously differentiable real-valued functions on R2+ = {(x, y) : x ≥ 0, y ≥ 0} for any i ∈ E. In this model, the telegraph noise (ξt) results in switching between the deterministic Kolmogorov systems



ẋ(t) = x a(+, x, y),
ẏ(t) = y b(+, x, y),   (1.5)

and

ẋ(t) = x a(−, x, y),
ẏ(t) = y b(−, x, y).   (1.6)

A similar model was considered in [7], where (1.4) is of competition type. Under some mild assumptions on the functions a(±, x, y), b(±, x, y), all ω-limit sets of positive solutions of equation (1.4) are described there, and it is shown that the ω-limit sets of all positive solutions are the same and absorb all other positive solutions. Moreover, if the threshold values λ1 and λ2 are positive, then lim sup_{t→∞} x(t) > 0 and lim sup_{t→∞} y(t) > 0 a.s. However, there are still two open problems for that model:

• Does the hypothesis λ1 > 0, λ2 > 0 imply the existence of a stationary distribution?
• When either λ1 < 0 or λ2 < 0, what is the behavior of x(t), y(t)?


The aim of this paper is to study the Kolmogorov equation (1.4) as a model of predator-prey type. We want to obtain results like those in [7] and to answer the above open questions. In order to describe the ω-limit set, we introduce a threshold value λ and consider three concrete cases. We pay particular attention to the case where either the system (1.5) or (1.6) has a unique limit cycle. The threshold value λ plays an important role in practice because it lets us determine whether the system is persistent by analyzing the coefficients. Since λ is given by an integral formula, we can easily calculate it by numerical methods. One of the distinctive characteristics of this paper is to show that lim_{t→∞} y(t) = 0 if λ < 0, in contrast to the existence of a stationary distribution of the Markov process (ξt, x(t), y(t)) with support in E × intR2+ given that λ > 0. We can also show that this stationary distribution has a density and that it attracts all other distributions under certain additional assumptions.

The rest of the paper is organized as follows. In Section 2 we describe pathwise dynamic behaviors of positive solutions to systems of predator-prey type under the effect of telegraph noise. It is shown that the ω-limit set absorbs all positive solutions. Section 3 is concerned with the stability of the stationary density, using the Foguel alternative theorem in [21]. The last section gives an application to the predator-prey model with Beddington-DeAngelis functional response.

2. Asymptotic behavior of Kolmogorov prey-predator type systems in random environment. Let (Ω, F, P) be a complete probability space and let (ξt)t≥0 be a Markov process, defined on (Ω, F, P), taking values in a set of two elements, say E = {+, −}. Suppose that (ξt) has transition intensity α for + → − and β for − → +, with α > 0, β > 0. The process (ξt) has a unique stationary distribution

p = lim_{t→∞} P{ξt = +} = β/(α + β);  q = lim_{t→∞} P{ξt = −} = α/(α + β).

The trajectories of (ξt) are piecewise-constant, cadlag functions. Let

0 = τ0 < τ1 < τ2 < … < τn < …

be its jump times. Put

σ1 = τ1 − τ0, σ2 = τ2 − τ1, …, σn = τn − τn−1, …

σ1 = τ1 is the time of the first jump from the initial state, σ2 is the time the process ξt spends in the state into which it moved from the first state, and so on. It is known that the σk's are mutually independent, conditional on the sequence {ξτk}^∞_{k=1} being known. Note that if ξ0 is given then ξτn is known, because the process ξt takes only two values. Hence, {σk}^∞_{k=1} is a sequence of conditionally independent R+-valued random variables. Moreover, if ξ0 = + then σ2n+1 has the exponential density α1[0,∞)(t) exp(−αt) and σ2n has the density β1[0,∞)(t) exp(−βt). Conversely, if ξ0 = − then σ2n has the exponential density α1[0,∞)(t) exp(−αt) and σ2n+1 has the density β1[0,∞)(t) exp(−βt) (see [10, vol. 2, pp. 217]). Here 1[0,∞)(t) = 1 for t ≥ 0 and = 0 for t < 0.
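As a quick sanity check (an illustration, not part of the paper), one can simulate the telegraph noise by drawing its exponential holding times and verify that the long-run fraction of time spent in state + approaches p = β/(α + β):

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, beta = 1.0, 1.5     # intensities of the jumps + -> - and - -> +
T = 10_000.0

# Simulate (xi_t) by drawing its exponential holding times sigma_k.
state, t, time_in_plus = '+', 0.0, 0.0
while t < T:
    rate = alpha if state == '+' else beta
    hold = min(rng.exponential(1.0 / rate), T - t)   # clip the last holding time
    if state == '+':
        time_in_plus += hold
    t += hold
    state = '-' if state == '+' else '+'

# Long-run fraction of time in '+' should be close to p = beta/(alpha+beta) = 0.6
print(time_in_plus / T)
```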

Adapting the conditions of the prey-predator relation (1.2), we suppose that for both systems (1.5) and (1.6) the coefficients a(±, x, y) and b(±, x, y) satisfy an assumption making the system (1.4) look like a prey-predator model.

Assumption 2.1. a(±, ·) and b(±, ·) are continuously differentiable in R2+. Moreover,

1. ∂a(±, x, 0)/∂x < 0 ∀x ≥ 0;
2. a(±, 0, 0) > 0, lim_{x→∞} a(±, x, 0) < 0;
3. b(±, 0, y) < 0 ∀y ≥ 0.

Note that Assumption 2.1 is more relaxed than (1.2), since its conditions are not imposed on all of R2+ as the relations in (1.2) are, but only on the boundary ∂R2+.

Throughout this paper, we suppose that the system (1.5) (resp. (1.6)) has a unique solution (x+(t, x0, y0), y+(t, x0, y0)) (resp. (x−(t, x0, y0), y−(t, x0, y0))) starting at (x0, y0) ∈ intR2+, and that this solution is defined on [0, ∞). Likewise, denote by (x(t, x0, y0), y(t, x0, y0)) the solution to (1.4) with initial value (x0, y0). Furthermore, we assume both systems (1.5) and (1.6) to be dissipative with a common set D in the following sense.

Assumption 2.2. For any (x0, y0) ∈ intR2+, there is a compact set D := D(x0, y0) ⊂ R2+ such that (x0, y0) ∈ D and D is a common invariant set for both systems (1.5) and (1.6).

From now on, we will drop x0, y0 from the notations for solutions to (1.4), (1.5) and (1.6) whenever it causes no ambiguity. We need the following lemma.

Lemma 2.1. Suppose that the system

ẋ(t) = f(x, y),
ẏ(t) = g(x, y),

where f, g : R2 → R, has a globally asymptotically stable equilibrium (x∗, y∗), i.e., (x∗, y∗) is stable and every solution (x(t), y(t)) defined on [0, ∞) satisfies lim_{t→∞}(x(t), y(t)) = (x∗, y∗). Then, for any compact set K ⊂ R2 and any neighborhood U of (x∗, y∗), there exists a number T∗ > 0 such that (x(t, x0, y0), y(t, x0, y0)) ∈ U for any t > T∗, provided (x0, y0) ∈ K.

Proof. See the proof of Lemma 2.1 in [7].

Adapting the concept in [5], we define the (random) ω-limit set of the trajectories of system (1.4) that start in a closed set B as

Ω(B, ω) = ∩_{T>0} cl( ∪_{t>T} (x(t, ·, ω), y(t, ·, ω))B ).

In particular, the ω-limit set of the trajectory starting from (x0, y0) is

Ω(x0, y0, ω) = ∩_{T>0} cl( ∪_{t>T} (x(t, x0, y0, ω), y(t, x0, y0, ω)) ).

This concept is different from the one in [9], but it is closest to that of an ω-limit set for a deterministic dynamical system. In the case where Ω(x0, y0, ω) is a.s. constant, it is similar to the concepts of weak attractor and attractor given in [17, 24]. Although, in general, the ω-limit set in this sense does not have the invariance property, the concept is appropriate for our purpose of describing the pathwise asymptotic behavior of the solution with a given initial value.

Our task in this section is to show that under some conditions, Ω(x0, y0, ω) is deterministic, that is, constant almost surely. Further, it is also independent of the initial value (x0, y0).

As in deterministic cases, the behavior of the solutions of the system (1.4) on the boundary plays an important role. For predator-prey systems, in the absence of the prey (when x(t) = 0), the predator must die out. Indeed, by item 3 of Assumption 2.1, b(±, 0, y) < 0 ∀y ≥ 0, so we can find an ε > 0 and an M > y(0) > 0 such that b(±, 0, y) < −ε for any 0 < y < M. Hence,

ẏ(t) = y(t) b(ξt, 0, y(t)) < −ε y(t) ∀t > 0  ⟹  lim_{t→∞} y(t) = 0.

Therefore, we focus on the system on the boundary R+ × {0}:

u̇(t) = u(t) a(ξt, u(t), 0), u(0) ∈ [0, ∞).   (2.1)
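Before analyzing (2.1), it may help to see it numerically. The sketch below simulates the switched boundary equation for hypothetical logistic growth rates a(i, u, 0) = r_i(1 − u/K_i) (illustrative coefficients, not from the paper); every positive solution is eventually trapped between the two carrying capacities.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta = 1.0, 1.5

# Hypothetical boundary dynamics (illustration): logistic growth whose
# carrying capacity depends on the regime, a(i, u, 0) = r_i * (1 - u / K_i).
r = {'+': 1.0, '-': 0.6}
K = {'+': 2.0, '-': 4.0}   # zeros of a(., u, 0): u_plus = 2 < u_minus = 4

def simulate_boundary(u_init, T=200.0, dt=1e-3):
    """Euler integration of (2.1) with telegraph switching."""
    u, state, t = u_init, '+', 0.0
    next_jump = rng.exponential(1.0 / alpha)
    while t < T:
        if t >= next_jump:
            state = '-' if state == '+' else '+'
            rate = alpha if state == '+' else beta
            next_jump = t + rng.exponential(1.0 / rate)
        u += dt * u * r[state] * (1.0 - u / K[state])   # Euler step of (2.1)
        t += dt
    return u

# From any positive start, u(t) ends up (approximately) in [u_plus, u_minus]:
# below 2 both regimes push u up, above 4 both push it down.
ends = [simulate_boundary(v) for v in (0.1, 1.0, 3.0, 10.0)]
print(ends)
```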

By Assumption 2.1, there is a unique number u+ > 0 satisfying a(+, u+, 0) = 0 and a unique number u− > 0 with a(−, u−, 0) = 0. In the case u+ ≠ u−, we suppose u+ < u− and put

h+ = h+(u) = u a(+, u, 0),  h− = h−(u) = u a(−, u, 0).

It is known that if u(t) is the solution of the system (2.1), then (ξt, u(t)) is a Markov process with the infinitesimal operator L given by

L g(+, u) = −α(g(+, u) − g(−, u)) + h+(u) (d/du) g(+, u),
L g(−, u) = β(g(+, u) − g(−, u)) + h−(u) (d/du) g(−, u),

with g(i, x) a function defined on E × (0, ∞), continuously differentiable in x. The stationary density (µ+, µ−) of (ξt, u(t)) can be found from the Fokker-Planck equation

−αµ+(u) + βµ−(u) − (d/du)[h+(u)µ+(u)] = 0,
αµ+(u) − βµ−(u) − (d/du)[h−(u)µ−(u)] = 0.   (2.2)

Solving this equation, we obtain a unique positive density given by

µ+(u) = θ F(u) / (u|a(+, u, 0)|),  µ−(u) = θ F(u) / (u|a(−, u, 0)|),   (2.3)

where

F(u) = exp( −∫_{u0}^{u} [ α/(τ a(+, τ, 0)) + β/(τ a(−, τ, 0)) ] dτ ),  u ∈ [u+, u−],  u0 = (u+ + u−)/2,

and

θ = ( ∫_{u+}^{u−} [ p F(u)/(u|a(+, u, 0)|) + q F(u)/(u|a(−, u, 0)|) ] du )^{−1}.

Thus, the process (ξt, u(t)) has a unique stationary distribution with density (µ+, µ−) (see [1] for the details). Further, for any continuous function f : E × R → R with

∫_{u+}^{u−} [ p|f(+, u)|µ+(u) + q|f(−, u)|µ−(u) ] du < ∞,

we have

lim_{t→∞} (1/t) ∫_0^t f(ξs, u(s)) ds = ∫_{u+}^{u−} [ p f(+, u)µ+(u) + q f(−, u)µ−(u) ] du.   (2.4)

In the case u+ = u−, the process (ξt, u(t)) has a unique stationary distribution with generalized density µ+(u) = µ−(u) = δ(u − u+), where δ(·) is the Dirac function. Define

λ = ∫_{u+}^{u−} [ p b(+, u, 0)µ+(u) + q b(−, u, 0)µ−(u) ] du.   (2.5)

Theorem 2.2. For any x0 > 0, y0 > 0,

a) there exists δ1 > 0 such that lim sup_{t→∞} x(t, x0, y0) ≥ δ1 a.s.;
b) if λ > 0, there is δ2 > 0 satisfying lim sup_{t→∞} y(t, x0, y0) ≥ δ2 a.s.

Proof. Let M be a positive number satisfying D ⊂ [0, M] × [0, M]. By the assumptions b(±, 0, y) < 0 and a(±, 0, 0) > 0, there exist δ1 > 0, ε1 > 0 such that b(±, x, y) < −ε1 ∀ 0 < x ≤ δ1, 0 < y ≤ M, and a(±, x, y) > ε1 for all 0 ≤ x, y ≤ δ1. Suppose that lim sup_{t→∞} x(t) < δ1 with positive probability. Then there is a T1 > 0 such that x(t) < δ1, y(t) ≤ M ∀t ≥ T1, which implies ẏ(t) = y b(ξt, x(t), y(t)) < −ε1 y(t). Therefore, for some T2 > T1, we have y(t) < δ1 ∀t ≥ T2. Combining this with the inequality x(t) < δ1, we obtain ẋ(t) > ε1 x(t) ∀t ≥ T2. Hence, x(t) ↑ ∞. This contradiction yields the first assertion of the theorem.

Item b) can be proved in the same manner as [7, Theorem 2.1].
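Since λ in (2.5) is an explicit integral against the stationary density (µ+, µ−), it can be approximated numerically, as the paper suggests. The sketch below assumes hypothetical logistic growth rates a(±, u, 0) = r±(1 − u/K±) and affine predator rates b(±, u, 0); these coefficients are illustrative, not from the paper. The density has integrable singularities at u+ and u−, so the grid stays slightly inside the interval.

```python
import numpy as np

def trapz(y, x):
    # plain trapezoidal rule (avoids NumPy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hypothetical coefficients (illustration only, not from the paper).
r_p, r_m = 1.0, 0.6
K_p, K_m = 2.0, 4.0                 # u_plus = 2 < u_minus = 4
alpha, beta = 1.0, 1.5              # intensities of + -> - and - -> +
p, q = beta / (alpha + beta), alpha / (alpha + beta)

a_p = lambda u: r_p * (1.0 - u / K_p)
a_m = lambda u: r_m * (1.0 - u / K_m)
b_p = lambda u: -0.4 + 0.30 * u     # hypothetical b(+, u, 0)
b_m = lambda u: -0.5 + 0.25 * u     # hypothetical b(-, u, 0)

u_plus, u_minus = K_p, K_m
u0 = 0.5 * (u_plus + u_minus)
eps = 1e-4                          # stay clear of the endpoint singularities
u = np.linspace(u_plus + eps, u_minus - eps, 4001)

# Exponent of F(u): cumulative integral of alpha/(tau a+) + beta/(tau a-),
# shifted so the integration starts at u0.
integrand = alpha / (u * a_p(u)) + beta / (u * a_m(u))
cum = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u))))
F = np.exp(-(cum - cum[np.searchsorted(u, u0)]))

g_p = F / (u * np.abs(a_p(u)))      # unnormalized mu+(u)
g_m = F / (u * np.abs(a_m(u)))      # unnormalized mu-(u)
theta = 1.0 / trapz(p * g_p + q * g_m, u)
mu_p, mu_m = theta * g_p, theta * g_m

# Threshold lambda of (2.5)
lam = trapz(p * b_p(u) * mu_p + q * b_m(u) * mu_m, u)
print(f"lambda ~ {lam:.3f}")
```

For these particular coefficients, b(±, u, 0) > 0 on most of (u+, u−), so λ comes out positive and the persistence results below apply.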

In the sequel, we always suppose that λ > 0. By the assumptions a(±, 0, 0) > 0, b(±, 0, 0) < 0 and Theorem 2.2, there exists δ > 0 such that lim sup_{t→∞} x(t) > δ, lim sup_{t→∞} y(t) > δ, and a(±, x, y) > 0, b(±, x, y) < 0 if 0 < x, y ≤ δ.

Lemma 2.3. With probability 1, there are infinitely many sn = sn(ω) > 0 such that sn > sn−1, lim_{n→∞} sn = ∞ and x(sn) ≥ δ, y(sn) ≥ δ for all n ∈ N.

Proof. The proof is similar to that of Lemma 2.3 in [7] with some slight modifications, so it is omitted here.

We consider some concrete cases with additional assumptions. First, we note that the eigenvalues of the Jacobian matrix of (x a(−, x, y), y b(−, x, y))^T at (u−, 0) are u− (∂a/∂x)(−, u−, 0) and b(−, u−, 0). In view of Assumption 2.1, (∂a/∂x)(−, u−, 0) < 0, so the equilibrium (u−, 0) of the system (1.6) is a saddle point if and only if b(−, u−, 0) > 0. Moreover, the condition b(−, u−, 0) < 0 is sufficient for the local stability of (u−, 0).

Set

xn = x(τn, x, y);  yn = y(τn, x, y);  F^n_0 = σ(τk : k ≤ n);  F^∞_n = σ(τk − τn : k > n).

It is clear that (xn, yn) is F^n_0-measurable and F^n_0 is independent of F^∞_n if ξ0 is given. For the sake of simplicity, we suppose ξ0 = + and define, for 0 < ε ≤ N,

Hε,N = ([ε, N] × [ε, N]) ∩ D.

2.1. Case 1: One system is stable and the other is permanent.

Assumption 2.3. On the open quadrant intR2+, the system (1.5) has a globally stable positive state (x∗+, y∗+). Moreover, the equilibrium (u−, 0) of the system (1.6) is a saddle point.

Lemma 2.4. Let Assumption 2.3 be satisfied and let M be as in the proof of Theorem 2.2. Then, for any ε > 0, there exists a σ+(ε) such that x+(t) > σ+(ε), y+(t) > σ+(ε) for all t > 0, provided (x+(0), y+(0)) ∈ Hε,M.

Proof. See [7, Lemma 2.4].

Lemma 2.5. Let Assumption 2.3 be satisfied and let M be as in the proof of Theorem 2.2. Then, there is a σ− such that x−(t) > σ−, y−(t) > σ− for all t > 0, provided (x−(0), y−(0)) ∈ Hδ,M.


Proof. It is possible to consider the system (1.6) as a special case of the system (1.4) with a(+, x, y) ≡ a(−, x, y) and b(+, x, y) ≡ b(−, x, y). The value λ in this case is b(−, u−, 0) > 0. Hence, from Theorem 2.2 and Lemma 2.3, we derive the existence of a 0 < δ′ < δ such that for any positive initial value (x0, y0) ∈ D, there is a sequence s′n ↑ ∞ with (x−(s′n, x0, y0), y−(s′n, x0, y0)) ∈ Hδ′,M. Note that δ′ > 0 can be chosen uniformly for every (x0, y0) ∈ D ∩ intR2+. Therefore, for any z = (x, y) ∈ Hδ′/2,M, there exists sz > 0 satisfying (x−(sz, z), y−(sz, z)) ∈ Hδ′,M. By the continuous dependence of the solution on the initial value, we can find an open set Uz ∋ z such that (x−(sz, w), y−(sz, w)) ∈ Hδ′/2,M ∀w ∈ Uz. Since Hδ′/2,M is compact, by the Heine-Borel covering theorem there exist finitely many points z1, …, zn ∈ Hδ′/2,M such that Hδ′/2,M ⊂ ∪_{i=1}^{n} Uzi. Put s̃ = max{sz1, …, szn} and

σ− = (1/2) inf{ min{x−(t, z), y−(t, z)} : z ∈ Hδ′/2,M and 0 ≤ t ≤ s̃ }.

It is clear that σ− > 0. We now show that x−(t, z) > σ− and y−(t, z) > σ− for any initial value z ∈ Hδ′/2,M and t > 0. Suppose, to the contrary, that there exist z0 ∈ Hδ′/2,M and t̃ > 0 such that either x−(t̃, z0) ≤ σ− or y−(t̃, z0) ≤ σ−. Put t̃0 = sup{0 ≤ t ≤ t̃ : (x−(t, z0), y−(t, z0)) ∈ Hδ′/2,M}. Since {Uzi} covers Hδ′/2,M, we have z̃0 = (x−(t̃0, z0), y−(t̃0, z0)) ∈ Uzi for some i. It follows from the property of Uzi that (x−(t̃0 + szi, z0), y−(t̃0 + szi, z0)) = (x−(szi, z̃0), y−(szi, z̃0)) ∈ Hδ′/2,M. By the definition of t̃0, it is obvious that t̃0 ≤ t̃ < t̃0 + szi ≤ t̃0 + s̃. As a result,

x−(t̃, z0) = x−(t̃ − t̃0, z̃0) ≥ inf{ x−(t, z) : (z, t) ∈ Hδ′/2,M × [0, s̃] } ≥ 2σ−.

Similarly, y−(t̃, z0) ≥ 2σ−. This is a contradiction. The proof is complete.

Lemma 2.6. Let Assumption 2.3 be satisfied. Then, for δ as above, there are, with probability 1, infinitely many k = k(ω) ∈ N such that x2k+1 > min{σ+(δ), σ+(σ−)} and y2k+1 > min{σ+(δ), σ+(σ−)}.

Proof. By Lemma 2.3, there exists a sequence sn ↑ ∞ such that x(sn) ≥ δ, y(sn) ≥ δ for all n ∈ N. Put kn := max{i : τi ≤ sn}. From Lemmas 2.4 and 2.5, it is seen that if kn is even, then x_{kn+1} > σ+(δ), y_{kn+1} > σ+(δ), and that if kn is odd, then x_{kn+1} > σ−, y_{kn+1} > σ−. Applying Lemma 2.4 again, we obtain x_{kn+2} > σ+(σ−), y_{kn+2} > σ+(σ−), provided that kn is odd. Since at least one of the two sets {n : kn + 1 is odd} and {n : kn + 2 is odd} is infinite, the proof of Lemma 2.6 is complete.

2.2. Case 2: One system is stable and the other is not permanent.

Assumption 2.4. The system (1.5) has a globally stable positive state (x∗+, y∗+). Moreover, (u−, 0) is a locally stable equilibrium of the system (1.6).

Lemma 2.7. Let Assumption 2.4 be satisfied. Then,

a) for any ε > 0 we have x+(t) > σ+(ε), y+(t) > σ+(ε) ∀t > 0, provided (x+(0), y+(0)) ∈ Hε,M, where σ+(ε) is as in Lemma 2.4;
b) there exists σ− > 0 such that x−(t) ≥ σ− for all t > 0 if (x−(0), y−(0)) ∈ D and x−(0) ≥ δ. Moreover, for any 0 < ε ≤ δ, we can find 0 < σ−(ε) < ε such that if y−(0) < σ−(ε) and x−(0) ≥ σ−, then y−(t) < ε, x−(t) > σ− for all t > 0.


Proof. The assertion in a) has been proved in Lemma 2.4. To prove the first assertion of b), we employ arguments similar to those in the proof of Lemma 2.5. Note that lim sup_{t→∞} x−(t, x, y) > δ for all x > 0, y ≥ 0. Hence, for any z = (x, y) ∈ D with x ≥ δ/2, there exists sz > 0 such that x−(sz, z) ≥ δ. Thus, using the same method as in the proof of Lemma 2.5, we can show that there exists σ− > 0 such that x−(t) ≥ σ−, provided (x−(0), y−(0)) ∈ D and x−(0) ≥ δ.

For the proof of the second assertion of item b), we refer to [7, Lemma 2.6].

Lemma 2.8. If Assumption 2.4 is satisfied, then with probability 1, there are infinitely many k = k(ω) ∈ N such that x2k+1 > ℏ, y2k+1 > ℏ, where ℏ = min{σ+(δ), σ+(min{σ−, σ−(δ)})}.

Proof. In view of Lemma 2.3, there is a sequence (sn) ↑ ∞ such that x(sn) ≥ δ, y(sn) ≥ δ for all n ∈ N. In case τ2k ≤ sn < τ2k+1, we have x(τ2k+1) > σ+(δ) ≥ ℏ, y(τ2k+1) > σ+(δ) ≥ ℏ by item a) of Lemma 2.7. If τ2k−1 ≤ sn < τ2k, Lemma 2.7 shows that x2k > σ−. Further, when y2k ≥ σ−(δ), it yields x2k+1 > σ+(min{σ−, σ−(δ)}) ≥ ℏ, y2k+1 > σ+(min{σ−, σ−(δ)}) ≥ ℏ. We consider the case where y2k < σ−(δ). If max{y(t) : τ2k ≤ t ≤ τ2k+1} ≥ σ−(δ), we choose t1 = min{t ∈ [τ2k, τ2k+1] : y(t) ≥ σ−(δ)}. It is clear that ẏ(t1) ≥ 0 and y(t1) = σ−(δ) < δ; since ẏ(t) < 0 whenever 0 < x(t), y(t) < δ, we get x−(t1) > δ. Consequently, x2k+1 > σ+(min{σ−, σ−(δ)}) ≥ ℏ, y2k+1 > σ+(min{σ−, σ−(δ)}) ≥ ℏ. If y(t) < σ−(δ), x(t) ≥ δ ∀τ2k ≤ t ≤ τ2k+1, then y2k+1 < σ−(δ), which implies y(t) < δ ∀τ2k+1 ≤ t ≤ τ2k+2. Continuing this process, we can either find an odd number 2m + 1 > n satisfying x2m+1 > ℏ, y2m+1 > ℏ, or show that y(t) < δ ∀t > τ2k. However, the latter contradicts Lemma 2.3. Thus, x2m+1 > ℏ, y2m+1 > ℏ. The proof is complete.

For the purpose of simplifying notation, we denote by π+t(x, y) = (x+(t, x, y), y+(t, x, y)) (resp. π−t(x, y) = (x−(t, x, y), y−(t, x, y))) the solution of the system (1.5) (resp. (1.6)) with initial value (x, y). Put

S = { (x, y) = π^{%(n)}_{tn} ⋯ π^{+}_{t2} π^{−}_{t1}(x∗+, y∗+) : 0 ≤ t1, t2, …, tn; n ∈ N },   (2.6)

where %(k) = (−1)^k.
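The set S of (2.6) can be explored numerically: starting from the equilibrium of the + system, alternately flow along the − and + systems for arbitrary durations and record the endpoints. The sketch below does this with hypothetical coefficients a(±, ·), b(±, ·) (illustrative only, not the paper's), a simple Euler integrator standing in for the flows π±t, and random durations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical regime coefficients (not from the paper).
def a(state, x, y):
    return (2.0 - 0.5 * x - 0.4 * y) if state == '+' else (1.2 - 0.3 * x - 0.6 * y)

def b(state, x, y):
    return (-1.0 + 0.5 * x) if state == '+' else (-0.8 + 0.3 * x)

def flow(state, x, y, t, dt=1e-3):
    """Euler approximation of the flow map pi^state_t applied to (x, y)."""
    for _ in range(int(t / dt)):
        x, y = x + dt * x * a(state, x, y), y + dt * y * b(state, x, y)
    return x, y

# (x*_+, y*_+): positive equilibrium of the '+' system, solved from
# a(+, x, y) = b(+, x, y) = 0  =>  x = 2, y = (2 - 0.5 * 2) / 0.4 = 2.5.
x_star, y_star = 2.0, 2.5

# Sample points of S: apply pi^-, pi^+, pi^-, ... with random durations,
# matching %(k) = (-1)^k in (2.6).
points = []
for _ in range(200):
    x, y = x_star, y_star
    for k in range(1, int(rng.integers(2, 6)) + 1):
        state = '-' if k % 2 == 1 else '+'
        x, y = flow(state, x, y, rng.exponential(1.0))
    points.append((x, y))

pts = np.array(points)
print(pts.min(axis=0), pts.max(axis=0))
```

The recorded endpoints sketch the region whose closure, under the conditions of the theorems below, is the common ω-limit set.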

The pathwise dynamic behavior of the solutions of the system (1.4) is fully described by means of the following theorem, which is stated and proved in [7].

Theorem 2.9. Suppose that (1.5) has a globally stable positive equilibrium (x∗+, y∗+) and there exist ℏ and M such that P{ℏ < x2n+1, y2n+1 < M i.o. of n} = 1. Then,

a) with probability 1, the closure S̄ of S is a subset of the ω-limit set Ω(x0, y0, ω);
b) if there exists t0 > 0 such that the point (x′0, y′0) = π−_{t0}(x∗+, y∗+) satisfies the condition

det [ a(+, x′0, y′0)  a(−, x′0, y′0)
      b(+, x′0, y′0)  b(−, x′0, y′0) ] ≠ 0,   (2.7)

then, with probability 1, the closure S̄ of S is the ω-limit set Ω(x0, y0, ω). Moreover, S̄ absorbs all positive solutions in the sense that for any initial value (x0, y0) ∈ intR2+, the value γ(ω) = inf{t > 0 : (x(s, x0, y0, ω), y(s, x0, y0, ω)) ∈ S̄ ∀s > t} is finite outside a P-null set.

Remark 1. The existence of a point (x′0, y′0) = π−_{t0}(x∗+, y∗+) satisfying (2.7) is equivalent to the existence of a point (x, y) ∈ {π−_t(x∗+, y∗+) : t ≥ 0} such that the curve {π+_t(x, y) : t ≥ 0} is not contained in {π−_t(x∗+, y∗+) : t ≥ 0}.


Remark 2. The assumption that (1.5) has a globally stable positive equilibrium (x∗+, y∗+) and that there exist ℏ and M such that P{ℏ < x2n+1, y2n+1 < M i.o. of n} = 1 is satisfied if either Assumption 2.3 or Assumption 2.4 holds.

2.3. Case 3: One system has a unique stable limit cycle.

Assumption 2.5. On the quadrant intR2+, the system (1.5) has a unique equilibrium (x∗+, y∗+) and a unique stable limit cycle Γ which attracts every solution (x+(t, x, y), y+(t, x, y)) starting at (x, y) ∈ intR2+ \ {(x∗+, y∗+)}. Moreover, (u−, 0) is either a saddle point or a stable critical point of the system (1.6).

Denote by D0 the domain surrounded by Γ. As in the proof of Lemma 2.1, given a compact set K ⊂ intR2+, we can show that, for any neighborhood U of D0, there exists a T > 0 such that (x+(t, x, y), y+(t, x, y)) ∈ U ∀(x, y) ∈ K, ∀t ≥ T. Using this property and the same arguments as in the proofs of Lemmas 2.3-2.8, we obtain

Lemma 2.10. If Assumption 2.5 is satisfied, there is ℏ > 0 such that, almost surely, there are infinitely many k = k(ω) with x2k+1 > 2ℏ and y2k+1 > 2ℏ.

To describe the ω-limit set of the solution (x(t, x0, y0), y(t, x0, y0)) in this case, we need the following lemma.

Lemma 2.11. Suppose that Assumption 2.5 is satisfied. Let K∗ε be the ε-neighborhood of (x∗+, y∗+) and U′0 be the ε′-neighborhood of a point (x′0, y′0) ∈ Γ. Then, there are t0 = t0(ε′) > 0 and T+ = T+(ε, ε′) > 0 such that T+_{x,y} = inf{t > 0 : ‖(x+(t, x, y), y+(t, x, y)) − (x′0, y′0)‖ ≤ ε′/2} ≤ T+ for all (x, y) ∈ Hℏ,M \ K∗ε, and (x+(t, x, y), y+(t, x, y)) ∈ U′0 ∀ T+_{x,y} < t < T+_{x,y} + t0.

Proof. Denote by T the period of the periodic solution. By the continuous dependence of the solution on initial values, for any (u, v) ∈ Γ, there exist 0 ≤ tu,v ≤ T and a neighborhood Uu,v of (u, v) such that ‖(x+(tu,v, u′, v′), y+(tu,v, u′, v′)) − (x′0, y′0)‖ ≤ ε′/2 for all (u′, v′) ∈ Uu,v. Since Γ is a compact set covered by the family {Uu,v : (u, v) ∈ Γ}, there exist (u1, v1), …, (un, vn) ∈ Γ such that Γ ⊂ U := ∪_{i=1}^{n} Uui,vi. As in the proof of Lemma 2.1, we can find a T+∗ = T+∗(ε) > 0 such that (x+(t, u, v), y+(t, u, v)) ∈ U ∀(u, v) ∈ Hℏ,M \ K∗ε and ∀t ≥ T+∗. Let (x, y) ∈ Hℏ,M \ K∗ε and T+_{x,y} = inf{t > 0 : ‖(x+(t, x, y), y+(t, x, y)) − (x′0, y′0)‖ ≤ ε′/2}. Obviously, T+_{x,y} ≤ T+∗ + T := T+. It again follows from the continuous dependence of the solution on initial values that there exists a t0 = t0(ε′) > 0 satisfying (x+(t, u, v), y+(t, u, v)) ∈ U′0 ∀ 0 < t < t0, provided ‖(u, v) − (x′0, y′0)‖ ≤ ε′/2. Hence, (x+(t, x, y), y+(t, x, y)) ∈ U′0 ∀ T+_{x,y} < t < T+_{x,y} + t0. Lemma 2.11 is proved.

In analogy with (2.6), for any (u, v) ∈ R2+ we put

S(u, v) = { (x, y) = π^{%(n)}_{tn} ⋯ π^{%(1)}_{t1} π^{+}_{t0}(u, v) : 0 ≤ t0, t1, t2, …, tn; n ∈ N },   (2.8)

where %(k) = (−1)^k.

Lemma 2.12. The set S(x′0, y′0) does not depend on the choice of (x′0, y′0) ∈ Γ, and it is denoted by S.

Proof. Let (x′0, y′0) and (x̆0, y̆0) belong to Γ. Then, we can find s1 > 0, s2 > 0 such that π+_{s1}(x′0, y′0) = (x̆0, y̆0) and π+_{s2}(x̆0, y̆0) = (x′0, y′0). Hence,

π^{%(n)}_{tn} ⋯ π^{%(1)}_{t1} π^{+}_{t0}(x′0, y′0) = π^{%(n)}_{tn} ⋯ π^{%(1)}_{t1} π^{+}_{t0+s2}(x̆0, y̆0),
π^{%(n)}_{tn} ⋯ π^{%(1)}_{t1} π^{+}_{t0}(x̆0, y̆0) = π^{%(n)}_{tn} ⋯ π^{%(1)}_{t1} π^{+}_{t0+s1}(x′0, y′0).

Consequently, S(x′0, y′0) = S(x̆0, y̆0). The proof is complete.

Theorem 2.13. Suppose that on the quadrant intR2+, the system (1.5) has a unique stable limit cycle Γ and a unique equilibrium (x∗+, y∗+) which is not an equilibrium of the system (1.6). Suppose further that, for any (x, y) ∈ intR2+ \ {(x∗+, y∗+)}, Γ is the ω-limit set of the solution (x+(t, x, y), y+(t, x, y)). Finally, assume that there exist positive numbers ℏ and M such that P{2ℏ < x2n+1, y2n+1 < M i.o. of n} = 1. Then,

a) with probability 1, the closure S̄ of S is a subset of the ω-limit set Ω(x0, y0, ω);
b) if there exists (x′0, y′0) ∈ Γ such that

det [ a(+, x′0, y′0)  a(−, x′0, y′0)
      b(+, x′0, y′0)  b(−, x′0, y′0) ] ≠ 0,   (2.9)

then, with probability 1, the closure S̄ of S is the ω-limit set Ω(x0, y0, ω). Moreover, S̄ absorbs all positive solutions in the sense that for any initial value (x0, y0) ∈ intR2+, the value γ(ω) = inf{t > 0 : (x(s, x0, y0, ω), y(s, x0, y0, ω)) ∈ S̄ ∀s > t} is finite outside a P-null set.

Remark 3. The assumption that there exists (x′0, y′0) ∈ Γ satisfying (2.9) is equivalent to the condition that Γ does not contain any orbit of the system (1.6).

Proof of Theorem 2.13. We construct a sequence of stopping times

η1 = inf{2k + 1 : (x2k+1, y2k+1) ∈ H2ℏ,M},
η2 = inf{2k + 1 > η1 : (x2k+1, y2k+1) ∈ H2ℏ,M},
…
ηn = inf{2k + 1 > ηn−1 : (x2k+1, y2k+1) ∈ H2ℏ,M}.

It is easy to see that {ηk = n} ∈ F^n_0 for any k, n. Thus, the event {ηk = n} is independent of F^∞_n if ξ0 is given. By the hypothesis, ηn < ∞ a.s. for all n.

Let s1 > 0 be so small that (x(t, x0, y0), y(t, x0, y0)) ∈ Hℏ,M for all 0 ≤ t ≤ s1 and (x0, y0) ∈ H2ℏ,M. Denote by K∗2ε the 2ε-neighborhood of (x∗+, y∗+), and let ε > 0 be sufficiently small that K∗2ε ⊂ D0. Since (x∗+, y∗+) is not an equilibrium of the system (1.6), there are 0 < s2 < s3 such that (x−(t, x, y), y−(t, x, y)) ∈ Hℏ,M \ K∗2ε provided (x, y) ∈ K∗2ε and s2 < t < s3. On the other hand, there exists s4 > 0 such that (x+(t, x, y), y+(t, x, y)) ∉ K∗ε if 0 < t < s4, provided (x, y) ∉ K∗2ε. Let

Bn = {ω : σηn+1 < s1, s2 < σηn+2 < s3, σηn+3 < s4}.

Using arguments similar to the proof of [7, Theorem 2.2], we obtain that

P( ∩_{k=1}^{∞} ∪_{i=k}^{∞} Bi ) = P{ω : σηn+1 < s1, s2 < σηn+2 < s3, σηn+3 < s4 i.o. of n} = 1.

Note that, if Bn occurs and (xηn+1, yηn+1) ∈ K∗2ε, then (xηn+3, yηn+3) ∈ D0 \ K∗ε ⊂ Hℏ,M \ K∗ε. Hence, if Bn occurs, either (xηn+1, yηn+1) or (xηn+3, yηn+3) belongs to G := Hℏ,M \ K∗ε. Since Bn occurs infinitely often with probability 1, we also have
