
New exponential stabilization criteria for non-autonomous delayed neural networks via Riccati equations

Mai Viet Thuan (1), Le Van Hien (2,*) and Vu Ngoc Phat (3)

(1) Department of Mathematics, Thai Nguyen University, Thai Nguyen, Vietnam
(2) Hanoi National University of Education, 136 Xuan Thuy Road, Hanoi, Vietnam
(3) Institute of Mathematics, VAST, 18 Hoang Quoc Viet Road, Hanoi, Vietnam

(*) Corresponding author: Hienlv@hnue.edu.vn

Abstract. This paper deals with the problem of global exponential stabilization for a class of non-autonomous cellular neural networks with time-varying delays. The system under consideration is subject to time-varying coefficients and time-varying delays. Two cases of time-varying delays are considered: (i) the delays are differentiable and the delay derivative has an upper bound; (ii) the delays are bounded but not necessarily differentiable. Based on the Lyapunov-Krasovskii functional method combined with the use of the Razumikhin technique, we establish new delay-dependent conditions for the design of a memoryless state feedback controller that exponentially stabilizes the system. The derived conditions are formulated in terms of the solution of Riccati differential equations, which allows simultaneous computation of the bounds that characterize the exponential stability rate of the solution. Numerical examples are given to illustrate the effectiveness of our results.

MSC: 34D20, 37C75, 93D20

Key words: Neural networks, stability, stabilization, non-differentiable delays, Lyapunov function, matrix Riccati equations, linear matrix inequalities.

1 Introduction

During the past decades, there has been increasing interest in delayed cellular neural network (CNN) models due to their successful applications in many fields, such as signal processing, pattern recognition and association (see, e.g., [2] and the references therein). Considerable effort from the mathematics and systems theory communities has been devoted to the stability analysis and control of such systems [2-4, 9, 12-13]. Generally speaking, in applications it is required that the equilibrium points of the designed network be stable. In both biological and artificial neural systems, time delays due to integration and communication are ubiquitous, and they often become a source of instability. The time delays in electronic neural networks are usually time-varying, and sometimes vary violently with respect to time due to the finite switching speed of amplifiers and faults in the electrical circuitry. Therefore, the stability analysis of delayed neural networks is a very important issue, and many stability criteria have been developed in the literature; see [4, 9, 12] and the references cited therein.

In recent years, the stability analysis and control of autonomous delayed cellular neural networks (DCNNs) have been widely investigated, and many important results on global stability and stabilization, $H_\infty$ control, etc., have been established [7, 9, 14, 15]. However, in many realistic systems the parameters usually change with time, which leads to non-autonomous systems. Thus, more attention has recently been paid to the stability and stabilization of non-autonomous systems [1, 6, 11, 17]. In particular, cellular neural networks with time-varying coefficients and delays were studied in [7]. Based on the Lyapunov functional method and the use of matrix inequality techniques, the authors established criteria for boundedness, global asymptotic stability and exponential stability. However, these conditions cannot be extended to systems with fast time-varying delays because of the restrictive assumption that the delay functions are continuously differentiable with derivatives bounded above by a constant strictly less than one.

In this paper, we consider the problem of exponential stabilization for a class of non-autonomous cellular neural networks with time-varying delays. The system under consideration is subject to time-varying coefficients with various activation functions and two cases of time-varying delays: (1) the state delay is differentiable and the delay derivative has an upper bound, and (2) the delays are bounded but not necessarily differentiable. In the latter case, the restriction on the derivative of the time-delay functions is removed, which means that fast time-varying delays are allowed. Based on the Lyapunov-Krasovskii functional method combined with the use of the Razumikhin technique, we establish new delay-dependent conditions for the design of a memoryless state feedback controller that exponentially stabilizes the system. The derived conditions are formulated in terms of the solution of suitable Riccati differential equations (RDEs), which allows simultaneous computation of two bounds that characterize the exponential stability rate of the solution. Numerical examples are given to illustrate the effectiveness of our results.

The rest of the paper is organized as follows. Section 2 presents definitions and some technical propositions needed for the proof of the main result. In Section 3, new delay-dependent conditions in terms of Riccati differential equations are derived for the exponential stabilization of the system. Illustrative examples are given in Section 4. The paper ends with a conclusion and the cited references.

Notations. The following notations will be used throughout this paper: $\mathbb{R}^+$ denotes the set of all real non-negative numbers; $\mathbb{R}^n$ denotes the $n$-dimensional space with the scalar product $\langle x, y\rangle = \sum_{i=1}^{n} x_i y_i$ and the vector norm $\|x\| = \sqrt{\sum_{i=1}^{n} x_i^2}$; $\mathbb{R}^{n\times r}$ denotes the space of all $(n\times r)$-dimensional matrices; $A^T$ denotes the transpose of a matrix $A$; $A$ is symmetric if $A = A^T$; $I$ denotes the identity matrix; $\lambda(A)$ denotes the set of all eigenvalues of $A$; $\lambda_{\max}(A)$ ($\lambda_{\min}(A)$, resp.) denotes the maximal (minimal, resp.) real part of the eigenvalues of $A$; $x_t := \{x(t+s) : s \in [-h, 0]\}$ and $\|x_t\| = \sup_{-h\le s\le 0}\|x(t+s)\|$; a matrix $A$ is called semi-positive definite ($A \ge 0$) if $\langle Ax, x\rangle \ge 0$ for all $x \in \mathbb{R}^n$, and positive definite ($A > 0$) if $\langle Ax, x\rangle > 0$ for all $x \ne 0$; $A > B$ means $A - B > 0$; $\mu(A)$ denotes the matrix measure of $A$, defined by $\mu(A) = \frac{1}{2}\lambda_{\max}(A + A^T)$; $SM^+(0,\infty)$ denotes the set of continuous, symmetric, semi-positive definite matrix functions on $[0,\infty)$; $BM^+(0,\infty)$ denotes the subset of $SM^+(0,\infty)$ consisting of bounded matrix functions; $C([-d, 0], \mathbb{R}^n)$ denotes the Banach space of all $\mathbb{R}^n$-valued continuous functions on $[-d, 0]$ with the norm $\|x\| = \sup_{t\in[-d,0]}\|x(t)\|$ for $x(\cdot) \in C([-d, 0], \mathbb{R}^n)$.
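As a concrete reading of part of this notation, the following short Python sketch (an illustration added here, not part of the paper; the function names and the sample matrix are choices made for this sketch) computes the matrix measure $\mu(A) = \frac{1}{2}\lambda_{\max}(A + A^T)$ and tests semi-positive definiteness through the eigenvalues of the symmetric part.

```python
import numpy as np

def matrix_measure(A: np.ndarray) -> float:
    """Matrix measure mu(A) = (1/2) * lambda_max(A + A^T)."""
    sym = (A + A.T) / 2.0                     # lambda_max(sym) = (1/2) lambda_max(A + A^T)
    return float(np.max(np.linalg.eigvalsh(sym)))

def is_semi_positive_definite(A: np.ndarray, tol: float = 1e-10) -> bool:
    """A >= 0 in the sense <Ax, x> >= 0 for all x (checked on the symmetric part)."""
    sym = (A + A.T) / 2.0
    return bool(np.min(np.linalg.eigvalsh(sym)) >= -tol)

if __name__ == "__main__":
    A = np.array([[-2.0, 1.0],
                  [ 0.0, -1.0]])
    print("mu(A) =", matrix_measure(A))
    print("A >= 0 ?", is_semi_positive_definite(A))
```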

2 Preliminaries

Consider a class of non-autonomous cellular neural networks with time-varying delays of the form

$\dot{x}(t) = -A(t)x(t) + W_0(t)f(x(t)) + W_1(t)g(x(t-h(t))) + W_2(t)\int_{t-\kappa(t)}^{t} c(x(s))\,ds + B(t)u(t), \quad t \ge 0,$
$x(t) = \phi(t), \quad t \in [-d, 0], \quad d = \max\{h, \kappa\},$    (2.1)

where $x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T \in \mathbb{R}^n$ is the state; $u(\cdot) \in L_2([0, t], \mathbb{R}^m)$ is the control; $n$ is the number of neurons; $f(x(t)) = (f_i(x_i(t)))_{n\times 1}$, $g(x(t-h(t))) = (g_i(x_i(t-h(t))))_{n\times 1}$ and $c(x(t)) = (c_i(x_i(t)))_{n\times 1}$ are the activation functions; $A(t) = \mathrm{diag}(a_1(t), a_2(t), \ldots, a_n(t))$ represents the self-feedback term; $W_0(t), W_1(t), W_2(t)$ denote the connection weight matrices; and $B(t)$ is the control input matrix. The time-varying delay functions $h(t), \kappa(t)$ are continuous and satisfy either condition (D1) or (D2):

(D1) $0 \le h(t) \le h$, $\dot{h}(t) \le \mu < 1$, $0 \le \kappa(t) \le \kappa$, $\forall t \ge 0$;
(D2) $0 \le h(t) \le h$, $0 \le \kappa(t) \le \kappa$, $\forall t \ge 0$.

The initial function $\phi(t) \in C([-d, 0], \mathbb{R}^n)$, with the norm $\|\phi\| = \sup_{-d\le t\le 0}\|\phi(t)\|$.

In this paper, we introduce the following assumptions for system (2.1).

(H1) The matrix functions $A(t), W_0(t), W_1(t), W_2(t)$ and $B(t)$ are continuous on $[0, \infty)$, and $a_i(t) > 0$ for all $t \ge 0$, $i = 1, 2, \ldots, n$;

(H2) The activation functions $f(\cdot), g(\cdot), c(\cdot)$ satisfy the following growth conditions:

$|f_i(\xi)| \le a_i|\xi|, \quad |g_i(\xi)| \le b_i|\xi|, \quad |c_i(\xi)| \le c_i|\xi|, \quad i = 1, 2, \ldots, n, \quad \forall \xi \in \mathbb{R},$    (2.2)

where $a_i, b_i, c_i$ are given positive constants.
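To make assumption (H2) concrete, here is a minimal Python sketch (illustrative only; the particular activation functions and constants are assumptions chosen for this sketch, not taken from the paper). It checks the growth conditions (2.2) on a sampled grid for tanh-type and saturation-type activations, for which $a_i = b_i = 1$ works.

```python
import numpy as np

# Hypothetical example activations; both satisfy |sigma(xi)| <= |xi|.
def f(xi):   # componentwise tanh
    return np.tanh(xi)

def g(xi):   # componentwise saturation: 0.5 * (|xi + 1| - |xi - 1|)
    return 0.5 * (np.abs(xi + 1.0) - np.abs(xi - 1.0))

a_i = b_i = 1.0  # growth constants in (2.2) for these choices

xi = np.linspace(-10.0, 10.0, 2001)
assert np.all(np.abs(f(xi)) <= a_i * np.abs(xi) + 1e-12), "f violates (2.2)"
assert np.all(np.abs(g(xi)) <= b_i * np.abs(xi) + 1e-12), "g violates (2.2)"
print("Sampled check of the growth conditions (2.2) passed.")
```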


Next, we recall some definitions for system (2.1) as follows.

Definition 2.1. For given $\alpha > 0$, system (2.1) with $u(t) = 0$ is said to be $\alpha$-exponentially stable if there exists $\beta > 0$ such that every solution $x(t, \phi)$ of (2.1) satisfies

$\|x(t, \phi)\| \le \beta\|\phi\|e^{-\alpha t}, \quad \forall t \ge 0.$

System (2.1) is exponentially stable if it is $\alpha$-exponentially stable for some $\alpha > 0$.

Definition 2.2. System (2.1) is exponentially stabilizable if there exists a state feedback controller $u(t) = K(t)x(t)$, $K(t) \in \mathbb{R}^{m\times n}$, such that the closed-loop system

$\dot{x}(t) = [-A(t) + B(t)K(t)]x(t) + W_0(t)f(x(t)) + W_1(t)g(x(t-h(t))) + W_2(t)\int_{t-\kappa(t)}^{t} c(x(s))\,ds, \quad t \ge 0,$
$x(t) = \phi(t), \quad t \in [-d, 0],$    (2.3)

is exponentially stable.

We recall the following well-known technical propositions, which will be used in the proofs of our results.

Proposition 2.1 (Razumikhin stability theorem [5]). Consider the functional differential equation

$\dot{x}(t) = f(t, x_t), \quad t \ge 0, \qquad x(t) = \phi(t), \quad t \in [-d, 0],$    (2.4)

where $f : \mathbb{R} \times C([-d, 0], \mathbb{R}^n) \to \mathbb{R}^n$ takes $\mathbb{R} \times$ (bounded sets of $C([-d, 0], \mathbb{R}^n)$) into bounded sets of $\mathbb{R}^n$, and $u, v, w : \mathbb{R}^+ \to \mathbb{R}^+$ are continuous nondecreasing functions such that $u(s)$ and $v(s)$ are positive for $s > 0$, $u(0) = v(0) = 0$, and $v$ is strictly increasing. If there exists a continuous function $V : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}$ such that

$u(\|x\|) \le V(t, x) \le v(\|x\|), \quad \text{for } t \in \mathbb{R} \text{ and } x \in \mathbb{R}^n,$

and the derivative of $V$ along the solution $x(t)$ of system (2.4) satisfies

$\dot{V}(t, x(t)) \le -w(\|x(t)\|) \quad \text{whenever} \quad V(t+s, x(t+s)) < qV(t, x(t)), \ q > 1, \ \forall s \in [-d, 0],$

then the zero solution of system (2.4) is globally uniformly asymptotically stable.

Proposition 2.2 (Cauchy matrix inequality). For any $x, y \in \mathbb{R}^n$ and any positive definite matrix $N \in \mathbb{R}^{n\times n}$, we have

$2x^T y \le x^T N^{-1} x + y^T N y.$

Proposition 2.3. For any symmetric positive definite matrix $M > 0$, scalar $\nu > 0$ and vector function $\omega : [0, \nu] \to \mathbb{R}^n$ such that the integrations concerned are well defined, we have

$\left(\int_0^{\nu} \omega(s)\,ds\right)^T M \left(\int_0^{\nu} \omega(s)\,ds\right) \le \nu \int_0^{\nu} \omega^T(s)M\omega(s)\,ds.$

Proposition 2.4 (Schur complement lemma). Let $X, Y, Z$ be any matrices of appropriate dimensions with $X = X^T$, $Y = Y^T > 0$. Then $X + Z^T Y^{-1} Z < 0$ if and only if

$\begin{pmatrix} X & Z^T \\ Z & -Y \end{pmatrix} < 0.$
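These propositions are standard. As a quick numerical sanity check (an illustrative sketch with randomly generated data chosen here, not an example from the paper), the following Python snippet verifies Proposition 2.2 and the equivalence in Proposition 2.4 on a random instance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Proposition 2.2: 2 x^T y <= x^T N^{-1} x + y^T N y for N > 0.
x, y = rng.standard_normal(n), rng.standard_normal(n)
M = rng.standard_normal((n, n))
N = M @ M.T + n * np.eye(n)                      # a positive definite N
lhs = 2 * x @ y
rhs = x @ np.linalg.solve(N, x) + y @ N @ y      # x^T N^{-1} x + y^T N y
assert lhs <= rhs + 1e-9

# Proposition 2.4: X + Z^T Y^{-1} Z < 0  iff  [[X, Z^T], [Z, -Y]] < 0.
X = -(M @ M.T) - n * np.eye(n)                   # a negative definite X = X^T
Y = N                                            # Y = Y^T > 0
Z = 0.1 * rng.standard_normal((n, n))
left = np.max(np.linalg.eigvalsh(X + Z.T @ np.linalg.solve(Y, Z))) < 0
block = np.block([[X, Z.T], [Z, -Y]])
right = np.max(np.linalg.eigvalsh((block + block.T) / 2)) < 0
assert left == right
print("Numerical checks of Propositions 2.2 and 2.4 passed:", left, right)
```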


3 Main result

In this section, we present new sufficient conditions for the exponential stabilization of the non-autonomous neural network system (2.1). First, we consider the case where the delay functions satisfy condition (D1). For $\alpha > 0$ and $P(t) \in SM^+(0, \infty)$, we denote

$F = \mathrm{diag}\{a_i\}, \quad G = \mathrm{diag}\{b_i\}, \quad H = \mathrm{diag}\{c_i\}, \quad i = 1, 2, \ldots, n,$
$S(t) = W_0(t)W_0^T(t) + (1-\mu)^{-1}W_1(t)W_1^T(t) + \kappa e^{2\alpha\kappa}W_2(t)W_2^T(t),$
$\mathcal{A}(t) = -A(t) - \lambda_d B(t)B^T(t) + \alpha I + \lambda_d S(t), \quad P_d(t) = P(t) + \lambda_d I,$
$Q(t) = 2\alpha\lambda_d I + \lambda_d^2 S(t) + F^2 + G^2 + \kappa H^2, \quad R(t) = S(t) - B(t)B^T(t),$
$\lambda_d = e^{-d}, \quad \delta_1 = \max_{1\le i\le n} b_i^2, \quad \delta_2 = \max_{1\le i\le n} c_i^2,$
$p_0 = \lambda_{\max}(P(0)), \quad \Lambda = p_0 + \lambda_d + \delta_1\,\frac{1 - e^{-2\alpha h}}{2\alpha} + \delta_2\,\frac{2\alpha\kappa + e^{-2\alpha\kappa} - 1}{4\alpha^2}.$

The following theorem presents conditions under which system (2.1) is $\alpha$-exponentially stabilizable.

Theorem 3.1. Let conditions (H1), (H2) and (D1) hold. Then, for given $\alpha > 0$, system (2.1) is exponentially stabilizable if there exists a matrix function $P(t) \in SM^+(0, \infty)$ satisfying the following Riccati differential equation:

$\dot{P}(t) + \mathcal{A}^T(t)P(t) + P(t)\mathcal{A}(t) + P(t)R(t)P(t) + Q(t) = 0.$    (3.1)

The state feedback control is given by

$u(t) = -\tfrac{1}{2}B^T(t)P_d(t)x(t), \quad t \ge 0.$    (3.2)

Moreover, every solution $x(t, \phi)$ of the closed-loop system (2.3) satisfies

$\|x(t, \phi)\| \le \sqrt{\frac{\Lambda}{\lambda_d}}\,\|\phi\|e^{-\alpha t}, \quad t \ge 0.$
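Before turning to the proof, a small numerical sketch may help in reading the conclusion of Theorem 3.1. The script below is illustrative only: the scalar data, the candidate value $P(0)$ and the matrix $B$ are assumptions made here, not data from the paper. It evaluates $\lambda_d$ and $\Lambda$, the resulting envelope coefficient $\sqrt{\Lambda/\lambda_d}$, and the feedback gain $K(t) = -\tfrac{1}{2}B^T(t)P_d(t)$ from a candidate $P(t)$.

```python
import numpy as np

# Assumed scalar data (n = 1) for illustration only.
alpha, h, kappa, mu = 0.5, 0.4, 0.3, 0.5
b1, c1 = 1.0, 1.0                      # growth constants from (H2)
d = max(h, kappa)
lam_d = np.exp(-d)                     # lambda_d = e^{-d}
delta1, delta2 = b1**2, c1**2

def Lambda(p0):
    """Lambda = p0 + lam_d + delta1*(1-e^{-2ah})/(2a) + delta2*(2ak+e^{-2ak}-1)/(4a^2)."""
    term1 = delta1 * (1.0 - np.exp(-2 * alpha * h)) / (2 * alpha)
    term2 = delta2 * (2 * alpha * kappa + np.exp(-2 * alpha * kappa) - 1.0) / (4 * alpha**2)
    return p0 + lam_d + term1 + term2

def feedback_gain(P_t, B_t):
    """K(t) = -1/2 * B^T(t) * P_d(t) with P_d(t) = P(t) + lam_d * I, cf. (3.2)."""
    P_d = P_t + lam_d * np.eye(P_t.shape[0])
    return -0.5 * B_t.T @ P_d

P0 = np.array([[0.8]])                 # candidate value P(0) of an RDE solution
B0 = np.array([[1.0]])
p0 = float(np.max(np.linalg.eigvalsh(P0)))
print("decay envelope coefficient:", np.sqrt(Lambda(p0) / lam_d))
print("K(0) =", feedback_gain(P0, B0))
```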

Proof. Let $P(t)$ be a solution of (3.1) and consider the closed-loop system (2.3) with the following Lyapunov-Krasovskii functional:

$V(t, x_t) = V_1 + V_2 + V_3,$

where

$V_1(t, x_t) = x^T(t)P_d(t)x(t),$
$V_2(t, x_t) = \int_{t-h(t)}^{t} e^{2\alpha(s-t)}x^T(s)GGx(s)\,ds,$
$V_3(t, x_t) = \int_{-\kappa}^{0}\int_{t+s}^{t} e^{2\alpha(\tau-t)}x^T(\tau)HHx(\tau)\,d\tau\,ds.$


It is easy to verify that

$V(t, x_t) \ge \lambda_d\|x(t)\|^2, \quad t \in \mathbb{R}^+.$    (3.3)

Taking the derivative of $V_1$ in $t$ along the solution of (2.3), we obtain

$\dot{V}_1 = x^T(t)\dot{P}(t)x(t) + 2x^T(t)P_d(t)\dot{x}(t)$
$= x^T(t)\dot{P}(t)x(t) + x^T(t)\big[-P_d(t)A(t) - A^T(t)P_d(t) + P_d(t)B(t)K(t) + K^T(t)B^T(t)P_d(t)\big]x(t)$
$\quad + 2x^T(t)P_d(t)W_0(t)f(x(t)) + 2x^T(t)P_d(t)W_1(t)g(x(t-h(t))) + 2x^T(t)P_d(t)W_2(t)\int_{t-\kappa(t)}^{t} c(x(s))\,ds.$    (3.4)

From (2.2), by using Propositions 2.2 and 2.3, we have the following estimates:

$2x^T(t)P_d(t)W_0(t)f(x(t)) \le x^T(t)P_d(t)W_0(t)W_0^T(t)P_d(t)x(t) + f^T(x(t))f(x(t))$
$\le x^T(t)P_d(t)W_0(t)W_0^T(t)P_d(t)x(t) + x^T(t)FFx(t);$    (3.5)

$2x^T(t)P_d(t)W_1(t)g(x(t-h(t))) \le (1-\mu)^{-1}x^T(t)P_d(t)W_1(t)W_1^T(t)P_d(t)x(t) + (1-\mu)g^T(x(t-h(t)))g(x(t-h(t)))$
$\le (1-\mu)^{-1}x^T(t)P_d(t)W_1(t)W_1^T(t)P_d(t)x(t) + (1-\mu)x^T(t-h(t))GGx(t-h(t));$    (3.6)

$2x^T(t)P_d(t)W_2(t)\int_{t-\kappa(t)}^{t} c(x(s))\,ds$
$\le \kappa e^{2\alpha\kappa}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t) + \kappa^{-1}e^{-2\alpha\kappa}\Big(\int_{t-\kappa(t)}^{t} c(x(s))\,ds\Big)^T\Big(\int_{t-\kappa(t)}^{t} c(x(s))\,ds\Big)$
$\le \kappa e^{2\alpha\kappa}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t) + e^{-2\alpha\kappa}\int_{t-\kappa(t)}^{t} c^T(x(s))c(x(s))\,ds$
$\le \kappa e^{2\alpha\kappa}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t) + e^{-2\alpha\kappa}\int_{t-\kappa}^{t} x^T(s)HHx(s)\,ds.$    (3.7)

From (3.4) to (3.7), we have

$\dot{V}_1 \le x^T(t)\dot{P}(t)x(t) + x^T(t)\big[-P_d(t)A(t) - A^T(t)P_d(t) + P_d(t)B(t)K(t) + K^T(t)B^T(t)P_d(t) + FF\big]x(t)$
$\quad + x^T(t)P_d(t)S(t)P_d(t)x(t) + (1-\mu)x^T(t-h(t))GGx(t-h(t)) + e^{-2\alpha\kappa}\int_{t-\kappa}^{t} x^T(s)HHx(s)\,ds.$    (3.8)


Next, taking the derivatives of $V_2$ and $V_3$ along the solution of (2.3), we obtain

$\dot{V}_2 \le -2\alpha V_2 + x^T(t)GGx(t) - (1-\mu)x^T(t-h(t))GGx(t-h(t));$

$\dot{V}_3 \le -2\alpha V_3 + \kappa x^T(t)HHx(t) - e^{-2\alpha\kappa}\int_{-\kappa}^{0} x^T(t+s)HHx(t+s)\,ds$
$= -2\alpha V_3 + \kappa x^T(t)HHx(t) - e^{-2\alpha\kappa}\int_{t-\kappa}^{t} x^T(s)HHx(s)\,ds.$    (3.9)

Thus, we have

$\dot{V} + 2\alpha V \le x^T(t)\big[\dot{P}(t) - P_d(t)A(t) - A^T(t)P_d(t) + 2\alpha P_d(t) + P_d(t)B(t)K(t) + K^T(t)B^T(t)P_d(t)$
$\quad + F^2 + G^2 + \kappa H^2\big]x(t) + x^T(t)P_d(t)S(t)P_d(t)x(t).$    (3.10)

Substituting $K(t) = -\tfrac{1}{2}B^T(t)P_d(t)$ into (3.10) leads to

$\dot{V} + 2\alpha V \le x^T(t)\big[\dot{P}(t) - P_d(t)A(t) - A^T(t)P_d(t) + 2\alpha P_d(t) + F^2 + G^2 + \kappa H^2\big]x(t)$
$\quad + x^T(t)P_d(t)\big[-B(t)B^T(t) + S(t)\big]P_d(t)x(t)$
$= x^T(t)\big[\dot{P}(t) + \mathcal{A}^T(t)P(t) + P(t)\mathcal{A}(t) + P(t)R(t)P(t) + Q(t)\big]x(t)$
$\quad - 2\lambda_d x^T(t)A(t)x(t) - \lambda_d^2 x^T(t)B(t)B^T(t)x(t).$    (3.11)

Since $P(t)$ is a solution of (3.1), it follows from (3.11) that

$\dot{V} + 2\alpha V \le -2\lambda_d\sum_{i=1}^{n} a_i(t)x_i^2(t) - \lambda_d^2\|B^T(t)x(t)\|^2 \le 0, \quad \forall t \ge 0,$

which, by integration from $0$ to $t$, implies

$V(t, x_t) \le V(0, x_0)e^{-2\alpha t}, \quad \forall t \ge 0.$

On the other hand,

$V(0, x_0) \le x^T(0)P(0)x(0) + \lambda_d\|x(0)\|^2 + \delta_1\int_{-h}^{0} e^{2\alpha s}\|x(s)\|^2\,ds + \delta_2\int_{-\kappa}^{0}\int_{s}^{0} e^{2\alpha\tau}\|x(s)\|^2\,d\tau\,ds$
$\le \Big(p_0 + \lambda_d + \delta_1\int_{-h}^{0} e^{2\alpha s}\,ds + \delta_2\int_{-\kappa}^{0}\int_{s}^{0} e^{2\alpha\tau}\,d\tau\,ds\Big)\|\phi\|^2 \le \Lambda\|\phi\|^2.$

Taking the estimate (3.3) into account, we finally obtain

$\|x(t, \phi)\| \le \sqrt{\frac{\Lambda}{\lambda_d}}\,\|\phi\|e^{-\alpha t}, \quad t \ge 0.$

This completes the proof of the theorem.


Remark 3.1. The exponential stabilization conditions given in Theorem 3.1 are derived in terms of the solution of a suitable Riccati differential equation (RDE). Various efficient numerical techniques for solving RDEs can be found in [8] and [16]. On the other hand, condition (3.1) can in fact be relaxed to the matrix inequality

$\dot{P}(t) + \mathcal{A}^T(t)P(t) + P(t)\mathcal{A}(t) + P(t)R(t)P(t) + Q(t) \le 0.$    (3.12)

Remark 3.2. Based on the existence of some constant diagonal matrices satisfying certain matrix inequalities uniformly, H. Jiang and Z. Teng [7] proved global exponential stability for a class of CNNs in which the upper bounds of the delay derivatives are less than one. However, it can be seen that the conditions proposed in [7] are rather conservative. In contrast to [7], in the case of differentiable delays we require neither the self-feedback term $A(t)$ to be uniformly positive nor the boundedness of $A(t), W_0(t), W_1(t), W_2(t)$.
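As mentioned in Remark 3.1, the RDE (3.1) can be handled numerically. The sketch below is only an illustration under strong assumptions: all coefficient matrices are taken constant (values invented here, not an example from the paper), in which case a constant $P$ solving (3.1) can be obtained from the corresponding algebraic Riccati equation via scipy.linalg.solve_continuous_are, provided $B B^T - S > 0$; for genuinely time-varying data one would instead integrate (3.1) as a matrix-valued ODE. The script checks the residual of (3.1) and forms the gain (3.2).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed constant problem data for a two-neuron illustration (chosen here, not taken from the paper).
n = 2
A  = np.diag([5.0, 5.0])                           # self-feedback term, a_i(t) > 0
W0 = np.array([[0.2, -0.1], [0.1, 0.3]])
W1 = np.array([[0.1,  0.2], [0.0, 0.1]])
W2 = np.array([[0.1,  0.0], [0.1, 0.1]])
B  = np.eye(n)
F = G = H = np.eye(n)                              # diag{a_i}, diag{b_i}, diag{c_i} with unit constants
alpha, h, kappa, mu = 0.5, 0.4, 0.3, 0.5
d = max(h, kappa)
lam_d = np.exp(-d)

S    = W0 @ W0.T + W1 @ W1.T / (1 - mu) + kappa * np.exp(2 * alpha * kappa) * W2 @ W2.T
Acal = -A - lam_d * B @ B.T + alpha * np.eye(n) + lam_d * S
R    = S - B @ B.T                                 # here R < 0, i.e. B B^T - S > 0
Q    = 2 * alpha * lam_d * np.eye(n) + lam_d**2 * S + F @ F + G @ G + kappa * H @ H

# For constant data, a constant P solving (3.1) solves the algebraic equation
#   Acal^T P + P Acal + P R P + Q = 0,
# which is a CARE with "control weight" inv(B B^T - S).
P = solve_continuous_are(Acal, np.eye(n), Q, np.linalg.inv(B @ B.T - S))

residual = Acal.T @ P + P @ Acal + P @ R @ P + Q   # should be ~ 0
K = -0.5 * B.T @ (P + lam_d * np.eye(n))           # feedback gain (3.2)
print("min eig of P:", np.min(np.linalg.eigvalsh(P)))
print("residual norm of (3.1):", np.linalg.norm(residual))
print("K =\n", K)
```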

In the sequel, we consider the problem of exponential stabilization for system (2.1) with no restriction on the derivative of the time-varying delay functions. Based on the Razumikhin stability theorem, we derive conditions for the exponential stabilization of system (2.1) in terms of a Riccati differential equation. To this end, we assume:

(H1') The matrix functions $A(t), W_0(t), W_1(t), W_2(t)$ and $B(t)$ are continuous on $[0, \infty)$, and $a_i(t) \ge a_i > 0$ for all $t \ge 0$, $i = 1, 2, \ldots, n$.

Let $P(t) \in BM^+(0, \infty)$. We define the following notation:

$a = \min_{1\le i\le n} a_i, \quad \lambda_b = \inf_{t\in\mathbb{R}^+}\lambda_{\min}(B(t)B^T(t)),$
$\theta = 2\lambda_d a + \lambda_d^2\lambda_b, \quad \sigma = \frac{\kappa^2 + 1}{2}, \quad p = \sup_{t\in\mathbb{R}^+}\|P(t)\|,$
$S(t) = \lambda_d W_0(t)W_0^T(t) + \delta_1 W_1(t)W_1^T(t) + \delta_2 W_2(t)W_2^T(t),$
$\mathcal{A}(t) = -A(t) - \lambda_d B(t)B^T(t) + \sigma I + S(t),$
$R(t) = \lambda_d^{-1}S(t) - B(t)B^T(t), \quad Q(t) = 2\sigma\lambda_d I + F^2 + \lambda_d S(t).$

Then we have the following theorem.

Theorem 3.2. Let (H1'), (H2) and (D2) hold. Then system (2.1) is exponentially stabilizable if there exists a matrix function $P(t) \in BM^+(0, \infty)$ satisfying the following Riccati differential equation:

$\dot{P}(t) + \mathcal{A}^T(t)P(t) + P(t)\mathcal{A}(t) + P(t)R(t)P(t) + Q(t) = 0.$    (3.13)

The state feedback control is given by

$u(t) = -\tfrac{1}{2}B^T(t)P_d(t)x(t), \quad t \ge 0.$    (3.14)

Moreover, every solution $x(t, \phi)$ of the closed-loop system (2.3) satisfies

$\|x(t, \phi)\| \le \beta\|\phi\|e^{-\alpha t}, \quad \forall t \ge 0,$

where $\beta = 1 + \dfrac{p}{\lambda_d}$ and $\alpha = \dfrac{\theta}{2(p + \lambda_d)}$.
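To see what the controller (3.14) looks like in operation, the following sketch (illustrative only; the two-neuron data, the non-differentiable delay functions, and the constant candidate matrix P are assumptions made here, and this P is not claimed to satisfy (3.13)) simulates the closed-loop system (2.3) with a fixed-step Euler scheme and a history buffer, using bounded, continuous, fast-varying delays of the kind allowed by (D2), and prints the decay of the state norm.

```python
import numpy as np

# Assumed two-neuron closed-loop simulation (illustrative only; data chosen here, not from the paper).
n = 2
A  = np.diag([5.0, 5.0])
W0 = np.array([[0.2, -0.1], [0.1, 0.3]])
W1 = np.array([[0.1,  0.2], [0.0, 0.1]])
W2 = np.array([[0.1,  0.0], [0.1, 0.1]])
B  = np.eye(n)
h, kappa = 0.4, 0.3
d = max(h, kappa)
lam_d = np.exp(-d)
P = 0.3 * np.eye(n)                        # assumed constant candidate P(t); not claimed to solve (3.13)
K = -0.5 * B.T @ (P + lam_d * np.eye(n))   # memoryless feedback (3.14)

f = g = c = np.tanh                        # activations satisfying (H2) with unit constants

def h_of(t):                               # bounded, continuous, non-differentiable, fast-varying delays,
    return h * abs(np.sin(7.0 * t))        # as allowed by condition (D2)

def kappa_of(t):
    return kappa * abs(np.cos(9.0 * t))

dt, T = 1e-3, 6.0
mem = int(d / dt)                          # history buffer length covering [t - d, t]
hist = np.tile(np.array([1.0, -0.8]), (mem + 1, 1))   # constant initial function phi on [-d, 0]
x = hist[-1].copy()

for k in range(int(T / dt)):
    t = k * dt
    x_h = hist[len(hist) - 1 - int(h_of(t) / dt)]     # x(t - h(t))
    m = max(1, int(kappa_of(t) / dt))
    integral = c(hist[-m:]).sum(axis=0) * dt          # ~ int_{t-kappa(t)}^{t} c(x(s)) ds
    u = K @ x                                         # u(t) = -1/2 B^T P_d x(t)
    x = x + dt * (-A @ x + W0 @ f(x) + W1 @ g(x_h) + W2 @ integral + B @ u)
    hist = np.vstack([hist[1:], x])                   # slide the history window
    if k % 1000 == 0:
        print(f"t = {t:4.1f}   ||x(t)|| = {np.linalg.norm(x):.4f}")
```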

Proof. Let $P(t)$ be a solution of (3.13). With the state feedback control (3.14), consider the following Lyapunov-Krasovskii functional for the closed-loop system (2.3):

$V(t, x(t)) = \langle P_d(t)x(t), x(t)\rangle = x^T(t)P(t)x(t) + \lambda_d\|x(t)\|^2, \quad t \ge 0.$

It is easy to verify that

$\lambda_d\|x(t)\|^2 \le V(t, x(t)) \le (p + \lambda_d)\|x(t)\|^2, \quad \forall t \ge 0.$    (3.15)

The time derivative of $V(t, x(t))$ along the solution of system (2.3) is estimated as follows:

$\dot{V}(t, x(t)) = x^T(t)\dot{P}_d(t)x(t) + 2x^T(t)P_d(t)\dot{x}(t)$
$= x^T(t)\dot{P}(t)x(t) + 2x^T(t)P_d(t)\Big[(-A(t) + B(t)K(t))x(t) + W_0(t)f(x(t)) + W_1(t)g(x(t-h(t))) + W_2(t)\int_{t-\kappa(t)}^{t} c(x(s))\,ds\Big]$
$= x^T(t)\big[\dot{P}(t) - P_d(t)A(t) - A^T(t)P_d(t) + P_d(t)B(t)K(t) + K^T(t)B^T(t)P_d(t)\big]x(t)$
$\quad + 2x^T(t)P_d(t)W_0(t)f(x(t)) + 2x^T(t)P_d(t)W_1(t)g(x(t-h(t))) + 2x^T(t)P_d(t)W_2(t)\int_{t-\kappa(t)}^{t} c(x(s))\,ds.$    (3.16)

By using Proposition 2.2 and condition (2.2), we have

$2x^T(t)P_d(t)W_0(t)f(x(t)) \le x^T(t)P_d(t)W_0(t)W_0^T(t)P_d(t)x(t) + f^T(x(t))f(x(t))$
$\le x^T(t)P_d(t)W_0(t)W_0^T(t)P_d(t)x(t) + x^T(t)FFx(t).$    (3.17)

In the light of the Razumikhin stability theorem, we assume that, for any $\epsilon > 0$,

$V(t+s, x(t+s)) < (1+\epsilon)V(t, x(t)), \quad \forall s \in [-d, 0], \ \forall t > 0.$

Therefore, by Propositions 2.2 and 2.3, the following estimates hold:

$2x^T(t)P_d(t)W_1(t)g(x(t-h(t)))$
$\le \delta_1\lambda_d^{-1}x^T(t)P_d(t)W_1(t)W_1^T(t)P_d(t)x(t) + \delta_1^{-1}\lambda_d\,g^T(x(t-h(t)))g(x(t-h(t)))$
$\le \delta_1\lambda_d^{-1}x^T(t)P_d(t)W_1(t)W_1^T(t)P_d(t)x(t) + \delta_1^{-1}\lambda_d\,x^T(t-h(t))GGx(t-h(t))$
$\le \delta_1\lambda_d^{-1}x^T(t)P_d(t)W_1(t)W_1^T(t)P_d(t)x(t) + \lambda_d\|x(t-h(t))\|^2$
$\le \delta_1\lambda_d^{-1}x^T(t)P_d(t)W_1(t)W_1^T(t)P_d(t)x(t) + V(t-h(t), x(t-h(t)))$
$\le \delta_1\lambda_d^{-1}x^T(t)P_d(t)W_1(t)W_1^T(t)P_d(t)x(t) + (1+\epsilon)x^T(t)P_d(t)x(t);$    (3.18)

$2x^T(t)P_d(t)W_2(t)\int_{t-\kappa(t)}^{t} c(x(s))\,ds$
$\le \delta_2\lambda_d^{-1}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t) + \delta_2^{-1}\lambda_d\Big(\int_{t-\kappa(t)}^{t} c(x(s))\,ds\Big)^T\Big(\int_{t-\kappa(t)}^{t} c(x(s))\,ds\Big)$
$\le \delta_2\lambda_d^{-1}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t) + \delta_2^{-1}\lambda_d\kappa\int_{t-\kappa(t)}^{t} \|c(x(s))\|^2\,ds$
$\le \delta_2\lambda_d^{-1}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t) + \delta_2^{-1}\lambda_d\kappa\int_{t-\kappa(t)}^{t} x^T(s)HHx(s)\,ds$
$\le \delta_2\lambda_d^{-1}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t) + \lambda_d\kappa\int_{t-\kappa(t)}^{t} \|x(s)\|^2\,ds$
$\le \delta_2\lambda_d^{-1}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t) + \kappa\int_{-\kappa(t)}^{0} \lambda_d\|x(t+s)\|^2\,ds$
$\le \delta_2\lambda_d^{-1}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t) + \kappa(1+\epsilon)\int_{-\kappa(t)}^{0} x^T(t)P_d(t)x(t)\,ds$
$\le \delta_2\lambda_d^{-1}x^T(t)P_d(t)W_2(t)W_2^T(t)P_d(t)x(t) + \kappa^2(1+\epsilon)x^T(t)P_d(t)x(t).$    (3.19)

Combining (3.16)-(3.19), we obtain

$\dot{V}(t, x(t)) \le x^T(t)\big[\dot{P}(t) - P_d(t)A(t) - A^T(t)P_d(t) + P_d(t)B(t)K(t) + K^T(t)B^T(t)P_d(t)$
$\quad + (\kappa^2 + 1)(1+\epsilon)P_d(t) + FF + P_d(t)W_0(t)W_0^T(t)P_d(t) + \delta_1\lambda_d^{-1}P_d(t)W_1(t)W_1^T(t)P_d(t)$
$\quad + \delta_2\lambda_d^{-1}P_d(t)W_2(t)W_2^T(t)P_d(t)\big]x(t).$    (3.20)

Substituting $K(t) = -\tfrac{1}{2}B^T(t)P_d(t)$ and letting $\epsilon \to 0^+$, (3.20) leads to

$\dot{V}(t, x(t)) \le x^T(t)\big[\dot{P}(t) - P_d(t)A(t) - A^T(t)P_d(t) - P_d(t)B(t)B^T(t)P_d(t)$
$\quad + (\kappa^2 + 1)P_d(t) + FF + P_d(t)W_0(t)W_0^T(t)P_d(t) + \delta_1\lambda_d^{-1}P_d(t)W_1(t)W_1^T(t)P_d(t)$
$\quad + \delta_2\lambda_d^{-1}P_d(t)W_2(t)W_2^T(t)P_d(t)\big]x(t).$    (3.21)

From (3.21) we obtain

$\dot{V}(t, x(t)) \le x^T(t)\big[\dot{P}(t) + \mathcal{A}^T(t)P(t) + P(t)\mathcal{A}(t) + P(t)R(t)P(t) + Q(t)\big]x(t)$
$\quad - 2\lambda_d x^T(t)A(t)x(t) - \lambda_d^2 x^T(t)B(t)B^T(t)x(t).$
