ESTIMATING THE REGION OF ATTRACTION FOR AN AUTONOMOUS SYSTEM WITH CPA LYAPUNOV FUNCTIONS
Tran Thi Hue, Dinh Van Tiep *
University of Technology - TNU
ABSTRACT
Lyapunov converse theorems only give sufficient conditions for the existence of a Lyapunov function in a nonconstructive way; they do not help to construct such a function. Recently, techniques for constructing Continuous Piecewise Affine (CPA) Lyapunov functions have been developed. Based on these techniques, we can construct such a function, and the construction can then be used to obtain an accurate estimate of the region of attraction. This is the main result of the paper. We use the method to estimate the region of attraction in the case of asymptotic stability, and we study the technique for autonomous systems.
Keywords: autonomous system; the region of attraction; Lyapunov theory; Lyapunov functions;
CPA Lyapunov functions.
INTRODUCTION
We study the autonomous system
𝒙̇ = 𝑓(𝒙) (1)
where 𝑓: 𝑹ⁿ → 𝑹ⁿ belongs to 𝐶²(𝑹ⁿ, 𝑹ⁿ) and 𝑓(𝟎) = 𝟎, i.e., 𝒙* = 𝟎 is an equilibrium point.
The Lyapunov theorem asserts that if we can find a neighborhood 𝐷 ⊂ 𝑹ⁿ of the origin 𝟎 ∈ 𝑹ⁿ and a continuously differentiable, positive definite function 𝑉: 𝑹ⁿ → 𝑹, 𝑉(𝟎) = 0, called a Lyapunov function, which possesses a negative definite derivative along the trajectories of (1), then 𝒙* = 𝟎 is asymptotically stable. Moreover, if 𝑉 satisfies
𝑎‖𝒙‖^𝛼 ≤ 𝑉(𝒙) ≤ 𝑏‖𝒙‖^𝛼,  𝑉̇(𝒙) = ∇𝑉(𝒙) ⋅ 𝑓(𝒙) ≤ −𝑐‖𝒙‖^𝛼, (2)
for some 𝑎, 𝑏, 𝑐, 𝛼 > 0 and all 𝒙 ∈ 𝑹ⁿ, then 𝒙* = 𝟎 is exponentially stable. The existence of such a Lyapunov function 𝑉 was also already affirmed when 𝒙* = 𝟎 is exponentially stable. However, a way to construct 𝑉 has become known only recently (cf. [1]). The construction of 𝑉 is based on linear programming, with constraints guaranteeing condition (2),
relaxing the requirement of continuous differentiability to only local Lipschitz continuity. The constraints are represented linearly, and the function 𝑉 is now required to be only continuous and piecewise affine (CPA). To generalize the concept of the derivative, we introduce the (upper) Dini derivative along the trajectories of the system (1). That is,
$$ D^{+}V(\boldsymbol{x}) := \limsup_{t \to 0^{+}} \frac{V(\boldsymbol{x} + t f(\boldsymbol{x})) - V(\boldsymbol{x})}{t}. $$
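As a quick numerical illustration (my own sketch, not part of the paper), the upper Dini derivative along the flow can be approximated by the difference quotient above for a small 𝑡 > 0; the function 𝑉 and the field 𝑓 below are simple hypothetical choices.

```python
import numpy as np

def dini_derivative(V, f, x, t=1e-7):
    """Forward difference quotient approximating D+V(x) along x' = f(x)."""
    x = np.asarray(x, dtype=float)
    return (V(x + t * f(x)) - V(x)) / t

# Hypothetical CPA, positive definite function and a simple linear field.
V = lambda x: np.abs(x).sum()      # V(x) = |x_1| + |x_2|
f = lambda x: -x                   # exponentially stable at the origin
print(dini_derivative(V, f, [0.5, -0.25]))   # approx. -0.75 = -V(x)
```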
So, if we have a CPA function 𝑉 satisfying the following condition instead of (2), with 𝛼 = 1,
𝑎‖𝒙‖ ≤ 𝑉(𝒙) ≤ 𝑏‖𝒙‖,  𝐷⁺𝑉(𝒙) ≤ −𝑐‖𝒙‖, (2)′
then 𝒙* = 𝟎 is exponentially stable (cf. Theorem 4.10 in [4]). Conversely, if 𝒙* = 𝟎 is exponentially stable for (1), we can construct a CPA Lyapunov function 𝑉 defined on a neighborhood 𝐷 of the origin (cf. [1]). The main contribution of this article is to provide a method to estimate the region of attraction,
ℛ ≔ {𝒙 ∈ 𝑹ⁿ | limsup_{t→∞} 𝜙(𝑡, 𝒙) = 𝟎},
where 𝜙(𝑡, 𝒙) is the solution of (1) satisfying 𝜙(0, 𝒙) = 𝒙. The idea for this estimation is based on Proposition 1 (cf. [2] for details).
Proposition 1. If 𝑉: 𝐷 → 𝑹 is a positive definite function and 𝐷⁺𝑉(𝜙(𝑡, 𝒙)) < 0 for all 𝜙(𝑡, 𝒙) ∈ 𝐷, then, for every 𝑐 > 0, every compact, connected component of 𝑉⁻¹([0, 𝑐]) is a subset of ℛ.
LINEAR PROGRAMMING PROBLEM
In this section, we discuss the problem of constructing a CPA Lyapunov function based on a linear programming problem. To do so, we first consider a neighborhood 𝐷 ⊂ 𝑹ⁿ of the origin which can be partitioned into n-simplices Δ_ν. This simplicial partition is called a triangulation Λ. That is,
𝐷 = ⋃_{Δ_ν ∈ Λ} Δ_ν.
For each Δ_ν ∈ Λ, let 𝒙_0, 𝒙_1, …, 𝒙_n be its vertices. We require the triangulation to satisfy that 𝒙_0 = 𝟎 whenever 𝟎 ∈ Δ_ν, and that there exists Δ_ν ∈ Λ containing 𝟎. Now, we assign to each vertex 𝒙_i ∈ Δ_ν a value 𝑉_{𝒙_i} ∈ 𝑹. This assignment uniquely defines an affine function 𝑉_ν on Δ_ν by 𝑉_ν(𝒙_i) = 𝑉_{𝒙_i}. So, for every 𝒙 ∈ Δ_ν with 𝒙 = ∑_{i=0}^{n} λ_i 𝒙_i (0 ≤ λ_i ≤ 1, ∑_{i=0}^{n} λ_i = 1), we have 𝑉_ν(𝒙) = ∑_{i=0}^{n} λ_i 𝑉_{𝒙_i} = ∇𝑉_ν ⋅ 𝒙 + 𝑎_ν, for some constant vector ∇𝑉_ν ∈ 𝑹ⁿ and constant 𝑎_ν ∈ 𝑹. Let 𝐷_Λ be the set of all vertices of all n-simplices Δ_ν in Λ. Then, we can use the set {𝑉_𝒙 | 𝒙 ∈ 𝐷_Λ} ⊂ 𝑹 to parameterize a CPA function 𝑉 defined on 𝐷 by 𝑉|_{Δ_ν} = 𝑉_ν. We now set up the constraints on these parameters under which the obtained function 𝑉 fulfills (2)′.
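To make this parameterization concrete, the following sketch (my own illustration, not the authors' code) evaluates a CPA function on one simplex: it computes the barycentric coordinates λ_i of 𝒙 and returns 𝑉_ν(𝒙) = ∑ λ_i 𝑉_{𝒙_i} together with the constant gradient ∇𝑉_ν.

```python
import numpy as np

def barycentric_coords(x, vertices):
    """Solve x = sum_i lambda_i * x_i with sum_i lambda_i = 1 on an n-simplex."""
    V = np.asarray(vertices, dtype=float)        # shape (n+1, n)
    n = V.shape[1]
    A = np.vstack([V.T, np.ones((1, n + 1))])    # affine system for the lambdas
    b = np.concatenate([np.asarray(x, dtype=float), [1.0]])
    return np.linalg.solve(A, b)

def cpa_value_and_gradient(x, vertices, values):
    """V_nu(x) = sum_i lambda_i V_{x_i}; also return the constant gradient of V_nu."""
    lam = barycentric_coords(x, vertices)
    value = float(np.dot(lam, values))
    # Gradient: solve (x_i - x_0) . grad = V_{x_i} - V_{x_0}, i = 1..n.
    V = np.asarray(vertices, dtype=float)
    grad = np.linalg.solve(V[1:] - V[0], np.asarray(values[1:]) - values[0])
    return value, grad

if __name__ == "__main__":
    # Toy 2-simplex in R^2 with hypothetical vertex values.
    verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    vals = [0.0, 1.0, 2.0]
    print(cpa_value_and_gradient((0.25, 0.25), verts, vals))  # 0.75, gradient [1. 2.]
```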
Linear programming problem (LPP): The variables of the LPP are 𝑉_𝒙 (∀𝒙 ∈ 𝐷_Λ) and 𝑊_{ν,i} (∀i = 1, 2, …, n, ∀Δ_ν ∈ Λ). The constraints are:
Constraint 1. Set 𝑉_𝟎 = 0, and for every Δ_ν ∈ Λ with vertices 𝒙_0, 𝒙_1, …, 𝒙_n, we require
𝑉_{𝒙_i} ≥ ‖𝒙_i‖_2.
Constraint 2. For all i = 1, 2, …, n and all Δ_ν ∈ Λ, we require |∇𝑉_{ν,i}| ≤ 𝑊_{ν,i}, where ∇𝑉_{ν,i} is the i-th component of the constant vector ∇𝑉_ν.
Constraint 3. For every Δ_ν ∈ Λ with vertices 𝒙_0, 𝒙_1, …, 𝒙_n, and for all i = 0, 1, …, n, we require that
$$ \nabla V_{\nu} \cdot f(\boldsymbol{x}_i) + k_{\nu,i} \sum_{j=1}^{n} W_{\nu,j} \le -\|\boldsymbol{x}_i\|_2, $$
where the constant k_{ν,i} is defined by
$$ k_{\nu,i} := \frac{n K_{\nu}}{2}\, \|\boldsymbol{x}_i - \boldsymbol{x}_0\|_2 \left( \max_{j=1,\dots,n} \|\boldsymbol{x}_j - \boldsymbol{x}_0\|_2 + \|\boldsymbol{x}_i - \boldsymbol{x}_0\|_2 \right), $$
with K_ν an upper bound on all second-order partial derivatives of 𝑓 on Δ_ν.
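The constraints above can be assembled directly into a feasibility LP. The sketch below is my own illustration (not the authors' implementation), using scipy.optimize.linprog with the HiGHS backend; 𝑓 and the second-derivative bound K are user-supplied, and k_{ν,i} is computed from the formula given in Constraint 3.

```python
import numpy as np
from scipy.optimize import linprog

def cpa_lp_is_feasible(simplices, f, K):
    """simplices: list of (n+1)-tuples of vertex tuples, first vertex = x0
    (and x0 = 0 whenever the simplex contains the origin); f: R^n -> R^n;
    K: upper bound on the second-order partial derivatives of f."""
    verts = sorted({v for s in simplices for v in s})
    idx = {v: i for i, v in enumerate(verts)}
    n, nV, nS = len(verts[0]), len(verts), len(simplices)
    nvar = nV + nS * n                      # variables: V_x, then W_{nu,i}
    A_ub, b_ub, A_eq, b_eq = [], [], [], []

    # Constraint 1: V_x >= ||x||_2, written as -V_x <= -||x||_2.
    for v in verts:
        r = np.zeros(nvar); r[idx[v]] = -1.0
        A_ub.append(r); b_ub.append(-np.linalg.norm(v))
    origin = tuple(0.0 for _ in range(n))   # V_0 = 0 if the origin is a vertex
    if origin in idx:
        r = np.zeros(nvar); r[idx[origin]] = 1.0
        A_eq.append(r); b_eq.append(0.0)

    for s_i, simp in enumerate(simplices):
        x0 = np.array(simp[0], float)
        X = np.array([np.array(p, float) - x0 for p in simp[1:]])  # rows x_i - x0
        Xinv = np.linalg.inv(X)
        norms = [np.linalg.norm(np.array(p, float) - x0) for p in simp]
        cmax, Woff = max(norms), nV + s_i * n
        # Constraint 2: |grad V_{nu,i}| <= W_{nu,i}; grad V_nu = Xinv @ (V_{x_j} - V_{x0}).
        for i in range(n):
            for sign in (1.0, -1.0):
                r = np.zeros(nvar)
                for j in range(1, n + 1):
                    r[idx[simp[j]]] += sign * Xinv[i, j - 1]
                    r[idx[simp[0]]] -= sign * Xinv[i, j - 1]
                r[Woff + i] -= 1.0
                A_ub.append(r); b_ub.append(0.0)
        # Constraint 3: grad V_nu . f(x_i) + k_{nu,i} * sum_j W_{nu,j} <= -||x_i||_2.
        for i, p in enumerate(simp):
            k = 0.5 * n * K * norms[i] * (cmax + norms[i])         # k_{nu,i}
            c = Xinv.T @ np.array(f(np.array(p, float)), float)    # coeffs of V_{x_j} - V_{x0}
            r = np.zeros(nvar)
            for j in range(1, n + 1):
                r[idx[simp[j]]] += c[j - 1]
                r[idx[simp[0]]] -= c[j - 1]
            r[Woff:Woff + n] += k
            A_ub.append(r); b_ub.append(-np.linalg.norm(np.array(p, float)))

    res = linprog(np.zeros(nvar), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq) if A_eq else None,
                  b_eq=np.array(b_eq) if b_eq else None,
                  bounds=[(None, None)] * nV + [(0, None)] * (nS * n),
                  method="highs")
    return res.success, res.x

if __name__ == "__main__":
    # Tiny demo: a fan of 4 triangles around the origin and the linear field
    # f(x) = -x (so K = 0); the LP is feasible, e.g. with V_x = ||x||_2.
    fan = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)),
           ((0.0, 0.0), (0.0, 1.0), (-1.0, 0.0)),
           ((0.0, 0.0), (-1.0, 0.0), (0.0, -1.0)),
           ((0.0, 0.0), (0.0, -1.0), (1.0, 0.0))]
    print("feasible:", cpa_lp_is_feasible(fan, lambda x: -x, K=0.0)[0])
```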
In Constraint 2, each component ∇𝑉_{ν,i} is a linear combination of the differences 𝑉_{𝒙_j} − 𝑉_{𝒙_0}, j = 1, 2, …, n (cf. [3]); therefore, it is indeed a linear constraint. We are now going to prove that a feasible solution of the LPP succeeds in parameterizing a CPA Lyapunov function 𝑉. First, we need some auxiliary results.
Lemma 1. Let Δ_ν = conv{𝒙_0, 𝒙_1, …, 𝒙_n} be an n-simplex. Then, for every 𝒙 ∈ Δ_ν with 𝒙 = ∑_{i=0}^{n} λ_i 𝒙_i (0 ≤ λ_i ≤ 1, ∑_{i=0}^{n} λ_i = 1), we have
$$ \Big\| f(\boldsymbol{x}) - \sum_{i=0}^{n} \lambda_i f(\boldsymbol{x}_i) \Big\|_{\infty} \le \sum_{i=0}^{n} \lambda_i k_{\nu,i}. $$
(The proof is available in [1], Lemma 4.16.)
Lemma 2. Assume that ∇𝑉_ν ⋅ 𝑓(𝒙) ≤ −‖𝒙‖_2 for all 𝒙 ∈ Δ_ν° (Δ_ν° is the interior of Δ_ν). Then 𝐷⁺𝑉(𝒙) ≤ −‖𝒙‖_2 for all 𝒙 ∈ Δ_ν°.
(For the proof, cf. [1], Theorem 4.17.)
We are now ready to prove the statement above.
Theorem 1. Suppose that the LPP has a feasible solution 𝑉_𝒙 (∀𝒙 ∈ 𝐷_Λ) and 𝑊_{ν,i} (∀i = 1, 2, …, n, ∀Δ_ν ∈ Λ). Then the CPA function 𝑉 parameterized by {𝑉_𝒙 | 𝒙 ∈ 𝐷_Λ} (that is, 𝑉|_{Δ_ν} = 𝑉_ν, ∀Δ_ν ∈ Λ) is a CPA Lyapunov function.
Proof. We are going to show that the CPA function 𝑉 parameterized by a feasible solution {𝑉_𝒙 | 𝒙 ∈ 𝐷_Λ} fulfills (2)′. Indeed, for every Δ_ν ∈ Λ and every 𝒙 ∈ Δ_ν, write 𝒙 = ∑_{i=0}^{n} λ_i 𝒙_i (0 ≤ λ_i ≤ 1, ∑_{i=0}^{n} λ_i = 1). By Hölder's inequality, Lemma 1, and Constraints 2 and 3, we have
$$ \begin{aligned} \nabla V_{\nu} \cdot f(\boldsymbol{x}) &= \nabla V_{\nu} \cdot \sum_{i=0}^{n} \lambda_i f(\boldsymbol{x}_i) + \nabla V_{\nu} \cdot \Big( f(\boldsymbol{x}) - \sum_{i=0}^{n} \lambda_i f(\boldsymbol{x}_i) \Big) \\ &\le \sum_{i=0}^{n} \lambda_i \nabla V_{\nu} \cdot f(\boldsymbol{x}_i) + \|\nabla V_{\nu}\|_1 \Big\| f(\boldsymbol{x}) - \sum_{i=0}^{n} \lambda_i f(\boldsymbol{x}_i) \Big\|_{\infty} \\ &\le \sum_{i=0}^{n} \lambda_i \nabla V_{\nu} \cdot f(\boldsymbol{x}_i) + \Big( \sum_{j=1}^{n} |\nabla V_{\nu,j}| \Big) \Big( \sum_{i=0}^{n} \lambda_i k_{\nu,i} \Big) \\ &\le \sum_{i=0}^{n} \lambda_i \Big( \nabla V_{\nu} \cdot f(\boldsymbol{x}_i) + k_{\nu,i} \sum_{j=1}^{n} W_{\nu,j} \Big) \\ &\le - \sum_{i=0}^{n} \lambda_i \|\boldsymbol{x}_i\|_2 \le - \Big\| \sum_{i=0}^{n} \lambda_i \boldsymbol{x}_i \Big\|_2 = -\|\boldsymbol{x}\|_2. \end{aligned} $$
Hence, ∇𝑉_ν ⋅ 𝑓(𝒙) ≤ −‖𝒙‖_2 for all 𝒙 ∈ Δ_ν. So, by Lemma 2,
𝐷⁺𝑉(𝒙) ≤ −‖𝒙‖_2 for all 𝒙 ∈ Δ_ν°.
Moreover, by Constraint 1,
$$ V(\boldsymbol{x}) = \sum_{i=0}^{n} \lambda_i V_{\boldsymbol{x}_i} \ge \sum_{i=0}^{n} \lambda_i \|\boldsymbol{x}_i\|_2 \ge \Big\| \sum_{i=0}^{n} \lambda_i \boldsymbol{x}_i \Big\|_2 = \|\boldsymbol{x}\|_2, \quad \forall \boldsymbol{x} \in \Delta_{\nu}. $$
The condition 𝑉(𝒙) ≤ 𝑏‖𝒙‖_2 is quite obvious: take 𝑏 = max_{Δ_μ ∈ Λ} ‖∇𝑉_μ‖_2 in the case 𝒙 ∈ Δ_ν with 𝟎 ∈ Δ_ν, and 𝑏 = max_{Δ_μ ∈ Λ} ‖∇𝑉_μ‖_2 + |𝑎_ν| / min_{𝒙 ∈ Δ_ν} ‖𝒙‖_2 in the case 𝟎 ∉ Δ_ν. So 𝑉 satisfies (2)′. □
SIMPLICIAL PARTITIONS OF 𝑹𝒏
This section is devoted to introducing a method of triangulating a neighborhood 𝐷 ⊂ 𝑹ⁿ of the origin 𝟎. We focus on a partition strategy which provides a feasible solution of the LPP in the case where 𝒙* = 𝟎 is an exponentially stable equilibrium point. Such a strategy exists (cf. [1], [3]).
First, consider 𝐷 = [−𝑏, 𝑏]ⁿ for some 𝑏 > 0. Assume that 0 = y_0 < y_1 < ⋯ < y_N = 𝑏. Let {𝒆_1, 𝒆_2, …, 𝒆_n} be the standard basis of 𝑹ⁿ.
Piecewise scaling functions: Consider the function 𝑷𝑺: [−N, N]ⁿ ⟶ [−𝑏, 𝑏]ⁿ given by
$$ \boldsymbol{PS}(\boldsymbol{x}) = \sum_{i=1}^{n} \operatorname{sign}(x_i)\, P(|x_i|)\, \boldsymbol{e}_i, \tag{3} $$
where 𝒙 = ∑_{i=1}^{n} x_i 𝒆_i and 𝑃: [0, N] → [0, 𝑏] is the CPA function defined by 𝑃(i) = y_i, ∀i = 0, 1, …, N, with 𝑃|_{[i,i+1]} affine. 𝑷𝑺 is called a piecewise scaling function.
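A short sketch (my own illustration) of 𝑷𝑺 as defined in (3), applied componentwise with numpy; the breakpoints below are the y_i from Example 1 of the paper.

```python
import numpy as np

def piecewise_scaling(y):
    """PS: [-N, N]^n -> [-b, b]^n for breakpoints 0 = y_0 < y_1 < ... < y_N = b."""
    y = np.asarray(y, dtype=float)
    grid = np.arange(len(y))                       # P(i) = y_i, affine in between

    def PS(x):
        x = np.asarray(x, dtype=float)
        return np.sign(x) * np.interp(np.abs(x), grid, y)

    return PS

# Breakpoints taken from Example 1 of the paper (N = 8, b = 1.056).
PS = piecewise_scaling([0, 0.078, 0.28, 0.51, 0.696, 0.842, 0.96, 1.024, 1.056])
print(PS([2.0, -3.5]))        # -> [ 0.28  -0.603]
```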
The reflection function: Let 𝒥 be a subset of {1, 2, …, n}, and let 𝒳_𝒥: {1, 2, …, n} → {0, 1} be the characteristic function of 𝒥. We call the following map 𝓡_𝒥: 𝑹ⁿ ⟶ 𝑹ⁿ a reflection function:
$$ \boldsymbol{\mathcal{R}}_{\mathcal{J}}(\boldsymbol{x}) = \sum_{i=1}^{n} (-1)^{\mathcal{X}_{\mathcal{J}}(i)}\, x_i\, \boldsymbol{e}_i. $$
The basic triangulation: For each σ in the permutation group Sym_n of {1, 2, …, n} and each 𝒛 ∈ 𝒁ⁿ_{≥0}, define an n-simplex
$$ \Delta^{\mathcal{J}}_{\boldsymbol{z},\sigma} := \operatorname{conv}\Big\{ \boldsymbol{\mathcal{R}}_{\mathcal{J}}(\boldsymbol{z}),\ \boldsymbol{\mathcal{R}}_{\mathcal{J}}\Big(\boldsymbol{z} + \sum_{i=1}^{j} \boldsymbol{e}_{\sigma(i)}\Big),\ j = 1, 2, \dots, n \Big\}. $$
We naturally obtain a partition of 𝑹ⁿ, called the basic triangulation of 𝑹ⁿ. That is,
𝑹ⁿ = ⋃ {Δ^𝒥_{𝒛,σ} | 𝒛 ∈ 𝒁ⁿ_{≥0}, σ ∈ Sym_n, 𝒥 ⊂ {1, 2, …, n}}.
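The construction above can be enumerated directly; the sketch below (my own illustration) lists the vertices of Δ^𝒥_{𝒛,σ} and generates all simplices of the basic triangulation covering [−N, N]ⁿ.

```python
import itertools
import numpy as np

def simplex_vertices(z, sigma, J):
    """Vertices of the n-simplex Delta^J_{z,sigma}: R_J(z) and
    R_J(z + e_{sigma(1)} + ... + e_{sigma(j)}), j = 1, ..., n."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    signs = np.array([-1.0 if i in J else 1.0 for i in range(1, n + 1)])
    verts, v = [z.copy()], z.copy()
    for j in sigma:                       # sigma is a permutation of (1, ..., n)
        v = v + np.eye(n)[j - 1]          # add e_{sigma(i)}
        verts.append(v)
    return np.array(verts) * signs        # apply the reflection R_J

def basic_triangulation(N, n):
    """All simplices of the basic triangulation that cover [-N, N]^n."""
    for z in itertools.product(range(N), repeat=n):
        for sigma in itertools.permutations(range(1, n + 1)):
            for r in range(n + 1):
                for J in itertools.combinations(range(1, n + 1), r):
                    yield simplex_vertices(z, sigma, set(J))

tris = list(basic_triangulation(2, 2))
print(len(tris))    # 4 values of z, 2 permutations, 4 sign sets J -> 32 triangles
```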
Proposition 2. The region 𝐷 = [−𝑏, 𝑏]ⁿ can be triangulated as follows:
𝐷 = ⋃ {𝑷𝑺(Δ^𝒥_{𝒛,σ}) | 𝒛 ∈ [0, N]ⁿ ∩ 𝒁ⁿ_{≥0}, σ ∈ Sym_n, 𝒥 ⊂ {1, 2, …, n}}.
Proof. Because each component 𝑃𝑆_i(x_i) := sign(x_i) 𝑃(|x_i|) of the scaling function 𝑷𝑺 is CPA on [−N, N], 𝑷𝑺 is CPA on [−N, N]ⁿ. Therefore, the image of an n-simplex through 𝑷𝑺 is also an n-simplex in 𝐷. Moreover, 𝑷𝑺 is a surjection, and
[−N, N]ⁿ = ⋃ {Δ^𝒥_{𝒛,σ} | 𝒛 ∈ [0, N]ⁿ ∩ 𝒁ⁿ_{≥0}, σ ∈ Sym_n, 𝒥 ⊂ {1, 2, …, n}},
so
𝐷 = 𝑷𝑺([−N, N]ⁿ) = ⋃ {𝑷𝑺(Δ^𝒥_{𝒛,σ}) | 𝒛 ∈ [0, N]ⁿ ∩ 𝒁ⁿ_{≥0}, σ ∈ Sym_n, 𝒥 ⊂ {1, 2, …, n}}. □
Fan-like triangulation: For each K ∈ 𝑵, consider the basic triangulation 𝒯 of [−2^K, 2^K]ⁿ. For every n-simplex Δ^𝒥_{𝒛,σ} ∈ 𝒯 which intersects the boundary of [−2^K, 2^K]ⁿ in an (n−1)-simplex, the intersection is exactly
conv{𝒙_1, 𝒙_2, …, 𝒙_n} = Δ^𝒥_{𝒛,σ} ∩ ∂[−2^K, 2^K]ⁿ,
where 𝒙_0, 𝒙_1, 𝒙_2, …, 𝒙_n are the vertices of Δ^𝒥_{𝒛,σ}. Construct a new n-simplex conv{𝟎, 𝒙_1, 𝒙_2, …, 𝒙_n}. The set of all such n-simplices forms a new triangulation 𝒮 of [−2^K, 2^K]ⁿ, called a fan-like triangulation.
If we define a CPA function 𝑃: [0, 2^K] ⟶ [0, 𝑐] (𝑐 > 0) by x ↦ 𝑐 2^{−K} x, we obtain correspondingly a scaling function 𝑷𝑺: [−2^K, 2^K]ⁿ ⟶ [−𝑐, 𝑐]ⁿ defined as in (3). Hence, a fan-like triangulation 𝒮_{K,c} of [−𝑐, 𝑐]ⁿ is obtained by [−𝑐, 𝑐]ⁿ = ⋃ {𝑷𝑺(Δ) | Δ ∈ 𝒮}.
Proposition 3. Assume that 𝒙* = 𝟎 is an exponentially stable equilibrium point of the system (1). Consider the sequence of fan-like triangulations {𝒮_{K,(3/4)^K}}_{K≥0}. Then there exists K ∈ 𝑵 such that the LPP has a feasible solution on the region D_K := ⋃ {Δ | Δ ∈ 𝒮_{K,(3/4)^K}}.
Proof. Refer to [3].
ESTIMATING THE REGION OF ATTRACTION
Assume that 𝒙* = 𝟎 is an exponentially stable equilibrium point of the system (1). We state the algorithm.
Step 1. Find a region D_K as in Proposition 3. Then set y_0 = 0, y_1 = (3/4)^K.
Step 2. For each N ∈ 𝑵, N ≥ 2, take an arbitrary positive number y_N > y_{N−1}. Define a CPA function 𝑃: [0, N] ⟶ [0, y_N] by 𝑃(i) = y_i, ∀i = 0, 1, …, N, with 𝑃|_{[i,i+1]} affine. Define a scaling function 𝑷𝑺 as in (3), and establish the triangulation 𝒯_N of [−y_N, y_N]ⁿ \ [−y_1, y_1]ⁿ as in Proposition 2. Finally, triangulate [−y_N, y_N]ⁿ as [−y_N, y_N]ⁿ = ⋃ {Δ | Δ ∈ 𝒯_N ∪ 𝒮_{K,(3/4)^K}}. We denote this triangulation by 𝔗_N.
Step 3. Check whether the triangulation 𝔗_N of [−y_N, y_N]ⁿ guarantees the existence of a feasible solution for the LPP. If it does, then [−y_N, y_N]ⁿ is a subset of the region of attraction ℛ; increase N to N + 1 and repeat Steps 2 and 3. If it does not, stop the algorithm. The region returned at Step 3 is an estimate of the region of attraction ℛ. Note that, in order for the choice of the sequence {y_N}_{N≥2} to be performed automatically, we take y_N := N for all N ≥ 2.
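A high-level sketch of Steps 1–3 follows (the helper names find_DK, shell_triangulation and cpa_lp_is_feasible are hypothetical placeholders for the constructions described above; this is an outline of the control flow, not a complete implementation).

```python
def estimate_region_of_attraction(f, K_bound, find_DK, shell_triangulation,
                                  cpa_lp_is_feasible, N_max=50):
    # Step 1: a region D_K as in Proposition 3, given by a fan-like triangulation.
    K, fan_simplices = find_DK(f, K_bound)
    y = [0.0, 0.75 ** K]                   # y_0 = 0, y_1 = (3/4)^K
    last_good = None
    for N in range(2, N_max + 1):          # Step 2: enlarge the box
        y.append(float(N))                 # automatic choice y_N := N
        simplices = fan_simplices + shell_triangulation(y)   # T_N of the shell
        # Step 3: keep growing while the LP stays feasible.
        if not cpa_lp_is_feasible(simplices, f, K_bound)[0]:
            break
        last_good = y[-1]
    return last_good       # half-width of the last box [-y_N, y_N]^n inside R
```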
Theorem 2. The algorithm always succeeds in finding an estimate of the region of attraction.
Proof. By Proposition 3, the region D_K in Step 1 always exists. Moreover, by Theorem 1 and Proposition 1, the region returned in Step 3 is a subset of ℛ, so it gives a lower estimate of ℛ. □
COMPARISON OF THE METHOD AND THE INDIRECT METHOD
This section aims to evaluate the advantage of the above method of estimating ℛ by comparing it with the estimate secured by the indirect method. First, we introduce the estimation of ℛ secured by the indirect method. In this method, we find a Lyapunov function of the form 𝑉(𝒙) = 𝒙ᵀ𝑃𝒙, where 𝑃 is the positive definite matrix solving the Lyapunov equation 𝐴ᵀ𝑃 + 𝑃𝐴 = −I_n, with 𝐴 the Jacobian of 𝑓 at 𝟎. Let 𝜙(𝑡, 𝒙) be a solution of (1); then
$$ \dot V(\phi) = -\|\phi\|_2^2 + \phi^{T} P\,(f(\phi) - A\phi) + (f(\phi) - A\phi)^{T} P\,\phi \le -\|\phi\|_2 \big( \|\phi\|_2 - 2\|P\|_2 \|f(\phi) - A\phi\|_2 \big). $$
So 𝑉̇(𝜙) < 0 for all 𝜙 such that
$$ \phi(t, \boldsymbol{x}) \in \Big\{ \boldsymbol{\omega} \in \boldsymbol{R}^n \;\Big|\; \|f(\boldsymbol{\omega}) - A\boldsymbol{\omega}\|_2 < \frac{\|\boldsymbol{\omega}\|_2}{2\|P\|_2} \Big\}. $$
By Taylor's theorem, with
$$ \alpha_{ijk} \ge \sup_{\boldsymbol{\omega}} \Big| \frac{\partial^2 f_i}{\partial x_k \partial x_j}(\boldsymbol{\omega}) \Big|, \quad \forall i, j, k = 1, \dots, n, $$
we have
$$ \|f(\boldsymbol{\omega}) - A\boldsymbol{\omega}\|_2 \le \Big\| \frac{1}{2} \sum_{i,j,k=1}^{n} \omega_j \omega_k \alpha_{ijk} \boldsymbol{e}_i \Big\|_2 \le \frac{\|\boldsymbol{\omega}\|_{\infty}^2}{2} \sqrt{\sum_{i=1}^{n} \Big( \sum_{j,k=1}^{n} \alpha_{ijk} \Big)^2} < \frac{\|\boldsymbol{\omega}\|_2}{2\|P\|_2}. $$
The last inequality is valid if
$$ \|\boldsymbol{\omega}\|_{\infty} < \Bigg( \|P\|_2 \sqrt{\sum_{i=1}^{n} \Big( \sum_{j,k=1}^{n} \alpha_{ijk} \Big)^2} \Bigg)^{-1} =: \alpha. $$
Hence, the set Ω_c := {𝝎 ∈ 𝑹ⁿ | 𝝎ᵀ𝑃𝝎 < 𝑐} ⊂ (−α, α)ⁿ, for a suitable 𝑐 > 0, is a lower estimate of ℛ. Here, 𝑐 should be taken as large as possible while Ω_c ⊂ (−α, α)ⁿ, i.e., such that ∂Ω_c touches ∂(−α, α)ⁿ, so that Ω_c is the best estimate of ℛ secured by the indirect method.
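As a numerical check of this derivation (my own sketch; the scipy helper and the rule for choosing 𝑐 are assumptions consistent with the reasoning above), the following reproduces the indirect-method quantities for Example 1 below.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Example 1: f1 = -x2, f2 = x1 - x2 + x1^2 x2 - 0.1 x1^4 x2; Jacobian at 0:
A = np.array([[0.0, -1.0],
              [1.0, -1.0]])
P = solve_continuous_lyapunov(A.T, -np.eye(2))   # solves A^T P + P A = -I
print(P)                                         # [[1.5 -0.5], [-0.5 1.0]]

# Bounds alpha_ijk on the second partial derivatives of f_i
# (values as reported in Example 1; only f_2 is nonlinear).
alpha_ijk = np.zeros((2, 2, 2))
alpha_ijk[1, 0, 0] = 2.112                        # sup |d^2 f_2 / dx1^2|
alpha_ijk[1, 0, 1] = alpha_ijk[1, 1, 0] = 1.641   # sup |d^2 f_2 / dx1 dx2|

normP = np.linalg.norm(P, 2)
alpha = 1.0 / (normP * np.sqrt(np.sum(alpha_ijk.sum(axis=(1, 2)) ** 2)))
print(alpha)                        # ~0.1025, matching Example 1

# Largest c with Omega_c = {w : w^T P w < c} inside (-alpha, alpha)^2:
# the maximum of w_k^2 on the ellipse w^T P w = c equals c * (P^{-1})_{kk}.
c = alpha ** 2 / np.max(np.diag(np.linalg.inv(P)))
print(c)                            # ~0.0088 (the paper reports 0.00867)
```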
EXAMPLES
Example 1 [6]. Consider the system
ẋ_1 = −x_2,  ẋ_2 = x_1 − x_2 + x_1²x_2 − 0.1x_1⁴x_2.
Choose N = 8, y_0 = 0, y_1 = 0.078, y_2 = 0.28, y_3 = 0.51, y_4 = 0.696, y_5 = 0.842, y_6 = 0.96, y_7 = 1.024, y_8 = 1.056. The indirect method returns a Lyapunov function 𝑉(𝒙) = 𝒙ᵀ𝑃𝒙, where
$$ P = \begin{bmatrix} 1.5 & -0.5 \\ -0.5 & 1 \end{bmatrix}. $$
The calculation reveals that α_{211} = 2.112 and α_{212} = α_{221} = 1.641 are the only non-zero α_{ijk}. So α = 0.1025, and Ω_{0.00867} is the best lower estimate of ℛ secured by the indirect method (cf. Figure 1: the small ellipse is ∂Ω_{0.00867}, while the bigger loop represents the boundary of the estimate of ℛ secured by the CPA Lyapunov function).
Example 2 [6]. Consider the system
ẋ_1 = x_2,  ẋ_2 = −x_1 − x_2 + (1/3)x_1².
Take N = 8, y_0 = 0, y_1 = 0.156, y_2 = 0.513, y_3 = 0.88, y_4 = 1.204, y_5 = 1.427, y_6 = 1.58, y_7 = 1.662, y_8 = 1.686.
By the calculation, the indirect method reveals the Lyapunov function 𝑉(𝒙) = (3/2)x_1² − x_1x_2 + x_2², while α = 0.52. So Ω_{0.225} is the best lower estimate of ℛ secured by the indirect method (cf. Figure 2: the small ellipse is ∂Ω_{0.225}, while the bigger loop is the boundary of the estimate of ℛ secured by the CPA Lyapunov function).
SUMMARY
By constructing a CPA Lyapunov function, we can find a good estimate of the region of attraction. The algorithm stated in this paper always succeeds in finding such an estimate if the operating equilibrium is exponentially stable. The method of constructing a CPA Lyapunov function undoubtedly has the advantage of giving a better estimate of the region of attraction. However, the drawbacks of the method include the complexity of the calculation, which makes it time-consuming. Besides, a computer program required to support the calculations of the algorithm seems quite sophisticated. In the future, we hope to deal with these problems, or at least to reduce the disadvantages of the method.
ACKNOWLEDGEMENT
I am very grateful to the College of Technology (Thai Nguyen University) for supporting the publication of this work in the Journal of Science and Technology, TNU.
REFERENCES
1. S. Marinosson (2002), Stability Analysis of Nonlinear Systems with Linear Programming: A Lyapunov Functions Based Approach, Gerhard-Mercator-University, Duisburg.
2. P. Giesl and S. F. Hafstein (2012), "Existence of piecewise linear Lyapunov functions in arbitrary dimensions", Discrete and Continuous Dynamical Systems - Series A, 32(10), pp. 3539-3565.
3. P. Giesl and S. F. Hafstein (2014), "Revised CPA method to compute Lyapunov functions for nonlinear systems", Journal of Mathematical Analysis and Applications, 410, pp. 292-306.
4. H. Khalil (1992), Nonlinear Systems, Macmillan, New York.
5. S. F. Hafstein (2007), "An algorithm for constructing Lyapunov functions", Monograph, Electronic Journal of Differential Equations.
6. S. F. Hafstein (2004), "A constructive converse Lyapunov theorem on exponential stability", Discrete and Continuous Dynamical Systems - Series A, 10(3), pp. 657-678.
TÓM TẮT (SUMMARY)
ESTIMATING THE REGION OF ATTRACTION FOR AN AUTONOMOUS SYSTEM WITH CONTINUOUS PIECEWISE AFFINE LYAPUNOV FUNCTIONS
Trần Thị Huê, Đinh Văn Tiệp *
University of Technology - TNU
Lyapunov converse theorems only give sufficient conditions for the existence of Lyapunov functions, but do not show how to construct them. Recently, the construction of continuous piecewise affine Lyapunov functions has been developed. Based on these results, we can construct such a function, and the construction is applied to estimate the region of attraction of the system. This is the main result of the paper. We use this method to estimate the region of attraction in the case of asymptotic stability. At present, the method still proceeds by trial and error. We study this method for autonomous systems.
Keywords: autonomous system; region of attraction; Lyapunov theory; Lyapunov functions; continuous piecewise affine Lyapunov functions.
Received: 15/3/2018; Reviewed: 03/5/2018; Accepted: 31/5/2018.
* Tel: 0968 599033, Email: tiepdinhvan@gmail.com