
Stable Adaptive Control and Estimation for Nonlinear Systems, Part 6


DOCUMENT INFORMATION

Title: Stable Adaptive Control and Estimation for Nonlinear Systems: Neural and Fuzzy Approximator Techniques
Authors: Jeffrey T. Spooner, Manfredi Maggiore, Raúl Ordóñez, Kevin M. Passino
Publisher: John Wiley & Sons
Subject: Control Systems
Year: 2002
City: Hoboken
Pages: 45
Size: 3.36 MB


Contents



Part II

Jeffrey T. Spooner, Manfredi Maggiore, Raúl Ordóñez, Kevin M. Passino

Copyright © 2002 John Wiley & Sons, Inc. ISBNs: 0-471-41546-4 (Hardback); 0-471-22113-9 (Electronic)


we will not consider many of the traditional control design techniques such as using Bode and Nyquist plots. Instead, we will use Lyapunov-based design techniques where a controller is chosen to help decrease a measure of the system error.

is a vector of appropriate signals for the particular application. The vector z may contain, for example, reference signals, states, dynamic signals, or combinations of any of these. We will only consider controllers where the components of z are measurable signals. The purpose of the controller is typically to force y → r(t), where r ∈ Rᵖ is a reference signal. When r is time-varying, defining a control law u = ν(z) to force y → r(t) is called the tracking problem. If r is a constant, the problem is commonly referred to as set-point regulation.

To help develop general control techniques, we will study certain canonical forms of the system dynamics. If the original system is defined by

then a diffeomorphism may be used to create the state representation x = T(ξ). Here T is a diffeomorphism (a continuously differentiable mapping with a continuously differentiable inverse) which is used to form the new system representation

ẋ = (∂T/∂ξ) f_ξ(ξ, u)|_{ξ=T⁻¹(x)} = f(x, u)
y = h_ξ(T⁻¹(x)) = h(x),   (6.3)

where f(x, u) and h(x, u) may take on a special form when dealing with canonical representations. Thus in the stability analysis throughout the remainder of this book, we will typically consider the x, rather than the ξ, representation. It is important to keep in mind that the change of coordinates only changes the representation of the dynamics and not the input-to-output characteristics of the system.

When deriving a control law, we will first define an error system, e = χ(t, x) with e ∈ R^q, which provides a quantitative (usually instantaneous) measure of the closed-loop system performance. The system dynamics are then used with the definition of the error system to define the error dynamics, ė = δ(t, x, u). A Lyapunov candidate, V(e) with V : R^q → R, is then used to provide a scalar measurement of the error system in a similar fashion that a cost function is used in traditional optimization. The purpose of the controller is then to reduce V along the solutions of the error dynamics.
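As a minimal numerical sketch of these definitions (the scalar plant ẋ = x² + u, the gain, and the reference below are illustrative assumptions, not taken from the text), one can form the error system e = χ(t, x) = x − r(t), the Lyapunov candidate V(e) = ½e², and a control law that makes V decrease along the error dynamics:

```python
import math

# Hypothetical scalar plant x' = x^2 + u (an illustrative assumption),
# reference r(t) = sin t, error system e = chi(t, x) = x - r(t),
# and Lyapunov candidate V(e) = 0.5*e^2.
def simulate(k=4.0, dt=1e-3, T=5.0):
    x, t = 2.0, 0.0
    V_hist = []
    while t < T:
        r, rdot = math.sin(t), math.cos(t)
        e = x - r                    # error system
        u = -x**2 + rdot - k * e     # cancels x^2 so that e' = -k*e
        V_hist.append(0.5 * e**2)    # Lyapunov candidate along the trajectory
        x += dt * (x**2 + u)         # forward-Euler plant step
        t += dt
    return V_hist

V = simulate()
print(V[0] > V[-1], V[-1] < 1e-3)
```

Here the controller cancels the assumed plant nonlinearity and injects −ke, so the error dynamics become ė = −ke and V decays along solutions.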

The initial control design techniques presented here will assume that the plant dynamics are known for all x ∈ Rⁿ. Once we understand some of the basic tools of nonlinear control design for ideal systems, we will study the control of systems which possess certain types of uncertainty. In particular, it will be shown how nonlinear damping and dynamic normalization may be used to stabilize possibly unbounded system uncertainties.

The control design techniques will assume that any uncertainty in the plant model may be bounded by known functions (with possible multiplicative uncertainty). If in reality the models and/or bounds are only valid when x ∈ S_x, where S_x is a compact set, then there may be cases when the stability analysis is invalid. This is often seen when a controller is designed based on the linearization of a nonlinear plant. If the state travels too far away from the nominal operating point (i.e., the point about which the linearization was performed), it is possible for the plant nonlinearities to drive the system unstable. In this chapter, we will derive bounds on the state trajectory using the properties of the Lyapunov function to ensure that x never leaves the space over which the plant dynamics are understood. Since we will place bounds on x, additional properties of the plant dynamics, such as Lipschitz continuity, only need to hold on S_x. Throughout the remainder of this book, we will use the notation S_y to represent the space over which the signal y ∈ Rⁿ may travel.

6.2 The Error System and Lyapunov Candidate

Before we present any control techniques, the concepts of an error system and Lyapunov candidate must be understood. We will see that the choice of the error system and Lyapunov candidate will actually be used in the definition of the controller much in the same way a cost function will influence the form of an optimization routine.

6.2.1 Error Systems

For any control system there is a collection of signals that one wants to ensure is bounded or possibly converges to a desired value. The size of the mismatch between the current signal values and the space of desired values is measured by an error variable e ∈ R^q. The definition of the system error is typically driven by both the desired system performance specification and the structure of the system dynamics. Consider the system dynamics


The error dynamics are easily stabilized by setting u = cos t + sin t so that

and e(t) decreases to zero exponentially fast. However, this choice of u yields an unstable closed-loop system because the x₂-dynamics become

which defines a linear unstable system with a nonzero input. Here the problem is generated by the wrong choice of the error system. A better choice is given by

which defines an asymptotically stable linear system with eigenvalues at −1. The stability of the new error dynamics also implies that the system states x₁(t) and x₂(t) are bounded functions of time since

x₁(t) = e₁(t) + sin t and x₂(t) = e₂(t) + cos t. □

From the example above, it should be clear that the choice of the error system is crucial to the solution of the tracking problem and that the error system should possess two basic features:

1. e = 0 should imply y(t) = r(t) or y(t) → r(t).

2. The boundedness of the error system trajectory e(t) should imply the boundedness of the system state x(t).

These two requirements are summarized in the following assumption.

Assumption 6.1: Assume the error system e = χ(t, x) is such that e = 0 implies y(t) → r(t), and that the function χ satisfies |x| ≤ ψ_χ(t, |e|) for all t, where ψ_χ : R⁺ × R⁺ → R⁺ is bounded for any bounded e. Additionally, ψ_χ(t, s) is nondecreasing with respect to s ∈ R⁺ for each fixed t.

If Assumption 6.1 is satisfied, then boundedness of the error system will imply boundedness of the state trajectories of (6.3). In addition, if there exists some signal η(t) ≥ |e| for all t, then |x| ≤ ψ_χ(t, η) since ψ_χ(t, η) is nondecreasing with respect to η ∈ R⁺. Because of Assumption 6.1, we will require not only that the error system provides a measure of the closed-loop system performance, but also that it places bounds on the system states. Given a general dynamical system (6.1), an error system satisfying Assumption 6.1 can be found by defining the stable inverse of the system.

Definition 6.1: Given a bounded reference trajectory r(t), a pair of functions (xʳ(t), uʳ(t)) is said to be a stable inverse of (6.4) if, for all t ≥ 0, xʳ(t) and uʳ(t) are bounded, xʳ(t) is differentiable, and

• When e(t) = 0, we have x(t) = xʳ(t) and thus y(t) = r(t).

• If e(t) is bounded for all t ≥ 0, then x(t) = e(t) + xʳ(t) is also bounded because xʳ(t) is bounded. In particular, |x| ≤ |e| + |xʳ(t)| = ψ_χ(t, |e|) for all t, where clearly ψ_χ is nondecreasing with respect to its second argument for each fixed t.

Example 6.2 We now return to Example 6.1 and find the stable inverse of the plant for the reference trajectory r(t) = sin t. To this end, according to Definition 6.1, we seek two bounded functions of time xʳ(t) and uʳ(t) satisfying

e = χ(t, x) = x − xʳ(t)

□


Notice that the error system (6.12) has dimension n. Sometimes one may be able to find lower dimensional error systems satisfying Assumption 6.1. Once an error system has been chosen, the system dynamics may be used to calculate the error dynamics. Given the system dynamics governed by

to be positive definite with V(e) = 0 if and only if e = 0. Thus if a controller may be defined such that V is decreasing along the solutions of (6.15), then the "energy" associated with the error system must be decreasing. If in addition it can be shown that V → 0, then e → 0.

A common choice for a Lyapunov candidate is to use

where P ∈ R^{q×q} is positive definite. Assume that some control law u = ν(z) is chosen so that V̇ ≤ −k₁V + k₂, where k₁ > 0 and k₂ > 0 are bounded constants. According to Lemma 2.1, we find


Thus we see that studying the trajectory of V directly places bounds on |e|. Using this concept, we will define controllers so that for a given positive definite V(e) we achieve V̇ ≤ −k₁V + k₂ (or a similar relationship), implying boundedness of V to ensure that |e| is bounded. Assumption 6.1 will then be used to find bounds for |x|.
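A quick numerical check of this reasoning (treating Lemma 2.1 as the standard comparison lemma; the gains below are arbitrary): integrating the worst case V̇ = −k₁V + k₂ reproduces the closed-form bound and shows V settling near k₂/k₁.

```python
import math

# If Vdot <= -k1*V + k2 with k1, k2 > 0, the comparison lemma gives
#   V(t) <= exp(-k1*t)*V(0) + (k2/k1)*(1 - exp(-k1*t)),
# so V (and hence |e|) is ultimately bounded by roughly k2/k1.
# Integrate the worst case Vdot = -k1*V + k2 and compare.
k1, k2 = 2.0, 0.5
V0, dt, T = 10.0, 1e-4, 3.0
V, t = V0, 0.0
while t < T:
    V += dt * (-k1 * V + k2)
    t += dt
bound = math.exp(-k1 * t) * V0 + (k2 / k1) * (1 - math.exp(-k1 * t))
print(abs(V - bound) < 1e-3, V <= k2 / k1 + 0.05)
```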

6.3 Canonical System Representations

To develop general procedures for control design, we will consider special canonical representations of the plant dynamics for the design model. If the dynamics are not originally in a canonical form, we will use the diffeomorphism x = T(ξ) to obtain a canonical representation. Once the dynamics have been placed in a canonical form, we will find that an appropriate error system and Lyapunov candidate may be generated.

We will find that the design of a controller for a nonlinear system will generally use the following steps:

1. Place the system dynamics into some canonical representation.

2. Choose an error system satisfying Assumption 6.1 and a Lyapunov candidate.

3. Find a control law u = ν(z) such that V̇ ≤ −k₁V + k₂, where k₁ > 0 and k₂ > 0.

As we will see, placing the system dynamics into a canonical form often allows for an easy choice of an error system and Lyapunov candidate for which an appropriate control law may be defined. We will find that the particular choice of the error system, and thus the Lyapunov candidate, will generally influence the specification of the control law used to force V̇ ≤ −k₁V + k₂. Since the goal of the control law is to specify the way in which the Lyapunov function decreases, this approach to control is referred to as Lyapunov-based design.

6.3.1 State-Feedback Linearizable Systems

A system is said to be state-feedback linearizable if there exists a diffeomorphism x = T(ξ), with T(0) = 0, such that

ẋ = Ax + B(f(x) + g(x)u),   (6.19)

where x ∈ Rⁿ, u ∈ Rᵐ, and (A, B) form a controllable pair. The functions f : Rⁿ → Rᵐ and g : Rⁿ → R^{m×m} are assumed to be Lipschitz, and g(x) is invertible. For the state-feedback problem all the states are measurable, and thus we may say the plant output is y = x. We will now see how to choose an error system, Lyapunov candidate, and controller for systems satisfying (6.19) for both the set-point regulation and tracking problems.


The Set-Point Regulation Problem

Consider the state regulation problem for a system defined by (6.19) where we wish to drive x → r, where r ∈ Rⁿ is the desired state vector. The regulation problem naturally suggests the error system

Consider the control law u = ν(z) (with z = x) defined by

where the feedback gain matrix K is chosen such that A_k = A + BK is Hurwitz. With this choice of ν(z) we see that the plant nonlinearities are cancelled so that ė = A_k e + Ar. If ė = 0 when e = 0 (so that it is an equilibrium point), we will require that Ar = 0. Thus r = 0 is always a valid set-point. Depending upon the structure of A, however, other choices may be available (this will be demonstrated shortly in Example 6.3).

The rate of change of the Lyapunov candidate now becomes

V̇ = eᵀ(PA_k + A_kᵀP)e = −eᵀQe,

where PA_k + A_kᵀP = −Q is a Lyapunov matrix equation with Q positive definite.

It is now possible to find explicit bounds on |e|. By using the Rayleigh–Ritz inequality defined in (2.23) we find

Using Lemma 2.1, it then follows that


where c = λ_min(Q)/λ_max(P).
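For a concrete (illustrative) A_k and Q, the Lyapunov equation and the resulting decay rate c can be computed directly; a sketch with NumPy, solving P A_k + A_kᵀ P = −Q through its Kronecker-product vectorization:

```python
import numpy as np

# Solve the Lyapunov equation P*A_k + A_k^T*P = -Q for a Hurwitz A_k
# (an arbitrary 2x2 example), then form c = lambda_min(Q)/lambda_max(P),
# the exponential decay rate obtained from the Rayleigh-Ritz inequality.
A_k = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
Q = np.eye(2)
n = A_k.shape[0]
# Vectorized (Kronecker) form of A_k^T*P + P*A_k = -Q:
M = np.kron(A_k.T, np.eye(n)) + np.kron(np.eye(n), A_k.T)
P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)
c = np.linalg.eigvalsh(Q).min() / np.linalg.eigvalsh(P).max()
print(np.allclose(A_k.T @ P + P @ A_k, -Q), c > 0)
```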

The above shows that if the functions f(x) and g(x) are known, then it is possible to design a controller which ensures an exponentially stable closed-loop system. We also saw that the error dynamics may be expressed in the form (6.16) with α = Ae + Bf(x) and β = Bg(x). We will continue to see this form for the error dynamics throughout the remainder of this chapter. In fact, we will later take advantage of this form when designing adaptive controllers.

Example 6.3 Here we will design a satellite orbit control algorithm which is used to ensure that the proper satellite altitude and orbital rate are maintained. The satellite is assumed to have mass m and potential energy k_p/r. The satellite dynamics may be expressed as

ẋ = Ax + B(f(x) + g(x)u),   (6.24)

with


which is Hurwitz for any A_r, A_θ > 0. The values of A_r and A_θ may be used to set the eigenvalues of the radial and angular channels.

The Tracking Problem

For the tracking problem, we will define a diffeomorphism x = T(ξ) so that

ẋ = A_c x + b_c(f(x) + g(x)u),   (6.25)

where (A_c, b_c) fits a controllable canonical form and y = x. To simplify the analysis for the tracking problem, we will assume single-input single-output systems. For single-input systems, we have

and b_c = [0, …, 0, 1]ᵀ, so (6.25) is equivalent to

ẋ₁ = x₂
⋮
ẋ_{n−1} = x_n
ẋ_n = f(x) + g(x)u.
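For a concrete n = 3 instance (a sketch, not from the text), the pair (A_c, b_c) has the companion/chain structure below, and its controllability can be verified directly:

```python
import numpy as np

# Companion/chain pair (A_c, b_c) for n = 3: the rows of A_c encode
# x1' = x2 and x2' = x3, and the input enters only the last state.
n = 3
A_c = np.eye(n, k=1)                    # superdiagonal shift matrix
b_c = np.zeros((n, 1))
b_c[-1, 0] = 1.0
ctrb = np.hstack([np.linalg.matrix_power(A_c, i) @ b_c for i in range(n)])
print(int(np.linalg.matrix_rank(ctrb)))  # 3: (A_c, b_c) is controllable
```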

We will now define conditions which may be used to help define a diffeomorphism, x = T(ξ), so that we may place a system into the form defined by (6.25). First assume that the system dynamics are affine in the control so that ξ̇ = f_ξ(ξ) + g_ξ(ξ)u. Since x = T(ξ) we find

ẋ = (∂T/∂ξ)(f_ξ(ξ) + g_ξ(ξ)u).   (6.27)


when u = 0, where T(ξ) = [T₁(ξ), …, T_n(ξ)]ᵀ. Matching the remaining terms we obtain

(∂T_i/∂ξ) g_ξ(ξ) = 0.

If we find some diffeomorphism x = T(ξ) which satisfies

(∂T_i/∂ξ) g_ξ(ξ) = 0 for i = 1, …, n − 1


we choose T₁ to be independent of ξ₂. Thus

(6.37)

The choice T₁(ξ) = ξ₁ satisfies (6.33) since then T₂(ξ) = ξ₁² + ξ₂, and thus the diffeomorphism

We will now study some examples in which control laws are defined for systems satisfying (6.25). We will see that different choices for the error system and Lyapunov candidate may produce rather different control laws. The first approach will use an error system similar to that used in the state regulation problem, while the second approach will define the error system based on a stable manifold.

Example 6.5 In this example we will design a controller for (6.25) such that x₁ → r. Since x_{i+1} is the derivative of x_i, we will define the error system

e = x − [r, ṙ, …, r^{(n−1)}]ᵀ.


Now consider the Lyapunov candidate V = eᵀPe, where P is a positive definite symmetric matrix to be chosen shortly. We now choose the control law u = ν(x) as

ν = (r^{(n)} − f(x) − kᵀe) / g(x),   (6.44)

where

As an alternative to the “traditional” approach (shown above) to the tracking problem for state-feedback linearizable systems, one may define the error system using a stable manifold as shown in the following example:

Example 6.6 Here we will consider the tracking problem for the single-input state-feedback linearizable system (6.25) where we wish to drive x₁ → r(t). The error system may be defined using a stable manifold so that e = χ(t, x), where

χ(t, x) = k₁(x₁ − r) + ⋯ + k_{n−1}(x_{n−1} − r^{(n−2)}) + x_n − r^{(n−1)},   (6.47)

and

L =
⎡ 0     1     ⋯  0        ⎤
⎢ ⋮           ⋱  ⋮        ⎥
⎢ 0     0     ⋯  1        ⎥
⎣ −k₁   −k₂   ⋯  −k_{n−1} ⎦

is Hurwitz (we also say that the polynomial s^{n−1} + k_{n−1}s^{n−2} + ⋯ + k₁ is Hurwitz). For now we will simply assume that this error system satisfies Assumption 6.1 and continue the control design so that e → 0. We will then show that defining the error system using a stable manifold does indeed satisfy Assumption 6.1.

Taking the derivative of e we find

ė = k₁(x₂ − ṙ) + ⋯ + k_{n−1}(x_n − r^{(n−1)}) − r^{(n)} + f(x) + g(x)u.

ν(z) = (−α(t, x) − κe)/g(x)

and κ > 0, we find V̇ = −κe² = −2κV, so e = 0 is an exponentially stable equilibrium point (the trajectory converges to the manifold

S(t) = {x ∈ Rⁿ : χ(t, x) = 0}).
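A numerical sketch of this design for n = 2, with an illustrative nonlinearity f(x) = sin x₁ and g(x) = 1 (assumptions for the example, not from the text): the control cancels the known terms so that ė = −κe, and the state then converges to the manifold and tracks r(t) = sin t.

```python
import math

# Stable-manifold tracking for x1' = x2, x2' = f(x) + g(x)*u with the
# illustrative choices f(x) = sin(x1), g(x) = 1, and r(t) = sin(t).
# Error: e = k1*(x1 - r) + (x2 - r'); the control makes e' = -kappa*e.
def track(k1=1.0, kappa=3.0, dt=1e-3, T=10.0):
    x1, x2, t = 2.0, 0.0, 0.0
    while t < T:
        r, rd, rdd = math.sin(t), math.cos(t), -math.sin(t)
        e = k1 * (x1 - r) + (x2 - rd)
        u = -k1 * (x2 - rd) + rdd - math.sin(x1) - kappa * e
        dx1, dx2 = x2, math.sin(x1) + u
        x1 += dt * dx1
        x2 += dt * dx2
        t += dt
    return abs(x1 - math.sin(t)), abs(e)

err, efin = track()
print(err < 1e-2, efin < 1e-2)
```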

We will now show that bounding |e| implies that |x| is bounded. It is reasonable that bounding |e| should bound the plant states since

(x₁ − r)(s) = e(s) / (s^{n−1} + k_{n−1}s^{n−2} + ⋯ + k₁),

with the denominator poles in the left half plane. To show that Assumption 6.1 is satisfied when using the error system defined by (6.47), first let

p = [x₁ − r, …, x_{n−1} − r^{(n−2)}]ᵀ,

so ṗ = Lp + be, where b = [0, …, 0, 1]ᵀ ∈ R^{n−1}. Since L is Hurwitz, we may use (2.91) and conclude that |p(t)| ≤ ψ_p(t, |e|), where


definition of the error system, we find

ζ = e − k₁(x₁ − r) − ⋯ − k_{n−1}(x_{n−1} − r^{(n−2)}),

so we can set

ψ_χ(t, |e|) = (1 + |k₁| + ⋯ + |k_{n−1}|) ψ_p(t, |e|) + |e| + r̄,

From the previous example, we see that when a stable manifold is used to define an error system, it may not be possible to define a static bounding relationship between the error system and system states. This occurs since it is possible for e = 0 as the state trajectory slides along the surface S(t), even though y ≠ r(t) due to the system initial conditions. Notice that e ∈ R in the above example, while e ∈ Rⁿ in Example 6.5. Either choice forms a valid error system for the tracking problem.

It should also be noted that the controllers defined by (6.44) and (6.49) in the previous examples are identical up to the choice of the gains on the various error terms. This is because the feedback terms are linear in both cases, and thus each approach simply suggests that different coefficients be used. We will later see that it may be possible to add nonlinear feedback terms, based on the definition of the error system and Lyapunov candidate, to improve closed-loop robustness. In those cases the resulting control laws may not be as similar.

6.3.2 Input-Output Feedback Linearizable Systems

Consider the system


coordinates [qᵀ, xᵀ]ᵀ = T(ξ) with q ∈ R^d and x ∈ Rⁿ. We will consider the choice of the diffeomorphism T = [T₁, …, T_{n+d}]ᵀ such that T(0) = 0 and

for i = 1, …, d, and

T_{d+1} = h(ξ)
T_{d+2} = L_{f_ξ}h(ξ)   (6.52)

with the output y = x₁, which is said to be in input-output feedback linearizable form. Notice that f(q, x) = L_{f_ξ}ⁿ h(ξ) and g(q, x) = L_{g_ξ}L_{f_ξ}^{n−1} h(ξ), where ξ = T⁻¹([qᵀ, xᵀ]ᵀ). If d = 0, then the system is said to be simultaneously state-feedback linearizable and input-output linearizable (and is in the form (6.25)). It is assumed that both the q and x state vectors are measurable and that the functions φ, f, and g are Lipschitz. The dynamics of q̇ = φ(q, 0) are referred to as the zero dynamics of the system. The next example provides a motivation for this definition.

Example 6.7 Consider the case where the states q, x define a linear system with transfer function from plant input to output given by


q̇_d = ξ_{d+1} = x₁ − b₀ξ₁ − ⋯ − b_{d−1}ξ_d,

where we have used the definition of x₁ in (6.55). Thus

(6.56)

We see that with q̇ = A_q q + B_q x (where A_q and B_q are defined by (6.57)), the eigenvalues of A_q are equivalent to the zeros of P(s). It is for this reason that q̇ = φ(q, 0) are often referred to as the zero dynamics.

From (6.57) we see that even if a controller is defined so that the x states are bounded, we still have q → ∞ if q̇ = A_q q is not a stable system. Because of this, it will be necessary to require additional conditions upon the dynamics governing the q states to ensure stability. In particular we will assume the q-subsystem, with x as an input, is input-to-state stable,

so that there exists some positive definite V_q such that

(6.58)

(6.59)

where γ_{q1} and γ_{q2} are class K_∞, and γ_{q3} and γ_{q4} are class K. When x = 0, the input-to-state stability assumption (6.59) becomes V̇_q ≤ −γ_{q3}(|q|) for all q, thus implying global asymptotic stability of the origin of the zero dynamics q̇ = φ(q, 0), which are then said to be minimum phase. With these assumptions, it is possible to design a controller by ignoring the trajectory of the zero dynamics, using the procedure for state-feedback linearizable systems. The following theorem guarantees that if a controller is designed to stabilize the x dynamics, then the q dynamics are also stable if they satisfy (6.58)–(6.59).

Theorem 6.1: Let e = χ(t, x) be an error system satisfying Assumption 6.1. Assume there exists a controller u = ν(z) and a Lyapunov function V(e) such that γ_e(|e|) ≤ V(e) with γ_e class K_∞ and V̇ < 0 along the trajectories of (6.52) when V ≥ V̄, assuming V̇ is well defined for all q ∈ R^d, x ∈ Rⁿ, and t ∈ R⁺. If the q-subsystem, with x as an input, is input-to-state stable, that is, there exists some positive definite V_q such that (6.58)–(6.59) hold, then the controller u = ν(z) ensures that x and q are uniformly bounded.

Theorem 6.1 may be proven by first showing that the error system is stable (independent of q), and then showing that the q dynamics are therefore bounded. Notice that the above theorem only ensures uniform boundedness of the trajectories. The properties of the e states may be used to prove stronger stability results. If, for example, we are ensured that V̇ ≤ −kV, then e = 0 is an exponentially stable equilibrium point (though |q| may still only be bounded). The following example demonstrates how a controller may be defined for a system with nonlinear zero dynamics using feedback linearization.

Example 6.8 A nonlinear system is defined by

q̇₁ = 1 − q₁ − q₁³ + q₁x₁²,   (6.60)

with y = x₁. If we wish to drive x₁ → 0, then consider the error system e = x₁ and the Lyapunov candidate V = ½e². Using the


concepts of state-feedback linearization, we choose u = ν(z) with

so that V̇ = −κe² (here, z = [x₁, q₁]ᵀ). If it can be shown that q₁ is bounded so that ν is well defined, then e = 0 is an exponentially stable equilibrium point.

From Theorem 6.1 we now simply need to show that the q dynamics satisfy (6.58)–(6.59). Let V_q = ½q₁², which satisfies (6.58) with γ_{q1} = γ_{q2} = q₁²/2. Then

V̇_q = q₁(1 − q₁ − q₁³ + q₁x₁²)   (6.63)
    ≤ −q₁² + |q₁| − q₁⁴ + q₁²|x₁|².   (6.64)

Since −2x² ± 2xy ≤ −x² + y²,
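The completing-the-square inequality invoked here, −2x² ± 2xy ≤ −x² + y², follows from (x ∓ y)² ≥ 0 and can be spot-checked numerically:

```python
import random

# Spot-check -2x^2 + 2sxy <= -x^2 + y^2 for s = +/-1, i.e. (x - sy)^2 >= 0.
random.seed(0)
ok = all(
    -2 * x * x + s * 2 * x * y <= -x * x + y * y + 1e-9
    for _ in range(10000)
    for x, y in [(random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0))]
    for s in (1, -1)
)
print(ok)
```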

The systems considered thus far are in a form such that the plant nonlinearities may be easily cancelled by the input. That is, we are able to use feedback to force the closed-loop system to act as a linear system with arbitrary eigenvalues using traditional pole placement techniques. We will now consider a special class of feedback linearizable systems in which, to guarantee global stability, it is not necessary to cancel the plant nonlinearities and assign the closed-loop poles using the input. The particular structure of the system is exploited to achieve robustness in the presence of uncertainties, as we will see in what follows.

6.3.3 Strict-Feedback Systems

A single-input system is said to be in pure-feedback form if there exists a diffeomorphism, x = T(ξ), which renders the system dynamics as

ẋ₁ = f₁(x₁, x₂)
⋮
ẋ_{n−1} = f_{n−1}(x)
ẋ_n = f_n(x, u),

where each fᵢ is Lipschitz. Since ẋᵢ only depends upon the signal vector [x₁, …, x_{i+1}]ᵀ for i = 1, …, n − 1, this system has a triangular structure.

A special class of pure-feedback systems, called strict-feedback systems, is found when each successive state and the control input enter affinely, so that

Theorem 6.2: (Integrator Backstepping) Let e = χ(t, x) be an error system satisfying Assumption 6.1 with error dynamics

ė = α(t, x) + β(x)u,   (6.68)

where u ∈ R. Let u = ν(z) be a continuously differentiable globally stabilizing controller such that the radially unbounded Lyapunov function V(e) satisfies

V̇ ≤ −k₁V + k₂

along the solutions of (6.68) when u = ν(z), where k₁ and k₂ are positive constants. Then there exists a stabilizing controller v = ν_c(x, q) for the composite system

ė = α(t, x) + β(x)q,
q̇ = v,

where q ∈ R and v is a new input.

Proof: We will introduce a new error term e_q = q − ν(z) and the Lyapunov candidate

V_c(e_c) = V(e) + ½e_q²   (6.69)

for the composite system with e_c = [eᵀ, e_q]ᵀ. Taking the derivative, we find

V̇_c = (∂V/∂e)(α(t, x) + β(x)(q − ν(z) + ν(z))) + e_q(v − ν̇).   (6.70)

Since q = e_q + ν, we find

V̇_c ≤ −k₁V + k₂ + (∂V/∂e)β(x)e_q + e_q(v − ν̇),   (6.71)


where v = ν_c(x, q). Choosing

(6.72)

□

In the derivation of the proof of Theorem 6.2 we found a stabilizing controller satisfying the theorem. It should be emphasized, however, that (6.72) is just one of many controllers which satisfy Theorem 6.2. In the following example, we will use the techniques presented in the proof of Theorem 6.2 to create an error system and stabilizing controller for the system defined by (6.67).
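The construction used in the proof can be sketched numerically for a hypothetical composite system with α(t, x) = e³ and β = 1 (these, the gains, and the specific cross-term-cancelling choice of v are illustrative assumptions; as noted, (6.72) is only one of many valid choices):

```python
import math

# Integrator backstepping sketch for the composite system
#   e' = alpha(e) + beta*q,   q' = v,
# with the hypothetical alpha(e) = e^3 and beta = 1. The inner law
# nu(e) = -e^3 - k*e stabilizes e' = alpha + beta*u; we then use
# e_q = q - nu(e) and pick v to cancel the cross term in
# Vc = 0.5*e^2 + 0.5*e_q^2, one of many valid choices.
def backstep(k=2.0, kappa=2.0, dt=1e-4, T=8.0):
    e, q, t = 1.0, -1.0, 0.0
    while t < T:
        nu = -e**3 - k * e
        e_q = q - nu
        de = e**3 + q                   # e' = alpha + beta*q
        nudot = (-3 * e**2 - k) * de    # chain rule along trajectories
        v = nudot - e - kappa * e_q     # cancels (dV/de)*beta*e_q term
        e += dt * de
        q += dt * v
        t += dt
    return e, q - (-e**3 - k * e)       # final (e, e_q)

e, e_q = backstep()
print(abs(e) < 1e-3 and abs(e_q) < 1e-3)
```

With this choice, V̇_c = −ke² − κe_q² ≤ 0, so both error terms converge.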

Example 6.9 Consider the strict-feedback system defined by (6.67). We will see that a step-by-step procedure may be used to construct an appropriate error system and stabilizing controller using the techniques employed to prove Theorem 6.2.

x₁-subsystem: To create a stabilizing controller, we will begin by considering the subsystem defined by

ẋ₁ = f₁(x₁) + g₁(x₁)v,   (6.74)

where v is a virtual input. If we wish to force x₁ to track the reference signal r(t), then we will define the first component of the error system as e₁ = x₁ − r. Using (6.74) we find the error dynamics to be

ė₁ = −ṙ + f₁(x₁) + g₁(x₁)v.

A Lyapunov candidate for the subsystem may then be defined as V₁ = ½e₁², so that the feedback linearizing control law
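The first design step above can be sketched numerically; f₁, g₁, the reference, and the gain below are illustrative assumptions, with the virtual law v = (ṙ − f₁(x₁) − ke₁)/g₁(x₁) giving ė₁ = −ke₁:

```python
import math

# First backstepping step for x1' = f1(x1) + g1(x1)*v:
# e1 = x1 - r, V1 = 0.5*e1^2, and the feedback linearizing virtual law
# v = (r' - f1(x1) - k*e1) / g1(x1) gives e1' = -k*e1.
f1 = lambda x1: x1 * abs(x1)          # illustrative f1 (locally Lipschitz)
g1 = lambda x1: 2.0 + math.cos(x1)    # illustrative g1, bounded away from 0
def first_step(k=3.0, dt=1e-3, T=6.0):
    x1, t = 1.5, 0.0
    while t < T:
        r, rd = math.sin(t), math.cos(t)
        e1 = x1 - r
        v = (rd - f1(x1) - k * e1) / g1(x1)   # virtual input
        x1 += dt * (f1(x1) + g1(x1) * v)      # closed loop: e1' = -k*e1
        t += dt
    return abs(x1 - math.sin(t))

print(first_step() < 1e-2)
```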

Posted: 01/07/2014, 17:20
