
Part IV

Extensions

Stable Adaptive Control and Estimation for Nonlinear Systems: Neural and Fuzzy Approximator Techniques
Jeffrey T. Spooner, Manfredi Maggiore, Raúl Ordóñez, Kevin M. Passino
Copyright © 2002 John Wiley & Sons, Inc. ISBNs: 0-471-41546-4 (Hardback); 0-471-22113-9 (Electronic)


When the delays are small relative to the system dynamics, there is typically no problem with this approach. When the delays associated with the discrete-time nature of the implementation are large, however, it is possible that closed-loop system performance will become poor or even unstable. In this chapter, we will study how discrete-time designs may be used to improve performance when dealing with discrete-time systems.

Developing a controller in the continuous-time framework (even if the controller is to be implemented in a sampled-data system) allows the designer to take advantage of a number of intuitive nonlinear design techniques such as nonlinear damping. There may be times, however, when the sampled-data nature of the implementation forces us to consider the system delays. In these cases, it may be possible to develop a discrete-time model of the plant and directly develop a controller in the discrete-time framework. These two approaches (continuous and discrete) to the design of a controller for a sampled-data system are shown in Figure 13.1.

We will be interested in the control of discrete-time systems.



Figure 13.1. Two approaches to the design of a controller for a sampled-data system: continuous-time control design based on the continuous-time plant, versus discretizing the plant and performing a discrete-time control design.

Even though the methods used to analyze a discrete-time system closely match those of the continuous-time framework, the delay associated with the feedback forces more restrictive designs. Adding to the complexity of the analysis, a relatively simple continuous-time plant may become rather complex when moving to a discrete-time framework. For example, a nonlinear continuous-time system which contains no uncertainty may have a discrete-time counterpart that does contain uncertainties not satisfying matching conditions, arising from approximating the continuous-time dynamics.

In this chapter, we will provide a few basic tools that may be used to design both static and adaptive controllers in the discrete-time framework. Since the design of discrete-time controllers has been the subject of research for many years, a rather large volume of techniques exists, each possessing its own particular advantages. The approach we have chosen in this chapter tends to show many of the similarities in the design and analysis of continuous- and discrete-time systems.

13.2 Discrete-Time Systems

Before learning how to design static and adaptive controllers for discrete-time systems, we will consider how to convert from a continuous-time system to canonical discrete-time representations.

13.2.1 Converting from Continuous-Time Representations

Consider the continuous-time system defined by

$$\dot{\xi} = f(\xi) + g(\xi)\,u \qquad (13.2)$$


with state $\xi \in \mathbb{R}^n$ and input $u \in \mathbb{R}$. For now we will assume that the functions $f, g : \mathbb{R}^n \to \mathbb{R}^n$ are known. Later we will see that an adaptive control approach may be used when the exact forms of $f$ and $g$ are unknown.

If the controller is implemented in a digital system with sampling period $T$, then a zero-order hold is typically used to model the digital output. In particular, we will let
$$u(t) = u(kT), \qquad kT \le t < kT + T,$$
so that integrating (13.2) over one sample period gives
$$\xi(kT+T) = \xi(kT) + \int_{\tau=kT}^{kT+T}\bigl[f(\xi(\tau)) + g(\xi(\tau))\,u(\tau)\bigr]\,d\tau. \qquad (13.3)$$

Then we may express the discrete-time state sequence as
$$\xi(kT+T) = f_d(\xi(kT)) + g_d(\xi(kT))\,u(kT); \qquad (13.5)$$
we will show how to obtain $f_d$ and $g_d$ later in this section.

There are several techniques available for the conversion of a continuous-time plant representation to a discrete-time representation (that is, finding some $f_d$ and $g_d$ in (13.5)).


If the plant dynamics are defined by the linear representation (13.7), then it is possible to develop an exact representation, assuming that the intersample character of the controller output may be defined by a zero-order hold, so that the discrete-time model agrees with the continuous-time solution at the sample times. The solution of (13.7) from $t = kT$ to $t = kT + T$ is obtained by integrating the linear dynamics over one sample period, which yields the familiar matrix-exponential (zero-order-hold) discretization.
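For readers who want to compute such a discretization numerically, the following is a minimal sketch (not part of the original text) using the standard augmented-matrix trick; the plant matrices $A$ and $b$ below are arbitrary placeholders chosen only to make the example runnable.

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, b, T):
    """Exact zero-order-hold discretization of xi_dot = A xi + b u.

    Returns (Ad, bd) such that xi(kT + T) = Ad xi(kT) + bd u(kT)
    when u(t) is held constant over each sample period.
    """
    n = A.shape[0]
    # The exponential of the augmented matrix [[A, b], [0, 0]] contains
    # both expm(A*T) and the integral of expm(A*tau) b over one period.
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = b.ravel()
    Md = expm(M * T)
    return Md[:n, :n], Md[:n, n]

if __name__ == "__main__":
    # Hypothetical second-order plant, used only for illustration.
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    b = np.array([0.0, 1.0])
    Ad, bd = zoh_discretize(A, b, T=0.1)
    print("Ad =\n", Ad)
    print("bd =", bd)
```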

When the plant is nonlinear, it is not possible in general to define an exact discrete-time representation since the solution to the differential equation may be unknown. Also, if there is uncertainty in the system, it may


not be possible to exactly solve the differential equations. In these cases, we will use an approximation to represent the effect of the integration. Given a function $f(\xi(t))$, Euler's method may be used to obtain
$$\int_{kT}^{kT+T} f(\xi(\tau))\,d\tau \approx T\,f(\xi(kT)).$$

Using the trapezoidal rule for integration, we find
$$\int_{\tau=kT}^{kT+T} f(\xi(\tau))\,d\tau \approx \frac{T}{2}\Bigl[f(\xi(kT)) + f(\xi(kT+T))\Bigr]. \qquad (13.9)$$

Since the trapezoid method is not causal, it is typically only used when the dynamics are linear. The following example demonstrates how one may convert from a continuous-time to a discrete-time representation.

Consider the continuous-time system (13.10). Using trapezoidal integration on the $\xi_1$ term, we may use (13.9) with (13.3) to obtain a difference equation for $\xi_1$; similarly, if Euler's method is used for $\xi_2$, we obtain a difference equation for $\xi_2$. Letting $x_1(k) = \xi_1(kT)$ and $x_2(k) = \xi_2(kT)$, we find
$$x_1(k+1) = x_1(k) + \frac{T}{2}\bigl(2x_2(k) + T(2x_1(k) - \cdots)\bigr) + w_1(k),$$
$$x_2(k+1) = x_2(k) + T\,(2x_1(k) - \cdots) + w_2(k),$$

where $w_1$ and $w_2$ represent the error due to intersample behavior. Choosing $T = 0.1$, we obtain the discrete state trajectory shown in Figure 13.2. As stated earlier, the intersample error will tend to decrease on the order of $T^2$ when using Euler's method to approximate the integration. This is demonstrated in Figure 13.3, where the sequence $w_2(k)$ is shown for $T = 0.1$, $0.05$, and $0.01$. ∎
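Since the example system (13.10) is not fully legible in this copy, the sketch below uses a hypothetical nonlinear plant purely to illustrate the point: the one-step error of the Euler model, measured along a finely integrated reference trajectory, shrinks roughly like $T^2$ as the sample period is reduced.

```python
import numpy as np

# Hypothetical nonlinear plant (a stand-in for (13.10), which is not legible here).
f = lambda xi: np.array([xi[1], 2.0 * xi[0] - xi[1] - xi[0] ** 3])

def reference_trajectory(xi0, T, steps, substeps=1000):
    """Sample-instant values of a finely integrated trajectory
    (a stand-in for the true continuous-time solution)."""
    xi = np.array(xi0, dtype=float)
    traj = [xi.copy()]
    h = T / substeps
    for _ in range(steps):
        for _ in range(substeps):
            xi = xi + h * f(xi)
        traj.append(xi.copy())
    return np.array(traj)

xi0 = [0.5, 0.0]
for T in (0.1, 0.05, 0.01):
    steps = int(round(1.0 / T))                  # simulate one second
    ref = reference_trajectory(xi0, T, steps)
    # One-step (intersample) error of the Euler model x(k+1) = x(k) + T f(x(k)),
    # evaluated along the reference trajectory; this plays the role of w(k).
    w = ref[1:] - (ref[:-1] + T * np.array([f(x) for x in ref[:-1]]))
    print(f"T = {T:5.2f}   max |w| = {np.abs(w).max():.2e}")
```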


$$\begin{aligned} x_1(k+1) &= x_2(k)\\ &\;\;\vdots\\ x_{n-1}(k+1) &= x_n(k)\\ x_n(k+1) &= f_d(x(k)) + g_d(x(k))\,u(k), \end{aligned} \qquad (13.11)$$

where $y = x_1$, and $g_d$ is bounded away from zero. As with continuous-time systems, the $q$ states represent the zero dynamics of the system. When designing controllers for (13.11), we will typically assume that $f_d$ and $g_d$ are known. Since the states in (13.11) are defined using a chain of delays, it is often easy to place an arbitrary discrete-time system into the feedback linearizable canonical form.

$$\xi_1(k+1) = \xi_1(k) + T\,\xi_2(k), \qquad \xi_2(k+1) = \xi_2(k) + T\bigl(\cdots + u(k)\bigr),$$


Figure 13.3. The intersample error sequence $w_2(k)$ for $T = 0.10$, $T = 0.05$, and $T = 0.01$.


where $w = [w_1, \ldots, w_n]^T$ is a vector of unknown signals. For a general nonlinear system, there is no guarantee that each $w_i$ is a bounded sequence. In practice, however, it is often reasonable to assume that the sampling rate is chosen high enough so that the integration error due to the intersample behavior is relatively small.

The following example shows how the intersample behavior may be propagated when trying to convert to the canonical form.

13.3 Static Controller Design

As with the design of controllers for continuous-time systems, we will specify an error system to quantify the performance of the controller. The goal of the controller design will then be to reduce the magnitude of the values in the error system over time.

13.3.1 The Error System and Lyapunov Candidate

We will once again assume that the error system is designed to quantify the closed-loop system performance and satisfies the following property:

$$|x(k)| \le \psi(k, |e(k)|),$$
where $\psi(k, s)$ is nondecreasing with respect to $s \in \mathbb{R}^+$ for all $k$.


Thus if we are able to force the error system to be bounded, then the plant states will also be bounded. As with the continuous-time case, we will use a positive definite Lyapunov candidate to help study the trajectory of $e(k)$. Most often the choice $V(k) = e^T(k)Pe(k)$ will be used, with $P$ a positive definite matrix. This way, if $V(k)$ is forced to zero, then $e(k) \to 0$. The following lemma is the discrete-time counterpart of Lemma 2.1 and will be helpful in studying the sequence $V(k)$.


When $k_1 < 1$ and $k_2 \equiv 0$ in Lemma 13.1, then $V(k) = 0$ is an exponentially stable equilibrium point. If $k_1 < 1$ and $k_2 \neq 0$, then $V(k)$ is a UUB sequence with ultimate bound $k_2/k_1$. The case $k_1 > 1$ cannot occur due to the positive definiteness of $V$. Since $k_1$ may not be made arbitrarily large in the discrete-time case, it is often not possible to make the ultimate bound arbitrarily small when $k_2 \neq 0$.
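As a quick numerical check (assumed values, not from the text), iterating the worst case of the inequality $V(k+1) - V(k) \le -k_1V(k) + k_2$ shows $V(k)$ settling near the ultimate bound $k_2/k_1$:

```python
def worst_case_V(V0, k1, k2, steps=80):
    """Iterate V(k+1) = (1 - k1) V(k) + k2, the worst case of the UUB inequality."""
    V = V0
    for _ in range(steps):
        V = (1.0 - k1) * V + k2
    return V

k1, k2 = 0.2, 0.05                   # assumed constants with 0 < k1 < 1
print(worst_case_V(10.0, k1, k2))    # approaches k2/k1 = 0.25
print(k2 / k1)                       # ultimate bound
```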

13.3.2 State Feedback Design

In this section we will design controllers for the discrete-time feedback linearizable system (13.11), where the intersample behavior $w(k)$ is ignored for now. We will be interested in the tracking problem, where a controller is to be designed so that $x_1(k) \to r(k)$ when $f_d$ and $g_d$ in (13.11) are known.

In the continuous-time state-feedback case, we assumed that the derivatives of the reference signal were available to the controller. In general, this could be done by forcing the plant output to track the output of a reference model. Since the states of the reference model are available, it was possible to obtain the higher-order derivatives of the output. In the discrete-time case we will assume knowledge of $r(k), \ldots, r(k+n)$. Once again a reference model may be used. As a simple example, it is possible to define the reference model as $r(k) = v(k-n)$, where it is desired that the plant output $y(k)$ track $v(k)$, and $r(k)$ is the output of the reference model. Since the reference model output is a delayed version of $v(k)$, the signals $r(k), \ldots, r(k+n)$ are available for use in the controller definition.

When full state feedback is available, the traditional approach to developing a suitable controller for the linear discrete-time system
$$x(k+1) = A_c\,x(k) + b\,u(k),$$
where $A_c$ is the open-loop system matrix


and $b = [0, \ldots, 0, 1]^T$, is via the solution of a discrete-time Lyapunov matrix equation. In particular, we choose $u(k) = Kx(k)$ so that the eigenvalues of $A = A_c + bK$ all lie within the unit circle. The following discrete-time equivalent of the continuous-time Lyapunov matrix equation may then be used with the Lyapunov candidate $V(k) = x^T(k)Px(k)$:
$$A^TPA - P = -Q, \qquad Q > 0.$$
Since $V(k+1) - V(k) = -x^T(k)Qx(k)$ and $V(k) \le \lambda_{\max}(P)\,|x(k)|^2$, we find
$$V(k+1) - V(k) \le -\frac{\lambda_{\min}(Q)}{\lambda_{\max}(P)}\,V(k). \qquad (13.21)$$

Assume for a moment that $\lambda_{\min}(Q) > \lambda_{\max}(P)$. Then $V(k+1) < 0$ for $x(k) \neq 0$ by (13.21), which contradicts the positive definiteness of $V$; hence $\lambda_{\min}(Q) > \lambda_{\max}(P)$ is not possible. This completes the proof. ∎ Using Lemma 13.2, we are now ready to design static controllers for (13.11).
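The following sketch (with made-up numbers, not the book's example) illustrates the procedure: a gain $K$ is chosen to place the eigenvalues of $A = A_c + bK$ inside the unit circle, and the discrete-time Lyapunov matrix equation is then solved numerically.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical chain-of-delays plant with n = 3 (assumed for illustration).
Ac = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, 0.0, 0.0]])
b = np.array([[0.0], [0.0], [1.0]])

# Gain chosen so the closed-loop eigenvalues are all at 0.5 (assumed values):
# det(zI - (Ac + bK)) = (z - 0.5)^3 = z^3 - 1.5 z^2 + 0.75 z - 0.125.
K = np.array([[0.125, -0.75, 1.5]])
A = Ac + b @ K
print("closed-loop eigenvalues:", np.linalg.eigvals(A))

# Solve A^T P A - P = -Q for P, then check the quantities used above.
Q = np.eye(3)
P = solve_discrete_lyapunov(A.T, Q)   # solves A^T P A - P = -Q
print("residual:", np.linalg.norm(A.T @ P @ A - P + Q))
print("lambda_min(Q)/lambda_max(P) =",
      np.min(np.linalg.eigvalsh(Q)) / np.max(np.linalg.eigvalsh(P)))
```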

Since the system dynamics (13.11) are defined by a chain of delays, it is possible to just consider the transient response of $e(k) = x_1(k) - r(k)$, so $e \in \mathbb{R}$ is a scalar. Notice that
$$e(k+n) = f_d(x(k)) + g_d(x(k))\,u(k) - r(k+n),$$
or
$$e(k+n) = \alpha(z(k)) + \beta(z(k))\,u(k), \qquad (13.22)$$
where $\alpha = -r(k+n) + f_d(x(k))$ and $\beta = g_d(x(k))$. As was done for the continuous-time case, the term $z(k)$ is shorthand used to represent all the known signals for a given representation. In (13.22), for example, let $z(k) = [r(k+n), x^T(k)]^T$.

Now consider the controller defined by
$$u(k) = \frac{1}{\beta(z(k))}\bigl(-\alpha(z(k)) + K\,e(k)\bigr), \qquad (13.23)$$


with $|K| < 1$. Using this control law, we find
$$e(k+n) = K\,e(k). \qquad (13.24)$$

To establish stability of the closed loop, consider the Lyapunov candidate $V_s(k) = e^2(k)$. Using the controller defined by (13.23), we find from (13.24) that $V_s(k+n) = K^2V_s(k)$.

Letting $V_i(k) = V_s(kn+i)$ for $i = 1, \ldots, n$, we find
$$V_i(k+1) - V_i(k) = V_s((k+1)n+i) - V_s(kn+i) = (K^2 - 1)\,V_i(k) \qquad (13.27)$$
for each sequence $V_i(k)$, $i = 1, \ldots, n$. Since $K^2 < 1$, we may use Lemma 13.1 to conclude that each sequence $V_i(k)$ is exponentially stable, so $e \to 0$. ∎ When $K = 0$, the error decays to 0 in $n$ steps; this type of controller is often referred to as a dead-beat controller.
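The sketch below (with an assumed plant $f_d$, $g_d$, not an example from the text) applies the scalar-error control law (13.23) to a second-order system in the form (13.11); with $K = 0$ the tracking error is driven to zero in $n = 2$ steps, i.e., dead-beat behavior.

```python
import numpy as np

# Assumed feedback linearizable plant in the form (13.11) with n = 2:
#   x1(k+1) = x2(k),  x2(k+1) = fd(x) + gd(x) u(k),  y = x1.
fd = lambda x: 0.5 * np.sin(x[0]) + 0.2 * x[1]   # hypothetical, assumed known
gd = lambda x: 1.0 + 0.1 * np.cos(x[1])          # bounded away from zero

n = 2
K = 0.0                        # K = 0 gives the dead-beat controller
r = lambda k: 1.0              # constant reference

x = np.array([0.0, 0.0])
for k in range(12):
    e = x[0] - r(k)                        # e(k) = x1(k) - r(k)
    alpha = -r(k + n) + fd(x)              # as in (13.22)
    beta = gd(x)
    u = (-alpha + K * e) / beta            # control law (13.23)
    x = np.array([x[1], fd(x) + gd(x) * u])    # plant update
    print(k, x[0])                         # output reaches r after n steps
```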

We will now design a control law using a multi-dimensional error system. Since we want $x_1(k) \to r(k)$, assign the first error variable as $e_1(k) = x_1(k) - r(k)$. Defining $e_j(k) = x_j(k) - r(k+j-1)$ for $j = 2, \ldots, n$, we obtain
$$e_j(k+1) = x_{j+1}(k) - r(k+j) = e_{j+1}(k), \qquad j = 1, \ldots, n-1.$$
Thus
$$\begin{aligned} e_1(k+1) &= e_2(k)\\ &\;\;\vdots\\ e_{n-1}(k+1) &= e_n(k)\\ e_n(k+1) &= -r(k+n) + f_d(x(k)) + g_d(x(k))\,u(k), \end{aligned} \qquad (13.28)$$
or, in vector form,
$$e(k+1) = \alpha(z(k)) + \beta(z(k))\,u(k), \qquad (13.29)$$
where
$$\alpha(z(k)) = \begin{bmatrix} e_2(k)\\ \vdots\\ e_n(k)\\ -r(k+n) + f_d(x(k)) \end{bmatrix}, \qquad \beta(z(k)) = \begin{bmatrix} 0\\ \vdots\\ 0\\ g_d(x(k)) \end{bmatrix},$$


with $\alpha(z(k))$ a vector of measurable signals.

Now consider the control law
$$u(k) = \frac{1}{g_d(x(k))}\Bigl(r(k+n) - f_d(x(k)) - c_1e_1(k) - \cdots - c_ne_n(k)\Bigr), \qquad (13.31)$$
which yields the closed-loop error dynamics $e(k+1) = Ae(k)$ with $A$ a companion matrix whose last row is $[-c_1, \ldots, -c_n]$. It is possible to choose the coefficients $c_1, \ldots, c_n$ so that the eigenvalues of $A$ are all contained within the unit circle. Placing the poles within the unit circle results in a stable closed-loop system, as shown in the following theorem: choosing the control law (13.31) with the eigenvalues of $A$ all of magnitude less than 1 ensures that the error system (13.29) is exponentially stable in the large.

Proof: Consider the Lyapunov candidate
$$V_s(k) = e^T(k)Pe(k),$$
where $P$ is a positive definite matrix to be chosen shortly. Since the control law yields $e(k+1) = Ae(k)$, the difference becomes
$$V_s(k+1) - V_s(k) = e^T(k+1)Pe(k+1) - e^T(k)Pe(k) = e^T(k)\bigl[A^TPA - P\bigr]e(k). \qquad (13.33)$$
Choosing $P$ to satisfy the discrete-time Lyapunov matrix equation $A^TPA - P = -Q$ with $Q$ positive definite, we find
$$V_s(k+1) - V_s(k) = -e^T(k)Qe(k) \le -\frac{\lambda_{\min}(Q)}{\lambda_{\max}(P)}\,V_s(k),$$
so by Lemma 13.1 the error system is exponentially stable. ∎
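One convenient way to pick the coefficients $c_i$ is to expand a desired characteristic polynomial, as in the sketch below (the pole locations are an assumption for illustration; they happen to match the example that follows).

```python
import numpy as np

def companion_from_poles(poles):
    """Coefficients c_1, ..., c_n and the companion matrix A whose
    characteristic polynomial has the given roots."""
    # np.poly returns [1, a_{n-1}, ..., a_0] for z^n + a_{n-1} z^{n-1} + ... + a_0.
    a = np.poly(poles)
    n = len(poles)
    c = a[1:][::-1]                  # c_n = a_{n-1}, ..., c_1 = a_0
    A = np.eye(n, k=1)
    A[-1, :] = -c                    # last row [-c_1, ..., -c_n]
    return c, A

# Assumed design: all three closed-loop poles at 0.8.
c, A = companion_from_poles([0.8, 0.8, 0.8])
print("c =", c)                      # approximately [-0.512, 1.92, -2.4]
print("eigenvalues:", np.linalg.eigvals(A))
```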

Trang 15

We will now see how the scalar and vector error approaches may be used to develop a discrete-time control law. Consider the system

$$\begin{aligned} x_1(k+1) &= x_2(k)\\ x_2(k+1) &= x_3(k)\\ x_3(k+1) &= x_1^3(k) - 2x_3(k) + 2u(k), \end{aligned} \qquad (13.36)$$

with output $y(k) = x_1(k)$.

To apply the scalar approach, first define the error system $e(k) = y(k) - r(k)$, where $r(k)$ is the desired plant output. Advancing the error by three time steps, we find that $e(k+3)$ depends affinely on $u(k)$, so the scalar control law (13.23) may be applied.

For the vector approach, the coefficients $c_1$, $c_2$, $c_3$ are chosen so that
$$A = \begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ -c_1 & -c_2 & -c_3 \end{bmatrix}$$
has all its eigenvalues within the unit circle.

The performance of the two approaches is shown in Figure 13.4. In these simulations, the initial conditions were set to $x(0) = [0, 0, 0]^T$ and the reference was chosen to be $r(k) = 1$. For the scalar case we chose $K = 0.8$, while for the vector case we chose $c_1 = -0.512$, $c_2 = 1.92$, and $c_3 = -2.4$, so that the eigenvalues of $A$ were all 0.8. ∎


Figure 13.4. Plant output versus $k$ for the vector-based control law (solid) and the scalar-based control law (dashed).
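A sketch of how these simulations might be reproduced is given below; it assumes the cubic reading of (13.36) above (the nonlinearity is hard to make out in this copy), and both control laws cancel the plant nonlinearity exactly, differing only in the error feedback.

```python
import numpy as np

def step(x, u):
    """Assumed plant (13.36): the x3 update uses the cubic term as read above."""
    return np.array([x[1], x[2], x[0] ** 3 - 2.0 * x[2] + 2.0 * u])

def simulate(controller, steps=100):
    x = np.zeros(3)
    y = []
    for k in range(steps):
        u = controller(x, k)
        x = step(x, u)
        y.append(x[0])
    return np.array(y)

r = lambda k: 1.0                      # constant reference
n, K = 3, 0.8
c = np.array([-0.512, 1.92, -2.4])     # poles of A at 0.8

def scalar_law(x, k):
    # e(k+3) = x1^3 - 2 x3 + 2u - r(k+3) = K e(k)  =>  solve for u.
    e = x[0] - r(k)
    return (r(k + n) - x[0] ** 3 + 2.0 * x[2] + K * e) / 2.0

def vector_law(x, k):
    e = np.array([x[j] - r(k + j) for j in range(n)])        # e_1, e_2, e_3
    return (r(k + n) - x[0] ** 3 + 2.0 * x[2] - c @ e) / 2.0  # form of (13.31)

y_scalar = simulate(scalar_law)
y_vector = simulate(vector_law)
print("final outputs:", y_scalar[-1], y_vector[-1])           # both near 1
```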

13.3.3 Zero Dynamics

We will now consider the discrete-time system defined by
$$\begin{aligned} x_1(k+1) &= x_2(k)\\ &\;\;\vdots\\ x_{n-1}(k+1) &= x_n(k)\\ x_n(k+1) &= f_d(x(k), q(k)) + g_d(x(k), q(k))\,u(k), \end{aligned} \qquad (13.40)$$
where the $q(k)$ dynamics have been included to represent unmodeled or zero dynamics. Since we assume full state feedback (we assume that $q$ is measurable), it is still possible to define control laws that cancel the effects of $f_d$ and $g_d$. Thus the closed-loop error dynamics for the scalar error system case (13.24) and the vector error system case (13.31) still hold. Since the error dynamics are independent of $q$, we have a triangular system, so Theorems 13.1 and 13.2 still hold, which guarantee that $x$ is bounded.

We will assume the q-subsystem, with x as an input, is input-to-state


stable, so that there exists some positive definite $V_q$ such that
$$\gamma_{q1}(|q|) \le V_q(q) \le \gamma_{q2}(|q|), \qquad (13.41)$$
$$V_q(k+1) - V_q(k) \le -k_3V_q(k) + \psi(x(k)), \qquad (13.42)$$
where $\gamma_{q1}$ and $\gamma_{q2}$ are class $\mathcal{K}_\infty$, $0 < k_3 < 1$, and $\psi : \mathbb{R}^n \to \mathbb{R}$ is nonnegative and bounded for any bounded $x$. When $\psi(x) = 0$, the input-to-state stability assumption (13.42) becomes $V_q(k+1) - V_q(k) \le -k_3V_q$, thus implying global asymptotic stability of the origin.

When the $x$-dynamics are bounded, then $\psi(x)$ is bounded. Let $\bar\psi$ denote a bound on $\psi(x)$ for $x \in S_x$, where $S_x$ is the space to which $x$ is confined. Then $V_q(k+1) - V_q(k) \le -k_3V_q(k) + \bar\psi$, so by Lemma 13.1 the sequence $V_q(k)$, and thus $q(k)$, is also bounded.

13.3.4 State Trajectory Bounds

As with the continuous-time case, we will modify the standard Lyapunov approach so that the Lyapunov candidate only needs to decay on some region. By considering only regional results, we may consider the use of controllers defined with finite approximators. If, for example, a feedback linearizing control law is used where $f_d(x)$ is only (approximately) known over some region $S_x$, then we want to define the controller so that $x$ never leaves the region $S_x$. The following lemma is the extension of Lemma 13.1 to the regional case, and uses a difference condition of the form
$$V_s(k+1) - V_s(k) \le -k_1V_s(k) + k_2.$$


Consider the case where $e(k+1) = Ae(k)$ with all the eigenvalues of $A$ contained in the unit circle. If $V_s = e^TPe$, then $V_s(k+1) - V_s(k) = e^T(A^TPA - P)e = -e^TQe$ when $P$ is chosen to satisfy the discrete-time Lyapunov matrix equation and $Q$ is chosen to be positive definite. Now if uncertainty is included in the system dynamics, it may only be possible to establish that $V_s(k+1) - V_s(k) \le -e^T(k)Qe(k) + k_2$, where $k_2 > 0$ is a constant resulting from the uncertainty (this type of term will be studied in more detail when we consider robust control of discrete-time systems). Letting $k_1 = \lambda_{\min}(Q)/\lambda_{\max}(P)$, we find $V_s(k+1) - V_s(k) \le -k_1V_s(k) + k_2$, which is in the form used in Lemma 13.3. Comparing terms, we find that $k_1$ is dependent upon $P$ and $Q$ (and thus upon the eigenvalues of the system matrix $A$). The term $k_2$ tends to result from uncertainties in the system.

If $V_s(k) = e^T(k)Pe(k)$, then Lemma 13.3 may be used to establish maximum and ultimate bounds on $e(k)$. In particular, $|e(k)|^2 \le V_s(k)/\lambda_{\min}(P)$, so $e \in B_e$ for all $k$, where $B_e$ is the ball obtained by combining the bound on $V_s(k)$ from Lemma 13.3 with $\lambda_{\min}(P)$. The ultimate bound on $e$ may also be found using Lemma 13.3, since $V_s(k)$ is ultimately bounded by $k_2/k_1$. If $e$ may be bounded, then it is possible to use Assumption 13.1 to place bounds on the state vector $x$: here $x \in B_x$ for all $k$, where $B_x$ follows from applying the bound of Assumption 13.1 to $B_e$.

This may seem like an odd bound, since at first glance it appears that we may simply choose $P$ so that $\lambda_{\min}(P)$ is made arbitrarily large (which does not change the control law). As discussed above, however, $k_1$ may depend upon both $Q$ and $A$ (and thus $P$), so changing the value of $Q$ (or, equivalently, $P$) will change the value of $k_1$. Therefore it is not possible to simply choose some parameters, independent of the control law, that will force $B_x$ to become arbitrarily small. If a fuzzy system with valid inputs $x \in S_x$ is used in the definition of a control law, then we need to choose the controller parameters such that $B_x \subset S_x$. This way the states $x$ will never leave $S_x$.
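As a numerical sketch with assumed values (the matrices are illustrative only), the ultimate bound on $|e|$ can be computed from $P$, $Q$, and the uncertainty constant $k_2$ exactly as described above:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Assumed closed-loop error matrix (companion form, poles at 0.8) and Q.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.512, -1.92, 2.4]])
Q = np.eye(3)
P = solve_discrete_lyapunov(A.T, Q)        # A^T P A - P = -Q

k1 = np.min(np.linalg.eigvalsh(Q)) / np.max(np.linalg.eigvalsh(P))
k2 = 0.01                                  # assumed uncertainty constant

V_ult = k2 / k1                            # ultimate bound on V_s(k)
e_ult = np.sqrt(V_ult / np.min(np.linalg.eigvalsh(P)))
print("k1 =", k1, " ultimate |e| bound ~", e_ult)
```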

The reader should keep in mind that the results here are "regional" and not necessarily local. With a local stability result, it is possible to define $x(0)$ so that the state trajectory remains bounded for all $k$ within a set whose size cannot be fixed, but is rather determined by the characteristics of the system.
