
Adaptive Control: Stability, Convergence, and Robustness


Complement to:

“Adaptive Control: Stability, Convergence, and Robustness.”

S. Sastry & M. Bodson, Prentice-Hall, 1989

Homework Exercises

Dec 1991

© Marc Bodson, 1991.


Problem 0.1

Consider the plant P(s) = 2/(s + 1) and the reference model M(s) = 1/(s + 1).

a) Plot the responses over 20 seconds of θ(t), e0(t), and v(t) = e0²(t) + kp φ²(t) for the MIT rule, the Lyapunov redesign, and the indirect scheme of chapter 0. Let r(t) = sin(t), φ(0) = 1, and all other initial conditions be zero. Comment on the responses in view of the analytical results.

b) Repeat part a) with r(t) = e^{−t}.
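For part a), a minimal Euler-integration sketch of the Lyapunov-redesign case can serve as a starting point. The controller structure u = θ r, the update law θ̇ = −g e0 r, and the gain g = 1 are assumptions patterned on the standard chapter-0 scheme (the MIT rule would use θ̇ = −g e0 ym instead); they are not taken verbatim from the text.

```python
import numpy as np

# Sketch of part a) for the Lyapunov-redesign update law (assumed form:
# u = theta*r, dtheta/dt = -g*e0*r with g = 1). Plant P(s) = 2/(s+1),
# model M(s) = 1/(s+1), r(t) = sin(t), phi(0) = 1, other states zero.
kp, g, dt, T = 2.0, 1.0, 1e-3, 20.0
n = int(T / dt)
t = np.arange(n) * dt
theta_star = 1.0 / kp           # matching gain: kp * theta* = 1
theta = theta_star + 1.0        # so that phi(0) = theta(0) - theta* = 1
yp = ym = 0.0
e0_hist = np.empty(n)
v_hist = np.empty(n)
for k in range(n):
    r = np.sin(t[k])
    e0 = yp - ym
    phi = theta - theta_star
    e0_hist[k] = e0
    v_hist[k] = e0**2 + kp * phi**2       # v(t) from the problem statement
    yp += dt * (-yp + kp * theta * r)     # plant state
    ym += dt * (-ym + r)                  # reference model state
    theta += dt * (-g * e0 * r)           # Lyapunov-redesign update
print(v_hist[0], v_hist[-1])              # v decreases along trajectories
```

Plotting e0_hist and v_hist, and swapping in the MIT-rule update, reproduces the comparison the problem asks for.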

Problem 0.2

Give an example of a control system where parametric adaptation is required or desirable. Identify the source of variation and indicate whether it is due to a nonlinearity, a slow state variable, an external factor, an initial uncertainty, or an inherent change in the system under control.

Problem 0.3

Consider the MRAC algorithm of chapter 0, using the Lyapunov redesign. Let r = r0 be constant. Show that the system with state variables (e0, φ) is linear time-invariant. Calculate the poles of the system as functions of g, kp, and r0. Plot the locus of the poles as r0 varies from 0 to ∞. Discuss what happens when r0 = 0, and when kp < 0.

Problem 0.4

The stability properties of the MIT rule were investigated by D.J.G. James in "Stability of a Model Reference Control System," AIAA Journal, vol. 9, no. 5, pp. 950-952, 1971. The algorithm is the same as the one given by eqn (0.3.6). The author calculated the stable and unstable regions of the algorithm, which are shown on Figure P0.4.

Figure P0.4: Stability Regions for the MIT Rule

Assuming that the plant is P(s) = kp/(s + 1), the reference model is M(s) = 1/(s + 1), and the reference input is r = sin(ω t), the x-axis of the figure becomes z1 = ω and the y-axis becomes z2 = g. Note that, for small enough ω, the MIT rule is stable for all g, while for small enough g, it is stable for all ω. Otherwise, the dynamic behavior of the algorithm is quite complex, with interleaving of the stable and unstable regions.

a) Write the differential equations of the overall adaptive system and show that it is a nonlinear time-varying system. Given that the equation for ym can be solved independently of the others, show that the remaining equations constitute a linear time-varying system ẋ = A(t) x.

b) Simulate the responses of the adaptive system for a few values of the parameters to check the validity of the figure. In particular, check the responses for ω = 1 and g = 10, 35, 50, and 80, then for g = 10 and ω = 1, 1.5, and 2.5. Let all initial conditions be zero and plot the error e0 for 50 seconds. Could such behavior be observed with a second-order linear time-invariant system?

Problem 1.1

a) Determine which of the following functions belong to L1, L2, L∞, L1e, L2e, L∞e

f1(t) = e^{−at} for a > 0 and a < 0

f2(t) = … , f3(t) = … , f4(t) = … , f5(t) = …

b) Show that f ∈ L1 ∩ L∞ implies f ∈ L2.

Problem 1.2

a) Show that A ∈ R^{2×2} is positive definite if and only if a11 > 0, a22 > 0, and a11 a22 > (a12 + a21)² / 4.

b) Find examples of 2×2 symmetric (but not diagonal) matrices such that A ≥ 0, A ≤ 0, and A neither positive nor negative semidefinite. In each case, give λi(A).

c) Find an example of two symmetric positive definite matrices A and B such that the product A B is neither symmetric nor positive definite. Give λi(A B) and λi((A B) + (A B)^T).
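Part c) lends itself to a quick numerical experiment; the specific matrices below are my own choice, not an answer key.

```python
import numpy as np

# A and B are both symmetric positive definite, yet A B is not symmetric
# and its symmetric part is indefinite, so A B is not positive definite.
# Its eigenvalues are nonetheless real and positive.
A = np.array([[1.0, 0.9], [0.9, 1.0]])       # eigenvalues 0.1, 1.9
B = np.array([[100.0, 0.0], [0.0, 1.0]])
assert np.all(np.linalg.eigvalsh(A) > 0) and np.all(np.linalg.eigvalsh(B) > 0)
AB = A @ B
print(np.allclose(AB, AB.T))                 # AB is not symmetric
print(np.linalg.eigvals(AB))                 # lambda_i(AB): real, positive
print(np.linalg.eigvalsh(AB + AB.T))         # symmetric part: indefinite
```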

Problem 1.3

Prove the following facts. All matrices are real and square (except in a)). Use only the definitions of eigenvalues and eigenvectors.

a) For all C ∈ R^{m×n}, C^T C ≥ 0, and C^T C > 0 if rank(C) = n.

b) A = A^T implies λi(A) ∈ R.

c) A = A^T and λ1(A) ≠ λ2(A) implies x1^T x2 = 0,

where x1, x2 are the eigenvectors associated with λ1(A), λ2(A).

d) A > 0 implies Re λi(A) > 0.

Problem 1.4

Prove the following facts. All matrices are real and square. You may use the fact that A = A^T implies that

A = U^T Λ U with Λ = diag λi(A) and U^T U = I.

a) For A = A^T, A ≥ 0 if and only if λi(A) ≥ 0.

b) A ≥ 0 if and only if λi(A_s) ≥ 0, where A_s = (A + A^T)/2.

c) For A = A^T ≥ 0, λmin(A) |x|² ≤ x^T A x ≤ λmax(A) |x|².

d) For A = A^T > 0, λmax(A^{−1}) = (λmin(A))^{−1}.

e) For A = A^T > 0, ‖A‖ = λmax(A).

f) det(S) ≠ 0 implies λi(A) = λi(S^{−1} A S).

g) A = A^T > 0, B = B^T ≥ 0 implies λi(A B) ∈ R and λi(A B) ≥ 0.

Problem 1.5

Consider the plant P(s) = 1/(s + ap) and the reference model M(s) = 1/(s + am). Let the controller be given by

u = d yp + r

where r is the reference input (bounded), u and yp are the input and output of the plant, and d is an adaptive gain with update law

ḋ = −g e0 yp

Trang 4

-4-

a) Is the overall system described by a linear or nonlinear differential equation? Is it time-invariant or time-varying?

b) For constant g > 0, indicate whether the system is stable, uniformly stable, asymptotically stable, or uniformly asymptotically stable. Does e0 → 0 as t → ∞?

c) Let g(t) be a continuously differentiable function of time. What stability properties does the system have if g(t) > 0 for all t?

Problem 1.6

Let ẋ = f(t, x), x(t0) = x0, with f(t, x) globally Lipschitz in x (with constant l) and f(t, 0) bounded (with bound b). Show that

|x(t)| ≤ k1 e^{k2 (t − t0)} + k3 (e^{k4 (t − t0)} − 1)

Find expressions for k1, k2, k3, and k4 as functions of |x0|, b, and l.

Problem 1.7

Consider the linear time-invariant system

ẋ = A x   x(t0) = x0

Assume that there exist matrices P = P^T > 0 and Q = Q^T > 0 such that

A^T P + P A + Q = 0

Find m > 0 and α > 0 as functions of the matrices P and Q such that

|x(t)| ≤ m e^{−α (t − t0)} |x0|
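A numerical illustration of the construction, with an example A of my own choosing and Q = I: from V = x^T P x one gets the standard choices m = sqrt(λmax(P)/λmin(P)) and α = λmin(Q)/(2 λmax(P)).

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Solve A^T P + P A = -Q and form the exponential-decay bound
#   |x(t)| <= m e^{-alpha t} |x0|.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])      # eigenvalues -1, -2
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)        # solves A^T P + P A = -Q
lP = np.linalg.eigvalsh(P)
m = np.sqrt(lP[-1] / lP[0])                   # sqrt(lmax(P)/lmin(P))
alpha = np.linalg.eigvalsh(Q)[0] / (2 * lP[-1])
# spot-check the bound against the exact solution x(t) = e^{A t} x0
x0 = np.array([1.0, -1.0])
for t in (0.5, 1.0, 2.0, 5.0):
    xt = expm(A * t) @ x0
    assert np.linalg.norm(xt) <= m * np.exp(-alpha * t) * np.linalg.norm(x0)
print(m, alpha)
```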

Problem 1.8

Consider the linear time-varying system (from Vidyasagar's Nonlinear Systems Analysis, Prentice-Hall, 1978)

ẋ = A(t) x   where

A(t) = ( −1 + a cos²t        1 − a sin t cos t
         −1 − a sin t cos t   −1 + a sin²t )

Show that the transition matrix Φ(t, 0) satisfies

Φ(t, 0) = (  e^{(a−1)t} cos t    e^{−t} sin t
            −e^{(a−1)t} sin t    e^{−t} cos t )

and that the matrix A(t) has eigenvalues that are independent of t and located in the left-half plane for 1 < a < 2. Discuss the stability properties of the equilibrium point at x = 0. What conclusions can you draw from this example?
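The phenomenon is easy to confirm numerically; the value a = 1.5 below is one admissible choice in (1, 2).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Every frozen A(t) has eigenvalues in the open left-half plane, yet the
# solution through x(0) = (1, 0) grows like e^{(a-1)t}: frozen-time
# eigenvalues do not decide stability of a time-varying system.
a = 1.5

def A(t):
    s, c = np.sin(t), np.cos(t)
    return np.array([[-1 + a * c * c, 1 - a * s * c],
                     [-1 - a * s * c, -1 + a * s * s]])

for t in np.linspace(0.0, 6.0, 13):            # frozen eigenvalues, any t
    assert np.all(np.linalg.eigvals(A(t)).real < 0)

sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, 10.0), [1.0, 0.0],
                rtol=1e-8, atol=1e-10)
x10 = sol.y[:, -1]
# first column of the claimed Phi(t, 0), evaluated at t = 10
expected = np.exp((a - 1) * 10.0) * np.array([np.cos(10.0), -np.sin(10.0)])
print(np.linalg.norm(x10), np.linalg.norm(x10 - expected))
```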

Problem 1.9

Determine which of the following functions are globally Lipschitz, and give the Lipschitz constant. Let x and f be scalars.

a) f(t, x) = …   b) f(t, x) = …

c) f(t, x) = A x   d) f(t, x) = e^t x

e) f(t, x) = e^x + e^{−x}   f) f(t, x) = √x

Trang 5

-5-

Hint: use the mean-value theorem of elementary analysis to relate the Lipschitz constant to ∂f/∂x.

Problem 1.10

a) Given the linear time-invariant system ẋ = A x, determine whether the following A matrices correspond to an equilibrium point x = 0 that is stable in the sense of Lyapunov, asymptotically stable, or unstable

A1 = … , A2 = … , A3 = …

b) For a general matrix

A = ( a11  a12
      a21  a22 )

give the conditions on a11, a12, a21, and a22 so that the equilibrium point is asymptotically stable, and so that it is stable in the sense of Lyapunov.

Hint: to find simple conditions on the elements of the matrix, recall the Routh-Hurwitz test.
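The hint can be cross-checked numerically: for a 2×2 matrix the characteristic polynomial is s² − (a11 + a22) s + (a11 a22 − a12 a21), so Routh-Hurwitz gives asymptotic stability exactly when trace(A) < 0 and det(A) > 0. A small randomized sanity check of my own:

```python
import numpy as np

# For 2x2 A, all eigenvalues have negative real part exactly when
# trace(A) < 0 and det(A) > 0 (Routh-Hurwitz on s^2 - tr(A) s + det(A)).
rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.normal(size=(2, 2))
    hurwitz = bool(np.all(np.linalg.eigvals(A).real < 0))
    assert hurwitz == (np.trace(A) < 0 and np.linalg.det(A) > 0)
print("trace/det criterion matched eigenvalues in all 1000 trials")
```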

Problem 1.11

Consider the circuit depicted in Figure P1.11.

Figure P1.11: Circuit

a) Let x1 be the voltage on the capacitor C and x2 be the current in the inductor L. Show that the system is described by the differential equations

ẋ1 = …
ẋ2 = …

b) Consider the "natural" Lyapunov function consisting of the total energy stored in the system

v = (1/2) C x1² + (1/2) L x2²


What stability properties of the equilibrium can you deduce from this Lyapunov function?

c) Use the Lyapunov lemma to find a Lyapunov function such that

α1 |x|² ≤ v ≤ α2 |x|²

for some strictly positive constants α1, α2, and such that

v̇ ≤ −|x|²

For R = L = C = 1, give the specific values of the constants α1, α2. What stability properties of the equilibrium can you deduce from this Lyapunov function?

Problem 1.12

a) Consider the system

ẋ = …   x(0) = x0

Determine whether the function on the right-hand side is locally Lipschitz, or globally Lipschitz.

b) Show that the differential equation can be solved exactly and deduce what the stability properties of the equilibrium point are.

c) Find a Lyapunov function to confirm the conclusions of part b).

d) Repeat parts a) to c) for the system

ẋ = …   x(0) = x0

e) Repeat parts a) to c) for the system

ẋ = …   x(0) = x0

Problem 1.13

a) Calculate the eigenvalues and eigenvectors of

A = (  5  −3
      −3   5 )

b) Find the decomposition A = U^T Λ U and find a symmetric positive definite matrix A^{1/2} such that

A = A^{1/2} A^{1/2}

c) Given an arbitrary vector x ∈ R^n, show that the matrix

A = x x^T

is always symmetric positive semidefinite, but never positive definite. What is the minimum number of vectors xi needed so that the matrix

A = Σ_{i=1}^{N} xi xi^T

can be positive definite?
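A quick numerical illustration of part c), with random sample vectors of my own choosing: x x^T has rank one, and a sum of n linearly independent outer products is positive definite.

```python
import numpy as np

# x x^T is rank one (eigenvalues |x|^2, 0, ..., 0), so for n > 1 it is
# positive semidefinite but never positive definite; n independent
# vectors make the sum of outer products positive definite.
rng = np.random.default_rng(1)
n = 4
x = rng.normal(size=n)
A1 = np.outer(x, x)
print(np.linalg.matrix_rank(A1))             # rank one
X = rng.normal(size=(n, n))                  # n generic vectors x_i (rows)
A = sum(np.outer(X[i], X[i]) for i in range(n))
print(np.linalg.eigvalsh(A))                 # all strictly positive here
```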

Problem 1.14

Consider the linear time-invariant system with transfer function

H(s) = ŷ(s)/û(s) = b/(s + a)

with a > 0. Show that, for zero initial conditions,


‖y‖∞ ≤ ‖h‖1 ‖u‖∞

where ‖h‖1 is the L1 norm of the impulse response h(t) corresponding to H(s), and ‖y‖∞, ‖u‖∞ are the L∞ norms of y(t) and u(t) respectively. Give the value of ‖h‖1 as a function of a and b and show that there exists a nonzero u(t) such that the inequality above becomes an equality.
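For this H(s) the impulse response is h(t) = b e^{−at}, so ‖h‖1 = b/a; since h > 0, the constant input u(t) = 1 drives y(t) monotonically toward b/a, which is how the bound is met. A numerical sketch (a and b are my own sample values):

```python
import numpy as np

# h(t) = b e^{-a t}  =>  ||h||_1 = b/a; the step response
# (b/a)(1 - e^{-a t}) approaches ||h||_1 * ||u||_inf, so the bound is tight.
a, b = 2.0, 3.0
dt = 1e-4
t = np.arange(0.0, 20.0, dt)
h = b * np.exp(-a * t)
h_l1 = h.sum() * dt                          # Riemann sum for ||h||_1
y_step = (b / a) * (1 - np.exp(-a * t))      # exact response to u(t) = 1
print(h_l1, b / a, y_step.max())
```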

Problem 2.1

a) Consider the identifier of section 2.2. Show that the plant transfer function may be realized by two state-space representations (R1) and (R2):

(R1)  ẋ = A1 x + b1 r
      yp = c1^T x

where

A1 = Λ + bλ b*^T,   b1 = bλ,   c1 = a*

and Λ, bλ, a*, b* are as defined in equations (2.2.9) and (2.2.10).

(R2)  ẇ = A2 w + b2 r
      yp = c2^T w

where

A2 = ( Λ         0
       bλ a*^T   Λ + bλ b*^T ),   b2 = ( bλ
                                         0  ),   c2 = ( a*
                                                        b* )

b) Show that (R1) is controllable. Show that (R1) is observable if np, dp are coprime.

c) Show that (R2) is nonminimal and, from the eigenvalues of A2, deduce that the extra modes are those of Λ.

d) Using the Popov-Belevitch-Hautus test, show that (R2) is controllable if np, dp are coprime. In the same manner, show that any unobservable mode must (indeed) be a mode of Λ, so that (R2) is detectable.

Hint: The Popov-Belevitch-Hautus test indicates that a state-space realization is controllable if and only if

rank ( sI − A   B ) = dim(A)   for all s

and observable if and only if

rank ( C
       sI − A ) = dim(A)   for all s

Note that it is only necessary to check the rank for values of s for which det(sI − A) = 0.
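The hint's rank tests are easy to apply numerically at the eigenvalues of A. The matrices below are an illustrative controllable and observable pair of my own choosing, not the identifier matrices of (R1)/(R2):

```python
import numpy as np

# PBH test: it suffices to check rank [sI - A, B] and rank [C; sI - A]
# at the eigenvalues of A (elsewhere sI - A already has full rank).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]
for s in np.linalg.eigvals(A):
    ctrb = np.hstack([s * np.eye(n) - A, B])
    obsv = np.vstack([C, s * np.eye(n) - A])
    print(np.linalg.matrix_rank(ctrb) == n, np.linalg.matrix_rank(obsv) == n)
```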

Problem 2.2

a) Let w(t) = sin(t). Find and plot

R(t) = ∫_0^t w²(τ) dτ

b) Show that there exist α1, α2, and δ > 0 such that

α2 ≥ ∫_{t0}^{t0+δ} w²(τ) dτ ≥ α1

for all t0 ≥ 0

c) Show that the following limit exists, independently of t0

Rw = lim_{T→∞} (1/T) ∫_{t0}^{t0+T} w²(τ) dτ

where Rw > 0

d) Repeat part c) with

w(t) = ( sin(t)
         sin(a t + φ) )

and replacing w²(τ) by w(τ) w^T(τ). What conditions must a, φ satisfy so that the matrix Rw is positive definite?
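Parts a)-c) can be checked numerically: for w(t) = sin(t) the running average of w² settles to Rw = 1/2 regardless of the starting time t0. A small sanity script of my own:

```python
import numpy as np

# The average of sin^2 over a long window approaches 1/2 independently
# of the starting time t0.
dt, T = 1e-3, 500.0
for t0 in (0.0, 1.0, 7.3):
    t = t0 + np.arange(int(T / dt)) * dt
    Rw = np.mean(np.sin(t) ** 2)   # ~ (1/T) * int_{t0}^{t0+T} w^2 dtau
    print(t0, Rw)
```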

Problem 2.3

a) Consider the function w(t) such that

w(t) = 1   t ∈ [n, n + (1/2)^{n+1}]   for all positive integers n
w(t) = 0   otherwise

Plot w(t) and show that there exists δ > 0 such that

∫_{t0}^{t0+δ} w²(τ) dτ > 0

for all t0 ≥ 0. Show that, however, the solutions of

φ̇ = −g w² φ   φ(0) = φ0

do not converge to zero exponentially. Is w persistently exciting? Why?
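The mechanism can be seen in closed form: φ(t) = φ(0) exp(−g ∫_0^t w²), and the total energy of w is finite, Σ_{n≥1} (1/2)^{n+1} = 1/2, so φ stalls at a nonzero limit instead of decaying exponentially. A small numerical confirmation:

```python
import numpy as np

# w has finite total energy sum_{n>=1} (1/2)^{n+1} = 1/2, so
# phi(t) = phi(0) exp(-g * int_0^t w^2) -> phi(0) e^{-g/2} != 0:
# w is not persistently exciting.
g, phi0 = 1.0, 1.0
energy = sum(0.5 ** (n + 1) for n in range(1, 41))   # partial sum ~ 1/2
phi_limit = phi0 * np.exp(-g * energy)
print(energy, phi_limit)
```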

b) Consider the following proof of exponential convergence for φ̇ = −g w w^T φ, with w PE: Let v = φ^T φ, so that v̇ = −2g (φ^T w)² ≤ 0. Therefore, for all t0 ≥ 0, δ > 0, and τ ∈ [t0, t0 + δ]

v(τ) ≥ v(t0 + δ)    (1)

and

v(t0) − v(t0 + δ) = 2g ∫_{t0}^{t0+δ} φ^T(τ) w(τ) w^T(τ) φ(τ) dτ    (2)

≥ 2g φ^T(t0 + δ) [ ∫_{t0}^{t0+δ} w(τ) w^T(τ) dτ ] φ(t0 + δ)    (3)

where the last inequality follows from (1). If w is PE, this implies that

v(t0) − v(t0 + δ) ≥ 2g α1 |φ(t0 + δ)|²    (4)

and exponential convergence follows as in the proof of theorem 1.5.2. Why is this proof incorrect?

Problem 2.4

This problem investigates the parameter convergence properties of the least-squares algorithm with forgetting factor

θ̇(t) = −P(t) w(t) (w^T(t) θ(t) − yp(t))   θ(0) = θ0

Ṗ(t) = λ P(t) − P(t) w(t) w^T(t) P(t)   P(0) = P^T(0) = P0 > 0,  λ ≥ 0


a) Let λ = 0. Use eqn (2.0.30) to show that |θ(t) − θ*| → 0 as t → ∞ if

∫_0^t w(τ) w^T(τ) dτ ≥ a(t) I

and a(t) → ∞ as t → ∞. Show that if a(t) ≥ k t for some k > 0, then |θ(t) − θ*| converges to zero as 1/t. What if a(t) ≥ e^t?

b) For λ ≠ 0, find expressions similar to (2.0.29) and (2.0.30).

Hint: First derive d(P^{−1})/dt and d(P^{−1} φ)/dt.

c) What optimization criterion does the least-squares algorithm with forgetting factor solve?

d) Show that |θ(t) − θ*| converges to zero exponentially with rate λ if P(t) is bounded.

e) Show that P(t) is bounded if w is PE.

Problem 2.5

This problem constructs discrete-time algorithms for the identification of sampled-data systems. The derivations follow lines parallel to the continuous-time case. However, see G.C. Goodwin & K.S. Sin (1984) for more details.

a) Consider the first-order system

ẏp = −ap yp + kp r

The continuous-time system is interfaced to a digital computer through discrete-time signals rD[k] and yD[k] such that

r(t) = rD[k]   for t ∈ [kT, (k + 1)T)
yD[k] = yp(kT)

By solving the differential equation, show that yD and rD satisfy exactly a first-order difference equation

yD[k] = aD yD[k − 1] + kD rD[k − 1]

and find expressions for aD, kD as functions of kp, ap, and T (this is the so-called step-response discrete-time equivalent of the continuous-time system). What is the discrete-time transfer function P̂(z) = ŷD(z)/r̂D(z)?

b) Extend the results of part a) to a general n-th order system

ẋ = A x + b r
yp = c^T x

c) Adapt the results of section 2.2 to discrete time by describing an identifier structure for an n-th order plant with transfer function P̂(z). In particular, show that

yp[k] = θ*^T w[k]

where θ* is a vector of unknown parameters and w is an observer state vector. Give conditions that the polynomial λ̂(z) must satisfy and indicate the simplifications that arise when λ̂(z) = z^n, i.e., when all the observer poles are at the origin. What is the effect of initial conditions in that case?

d) The so-called projection algorithm is the estimate θ[k] such that

|θ[k] − θ[k − 1]|²

is minimized, subject to

yp[k] = θ^T[k] w[k]

Using Lagrange multipliers, show that the solution of this optimization criterion is

θ[k] = θ[k − 1] − w[k] (θ^T[k − 1] w[k] − yp[k]) / (w^T[k] w[k])

Show that θ[k] is also the projection of θ[k − 1] on the set of θ[k]'s such that θ^T[k] w[k] − yp[k] = 0.

e) Find the (batch) least-squares estimate, which minimizes

J(θ[k]) = Σ_{j=1}^{k} (θ^T[k] w[j] − yp[j])²

Then, show that the recursive formula for the least-squares estimate is

θ[k] = θ[k − 1] − P[k − 1] w[k] (θ^T[k − 1] w[k] − yp[k]) / (1 + w^T[k] P[k − 1] w[k])

P[k] = P[k − 1] − P[k − 1] w[k] w^T[k] P[k − 1] / (1 + w^T[k] P[k − 1] w[k])

Hint: To find the difference equation for the covariance matrix, the following identity is useful

(A + B C)^{−1} = A^{−1} − A^{−1} B (I + C A^{−1} B)^{−1} C A^{−1}

f) Simulate the responses of the identifier with the projection algorithm. Replace the denominator w^T[k] w[k] by 1 + w^T[k] w[k] to avoid possible division by zero or by a very small number. Let kp = 1, ap = 1, T = 0.1, and all initial conditions be zero. Plot rD[k], yD[k], θ1[k], and θ2[k] for rD[k] = sin(kT) and for rD[k] = 1. Compare the convergence properties.

g) Repeat part f) for the least-squares algorithm. Let P[0] = I and plot P11[k], P22[k], and P12[k], in addition to the other variables.
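A compact sketch of part f) for the sinusoidal input. The true discrete parameters are aD = e^{−ap T} and kD = (kp/ap)(1 − e^{−ap T}), which reduce to e^{−T} and 1 − e^{−T} for kp = ap = 1; the adaptation gain and iteration count below are my own choices.

```python
import numpy as np

# Projection-algorithm identification of y_D[k] = aD*y_D[k-1] + kD*r_D[k-1]
# with the regularized denominator 1 + w^T w, for kp = ap = 1, T = 0.1.
T, g = 0.1, 1.0
aD, kD = np.exp(-T), 1.0 - np.exp(-T)       # true parameters
theta = np.zeros(2)                         # estimates of (aD, kD)
y_prev = 0.0
for k in range(5000):
    r_prev = np.sin(k * T)                  # persistently exciting input
    w = np.array([y_prev, r_prev])          # regressor (y_D[k-1], r_D[k-1])
    y = aD * y_prev + kD * r_prev           # plant output y_D[k]
    e = theta @ w - y                       # prediction error
    theta -= g * w * e / (1.0 + w @ w)      # regularized projection update
    y_prev = y
print(theta, [aD, kD])
```

With rD[k] = 1 the regressor is not persistently exciting in both directions, so the estimates need not converge to the true parameters, which is the comparison the problem asks for.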

Problem 2.6

a) Find conditions on α1, α2, β1, β2 such that the transfer function

M̂(s) = (β1 s + α1) / (s² + α2 s + β2)

is SPR.

b) Find conditions on R1, R2, C1, C2 (all > 0) such that the impedance Z(s) = V̂(s)/Î(s) (see Figure P2.6) is SPR.
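SPR can be spot-checked numerically: the denominator must be Hurwitz and Re M̂(jω) must be positive for all ω. The coefficients below are illustrative values of my own, not an answer key.

```python
import numpy as np

# Check SPR for M(s) = (b1 s + a1)/(s^2 + a2 s + b2): stable poles and
# Re M(jw) > 0 on a frequency grid.
a1, a2, b1, b2 = 1.0, 3.0, 1.0, 2.0
assert np.all(np.roots([1.0, a2, b2]).real < 0)   # poles at -1 and -2
w = np.linspace(0.0, 1e3, 100001)
s = 1j * w
M = (b1 * s + a1) / (s * s + a2 * s + b2)
print(M.real.min())                               # positive on the grid
```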

Problem 2.7

Let M̂(s) be a rational, strictly proper transfer function.

a) Let s(t) be the step response associated with M̂(s). Show that if M̂(s) is SPR, then

…

Hint: Use the Kalman-Yakubovich-Popov lemma.

b) Show that M̂(s) is SPR if and only if there exists a minimal state-space representation

ẋ = A x + b u
y = c^T x

such that c = b and A + A^T = −Q for some Q = Q^T > 0.
