Chapter 7
7.1 Overview
plants. In addition to being able to define control laws for systems in
e(t) is allowed to vary with time
adaptive control law. The first is a direct adaptive approach in which a set
For example, if for a given scalar error system $\dot e = \alpha(x) + \beta(x)u$ one is able
tools for the indirect approach will be studied in greater detail in the next chapter.
the error system is also chosen such that if $|e|$ is bounded, then it is possible to bound the state; in particular, $\psi_2$ is defined such that $|x| \le \psi_2(t,|e|)$ for all $t$, where $\psi_2(t,s)$ is nondecreasing with respect to $s \in \mathbb{R}^+$ for each fixed $t$.
system are defined by $\dot e = \alpha(t,x) + \beta(x)u$, so that the error dynamics are
when the plant is autonomous
The direct adaptive control approach studied here will first assume that
$\mathcal{F}(z,\theta)$ over $x \in S_x$. The value of $\theta \in \mathbb{R}^p$ is chosen such that the ideal
estimate of $\theta$. It will then be shown how to choose update laws for $\hat\theta(t)$ which result in a stable closed-loop system.
7.2 Lyapunov Analysis and Adjustable Approximators
of accuracy when $x \in S_x$, then we would expect that one could directly
system
valid when $x \in S_x$, we will need to ensure that at no time will the trajectory leave $S_x$; otherwise the approximation may no longer hold.
over a region. If, for example, the dynamics were obtained from experimental data, then relying on the resulting model outside the region covered by the data may lead to control designs with hidden instabilities.
Example 7.1 Consider the scalar plant defined by
$\dot{x} = f(x) + u,$
where $\theta \in \mathbb{R}$ and $\epsilon > 0$ are constants, with $\theta$ unknown and $\epsilon$ assumed to be known. We would like to design a controller which will force $x \to 0$ even when $\theta$ is unknown. If we wish to drive $x \to 0$, then define the error system $e = x$ and Lyapunov
candidate $V_s = \frac{1}{2}e^2$. The error dynamics may be written as $\dot e = \theta x^2 + w(x) + u$,
so the derivative of the Lyapunov function may now be expressed as
$\dot V_s = e\,\big(w(x) + \theta x^2 + u\big), \qquad (7.3)$
with $w(x)$ a bounded uncertainty when $x \in [-1,1]$.
If $\theta$ is known, then the static control law $u = \nu_s(x,\theta)$ with
$\nu_s(x,\theta) = -\kappa e - \epsilon\,\mathrm{sgn}(e) - \theta x^2, \qquad (7.4)$
and $\kappa > 0$ renders
$\dot V_s = -\kappa e^2 + w(x)e - \epsilon|e| \le -\kappa e^2, \qquad (7.5)$
so that $x \to 0$ and $x(t) \in [-1,1]$ for all $t \ge 0$ whenever $x(0) \in [-1,1]$.
But $\theta$ is not known, so we will consider the use of an adaptive controller $u = \nu_s(x,\hat\theta)$, where $\hat\theta(t)$ is an estimate of $\theta$.
If $x \in [-1,1]$, then $|w| < \epsilon$. Taking the Lyapunov candidate $V = \frac{1}{2}e^2 + \frac{1}{2\Gamma}\tilde\theta^2$ and choosing the update law $\dot{\hat\theta} = \Gamma x^2 e$ with $\Gamma > 0$, we obtain $\dot V \le -\kappa e^2$.
We might then be tempted to use the LaSalle-Yoshizawa theorem to conclude that $x \to 0$ if $x(0) \in [-1,1]$, as was the case for the static feedback controller. Unfortunately, it is not possible to conclude that $x \to 0$ even if $x(0) \in [-1,1]$. It is possible for $V$ to decrease while $x$ is increasing due to the parameter error term in the definition of the Lyapunov candidate (that is, the $e^2$ term may increase while the $\tilde\theta^2$ term decreases such that the sum defined by $V$ is decreasing). If $x$ leaves the set $[-1,1]$, then $V$ may also start to increase since the bound $|w| \le \epsilon$ is no longer valid, which may indicate that the closed-loop system is no longer stable. Figure 7.1 shows the trajectory of $x(t)$ for various values of $\hat\theta(0)$ when $\theta = 1$ and $\epsilon = 0.01$. When $\hat\theta(0) \in \{0, -2\}$, the closed-loop system becomes unstable. Thus the initial conditions of the parameter estimates may influence the stability of an adaptive system when using approximations which only hold over a compact set, as is often the case when using system models obtained from experimental data.
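A minimal simulation sketch of this example follows. It assumes the reconstructed closed loop above, i.e. $\dot x = \theta x^2 + w(x) + u$ with $\theta = 1$, the control $u = -\kappa e - \epsilon\,\mathrm{sgn}(e) - \hat\theta x^2$ with $e = x$, and the update $\dot{\hat\theta} = \Gamma x^2 e$. The particular uncertainty $w(x) = \epsilon\sin(\pi x)$, the gains $\kappa = \Gamma = 1$, the initial state, and the step size are illustrative assumptions not taken from the text, so the runs will not reproduce Figure 7.1 exactly; they only illustrate the qualitative sensitivity to $\hat\theta(0)$.

    import numpy as np

    # Illustrative simulation of Example 7.1 (assumed reconstruction of the dynamics).
    theta, eps = 1.0, 0.01        # true parameter and uncertainty bound (from the text)
    kappa, Gamma = 1.0, 1.0       # control and adaptation gains (assumed)
    dt, T = 1e-3, 10.0            # Euler step and horizon (assumed)

    def w(x):
        # One possible bounded uncertainty; the text only requires |w(x)| <= eps on [-1, 1].
        return eps * np.sin(np.pi * x)

    def simulate(x0, theta_hat0):
        x, theta_hat = x0, theta_hat0
        for _ in range(int(T / dt)):
            e = x                                          # error system e = x
            u = -kappa * e - eps * np.sign(e) - theta_hat * x**2
            x += dt * (theta * x**2 + w(x) + u)            # Euler step of the plant
            theta_hat += dt * Gamma * x**2 * e             # adaptive update
            if not np.isfinite(x) or abs(x) > 1e6:         # crude divergence check
                return np.inf
        return abs(x)

    for th0 in (1.0, 0.0, -2.0, -5.0):
        print(f"theta_hat(0) = {th0:5.1f}  ->  |x(T)| = {simulate(0.9, th0):g}")

Initial estimates that start the state growing let $x$ leave $[-1,1]$, after which the bound $|w| \le \epsilon$ used in the analysis no longer applies.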
The above example demonstrates the need to ensure that $x$ remains in a region in which a good approximation may be obtained. We will need to show that, for a given controller and set of initial conditions, the state trajectory is bounded such that $x \in S_x$ for all $t$, where $S_x$ represents
the region over which a good approximation is achievable. In the case
of adaptive control, we will only be concerned with the region where an ideal approximation is possible. There may be no guarantee that, given the current set of approximator parameters, a good approximation takes place. However, we will require that some ideal parameter set does exist, even if we never use it. This point will become more apparent later when looking at the stability analysis of the direct adaptive controller.
To help guarantee that the state trajectories do not leave the region
$x \in S_x$ over which a reasonable approximation may be established, we will use the following theorem.
Theorem 7.1: Let $V : \mathbb{R}^q \times \mathbb{R}^p \to \mathbb{R}$ be a continuously differentiable function such that
$\gamma_{e1}(|e|) + \gamma_{\theta 1}(|\tilde\theta|) \;\le\; V(e,\tilde\theta) \;\le\; \gamma_{e2}(|e|) + \gamma_{\theta 2}(|\tilde\theta|), \qquad (7.10)$
where $\gamma_{e1}, \gamma_{e2}, \gamma_{\theta 1}, \gamma_{\theta 2}$ are class-$\mathcal{K}_\infty$. Assume that for a given error system, a control law $u = \nu$ is defined such that both $|e| \ge b_e$ implies $\dot V \le 0$ and $|\tilde\theta| \ge b_\theta$ implies $\dot V \le 0$. Then $e \in \mathcal{B}_e$ for all $t$ with
$\mathcal{B}_e = \left\{ e \in \mathbb{R}^q : |e| \le \gamma_{e1}^{-1}\big(\max\{V(e(0),\tilde\theta(0)),\, V_r\}\big) \right\}, \qquad (7.11)$
where $V_r = \gamma_{e2}(b_e) + \gamma_{\theta 2}(b_\theta)$.
Proof: If $V > V_r$, then either $|e| > b_e$ or $|\tilde\theta| > b_\theta$ (or both). Thus $V > V_r$ implies $\dot V \le 0$, so $V(t) \le \max\{V(e(0),\tilde\theta(0)),\, V_r\}$ for all $t$. From (7.10) we know that $\gamma_{e1}(|e|) \le V$, so $e \in \mathcal{B}_e$ for all $t$.
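As a concrete illustration of how the bound of Theorem 7.1 is evaluated (the quadratic comparison functions below are chosen here only for illustration and do not appear in the text), suppose
$$\gamma_{e1}(s) = \gamma_{\theta 1}(s) = \tfrac{1}{4}s^2, \qquad \gamma_{e2}(s) = \gamma_{\theta 2}(s) = s^2 .$$
Then $V_r = \gamma_{e2}(b_e) + \gamma_{\theta 2}(b_\theta) = b_e^2 + b_\theta^2$, and the theorem (with $\mathcal{B}_e$ as written in (7.11)) gives, for all $t$,
$$|e(t)| \;\le\; \gamma_{e1}^{-1}\!\big(\max\{V(e(0),\tilde\theta(0)),\, V_r\}\big) \;=\; 2\sqrt{\max\{V(e(0),\tilde\theta(0)),\, b_e^2 + b_\theta^2\}} .$$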
The above theorem will be used to study the range over which $e$ (and $x$) may travel when an adaptive controller is used. From Assumption 6.1, we know that $|x| \le \psi_2(t,|e|)$, where $\psi_2$ is nondecreasing with respect to $|e|$.
Corollary 7.1: Let $V : \mathbb{R}^q \times \mathbb{R}^p \to \mathbb{R}$ be a continuously differentiable function such that
$\gamma_{e1}(|e|) + \gamma_{\theta 1}(|\tilde\theta|) \;\le\; V(e,\tilde\theta) \;\le\; \gamma_{e2}(|e|) + \gamma_{\theta 2}(|\tilde\theta|), \qquad (7.13)$
where $\gamma_{e1}, \gamma_{e2}, \gamma_{\theta 1}, \gamma_{\theta 2}$ are class-$\mathcal{K}_\infty$. Assume that for a given error system, a control law $u = \nu$ is defined such that both $e \in \mathcal{B}_e - \mathcal{B}_b$ implies $\dot V \le 0$ and $|\tilde\theta| \ge b_\theta$ implies $\dot V \le 0$, where $\mathcal{B}_b = \{e \in \mathbb{R}^q : |e| < b_e\}$ and $\mathcal{B}_e$ is defined by (7.11). Then $e \in \mathcal{B}_e$ for all $t$.
We will find that Corollary 7.1 is useful in the study of adaptive systems using approximators that are defined only over a region, since we must restrict the range of the approximator input variables used in the control law $u = \mathcal{F}(z,\hat\theta)$. If a fuzzy system, for example, is used in an adaptive controller, it may not be necessary for the input membership functions to cover all possible values of the controller inputs. Instead, the fuzzy system only needs to be defined over the appropriate region.
7.3 The Adaptive Controller
The goal of the adaptive controller is to provide stable control of systems with significant uncertainty As seen in the previous chapter, control laws
may be defined for many uncertain nonlinear systems using techniques such as those studied there. An adaptive control approach may be used in place of a static control law, even with considerable uncertainty in the plant dynamics. It may also be possible for the adaptive controller to compensate for system faults in which the plant dynamics change due to some component failure or degradation.
For a given control problem, the designer must define an error system
$e = \chi(t,x)$ which quantifies the closed-loop system performance and at the same time may be used to place bounds on the system states as required
by Assumption 6.1. We will additionally assume that the error dynamics are affine in the control input so that
$\dot e = \alpha(t,x) + \beta(x)u, \qquad (7.14)$
where $e \in \mathbb{R}^q$ and $u \in \mathbb{R}^m$. Note that, as explained in the previous chapter, this includes several classes of nonlinear systems. The remainder of this section will be devoted to defining update laws $\dot{\hat\theta} = \phi(t,x,\hat\theta)$ so that the control law $u = \mathcal{F}(z,\hat\theta(t))$ guarantees that the closed-loop system is stable. Specifically, we will try to define an adaptive controller so that $e \to 0$ for (7.14) and $x$ and $\hat\theta$ remain bounded.
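The structure just described can be summarized in a short simulation skeleton. The sketch below is purely illustrative: the specific $\alpha$, $\beta$, controller $\mathcal{F}$, and update law $\phi$ are placeholders standing in for whatever the designer derives for a particular problem, and the scalar case with $e = x$ and simple Euler integration are assumptions of convenience.

    import numpy as np

    # Skeleton of a direct adaptive loop for scalar error dynamics
    #   e_dot = alpha(t, x) + beta(x) * u,  u = F(z, theta_hat),  theta_hat_dot = phi(t, x, theta_hat),
    # where for simplicity the error is taken to be the state itself (e = x).
    def run(alpha, beta, F, phi, x0, theta_hat0, dt=1e-3, T=5.0):
        x = float(x0)
        theta_hat = np.atleast_1d(np.array(theta_hat0, dtype=float))
        t = 0.0
        for _ in range(int(T / dt)):
            u = F(x, theta_hat)                                # approximator input z taken to be x here
            x += dt * (alpha(t, x) + beta(x) * u)              # Euler step of the error system
            theta_hat = theta_hat + dt * phi(t, x, theta_hat)  # Euler step of the update law
            t += dt
        return x, theta_hat

    # Placeholder functions only -- not a stability-proven design:
    x_final, th_final = run(alpha=lambda t, x: -x,
                            beta=lambda x: 1.0,
                            F=lambda x, th: float(-th[0] * x),
                            phi=lambda t, x, th: np.array([x**2]),
                            x0=1.0, theta_hat0=[0.0])
    print(x_final, th_final)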
7.3.1 σ-modification
Our goal here is to design an update law which modifies the adjustable parameter vector $\hat\theta \in \mathbb{R}^p$ so that the controller $u = \mathcal{F}(z,\hat\theta)$ provides closed-loop stability. To ensure that it is possible to define an update law resulting in a stable closed-loop system, we will make the following assumption:
Assumption 7.1: There exists an error system $e = \chi(t,x)$ satisfying Assumption 6.1 and a static control law $u = \nu_s(z)$ with $z$ measurable, such that for a given radially unbounded, decrescent Lyapunov function $V_s(t,e)$,
we find $\dot V_s \le -k_1 V_s + k_2$ along the solutions of (7.14) when $u = \nu_s(z)$.
In addition, we must know how each input affects the states of a plant relative to the other inputs. In particular, we will make the following assumption:
Assumption 7.2: Given the error dynamics (7.14), assume that
where $c > 0$ is a possibly unknown scalar constant and
This requires that we know the functional form of $\beta(x)$, though we do not necessarily need to know the overall gain. Thus the scalar $c$ allows a degree of freedom in terms of knowledge about the system dynamics. The following example shows how this degree of freedom may be used when controlling poorly understood systems.
Example 7.2 As shown in the previous chapter, there are a number of control problems with error dynamics defined by (7.14) where $\beta$ has a known functional form but an unknown gain, so that Assumption 7.2 is satisfied even when the magnitude of the input gain is not known.
Here, we will consider using the σ-modified update law defined by
$\dot{\hat\theta} = -\Gamma\left[\left(\frac{\partial \mathcal{F}(z,\hat\theta)}{\partial \hat\theta}\right)^{\!\top}\beta(x)^{\top}\left(\frac{\partial V_s}{\partial e}\right)^{\!\top} + \sigma\,(\hat\theta - \theta_0)\right], \qquad (7.15)$
where $\Gamma \in \mathbb{R}^{p\times p}$ is a positive definite, symmetric matrix used to set the rate of adaptation and $\sigma > 0$ is a term used to increase the robustness of the closed-loop system. Here we are using the notation
(7.17)
with $\eta > 0$, then the parameter update law (7.15) with adaptive controller
$u = \mathcal{F}(z,\hat\theta)$ guarantees that the solutions of (7.14) are bounded given $\mathcal{B}_x \subseteq S_x$, where $\mathcal{B}_x$ is defined by (7.25).
Proof: Consider the Lyapunov candidate
$V = c\,V_s(t,e) + \tfrac{1}{2}\,\tilde\theta^{\top}\Gamma^{-1}\tilde\theta.$
Taking the derivative along the solutions of (7.14) with $u = \mathcal{F}(z,\hat\theta)$ gives
$\dot V = c\left[\frac{\partial V_s}{\partial t} + \frac{\partial V_s}{\partial e}\Big(\alpha(t,x) + \beta(x)\,\mathcal{F}(z,\hat\theta)\Big)\right] + \tilde\theta^{\top}\Gamma^{-1}\dot{\tilde\theta}.$
Using the update law (7.15) and $\dot{\tilde\theta} = \dot{\hat\theta}$, we find
(7.21)
$-\sigma\,\tilde\theta^{\top}(\hat\theta - \theta_0) \;\le\; -\frac{\sigma}{2}\,|\tilde\theta|^2 + \frac{\sigma}{2}\,|\theta - \theta_0|^2. \qquad (7.22)$
Using (7.21) and (7.22), we find
$\dot V \;\le\; -c\,k_1\,\gamma_{e1}(|e|) - \frac{\sigma}{2}\,|\tilde\theta|^2 + d, \qquad (7.23)$
given in the statement of the theorem, we see that $e \in \mathcal{B}_e$ with
(which was the topic of the previous chapter) and a suitable approximator structure.
3. Define a static control law $u = \nu_s$ which ensures that $\dot V_s \le -k_1 V_s + k_2$.
4. Choose an approximator $\mathcal{F}(z,\theta)$ such that there exists some $\theta$ guaranteeing a good approximation, with $\theta_0$ viewed as a "best guess" of $\theta$.
5. Find some $\mathcal{B}_x$ such that $e \in \mathcal{B}_e$ implies $x \in \mathcal{B}_x$.
It should be emphasized that the static control law $\nu_s$ does not need to be
may then be carried out as before, but with $W = 0$, so that $d = ck_2 + \frac{\sigma}{2}|\theta - \theta_0|^2$
in the definition of $\nu_s$.
Theorem 7.2 only tells us that the solutions will remain bounded such
(7.26)
$\int_0^t \gamma_{e1}(|e(\tau)|)\,d\tau \;\le\; \frac{V(0) - V(t)}{c\,k_1} + \frac{d}{c\,k_1}\,t. \qquad (7.27)$
Trang 12Since I/‘, is bounded we find that
From (7.29) we see that to improve the RMS error, one must either decrease $d$ or increase $k_1$. The value of $d$ may be decreased by decreasing $k_2$ or $\bar W$, quantities which are largely determined by the control problem. If $d$ can be made arbitrarily small, the ultimate bound on $|e|$ may be made arbitrarily small.
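Before turning to the examples, the following sketch shows how a σ-modified update of the form written in (7.15) might be evaluated at one time step. It assumes a linear-in-the-parameters controller $\mathcal{F}(z,\hat\theta) = \hat\theta^{\top}\zeta(z)$, a single control input, and $V_s = \tfrac{1}{2}e^{\top}e$ (so $\partial V_s/\partial e = e^{\top}$); the regressor $\zeta$, the numerical values, and these simplifications are assumptions for illustration, not the book's general construction.

    import numpy as np

    def sigma_mod_update(theta_hat, zeta, e, beta, Gamma, sigma, theta0):
        """One evaluation of a sigma-modified update (illustrative form of (7.15)).

        theta_hat : (p,)  current parameter estimate
        zeta      : (p,)  regressor, so F(z, theta_hat) = theta_hat @ zeta
        e         : (q,)  error, with V_s = 0.5 * e @ e so dV_s/de = e
        beta      : (q,)  input vector beta(x) of the error dynamics (single input)
        Gamma     : (p,p) positive definite, symmetric adaptation gain
        sigma     : leakage gain; theta0 is the designer's best guess of theta
        """
        grad = zeta * float(beta @ e)                  # (dF/dtheta)^T beta^T (dV_s/de)^T
        return -Gamma @ (grad + sigma * (theta_hat - theta0))

    # Example call with arbitrary placeholder numbers:
    p, q = 3, 2
    rng = np.random.default_rng(0)
    dtheta = sigma_mod_update(theta_hat=rng.standard_normal(p),
                              zeta=rng.standard_normal(p),
                              e=rng.standard_normal(q),
                              beta=rng.standard_normal(q),
                              Gamma=np.eye(p), sigma=0.1, theta0=np.zeros(p))
    print(dtheta)  # integrate this, e.g. theta_hat += dt * dtheta

The $-\sigma(\hat\theta - \theta_0)$ leakage keeps $\hat\theta$ from drifting when the gradient term is small, and is one source of the constant $d$ in the bound above.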
We will now see how to apply the direct adaptive controller using the σ-modification to adjust its parameters. We will start by studying the problem in which the approximator used to define the adaptive controller does not have limits on its inputs; thus $\mathcal{B}_x = \mathbb{R}^n$ since the approximator input $x$ may take on any value. We will then study a different problem in which a fuzzy system is used with a finite domain associated with its input membership functions.
Example 7.3 Assume that the system dynamics for a particular plant may be transformed into
first two derivatives are measurable. To do this, we must define an
of the system
$\dot x_1 = \Delta_1(x) + v, \qquad (7.31)$ where $v$ is a virtual input. Define the first error variable as $e_1 = x_1 - r$.
(7.32)
A second error variable might then be defined using $e_1$, such as $e_2 =$
Ignoring $\Delta_1$ for now, consider the error $e_2 = x_2 + 3\kappa e_1/2 - \dot r$ (which is
candidate $V_s = \frac{1}{2}e_1^2 + \frac{1}{2}e_2^2$. Since
Now consider the control law $u = \nu_s(z)$ with
(7.35)
Here $z = [r, e_1, e_2, x_1, x_2]^{\top}$. Notice that $\nu_s$ may not be implemented
The direct adaptive controller, however, may be implemented since Assumption 7.2 is satisfied.
Figure 7.2 Closed-loop performance when $x$ (solid) is commanded to track a reference $r$ (dashed) defined by a square wave.
To use the direct adaptive controller, we must define a linear-in-the-parameters approximator. Let
(7.38)
all $x$, it was not necessary to include a term to account for the non-
the update law becomes
Assume that $p_1 = 1$, $p_2 = -1$, $p_3 = 2$, and $p_4 = 1$. Let $\kappa = 10$, $\eta = 1$,
shows the trajectory of the closed-loop system when $r(t)$ is a square wave. Notice that there is a bit of steady-state error and ringing. The
only well defined on the region $x \in S_x$ so long as there is some $\mathcal{B}_x$ such that
adaptive controller when using a finite fuzzy system
Example 7.4 Consider the velocity control of an automobile whose dynamics are defined by
$m\dot x = u - \rho x^2,$
where $\rho \in (0, 0.4]$ is the unknown coefficient of aerodynamic drag, $m$ is the known vehicle mass, and $x$ is the vehicle speed. If we wish to drive $x \to r(t)$, then define the error system by
$e = x - r,$
so that $\beta = 1/m$.
$u = \nu_s(z)$, where
with $\kappa > 0$ and $z = [\dot r, e, x]^{\top}$. This choice of the control law renders
Since $\rho$ is unknown, however, this static controller may not be implemented. Because the aerodynamic drag is unknown, we might want to use a fuzzy system to approximate a control term which compensates for its effects. In particular, consider the controller $u = \mathcal{F}(z,\hat\theta)$, where
$\mathcal{F}(z,\hat\theta) = m(\dot r - \kappa e) - \eta e/m + \frac{\sum_{i=1}^{p}\hat\theta_i\,\mu_i(x)}{\sum_{i=1}^{p}\mu_i(x)}, \qquad (7.42)$
with $\eta > 0$. Each $\mu_i$ is an input membership function for the fuzzy system and
The fuzzy system will be used to cancel the effects of the aerodynamic drag, while $-\eta e/m$ plays the role of the additional nonlinear damping term in (7.17). Assume that the fuzzy system is defined using $p = 10$ triangular input membership functions, and that the parameters are adjusted with the σ-modified update law according to
(7.43)
Before choosing the controller parameters, we will estimate a bound on $W$ (the magnitude of the representation error). An estimate of $W$ will be needed since it will influence our choices of the controller parameters to ensure that $\mathcal{B}_x \subseteq S_x$ (where $\mathcal{B}_x$ will also be defined shortly). When the parameters of the fuzzy system are fixed, it simply interpolates between the parameter values; when $x$ lies at one of the input membership function centers, the degree of membership is zero for the other input membership functions. Considering the aerodynamic drag term, we choose $\theta_i = \rho c_i^2$, where $c_i$ is the center of the $i$th membership function, so that the approximation will be perfect at the membership function centers.
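To make the representation-error discussion concrete, the sketch below evaluates such a fuzzy compensator numerically. The drag value $\rho = 0.3$, the speed range $[0, 30]$, and the particular triangular membership shapes are assumptions chosen for illustration; only the general structure (ten triangular memberships, a normalized weighted sum, and centers chosen so the approximation is exact at the centers) follows the example.

    import numpy as np

    rho = 0.3                                  # assumed drag coefficient, within (0, 0.4]
    centers = np.linspace(0.0, 30.0, 10)       # p = 10 membership centers over an assumed speed range
    width = centers[1] - centers[0]            # triangular half-width equal to the center spacing

    def mu(x):
        """Triangular membership degrees; adjacent functions overlap, so their sum stays positive."""
        return np.maximum(0.0, 1.0 - np.abs(x - centers) / width)

    def fuzzy_out(x, theta):
        """Normalized weighted sum  sum_i theta_i mu_i(x) / sum_i mu_i(x)."""
        m = mu(x)
        return float(theta @ m / m.sum())

    theta_ideal = rho * centers**2             # exact match to rho*x^2 at the membership centers

    for x in (5.0, 12.5, 20.0, 27.5):
        print(f"x = {x:5.1f}   fuzzy = {fuzzy_out(x, theta_ideal):8.2f}   rho*x^2 = {rho * x**2:8.2f}")

    # A grid scan gives a numerical estimate of the representation error bound:
    xs = np.linspace(0.0, 30.0, 601)
    W_est = max(abs(fuzzy_out(x, theta_ideal) - rho * x**2) for x in xs)
    print(f"estimated bound on |w|: {W_est:.3f}")

With these assumed values the worst-case error occurs midway between centers, which is exactly the kind of bound the next step estimates.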
Define the representation error $w = (\nu_s - \eta e/m) - \mathcal{F}(z,\theta)$. Notice that