Analysis and Control of Linear Systems - Chapter 6


Kalman's Formalism for State Stabilization and Estimation

We will show how, based on a state representation of a continuous-time or discrete-time linear system, it is possible to elaborate a negative feedback loop, by assuming initially that all state variables are measurable. Then we will explain how, if this is not the case, it is possible to rebuild the state with the help of an observer. These two operations bring about similar developments, which use either a pole placement or an optimization technique. These two approaches are presented successively.

6.1 The academic problem of stabilization through state feedback

Let us consider a time-invariant linear system described by the following continuous-time equations of state:

ẋ(t) = A x(t) + B u(t)   [6.1]

where x ∈ R^n is the state vector and u ∈ R^m the control vector. The problem is how to determine a control that brings x(t) back to 0, irrespective of the initial condition x(0). In this chapter, our interest is mainly in the state feedback controls, which depend on the state vector x. A linear state feedback is written as follows:

u(t) = e(t) − K x(t)   [6.2]

where e(t) is an exterior input, so that the looped system is governed by:

ẋ(t) = (A − B K) x(t) + B e(t)   [6.3]

Chapter written by Gilles DUC


Figure 6.1 State feedback linear control

Hence, the state feedback control affects the dynamics of the system, which depends on the eigenvalues of A − B K (let us recall that the poles of the open loop system are eigenvalues of A; similarly, the poles of the closed loop system are eigenvalues of A − B K).

In the case of a discrete-time system described by the equations:

x_{k+1} = F x_k + G u_k   [6.4]

the state feedback and the equations of the looped system can be written:

u_k = e_k − K x_k
x_{k+1} = (F − G K) x_k + G e_k

so that the dynamics of the system depends on the eigenvalues of F − G K.

The search for matrix K can be done in various ways. In the following section, we will show that under certain conditions, it makes it possible to choose the poles of the looped system. In section 6.4, we will present the quadratic optimization approach, which consists of minimizing a criterion based on the state and control vectors.
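As an aside, the pole-placement computation discussed here is readily carried out numerically. The following sketch uses SciPy's pole-placement routine; the system matrices and chosen poles are illustrative assumptions, not taken from the text:

```python
import numpy as np
from scipy.signal import place_poles

# Assumed example system (for illustration only)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])

# Choose closed-loop poles with strictly negative real parts
K = place_poles(A, B, [-4.0, -5.0]).gain_matrix

# The closed-loop state matrix is A - B K; its eigenvalues are the chosen poles
cl_eigs = np.linalg.eigvals(A - B @ K)
print(np.sort(cl_eigs.real))
```

The printed eigenvalues coincide with the chosen poles, confirming that the closed-loop dynamics are set by K.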


6.2 Stabilization by pole placement

6.2.1 Results

The principle of stabilization by pole placement consists of a priori choosing the poles preferred for the looped system, i.e. the eigenvalues of A − B K in continuous-time (or of F − G K in discrete-time), and then obtaining a matrix K ensuring this choice. The following theorem, due to Wonham, specifies under which condition this approach is possible.

THEOREM 6.1.– A real matrix K exists irrespective of the set of eigenvalues {λ1, …, λn}, real or conjugated complex numbers, chosen for A − B K (for F − G K respectively) if and only if (A, B) ((F, G) respectively) is controllable.

Demonstration. It is provided for continuous-time but it is similar for discrete-time as well. Firstly, let us show that the condition is necessary: if the system is not controllable, it is possible, through passage to the controllable canonical form (see Chapter 2), to express the state equations as follows:

ẋ1(t) = A11 x1(t) + A12 x2(t) + B1 u(t)
ẋ2(t) = A22 x2(t)

With the state feedback u(t) = −K1 x1(t) − K2 x2(t), the looped system becomes:

ẋ1(t) = (A11 − B1 K1) x1(t) + (A12 − B1 K2) x2(t)
ẋ2(t) = A22 x2(t)

so that, the state matrix being block-triangular, the eigenvalues of the looped system are the totality of the eigenvalues of the sub-matrices A11 − B1 K1 and A22. The eigenvalues of the non-controllable part are thus, by all means, eigenvalues of the looped system.

Let us suppose now that the system is controllable. In this part, we will assume that the system has only one control; however, the result can be extended to the


case of multi-control systems. As indicated in Chapter 2, the equations of state can be expressed in companion form (matrix rows separated by semicolons):

ẋ(t) = [0 1 0 … 0 ; 0 0 1 … 0 ; … ; 0 0 0 … 1 ; −a0 −a1 −a2 … −a_{n−1}] x(t) + [0 ; 0 ; … ; 0 ; 1] u(t)

With the state feedback

u(t) = e(t) − (k1 k2 … kn) x(t)

the looped system becomes:

ẋ(t) = [0 1 0 … 0 ; … ; 0 0 0 … 1 ; −(a0 + k1) −(a1 + k2) … −(a_{n−1} + kn)] x(t) + [0 ; … ; 0 ; 1] e(t)

We see that, by choosing the state feedback coefficients, it is possible to arbitrarily set each coefficient of the characteristic polynomial, so that we can arbitrarily set its roots, which are precisely the eigenvalues of the looped system. In addition, matrix K is thus uniquely determined.

Theorem 6.1 thus shows that it is possible to stabilize a controllable system through a state feedback (it is sufficient to take all λi with a negative real part in continuous-time, or inside the unit circle in discrete-time). More generally, it shows that the dynamics of a controllable system can be arbitrarily set by a linear state feedback.
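For a single-input system, the companion-form argument above yields a constructive expression for this unique K, known in the literature as Ackermann's formula (a standard result, not stated explicitly in the text). A minimal sketch:

```python
import numpy as np

def acker(A, B, poles):
    """Single-input pole placement via Ackermann's formula (a sketch)."""
    n = A.shape[0]
    # Controllability matrix [B, AB, ..., A^{n-1} B]
    ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    # Desired characteristic polynomial, evaluated at the matrix A
    coeffs = np.poly(poles)  # highest-degree coefficient first
    phi = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    # K = [0 ... 0 1] ctrb^{-1} phi(A)
    last_row = np.linalg.solve(ctrb.T, np.eye(n)[:, -1])
    return (last_row @ phi).reshape(1, n)

# Double integrator, poles placed at -2 and -3 (illustrative choice)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = acker(A, B, [-2.0, -3.0])
print(K)  # → [[6. 5.]]
```

Here the desired polynomial is s² + 5s + 6, and in companion form the closed-loop last row must be −(a0 + k1) = −6, −(a1 + k2) = −5, which the formula recovers directly.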


However, in this chapter we will not deal with the practical issue of choosing the eigenvalues. Similarly, we note that for a multi-variable system (i.e. a system with several controls), the choice of eigenvalues is not enough to uniquely set matrix K. Degrees of freedom are also available for the choice of the eigenvectors of matrices A − B K or F − G K. Chapter 14 will tackle these aspects in detail.

6.2.2 Example

Let us consider the double integrator described by:

ẋ(t) = [0 1 ; 0 0] x(t) + [0 ; 1] u(t)
y(t) = (1 0) x(t)

The pair (A, B) is controllable, since:

rank[B  A B] = rank[0 1 ; 1 0] = 2

Choosing the closed-loop poles as the roots of s² + 2 ξ ω0 s + ω0² leads to the state feedback matrix:

K = (ω0²  2 ξ ω0)

Figure 6.2 shows the evolution of the output and the control, in response to the initial condition x(0) = (1 1)ᵀ, for different values of ω0 and ξ: the higher ω0 is, the faster the output returns to 0, but at the expense of a stronger control, whereas the increase of ξ leads to better damped dynamics.
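The behavior just described can be reproduced in simulation. This sketch assumes the double-integrator model ẋ1 = x2, ẋ2 = u with the feedback K = (ω0²  2ξω0); the numerical values of ω0 and ξ are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

w0, xi = 2.0, 0.7                     # illustrative pole-placement parameters
K = np.array([w0**2, 2 * xi * w0])    # closed-loop poles: roots of s^2 + 2*xi*w0*s + w0^2

def closed_loop(t, x):
    # double integrator x1' = x2, x2' = u with u = -K x
    u = -K @ x
    return [x[1], u]

sol = solve_ivp(closed_loop, (0.0, 10.0), [1.0, 1.0], rtol=1e-8, atol=1e-10)
print(abs(sol.y[0, -1]))  # output x1 has returned near 0
```

Increasing w0 in this script makes the return to 0 faster but enlarges the peak value of the control u, matching the trade-off seen in Figure 6.2.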


Figure 6.2 Stabilization by pole placement

6.3 Reconstruction of state and observers

6.3.1 General principles

The disadvantage of state feedback controls, like the ones mentioned in the previous sections, is that in practice we do not always measure all the components of


state vector x. In this case, we can build a dynamic system called an observer, whose role is to rebuild the state from the information available, i.e. the controls u and all the available measures. The latter will be grouped together into a vector z (Figure 6.3).

Figure 6.3 The role of an observer

6.3.2 Continuous-time observer

The system considered is described by the equations:

ẋ(t) = A x(t) + B u(t)
z(t) = C x(t)   [6.18]

The equations of a continuous-time observer, whose state is noted x̂(t), are modeled on those of the system, but with a supplementary term:

x̂'(t) = A x̂(t) + B u(t) + L (z(t) − C x̂(t))   [6.19]

The observer equation of state includes a term proportional to the difference between the real measures z(t) and the reconstructions of the measures obtained from the observer's state, with an L gain matrix. In the case of a system with n state variables and q measures (i.e. dim(x) = dim(x̂) = n, dim(z) = q), L is an n × q matrix.

Equations [6.19] correspond to the diagram in Figure 6.4: in the lower part of the figure we see equations [6.18] of the system we are dealing with. The correction term with the L gain matrix completes the diagram.

Hence, equations [6.19] can be written as follows:

x̂'(t) = (A − L C) x̂(t) + B u(t) + L z(t)

i.e. those of a system with inputs u and z and with the state matrix A − L C. We infer that the observer is a stable system if and only if all the eigenvalues of A − L C have strictly negative real parts.


Figure 6.4 Structure of the observer

Let us now consider the reconstruction error ε(t) that appears between x̂(t) and x(t). Based on [6.18] and [6.19], we obtain:

ε'(t) = (A − L C) ε(t)

and hence the reconstruction error ε(t) tends toward 0 when t tends toward infinity if and only if the observer is stable. In addition, the eigenvalues of A − L C set the dynamics of ε(t). Hence, the problem is to determine an L gain matrix ensuring stability with a satisfactory dynamics.

6.3.3 Discrete-time observer

The same principles are applied for the synthesis of a discrete-time observer; if we seek to rebuild the state of a sampled system described by:

x_{k+1} = F x_k + G u_k
z_k = C x_k   [6.22]

the observer is written:

x̂_{k+1} = F x̂_k + G u_k + L (z_k − C x̂_k)   [6.23]


From equations [6.22] and [6.23] we infer that the reconstruction error verifies:

ε_{k+1} = (F − L C) ε_k   [6.24]

In order to guarantee the stability of the observer and, similarly, the convergence toward 0 of the error ε_k, matrix L must be chosen so that all the eigenvalues of F − L C have a modulus strictly less than 1.

According to [6.23] or [6.24], we note that the observer operates as a predictor: based on the information known at instant k, we infer an estimation of the state at instant k+1. Hence, this calculation does not need to be supposed infinitely fast, because it is enough that its result is available at the next sampling instant.
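The predictor mechanism can be sketched numerically. The matrices F, G, C and the gain L below are illustrative assumptions; L was chosen so that the eigenvalues of F − L C lie inside the unit circle:

```python
import numpy as np

# Assumed sampled system (illustrative values)
F = np.array([[1.0, 0.1], [0.0, 1.0]])
G = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.6], [1.0]])   # eigenvalues of F - L C have modulus < 1

x = np.array([[1.0], [1.0]])   # true state (unknown to the observer)
xh = np.zeros((2, 1))          # observer state, initialized at 0

for k in range(200):
    u = np.array([[0.0]])      # no control: we only watch the error converge
    z = C @ x
    # predictor form [6.23]: the estimate at k+1 uses only data known at k
    xh = F @ xh + G @ u + L @ (z - C @ xh)
    x = F @ x + G @ u

print(np.linalg.norm(x - xh))  # reconstruction error has decayed
```

Since the error obeys ε_{k+1} = (F − LC) ε_k, it converges regardless of the (here unstable) behavior of the true state.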

6.3.4 Calculation of the observer by pole placement

We note the analogy between the calculation of an observer and the calculation of a state feedback, discussed in section 6.1: there, the idea was to determine a K gain matrix that would guarantee a satisfactory dynamics to the looped system, the latter being set by the eigenvalues of A − B K (or F − G K in discrete-time). The difference lies in the fact that the matrix to determine appears on the right in the product B K (or G K), whereas it appears on the left in the product L C.

However, the eigenvalues of A − L C are the same as those of Aᵀ − Cᵀ Lᵀ, an expression in which the matrix to determine, Lᵀ, appears on the right. Choosing the eigenvalues of Aᵀ − Cᵀ Lᵀ is thus exactly a problem of stabilization by pole placement: the results listed in section 6.1 can thus be applied here by replacing matrices A and B (or F and G) by Aᵀ and Cᵀ (or Fᵀ and Cᵀ) and the state feedback K by Lᵀ.

Based on Theorem 6.1, we infer that matrix L exists for any set of eigenvalues {λ1, …, λn} chosen a priori if and only if (Aᵀ, Cᵀ) is controllable. However, we can write the following equivalences:

(Aᵀ, Cᵀ) controllable ⇔ rank[Cᵀ  AᵀCᵀ  …  (Aᵀ)^{n−1}Cᵀ] = n
⇔ rank[C ; C A ; … ; C A^{n−1}] = n ⇔ (C, A) observable   [6.26]


Hence, we can arbitrarily choose the eigenvalues of the observer if and only if the system is observable through the measures available. Naturally, the result obtained from equation [6.26] can be used for the discrete-time case by simply replacing matrix A with matrix F.

6.3.5 Behavior of the observer outside the ideal case

The results of sections 6.3.2 and 6.3.3, even if interesting, describe an ideal case which will never be achievable in practice. Let us suppose, for example, that a disturbance p(t) is applied on the system [6.18]:

ẋ(t) = A x(t) + B u(t) + E p(t)
z(t) = C x(t)   [6.27]

but observer [6.19] is not aware of it; a calculation identical to the one in section 6.3.2 then shows that the equation obtained for the reconstruction error can be written:

ε'(t) = (A − L C) ε(t) + E p(t)

so that the error no longer tends toward 0. If p(t) can be associated with a noise, Kalman filtering techniques can be used in order to minimize the variance of ε(t). We provide a preview of this aspect in section 6.5.3.

If we suppose that modeling uncertainties affect the state matrix of system [6.18], so that a matrix A′ ≠ A intervenes in this equation, then the reconstruction error is governed by the following equation:

ε'(t) = (A − L C) ε(t) + (A′ − A) x(t)

so that, there again, the error does not tend toward 0.

NOTE 6.1.– Observers [6.19] or [6.23] rebuild all the state variables, an operation that may seem superfluous if the measures available are of very good quality (especially if the measurement noises are negligible): since the observation equation already provides q linear combinations (that we will suppose independent) of the state variables, it is sufficient to reconstitute n − q others, independent from the previous ones. Therefore, we can synthesize a reduced observer, following an approach similar to the one presented in these sections (see [FAU 84, LAR 96]). However, the physical interpretation underlined in section 6.3.2, where the observer appears naturally as a physical model of the system completed by a correction term, is lost.

6.3.6 Example

Let us consider again the system of section 6.2.2:

ẋ(t) = [0 1 ; 0 0] x(t) + [0 ; 1] u(t)
y(t) = (1 0) x(t)

which is observable, since:

rank[C ; C A] = rank[1 0 ; 0 1] = 2

The observer is written:

x̂'(t) = [0 1 ; 0 0] x̂(t) + [0 ; 1] u(t) + [l1 ; l2] (y(t) − (1 0) x̂(t))

The state matrix of the reconstruction error is:

A − L C = [−l1 1 ; −l2 0]

whose characteristic polynomial is s² + l1 s + l2. Choosing the eigenvalues of the observer as the roots of s² + 2 ξ0 ω0 s + ω0² leads to:

l1 = 2 ξ0 ω0,  l2 = ω0²   [6.33]

Figure 6.5 Observer by pole placement

Figure 6.5 shows the evolution of the two state variables in response to the initial condition x(0) = (1 1)ᵀ, together with the evolutions of the state variables of the observer initialized at x̂(0) = (0 0)ᵀ, for different values of ω0: the higher ω0 is, the faster the observer's state joins the system's state.


6.4 Stabilization through quadratic optimization

6.4.1 General results for continuous-time

Let us consider again system [6.1], with an initial condition x(0) ≠ 0. The question now is to determine the control that makes it possible to bring the state x(t) back to 0, while minimizing the criterion:

J = ∫₀^∞ (x(t)ᵀ Q x(t) + u(t)ᵀ R u(t)) dt   [6.34]

where Q and R are two symmetric matrices, the first positive semi-definite and the second positive definite:

Q = Qᵀ ≥ 0,  R = Rᵀ > 0   [6.35]

(hence, we have xᵀ Q x ≥ 0 ∀x and uᵀ R u > 0 ∀u ≠ 0). Since matrix Q is symmetric, we will write it in the form Q = Hᵀ H, where H is a full rank rectangular matrix.

The solution of the problem is provided by Theorem 6.2

THEOREM 6.2.– if conditions [6.35] are verified, and also if:

(A, B) is stabilizable and (H, A) is detectable   [6.36]

there is a unique, symmetric and positive semi-definite matrix P, which is the solution of the following equation (called Riccati's equation):

Aᵀ P + P A − P B R⁻¹ Bᵀ P + Q = 0   [6.37]

The optimal control is then the state feedback:

u*(t) = −K x(t)  with  K = R⁻¹ Bᵀ P   [6.38]

It guarantees the asymptotic stability of the looped system:

ẋ(t) = (A − B R⁻¹ Bᵀ P) x(t)

The value obtained for the criterion is then J* = x(0)ᵀ P x(0). ■

Elements of demonstration

The condition of stabilizability of (A, B) is clearly a necessary condition for the existence of a control that stabilizes the system. We will admit that it is also a sufficient condition for the existence of a matrix P, symmetric and positive semi-definite, solution of Riccati's equation [MOL 77]. If, moreover, (H, A) is detectable, we show that this matrix is unique [ZHO 96].

If (A, B) is stabilizable, we are sure that there is a control for which J (which could a priori be infinite) takes a finite value: since the non-controllable part is stable, any state feedback placing all the poles of the controllable part in the left half-plane ensures that x(t) and u(t) are expressed as sums of exponential functions that tend toward 0.

Conversely, any control u(t) leading to a finite value of J ensures that x(t)ᵀ Q x(t) tends toward 0, and hence that H x(t) tends toward 0. Since (H, A) is detectable, this condition ensures that x(t) tends toward 0.

Hence, let us define the function V(x(t)) = x(t)ᵀ P x(t), where P is the positive semi-definite solution of [6.37]. We obtain:


V'(x(t)) = −x(t)ᵀ Q x(t) − u(t)ᵀ R u(t) + (u(t) − u*(t))ᵀ R (u(t) − u*(t))

by noting u* the control given by [6.38]. For any stabilizing control u(t), integrating from 0 to infinity (V(x(t)) tends toward 0) gives:

J = x(0)ᵀ P x(0) + ∫₀^∞ (u(t) − u*(t))ᵀ R (u(t) − u*(t)) dt

Since R is positive definite, J is minimal for u(t) ≡ u*(t) and thus has the announced value. As indicated above, the detectability of (H, A) ensures the asymptotic stability of the looped system.

NOTE 6.2.– When P > 0, the function V(x(t)), which is then positive definite and whose derivative is negative definite, is a Lyapunov function (condition P > 0 is verified if and only if (H, A) is observable [MOL 77]).
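In practice, Riccati's equation [6.37] is solved numerically rather than by hand. A sketch using SciPy's algebraic Riccati solver; the system data are an assumed example (the double integrator, with Q = HᵀH for H = (1 0)):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed example data (illustrative)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.0])        # Q = H^T H with H = (1 0); (H, A) is observable
R = np.array([[1.0]])

# Unique positive semi-definite solution of A^T P + P A - P B R^-1 B^T P + Q = 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # optimal gain K = R^-1 B^T P

# The looped system x' = (A - B K) x is asymptotically stable
print(np.linalg.eigvals(A - B @ K).real)   # all strictly negative
```

For this particular data the gain works out to K = (1  √2), and the closed-loop poles sit on the classic ±45° rays of the left half-plane.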

6.4.2 General results for discrete-time

The results enabling the discrete-time quadratic optimization are the same, with a few changes in the equations describing the solution. Let us consider the system [6.4] and the criterion to minimize:

J = Σ_{k=0}^∞ (x_kᵀ Q x_k + u_kᵀ R u_k)   [6.40]

matrices Q and R having the same properties as in the previous section (particularly with the conditions [6.35]). The solution of the problem is provided by Theorem 6.3.

THEOREM 6.3.– if conditions [6.35] are verified, and also if:

(F, G) is stabilizable and (H, F) is detectable   [6.41]

there is a unique matrix P, symmetric and positive semi-definite, solution of the following equation (called discrete Riccati's equation):

P = Fᵀ P F − Fᵀ P G (Gᵀ P G + R)⁻¹ Gᵀ P F + Q   [6.42]

The optimal control is then the state feedback:

u_k = −K x_k  with  K = (Gᵀ P G + R)⁻¹ Gᵀ P F   [6.43]

The value obtained for the criterion is then J* = x(0)ᵀ P x(0). ■
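The discrete Riccati equation has a dedicated solver as well; the sampled system below is an assumed example (a discretized double integrator with a 0.1 s period):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Assumed sampled system (illustrative)
F = np.array([[1.0, 0.1], [0.0, 1.0]])
G = np.array([[0.005], [0.1]])
Q = np.diag([1.0, 0.0])
R = np.array([[1.0]])

# Solution of the discrete Riccati equation
P = solve_discrete_are(F, G, Q, R)
K = np.linalg.solve(G.T @ P @ G + R, G.T @ P @ F)   # u_k = -K x_k

# Closed-loop eigenvalues of F - G K lie strictly inside the unit circle
print(np.abs(np.linalg.eigvals(F - G @ K)))
```

The returned P can be checked against the fixed-point form of the equation term by term, which is a useful sanity check when wiring up a new solver.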

6.4.3 Interpretation of the results

The results presented above require the following notes:

– the optimization of a criterion of the form [6.34] or [6.40] does not have to be considered as a goal in itself but as a particular means to calculate a control, which has the advantage of leading to a linear state feedback;

– however, we can attempt to give a physical significance to this criterion: it creates a balance between the objective (we want to make x return to 0, and the evolution of x penalizes the criterion through matrix Q) and the necessary expense (the controls u applied penalize the criterion through matrix R);

– the choice of the weighting matrices Q and R depends on the user, as long as conditions [6.35] and [6.36] or [6.41] are satisfied. Without getting into details, it should be noted that if all coefficients of Q increase, the evolution of x is more heavily penalized, at the expense of the evolution of the controls u; thus the optimization of the criterion leads to a solution ensuring a faster dynamic behavior for the looped system, but at the expense of stronger controls. Conversely, the increase of all coefficients of R will lead to softer controls and to a slower dynamic behavior;

– the two conditions in [6.36] or [6.41] are not of the same type: in fact, we can always fulfill the condition of detectability by a careful choice of matrix Q.


However, the available controls impose matrix B (or G), so that there is no way of acting on the condition of stabilizability;

– the criterion optimization provides only matrix K of expression [6.38] or [6.43]. In the absence of input (e ≡ 0), the control ensures the convergence toward the state of equilibrium x = 0. An input e makes the system evolve (in particular, a constant input makes it possible to orient the system toward another point of equilibrium, different from x = 0).

6.4.4 Example

Let us consider again the system of section 6.2.2, with the criterion:

J = ∫₀^∞ (q x1(t)² + r u(t)²) dt,  i.e.  Q = [q 0 ; 0 0],  R = r

where q and r are positive coefficients. Hence, we have H = (√q  0) and we can verify that (H, A) is observable:

rank[H ; H A] = rank[√q 0 ; 0 √q] = 2

In section 6.2.2 we saw that (A, B) is controllable, so that hypotheses [6.36] are verified. The positive semi-definite solution of Riccati's equation and the state feedback matrix are written, by noting α = √(q r) and β = √(q / r):

P = [α √(2β)  α ; α  r √(2β)],  K = (β  √(2β))

We note that the latter depends only on the ratio q/r and not on q and r separately. Figure 6.6 shows the evolution of the control and the output, in response to the initial condition x(0) = (1 1)ᵀ, for different values of q/r: the higher q/r is, the faster the output returns to 0, but at the expense of a stronger control.
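The fact that the gain depends only on the ratio q/r can be checked numerically; this sketch assumes the double-integrator model used in the earlier examples:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

def lqr_gain(q, r):
    # Solve Riccati's equation for Q = diag(q, 0), R = r, then K = R^-1 B^T P
    P = solve_continuous_are(A, B, np.diag([q, 0.0]), np.array([[r]]))
    return np.linalg.solve(np.array([[r]]), B.T @ P)

# Same ratio q/r gives the same gain, as noted in the text
K1 = lqr_gain(4.0, 1.0)
K2 = lqr_gain(8.0, 2.0)
print(np.allclose(K1, K2))  # → True
```

Scaling q and r by a common factor scales the criterion itself but leaves the minimizing feedback unchanged, which is why only the ratio matters.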


Figure 6.6 Stabilization by quadratic optimization
