
DOCUMENT INFORMATION

Title: Design of an optimal state observer using the Gauss-Newton algorithm in output feedback NMPC
Authors: Do Thi Tu Anh, Nguyen Doan Phuoc
Institution: Hanoi University of Science and Technology
Subject areas: Control Systems, Optimization, Nonlinear Model Predictive Control
Document type: Journal Article
Year of publication: 2014
City: Ha Noi
Pages: 6
File size: 235.93 KB


Contents



Optimal State Observer Design Using Gauss-Newton Algorithm in Output Feedback NMPC

Do Thi Tu Anh, Nguyen Doan Phuoc*

Hanoi University of Science and Technology
No. 1 Dai Co Viet Str., Ha Noi, Viet Nam

Received November 05, 2013; accepted April 22, 2014

Abstract

In order to utilize state feedback controllers in output feedback nonlinear model predictive control (NMPC), appropriate state observers are required such that the system performance will not be affected by the presence of the state observer in combination with the state feedback controller. Due to the optimality nature of NMPC, an optimal observer is more eligible for the existing state feedback predictive controller than other observers, according to the separation principle. This paper presents an optimal observer which preserves the system performance. The optimal observer is designed based on the iterative Gauss-Newton optimization algorithm.

Keywords: State observer, NMPC, Output feedback control

1. Introduction

Model predictive controllers are based on optimization techniques and are applied mainly to discrete-time systems

x_{k+1} = f(x_k, u_k),   y_k = h(x_k, u_k)   (1)

where x_k = (x_1(k), …, x_n(k))^T is the vector of n independent state variables of the system, u_k = (u_1(k), …, u_m(k))^T is the vector of m input signals, and y_k = (y_1(k), …, y_l(k))^T is the vector of l output signals of the system.

The optimization problem in model predictive control is solved repeatedly at every time instant, whose duration is exactly the sampling period T_s of the system input u(t) and output y(t). Specifically, in order to obtain u_k = u(kT_s) from the input and y_k = y(kT_s) from the output at time instant t = kT_s, k = 0, 1, …, the controller utilizes a predictive model, often constructed from the mathematical model of the system, to determine a future control sequence u_k, u_{k+1}, …, u_{k+M−1} in a horizon of length M, which minimizes the following objective function:

Σ_{i=0}^{M−1} p_i(u_{k+i}, ŷ_{k+i}) → min   (2)

where ŷ_{k+i} denotes the output of the predictive model and p_i(·) denotes the function of the prediction error at time instant t = (k+i)T_s in the future [1,2].

* Corresponding author. Tel.: (+844) 3869 2985

It is widely known that output feedback linear model predictive control (LMPC) has achieved great success in many applications in process industries [3,4]. Nonlinear model predictive control (NMPC), however, has been developed mainly in state feedback form.

In order to convert a predictive controller from state feedback into dynamic output feedback form, one could think of combining the existing state feedback controller with an appropriate state observer. If the system performance is preserved by this combination, the controller-observer pair is said to satisfy the separation principle.

Although there have been many nonlinear observers with good approximation behaviour, such as the Lipschitz observer, none of them has been successfully applied to output feedback NMPC according to the separation principle.

Fig. 1. Principle of combining an observer with the state feedback predictive controller: a window of N observations followed by predictions over the remainder of the horizon M.


In spite of this fact, since the design of the state feedback predictive controller involves the solution of the optimization problem (2), an optimal observer with the same structure of the objective function, if employed with the controller, will not affect the performance of the resulting output feedback control system. Therefore, we address in this paper the optimal observer design problem to be used in the output feedback NMPC strategy.

Once the observer and the predictive controller have objective functions of the same form, we can combine them so that the performance of the resulting closed-loop system can be analyzed. Fig. 1 illustrates this idea. For the whole receding horizon M along the time axis, the subinterval [k+N, k+M−1] contains the predicted values of the system input and output. The current time is k+N−1. The objective function (2) of the predictive controller is now rewritten as:

Σ_{i=N}^{M−1} p_i(u_{k+i}, ŷ_{k+i}) → min   (3)

and the future optimal control sequence is u*_{k+N}, …, u*_{k+M−1}, whose first element u*_{k+N} will be applied to the system. The remaining subinterval [k, k+N−1] contains the measurements of the system input and output. They are used to estimate the system state x_k, denoted as x̂_k, at time instant t = kT_s, that satisfies the following criterion:

Σ_{i=0}^{N−1} q_i(y_{k+i} − ŷ_{k+i}) → min   (4)

Theoretically, if the running cost q_i(·) as well as the parameters N, M are selected such that the objective functions (3) and (4), defined on those two subintervals, together constitute an objective function of the form (2) of the state feedback predictive controller, the observer (4) will not have any effect on the performance of the closed-loop system. The system performance is "preserved" in the sense that the stability of the composite moving horizon system, comprising a stabilizing state feedback predictive controller and a moving horizon observer, is guaranteed [6]. Since the proposed optimal observer is none but a moving horizon observer, it is obvious that the closed-loop system is stable.

In this paper, we assume the availability of the state feedback predictive controller where the objective function in (2) is defined as a quadratic function of the prediction errors and control inputs. The objective function of the proposed optimal observer is quadratic in the estimation errors in order to conform to the quadratic structure of p_i(·). The optimization problem is solved by the iterative Gauss-Newton method, which does not need to compute the second derivative of a multivariate function as well as the inverted Hessian matrix, as in the Newton-Raphson algorithm [2].

2. Optimal observer design

Consider a discrete-time nonlinear MIMO system described by the state-space model in (1). Assume that the system state x_k is unknown. The observer design problem with observation window N is stated as follows: every time the window moves along the time axis by a sampling period T_s, corresponding to setting the index k := k + 1, one needs to find an estimate x̂_k of the system state based on an approximation of the system model (1):

x̂_{i+1} = f(x̂_i, u_i)   (5)

and on the input and output measurements:

(u_{k+i}, y_{k+i}),  i = 0, 1, …, N−1   (6)

within the observation window, such that the difference between x̂_k and the actual value x_k, observed from the output, is minimized.

Specifically, from N pairs of consecutive measurements (6) and the model (5), we have:

x̂_{k+i} = f(… f(f(x̂_k, u_k), u_{k+1}) …, u_{k+i−1}) = f_i(x̂_k, U_k)   (7)

where U_k = (u_k, …, u_{k+N−1}) and

f_i(x̂_k, U_k) = x̂_k for i = 0,   f_i = f ∘ f ∘ … ∘ f (i times) for i ≥ 1.

The error e_{k+i} observed from the system output at time instant k + i then becomes:

e_{k+i} = y_{k+i} − h(f_i(x̂_k, U_k), u_{k+i})   (8)


Consequently, the weighted sum of squares of the observation errors for the whole observation window is given by:

Q(x̂_k) = Σ_{i=0}^{N−1} e_{k+i}^T P e_{k+i}   (9)

where P = P^T > 0 denotes an arbitrary weighting matrix. We can then select this matrix so as to make the form of the functions under the sum notation conform to that of the functions p_i(·) of the state feedback predictive controller in (2).
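Numerically, the window cost (9) amounts to running the model forward through the window and collecting the output residuals. A minimal Python sketch, assuming generic transition and output maps f(x, u) and h(x, u) (all names here are illustrative, not from the paper):

```python
import numpy as np

def stacked_residuals(f, h, x_hat, U, Y):
    """Stack e_{k+i} = y_{k+i} - h(f_i(x_hat, U), u_{k+i}) for i = 0..N-1."""
    x = np.atleast_1d(np.asarray(x_hat, dtype=float))
    residuals = []
    for u, y in zip(U, Y):
        residuals.append(np.atleast_1d(y - h(x, u)))  # error at this instant
        x = f(x, u)                                   # advance the model one step
    return np.concatenate(residuals)
```

With a block-diagonal weight built from P, the window cost is then simply r @ P_bar @ r for r = stacked_residuals(...).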

Finally, once the objective function of the observation errors (9) is obtained, the problem of finding an estimated state x̂_k* which is most appropriate for the discrete-time system (1) from its measurements (6) reduces to the unconstrained optimization problem:

x̂_k* = arg min_{x̂_k} Q(x̂_k)   (10)

We will now solve the optimization problem (10) using the Gauss-Newton iterative method. Notice that in equation (8), u_{k+i}, i = 0, 1, …, N−1, is known from the input measurements; it is hence possible to write h(f_i(x̂_k, U_k), u_{k+i}) =: h_i(x̂_k), and the objective function (9) can be rewritten as:

Q(x̂_k) = q(x̂_k)^T P̄ q(x̂_k) → min   (11)

where q(x̂_k) = col(e_k(x̂_k), …, e_{k+N−1}(x̂_k)) and P̄ = diag(P, …, P) is positive definite.

It is now desired to determine Δx̂_k from x̂_k[s] such that at x̂_k[s+1] = x̂_k[s] + Δx̂_k, the linear approximation of q(·), i.e.,

q(x̂_k[s+1]) = q(x̂_k[s] + Δx̂_k) ≈ q(x̂_k[s]) + J_s Δx̂_k

will minimize Q(x̂_k). Let

J_s = ∂q(x̂_k)/∂x̂_k |_{x̂_k = x̂_k[s]}   (12)

be the Jacobian matrix of q(·) at x̂_k[s]; this leads to

Δx̂_k^T J_s^T P̄ J_s Δx̂_k + 2 Δx̂_k^T J_s^T P̄ q(x̂_k[s]) → min

If the number of state variables satisfies n ≤ lN, the matrix J_s^T P̄ J_s is invertible and the last quadratic optimization problem can be explicitly solved as:

Δx̂_k = −(J_s^T P̄ J_s)^{−1} J_s^T P̄ q(x̂_k[s])   (13)

Thus,

x̂_k[s+1] = x̂_k[s] + Δx̂_k = x̂_k[s] − (J_s^T P̄ J_s)^{−1} J_s^T P̄ q(x̂_k[s])   (14)

and (14) is a recurrence formula to construct a sequence x̂_k[s] from an initial guess x̂_k[0] which converges towards the minimizer x̂_k* of the optimization problem (10).

Like other iterative optimization methods, the Gauss-Newton algorithm may not yield a global solution unless the problem is convex. Without convexity, we can reasonably expect only a local solution to (11).

The Gauss-Newton algorithm terminates when at least one of the following conditions is met:

- The magnitude of the gradient of q(·)^T P̄ q(·), i.e., of J_s^T P̄ q(·), drops below a threshold ε_1.
- The relative change in the magnitude of Δx̂_k drops below a threshold ε_2.
- A maximum number of iterations s_max is completed.

We summarize the above optimization algorithm as follows.

Gauss-Newton algorithm

1. Select an initial guess x̂_k[0] = x̂_{k,0} and positive numbers ε_1, ε_2 and s_max.
2. Perform the following steps successively with s = 0, 1, …:
a) Check if either of the terminating conditions is satisfied. If it is true, stop the algorithm and export the answer x̂_k* = x̂_k[s]; otherwise go to step b).


b) Compute x̂_k[s+1] from x̂_k[s] according to (12), (13) and (14).
c) Set s := s + 1 and go to step a).
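The iteration (12)-(14), together with the stopping tests, can be sketched in Python as follows. The Jacobian is approximated here by forward differences rather than derived analytically, and all names are illustrative:

```python
import numpy as np

def gauss_newton(q, x0, P_bar, eps1=1e-8, eps2=1e-10, s_max=100):
    """Minimize Q(x) = q(x)^T P_bar q(x) by the Gauss-Newton recurrence (14)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for s in range(s_max):
        r = q(x)
        # forward-difference approximation of the Jacobian J_s of q at x[s]
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = 1e-7
            J[:, j] = (q(x + dx) - r) / 1e-7
        grad = J.T @ P_bar @ r                 # gradient of Q (up to a factor 2)
        if np.linalg.norm(grad) < eps1:        # stopping test 1: small gradient
            break
        # normal equations (J^T P_bar J) step = -grad, cf. (13)
        step = np.linalg.solve(J.T @ P_bar @ J, -grad)
        x = x + step                           # update (14)
        if np.linalg.norm(step) < eps2 * (1.0 + np.linalg.norm(x)):
            break                              # stopping test 2: small step
    return x
```

For a well-posed problem (n ≤ lN with J of full column rank) the normal-equation matrix is invertible and the plain solve suffices.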

In principle, the initial guess x̂_k[0] can be arbitrarily selected. However, the algorithm may converge slowly or not at all if x̂_k[0] is far from the minimizer. Since in the proposed observer design the optimization problem is solved repeatedly at each sampling instant to obtain the best approximation of the state at that instant, we will utilize the result x̂*_{k−1} of the previous iterative procedure as the initial guess x̂_k[0] for the next one. Furthermore, to avoid the case that the matrix J_s^T P̄ J_s is ill-conditioned, i.e., it is invertible but can numerically run into problems, the equation (13) would be replaced by

(J_s^T P̄ J_s) Δx̂_k = −J_s^T P̄ q(x̂_k[s])

whose numerical solution can be obtained by decomposition [7].
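One way to realize such a decomposition-based solve is to factor the weight with a Cholesky decomposition and use a least-squares solver, which avoids forming the normal-equation matrix explicitly. A sketch with illustrative names:

```python
import numpy as np

def gn_step(J, r, P_bar):
    """Gauss-Newton step from (13) without an explicit matrix inverse.

    With P_bar = L L^T (Cholesky), minimizing ||L^T (J dx + r)||^2 is
    equivalent to solving the normal equations (J^T P_bar J) dx = -J^T P_bar r,
    but is better conditioned numerically.
    """
    L = np.linalg.cholesky(P_bar)   # requires P_bar symmetric positive definite
    dx, *_ = np.linalg.lstsq(L.T @ J, -(L.T @ r), rcond=None)
    return dx
```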

As a consequence, we come up with an algorithm for obtaining the optimal state estimates x̂_k*, k = 0, 1, …, from the input and output measurements (6) within the observation window N as follows.

Optimal observer algorithm

1. Construct the objective function Q(x̂_k) according to (9) and then determine q(x̂_k) from (11).
2. Select an initial guess x̂_{−1}.
3. Perform the following steps successively with k = 0, 1, …:
a) Compute x̂_k* by using the Gauss-Newton algorithm.
b) Set k := k + 1 and go to step a).

The next theorem states a sufficient condition for the convergence of the proposed observer.

Theorem: If the system (1) with continuous functions f and h is uniformly observable and the summation (9) with N = ∞ converges, then the proposed optimal observer is asymptotic.

Fig. 2. Time responses of the optimal state observer when u(k) = 0.5 sin(k) and N = 3.

Fig. 3. Time responses of the optimal state observer with u(k) = 2 and different values of N.

Proof: We see that when the observation window expands to infinity, i.e. N → ∞, it follows that U_k → U = (u_k), k = 0, 1, …. Thus, for the convergent infinite sum (9), the expression under the sum notation will converge to 0. Because of the positive definiteness of the matrix P, this is equivalent to:

y_{k+i} − h(f_i(x̂_k, U), u_{k+i}) → 0   (15)

Combining this with the uniform observability property of the system, i.e. (15) holds for all u_k, we conclude that x̂_k → x_k from the evident fact that

y_{k+i} = h(x_{k+i}, u_{k+i}) = h(f_i(x_k, U), u_{k+i}).


3. Numerical example

Consider a first-order discrete-time nonlinear system described by:

x_{k+1} = −x_k^3 + u_k
y_k = x_k + v_k   (16)

where v_k =: v(k) is some sensor noise. We apply the optimal observer algorithm with the initial system state x_0 = 0, the initial observer state x̂_{−1} = 0, and the Gauss-Newton terminating conditions ε_1 = ε_2 = 10^{−4}, s_max = 100. To show the effectiveness of the developed technique, we compare the resulting estimates with the true state in all following simulations.

Given the input u(k) = 0.5 sin(k), k ≥ 0, and the observation window N = 3, Fig. 2 shows that the time response of the optimal observer when v(k) has normal distribution over the interval [−0.1, 0.1] (dash-dotted) is almost the same as the true response (solid). In particular, the time response of the observer when v(k) = 0 (dashed) and that of the exact system are identical. In other words, the optimal observer recovers the exact state in the noiseless case.
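The noiseless exact-recovery behaviour is easy to reproduce. Below is a compact, self-contained sketch assuming the example system x_{k+1} = −x_k^3 + u_k with noiseless output y_k = x_k, the input u(k) = 0.5 sin(k), window N = 3, identity weighting, and warm starts taken from the previous estimate as in the observer algorithm (all code names are illustrative):

```python
import numpy as np

f = lambda x, u: -x**3 + u   # assumed state transition of example (16)
h = lambda x: x              # output map in the noiseless case v(k) = 0

def window_residuals(x0, U, Y):
    """e_{k+i} = y_{k+i} - h(f_i(x0, U)) over the observation window."""
    r, x = [], x0
    for u, y in zip(U, Y):
        r.append(y - h(x))
        x = f(x, u)
    return np.array(r)

def observe(x_init, U, Y, s_max=100, eps=1e-12):
    """Scalar Gauss-Newton iteration on the window criterion (P = I)."""
    x = float(x_init)
    for _ in range(s_max):
        r = window_residuals(x, U, Y)
        J = (window_residuals(x + 1e-7, U, Y) - r) / 1e-7  # forward differences
        g = float(J @ r)
        if abs(g) < eps:
            break
        x -= g / float(J @ J)   # (J^T J)^{-1} J^T r for a scalar state
    return x

# simulate the noiseless system, then run the moving-horizon observer
N = 3
u_seq = [0.5 * np.sin(k) for k in range(20)]
xs = [0.0]
for u in u_seq:
    xs.append(f(xs[-1], u))
ys = [h(x) for x in xs]

x_hat, est = 0.4, []          # deliberately wrong initial guess
for k in range(len(u_seq) - N + 1):
    x_hat = observe(x_hat, u_seq[k:k + N], ys[k:k + N])
    est.append(x_hat)         # est[k] estimates the true state xs[k]
```

In this noiseless setting each window estimate matches the true state to within the iteration tolerance, mirroring the exact recovery seen in Fig. 2.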

Moreover, the time responses of the observer with three different values of the observation window N are shown in Fig. 3. For u(k) = 2 and sensor noise of normal distribution over [−0.1, 0.1], the plots confirm our finding that increasing N up to 8 improves the estimation accuracy of the observer. It was also found, however, in this example that the algorithm fails for N ≥ 9, since the composite function f_i(·) defined as in (7) approaches infinity and hence q(·) is undetermined. Therefore, in contrast to the theory that the observation window can be arbitrarily large, the choice of N should be taken with care.

Notice that although the system (16) is uniformly observable (as the output depends linearly on the state), the observation window is finite, i.e., the assumptions of the theorem in section 2 are not fully satisfied; nevertheless, the estimates still converge to the actual states. This is not in contradiction with the stated theorem, since the theorem gives only a sufficient condition for the convergence of the observer.

Further investigation into the proposed optimal observer concerns the convergence property of the iterative algorithm at each sampling instant. Specifically, the Gauss-Newton algorithm is compared to the Newton-Raphson one when both are applied to the optimal observer design. The detailed description of the Newton-Raphson observer has been presented in [2]. Here, we select the terminating conditions for the Newton-Raphson algorithm to be the norm of the gradient of the objective function less than ε_1 and the maximum number of iterations equal to s_max. As shown in Fig. 4 with u(k) = 0.8 and N = 3, the two methods give the same optimal values of the state at almost all simulation sampling instants, except at k = 16 and k = 28, where the estimates obtained by the Newton-Raphson method fail to bring the gradient norm below ε_1 and the returned values are just those at the maximum iteration s = s_max. This can be explained by the conjecture that the Newton-Raphson procedures at those instants are not properly initialized.

The effect of the amplitude of the sensor noise has also been studied through simulation results (not shown). It was found that, as the amplitude of the noise becomes comparable to that of the output, for instance when the output tends to zero, the performance of the observer with respect to N becomes worse, because the output measurements are then dominated by the noise. In practice, however, a non-zero output amplitude is often required for the system output to follow a non-zero reference in a model predictive control strategy, and hence the effect of sensor noise is not vital.

Fig. 4. Time responses of the optimal state observer.


4. Conclusions and future work

In this paper, we have presented a synthesis approach of an optimal state observer for discrete-time nonlinear systems, whose objective function is defined in terms of a finite horizon quadratic function to be minimized at each sampling instant. The use of the Gauss-Newton method in the optimal observer algorithm leads to excellent estimation of the system state under sensor noise of sufficiently small amplitude. This has been shown in an illustrative numerical example.

In general, it can be concluded that combining an optimal nonlinear observer with an NMPC strategy is a promising route to output feedback model predictive control. Therefore, further research on the separation principle, i.e., whether the performance of the state feedback predictive control can be recovered by the considered optimal observer, is required. The first step in this research would be to investigate the closed-loop stability of the observer-based NMPC system. Relaxed arguments of dynamic programming might lead to some further development in this matter. This will be the subject of future research.

References

[1] Findeisen, R. and Allgöwer, F. (2007): An introduction to nonlinear model predictive control. Research report, University of Stuttgart.
[2] Tu Anh, D.T. and Phuoc, N.D. (2013): Thiết kế bộ quan sát trạng thái tối ưu cho bộ điều khiển NMPC phản hồi đầu ra. To be presented at the Vietnamese Conference on Control and Automation VCCA2013, Da Nang.
[3] Tu Anh, D.T. and Phuoc, N.D. (2013): Giới thiệu về điều khiển dự báo. Phần I: Hệ tuyến tính. Proceedings of the Scientific Conference, Faculty of Electronics Engineering, Thai Nguyen University of Technology, pp. 129-138.
[4] Wang, L. (2009): Model Predictive Control System Design and Implementation Using MATLAB. Springer.
[5] Besançon, G. (2007): Nonlinear Observers and Applications. Springer.
[6] Michalska, H. and Mayne, D.Q. (1995): Moving horizon observers and observer-based control. IEEE Transactions on Automatic Control, 40(6), 995-1006.
[7] Golub, G.H. and Van Loan, C.F. (1996): Matrix Computations. Johns Hopkins University Press.
