RECEDING HORIZON CONTROL: AN OVERVIEW AND SOME EXTENSIONS FOR CONSTRAINED CONTROL OF DISTURBED NONLINEAR SYSTEMS (INVITED PAPER)
Nguyen Doan Phuoc1,*, Tran Duc Thuan2
Abstract: Receding horizon control is a common control concept covering control approaches whose parameters are updated frequently along the time axis by using past process information. Various methods of receding horizon control have been proposed, among them also optimization based receding horizon control methods, often known as model predictive control (MPC). This paper gives a rough overview of MPC methods together with their main advantages as well as disadvantages. From this point, the paper proposes a nonlinear receding horizon control strategy which can be applied to constrained output tracking control by output feedback for a wide range of nonlinear objects that are additionally perturbed by system disturbances. All output feedback control methods corresponding to this proposed strategy are established on the basis of piecewise linear quadratic optimization subject to the required constraints for state feedback control, and are then combined with either a suitable system state observer (EKF/UKF) for noise filtering or a disturbance attenuation unit, to form a conforming output feedback receding horizon controller.
Keywords: MPC, EKF/UKF, Adaptive control, Tracking control, Receding horizon control, Constrained optimization, LQR
1 INTRODUCTION
Receding horizon control, with its well known representative named model predictive control (MPC), is an advanced method of process control which has been applied successfully in industry for several decades [1]. MPC uses a mathematical process model to predict future changes of the process dynamics from the system states measured at the current time instant. These predicted future changes of the process dynamics are then used to compute inputs that hold the process outputs close to desired values, while honoring constraints on both process states and process inputs. Fig. 1 illustrates the principal structure of MPC with its three main components: the prediction model, the objective function and a suitable optimization algorithm.
Figure 1: Basic structure of a closed loop control system using MPC
Owing to the complexity of predicting the outputs y_{k+i} at the current time instant k over the control horizon 0 ≤ i ≤ N, and moreover to allow the subsequent use of an appropriate constrained optimization algorithm, the application range of MPC in practice was initially restricted to discrete time linear systems described by the discrete state model:

    x_{k+1} = A x_k + B u_k,   y_k = C x_k,   x_k ∈ R^n, u_k ∈ R^m, y_k ∈ R^r        (1)
or by the discrete transfer function [2]:

    G(z) = B(z^{-1}) / A(z^{-1}) = (b_0 + b_1 z^{-1} + ... + b_m z^{-m}) / (a_0 + a_1 z^{-1} + ... + a_n z^{-n}).        (2)
Since most real processes in practice are not linear, the application of MPC necessarily requires a linearization of the process model over a small operating range. This obviously causes an undesired effect on the system performance. To avoid this effect of linearization, some nonlinear approaches were proposed in [3]. However, this technique for nonlinear model predictive control (NMPC) additionally requires a penalty term in the objective function in order to guarantee the stability of the closed loop system. Unfortunately, the question of how to choose this penalty function suitably is still open, even today.
To overcome all these circumstances, moving LQRs/LQGs along the time axis appear to be a promising remedy, and they are the main content of the extension proposed in this paper.
2 CONVENTIONAL MODEL PREDICTIVE CONTROL METHODS FOR
DISCRETE TIME LINEAR SYSTEMS: A ROUGH OVERVIEW
MPC is based on iterative, finite horizon optimization of a process model. At the current time instant t_k = kT, where T is the sampling time, the current process states x_k are measured, and together with the past outputs y_{k-j}, j = 1, 2, ..., M, a cost minimizing control strategy is computed via a numerical optimization algorithm for a relatively short time horizon [k, k+N) in the future, yielding the future inputs u_{k+i}, i = 0, 1, ..., N. Only the first input value u_k of them is sent to the process; then the calculations are repeated starting from the now current states x_{k+1}, yielding a new control u_{k+1}.
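To make this receding horizon principle concrete, the following Python sketch shows the generic loop described above; the functions solve_finite_horizon, measure_state and apply_input are hypothetical placeholders for the plant interface and the optimization step, not part of the original paper.

```python
def mpc_loop(solve_finite_horizon, measure_state, apply_input, N=10, steps=100):
    """Generic receding horizon loop: optimize over [k, k+N), apply only the first input.

    solve_finite_horizon(x, N) -> list of N future inputs (placeholder for the QP/SQP step)
    measure_state()            -> current state x_k
    apply_input(u)             -> sends u_k to the process for one sampling interval T
    """
    for k in range(steps):
        x_k = measure_state()                    # current states at t_k = k*T
        u_plan = solve_finite_horizon(x_k, N)    # future inputs u_k, ..., u_{k+N-1}
        apply_input(u_plan[0])                   # only the first value is implemented
        # at k+1 the whole optimization is repeated with the horizon shifted forward
```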
Nowadays, many basic MPC methods are available, such as [2]:
Model algorithmic control and Dynamic matrix control,
Generalized predictive control,
State feedback MPC
and they are all classified mainly by the predictive model and the optimization algorithm used in them.
2.1 Model algorithmic control (MAC)
The MAC uses the impulse response {g_k} of a SISO (single input, single output) process:

    G(z) = Z{g_k} = \sum_{j=0}^{\infty} g_j z^{-j}        (3)

for output prediction, where Z{·} denotes the z-transformation. With this model, the process output y_{k+i} in the future, 0 ≤ i ≤ N, is predicted as follows:

    y_{k+i} = \sum_{j=0}^{i} g_j u_{k+i-j} + c_i,   c_i = \sum_{j>i} g_j u_{k+i-j}        (4)

for all i = 0, 1, ..., N, where u_l = 0 if l < 0, so that each c_i depends only on the already known past inputs u_{k-1}, u_{k-2}, ...
Next, for the output tracking purpose e_k = y_k - w_k → 0, where {w_k} is the desired trajectory, the following objective function, belonging to the current horizon [k, k+N), will be used:

    J_k = \sum_{i=0}^{N} [ (y_{k+i} - w_{k+i})^T Q_k (y_{k+i} - w_{k+i}) + u_{k+i}^T R_k u_{k+i} ] → min        (5)

where Q_k, R_k are two arbitrarily chosen symmetric positive definite matrices. With the symbols:
    G = [ g_0    0        ...   0
          g_1    g_0      ...   0
          ...    ...      ...   ...
          g_N    g_{N-1}  ...   g_0 ],

    p = (u_k, u_{k+1}, ..., u_{k+N})^T,   c = (c_0, c_1, ..., c_N)^T,   w = (w_k, w_{k+1}, ..., w_{k+N})^T,

all N+1 predictive outputs (4) are rewritten as y = G p + c, and therefore (5) becomes:

    J_k(p) = (G p + c - w)^T Q_k (G p + c - w) + p^T R_k p → min

which implies:

    p* = (G^T Q_k G + R_k)^{-1} G^T Q_k (w - c)
for the unconstrained case, or

    p* = arg min_{p ∈ P} J_k(p)

by using an appropriate constrained optimization method introduced in [4], for the constrained case.
Finally, only the first value u_k = (1, 0, ..., 0) p* of them is applied to the process. At the next time instant k+1, the whole calculation above is repeated to determine the new control signal u_{k+1}, with the prediction horizon moved forward. The following algorithm summarizes this iterative working procedure of MAC.
Algorithm 1: MAC
1. Set k := 0 and u_l = 0 for l < 0. Choose arbitrarily N ≥ 2. Determine G.
2. Choose appropriately two symmetric positive definite matrices Q_k, R_k.
3. Calculate c_i, i = 0, 1, ..., N, and determine the vector c.
4. Determine the optimal solution p* and its element u_k.
5. Send u_k to the process for one sampling interval T, then set k := k + 1 and go back to step 2.
It is immediately recognizable from this algorithm that MAC is an open loop controller. Therefore it is very sensitive to disturbances and can be applied only to stable processes.
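As an illustration, the following Python sketch implements the unconstrained MAC update of Algorithm 1 for a SISO process using a truncated impulse response; the function name, the scalar weights q and r and the truncation are hypothetical example choices, not taken from the paper.

```python
import numpy as np

def mac_step(g, u_past, w_future, q=1.0, r=0.1):
    """One unconstrained MAC step: u_k from p* = (G'QG + R)^{-1} G'Q (w - c).

    g        : truncated impulse response g_0, ..., g_M of the stable SISO process
    u_past   : past inputs [u_{k-1}, u_{k-2}, ...] (most recent first)
    w_future : desired outputs w_k, ..., w_{k+N}
    """
    N = len(w_future) - 1
    # lower triangular Toeplitz matrix G of impulse response coefficients
    G = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(i + 1):
            G[i, j] = g[i - j] if i - j < len(g) else 0.0
    # free response c_i from the already known past inputs (u_l = 0 for l < 0)
    c = np.zeros(N + 1)
    for i in range(N + 1):
        c[i] = sum(g[j] * u_past[j - i - 1]
                   for j in range(i + 1, len(g)) if j - i - 1 < len(u_past))
    Q = q * np.eye(N + 1)
    R = r * np.eye(N + 1)
    p_star = np.linalg.solve(G.T @ Q @ G + R, G.T @ Q @ (np.asarray(w_future) - c))
    return p_star[0]          # only the first input value u_k is sent to the process
```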
2.2 Dynamic matrix control (DMC)
In contrast to MAC, the DMC uses the step response {h_k} instead of (3) for output prediction:

    y_{k+i} = \sum_{j=0}^{i} h_j Δu_{k+i-j} + d_i        (6)

for i = 0, 1, ..., N, with Δu_k = u_k - u_{k-1} and

    d_i = \sum_{j>i} h_j Δu_{k+i-j}.        (7)
Therefore the DMC algorithm is completely analogous to MAC, as follows:
Algorithm 2: DMC
1. Set k := 0 and Δu_l = 0 for l < 0. Choose arbitrarily N ≥ 2. Determine

    H = [ h_0    0        ...   0
          h_1    h_0      ...   0
          ...    ...      ...   ...
          h_N    h_{N-1}  ...   h_0 ].

2. Choose appropriately two symmetric positive definite matrices Q_k, R_k.
3. Calculate d_i, i = 0, 1, ..., N, given in (7), and determine the vector d = (d_0, d_1, ..., d_N)^T as well as y = (y_k, y_{k+1}, ..., y_{k+N})^T with y_{k+i}, i = 0, 1, ..., N, given in (6).
4. Determine the constrained optimal solution

    p* = arg min_{p ∈ P} J_k   with   J_k = (y - w)^T Q_k (y - w) + p^T R_k p,   p = (Δu_k, Δu_{k+1}, ..., Δu_{k+N})^T   and   w = (w_k, w_{k+1}, ..., w_{k+N})^T.

5. Send u_k = u_{k-1} + (1, 0, ..., 0) p* to the process for one sampling interval T, then set k := k + 1 and go back to step 2.
As with MAC, the DMC algorithm given above is an open loop controller. It is therefore very sensitive to system disturbances and can be applied to stable processes only.
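Since DMC differs from MAC mainly in working with the step response and input increments, the short sketch below (again a hypothetical Python illustration, not from the paper) shows how the dynamic matrix H relates to the step response h_j obtained by accumulating the impulse response.

```python
import numpy as np

def dynamic_matrix(g, N):
    """Build the (N+1)x(N+1) dynamic matrix H from a truncated impulse response g.

    The step response is the cumulative sum h_j = g_0 + g_1 + ... + g_j, and H is the
    lower triangular Toeplitz matrix with h_0, ..., h_N on its sub-diagonals.
    """
    h = np.cumsum(np.asarray(g, dtype=float))      # step response from impulse response
    H = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(i + 1):
            H[i, j] = h[i - j] if i - j < len(h) else h[-1]
    return H

# usage note: with p* = (Δu_k, ..., Δu_{k+N})^T from the constrained optimization,
# the applied input is u_k = u_{k-1} + p*[0].
```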
2.3 Generalized predictive control (GPC)
In GPC, the transfer function (2) of a process containing an integral unit will be used for output prediction. Such a process has a mathematical model in the form of the difference equation:

    A(z^{-1}) y_k = B(z^{-1}) u_k        (8)

where z^{-j} x_k = x_{k-j} denotes the backward shift and

    A(z^{-1}) = a_0 + a_1 z^{-1} + ... + a_n z^{-n},   B(z^{-1}) = b_0 + b_1 z^{-1} + ... + b_m z^{-m}.
Denote by E_i(z^{-1}), F_i(z^{-1}), i = 0, 1, ..., N, the solutions of the Diophantine equations:

    1 = E_i(z^{-1}) A(z^{-1}) + z^{-i} F_i(z^{-1}),   i = 0, 1, ..., N

and

    G_i(z^{-1}) = E_i(z^{-1}) B(z^{-1});
then the equation (8) can be rewritten as:

    y_{k+i} = \sum_{j} f_{i,j} y_{k-j} + \sum_{j} g_{i,j} u_{k+i-j}        (9)

where f_{i,j}, g_{i,j} are the parameters of F_i(z^{-1}) and G_i(z^{-1}), i = 0, 1, ..., N.
Hence, all predictive outputs y = (y_{k+1}, y_{k+2}, ..., y_{k+N})^T can be written compactly as:

    y = G p + E u_b + F y_b

with p = (u_k, u_{k+1}, ..., u_{k+N-1})^T, where the matrices G, E, F are built from the parameters g_{i,j} and f_{i,j} of G_i(z^{-1}) and F_i(z^{-1}), and u_b, y_b collect the already known past inputs and outputs.
By substituting all predictive outputs of Eq. (9) into the objective function (5), one obtains:

    J_k = p^T (G^T Q_k G + R_k) p + 2 b^T Q_k G p + b^T Q_k b

where

    b = E u_b + F y_b - w   and   w = (w_{k+1}, w_{k+2}, ..., w_{k+N})^T,

which implies:
Algorithm 3: GPC
1. Choose arbitrarily N ≥ 2. Determine E_i(z^{-1}), F_i(z^{-1}), i = 1, ..., N, and the matrices G, E, F. Set k := 0, u_b = 0, y_b = 0.
2. Choose appropriately two symmetric positive definite matrices Q_k, R_k.
3. Measure the current output y_k. Rearrange u_b, y_b. Determine the vector b.
4. Determine the constrained optimal solution p* = arg min_{p ∈ P} J_k.
5. Send u_k = (1, 0, ..., 0) p* to the process for one sampling interval T, then set k := k + 1 and go back to step 2.
It is recognizable from this algorithm that GPC is an output feedback controller. Therefore it is robust against constant output disturbances and can be applied also to unstable processes. The GPC algorithm can easily be reformulated for MIMO systems; such a version of GPC is already proposed in [2].
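To illustrate the one nonstandard computation in Algorithm 3, the following Python sketch solves the Diophantine equations 1 = E_i(z^{-1}) A(z^{-1}) + z^{-i} F_i(z^{-1}) by recursive polynomial division; the function name and the example polynomial are illustrative choices, not from the paper.

```python
def diophantine(a, N):
    """Solve 1 = E_i(z^-1) A(z^-1) + z^-i F_i(z^-1) for i = 1, ..., N.

    a : coefficients [a_0, a_1, ..., a_n] of A(z^-1), with a_0 = 1 assumed.
    Returns lists E[i-1], F[i-1] holding the coefficients of E_i and F_i.
    """
    n = len(a) - 1
    E, F = [], []
    e = [1.0]                                   # E_1(z^-1) = 1 since a_0 = 1
    f = [-a[j] for j in range(1, n + 1)]        # F_1 coefficients: f_{1,j} = -a_{j+1}
    E.append(list(e)); F.append(list(f))
    for i in range(1, N):
        e_i = f[0]                              # new trailing coefficient of E_{i+1}
        e = e + [e_i]
        # recursion F_{i+1,j} = f_{i,j+1} - e_i * a_{j+1}  (with f_{i,n} = 0)
        f = [(f[j + 1] if j + 1 < n else 0.0) - e_i * a[j + 1] for j in range(n)]
        E.append(list(e)); F.append(list(f))
    return E, F

# example: A(z^-1) = 1 - 0.5 z^-1 gives E_2 = 1 + 0.5 z^-1 and F_2 = 0.25
E, F = diophantine([1.0, -0.5], N=3)
```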
2.4 State feedback MPC for linear systems
The state feedback MPC for LTI systems uses the model given in (1) with an additional integral unit in it:

    z_{k+1} = Ã z_k + B̃ Δu_k        (10)

for state prediction, where

    z_k = (x_k^T, u_{k-1}^T)^T,   Δu_k = u_k - u_{k-1},   Ã = [ A, B ; 0, I_m ],   B̃ = [ B ; I_m ]

and 0 denotes a null matrix of appropriate dimension. The alternative model (10) has an integral unit in it, because the matrix Ã has m eigenvalues z = 1. This integral behaviour of the prediction model guarantees that the steady state error of a stable closed loop system will be exactly zero.
Together with the prediction model (10), the system output y_k is rewritten as:

    y_k = C̃ z_k,   where   C̃ = (C, 0),        (11)

and now, from the prediction model (10) and (11), it is obtained:

    z_{k+i} = Ã^i z_k + \sum_{j=0}^{i-1} Ã^{i-1-j} B̃ Δu_{k+j}

which yields the stacked prediction:

    y = (y_{k+1}, y_{k+2}, ..., y_{k+N})^T = D z_k + F p        (12)

where

    D = [ C̃Ã
          C̃Ã^2
          ...
          C̃Ã^N ],   F = [ C̃B̃           0            ...   0
                           C̃ÃB̃          C̃B̃          ...   0
                           ...
                           C̃Ã^{N-1}B̃    C̃Ã^{N-2}B̃   ...   C̃B̃ ],   p = (Δu_k, Δu_{k+1}, ..., Δu_{k+N-1})^T.

Finally, the substitution of (12) into the objective function (5) implies:

    J_k = (D z_k + F p - w)^T Q_k (D z_k + F p - w) + p^T R_k p   with   w = (w_{k+1}, w_{k+2}, ..., w_{k+N})^T,
and correspondingly, the following algorithm performs the desired state feedback MPC by summarizing all calculations given above.
Algorithm 4: State feedback linear MPC
1. Set k := 0, u_{-1} = 0. Choose arbitrarily N ≥ 2. Determine Ã, B̃, C̃, D, F.
2. Choose appropriately two symmetric positive definite matrices Q_k, R_k.
3. Measure the current states x_k. Determine p* = arg min_{p ∈ P} J_k.
4. Send u_k = u_{k-1} + (I_m, 0, ..., 0) p* to the process for one sampling interval T, then set k := k + 1 and go back to step 2.
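A compact unconstrained version of Algorithm 4 can be sketched in Python as below; the function names and weights are illustrative placeholders, and a QP solver would replace the closed form solution in the constrained case.

```python
import numpy as np

def build_prediction(A, B, C, N):
    """Stack the augmented model z_{k+1} = Ã z_k + B̃ Δu_k, y_k = C̃ z_k into y = D z_k + F p."""
    n, m = B.shape
    At = np.block([[A, B], [np.zeros((m, n)), np.eye(m)]])     # Ã with m eigenvalues at z = 1
    Bt = np.vstack([B, np.eye(m)])                             # B̃
    Ct = np.hstack([C, np.zeros((C.shape[0], m))])             # C̃
    r = C.shape[0]
    D = np.vstack([Ct @ np.linalg.matrix_power(At, i) for i in range(1, N + 1)])
    F = np.zeros((N * r, N * m))
    for i in range(1, N + 1):
        for j in range(i):
            F[(i - 1) * r:i * r, j * m:(j + 1) * m] = Ct @ np.linalg.matrix_power(At, i - 1 - j) @ Bt
    return D, F

def mpc_step(D, F, z_k, w, Q, R, u_prev):
    """Unconstrained minimizer of (D z + F p - w)' Q (D z + F p - w) + p' R p; returns u_k."""
    p = np.linalg.solve(F.T @ Q @ F + R, F.T @ Q @ (w - D @ z_k))
    m = u_prev.shape[0]
    return u_prev + p[:m]          # u_k = u_{k-1} + first block of p* (the increment Δu_k)
```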
2.5 Output feedback MPC for linear systems
In model predictive controllers that consist only of linear models, the separation principle of linear control theory offers an opportunity to convert the state feedback controller into an output feedback one by additionally using a state observer. This state observer has the purpose of producing approximate process states x̂_k, and the state feedback controller then uses these observed states instead of the real states x_k measured from the process. Fig. 2 illustrates this separation principle based output feedback control strategy.
Figure 2: Using a state observer to convert a state feedback controller to an appropriate output feedback one
Algorithm 5: Output feedback linear MPC
1. Choose arbitrarily N ≥ 2 and an initial state estimate x̂_0. Determine Ã, B̃, C̃, D, F.
2. Choose appropriately two symmetric positive definite matrices Q_k, R_k.
3. Set x_k = x̂_k. Determine p* = arg min_{p ∈ P} J_k.
4. Send u_k = u_{k-1} + (I_m, 0, ..., 0) p* to the process for one sampling interval T.
5. Measure the output y_k from the process. Set k := k + 1 and estimate x̂_k by using an appropriate observer. Go back to step 2.
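The observer in Algorithm 5 can be any state estimator. As a minimal sketch (a plain Luenberger observer with an assumed precomputed gain L, rather than the Kalman filters discussed later), one update step looks as follows in Python.

```python
import numpy as np

def luenberger_update(A, B, C, L, x_hat, u_k, y_meas):
    """One observer step x̂_{k+1} = A x̂_k + B u_k + L (y_k - C x̂_k).

    L is an observer gain making (A - L C) stable; a Kalman filter would instead
    compute a time varying gain from the noise covariances.
    """
    y_hat = C @ x_hat
    return A @ x_hat + B @ u_k + L @ (y_meas - y_hat)
```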
3 MODEL PREDICTIVE CONTROL FOR PERTURBED NONLINEAR DISCRETE TIME SYSTEMS
Consider a nonlinear system, which is described generally by:

    x_{k+1} = f(x_k, u_k) + d_k + η_k,   y_k = g(x_k) + ν_k        (13)

where both functions f(·), g(·) are assumed to be smooth in x_k and u_k,

    x_k = (x_1[k], ..., x_n[k])^T

is the vector of all system states at the current time instant t_k = kT, where T is the sampling time, and

    u_k = (u_1[k], ..., u_m[k])^T,   y_k = (y_1[k], ..., y_r[k])^T

are the vectors of input and output signals, respectively, at the same time instant. Both η_k, ν_k are white noises, which may propagate nonlinearly through the system, and d_k is a vector of slowly varying disturbances, which can obviously be interpreted as model errors.
The control problem regarded here for the given nonlinear system (13) is to design an output feedback controller u_k = u(x̂_k), subjected to the input constraint u_k ∈ U ⊂ R^m, so that the output vector y_k converges asymptotically to any desired output vector w_k, and this tracking control performance is not affected by the white noises η_k, ν_k or by the model errors d_k.
For solving the above formulated tracking control problem, this paper proposes a control concept with the following three steps:
1. Replace approximately the original model (13) by an infinite set of LTI models H_k, k = 0, 1, ..., as depicted in Fig. 3. This infinite set of LTI models H_k will be called in this paper the moving horizon predictive model of the original nonlinear system (13).
2. Each of these LTI models will then be used subsequently at the time instants t_k, k = 0, 1, ..., together with the finite control horizon [t_k, t_{k+N}] moving forward along the time axis, to design a corresponding state feedback controller u(x_k) subjected to the constraint u_k ∈ U for tracking control of the original nonlinear system (13) during the current time interval [t_k, t_{k+1}), where t_{k+1} = t_k + T and T is the sampling time of the system (13).
3. Replace the states x_k in the above obtained state feedback controller u(x_k) by the observed states x̂_k, which are provided by an extended (EKF) or unscented Kalman filter (UKF), to obtain an output feedback controller u(x̂_k).
3.1 Receding horizon LTI predictive model
If all noises η_k, ν_k and the disturbance d_k in (13) are negligible, then from (13) the corresponding nominal model is obtained:

    x_{k+1} = f(x_k, u_k),   y_k = g(x_k).        (14)

Owing to the smoothness property, both function vectors f(·), g(·) of the nominal model (14) can now be approximated at the previous time instant t_{k-1}, and during the time interval [t_{k-1}, t_k) afterwards, as follows:
    f(x_k, u_k) ≈ A_k x_k + B_k u_k + d_k,   g(x_k) ≈ C_k x_k + h_k

where

    A_k = ∂f/∂x |_{(x_{k-1}, u_{k-1})},   B_k = ∂f/∂u |_{(x_{k-1}, u_{k-1})},   C_k = ∂g/∂x |_{x_{k-1}},
    d_k = f(x_{k-1}, u_{k-1}) - A_k x_{k-1} - B_k u_{k-1},   h_k = g(x_{k-1}) - C_k x_{k-1}        (15)
are all now determined at the current time instant k, because all past system values x_{k-1}, u_{k-1} are already known. For the controller design hereafter, both vectors d_k, h_k in (15) will be considered constant during the whole current control horizon [k, k+N).
Hence, the LTI model

    H_k:   x_{k+1} = A_k x_k + B_k u_k + d_k,   y_k = C_k x_k + h_k        (16)

is deduced, and this model will be used afterwards for the prediction of the system outputs y_{k+i} in the current prediction horizon 1 ≤ i ≤ N.
Figure 3: Using an infinite number of LTI system models instead of the nonlinear one
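The matrices and offset vectors in (15) can, for instance, be obtained from finite difference Jacobians of f and g at the previously measured pair (x_{k-1}, u_{k-1}). The following Python sketch is one possible implementation, with the step size eps as an assumed tuning value and f, g taken as numpy-array-valued functions.

```python
import numpy as np

def linearize(f, g, x_prev, u_prev, eps=1e-6):
    """Numerically build the LTI model H_k of (15)/(16) around (x_{k-1}, u_{k-1}).

    Returns A_k, B_k, C_k and the offsets d_k, h_k so that
    f(x, u) ≈ A_k x + B_k u + d_k and g(x) ≈ C_k x + h_k near the linearization point.
    """
    n, m = len(x_prev), len(u_prev)
    fx = f(x_prev, u_prev)
    gx = g(x_prev)
    A = np.zeros((n, n)); B = np.zeros((n, m)); C = np.zeros((len(gx), n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x_prev + dx, u_prev) - fx) / eps
        C[:, j] = (g(x_prev + dx) - gx) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x_prev, u_prev + du) - fx) / eps
    d = fx - A @ x_prev - B @ u_prev       # constant offset d_k of (15)
    h = gx - C @ x_prev                    # constant offset h_k of (15)
    return A, B, C, d, h
```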
3.2 State feedback controller
At the current time instant k, and based on the already measured system states x_k, all predictive system states x_{k+i}, 1 ≤ i ≤ N, can now be obtained from the LTI predictive model (16) as follows:

    x_{k+i} = A_k^i x_k + \sum_{j=0}^{i-1} A_k^{i-1-j} (B_k u_{k+j} + d_k).
Now, if all predictive output vectors are merged into one vector:

    y = (y_{k+1}, y_{k+2}, ..., y_{k+N})^T

then it is obtained:

    y = F p + b_k        (17)

where:

    F = [ C_k B_k              0                  ...   0
          C_k A_k B_k          C_k B_k            ...   0
          ...
          C_k A_k^{N-1} B_k    C_k A_k^{N-2} B_k  ...   C_k B_k ],

    p = (u_k, u_{k+1}, ..., u_{k+N-1})^T,   b_k = (b_{k,1}, ..., b_{k,N})^T   with   b_{k,i} = C_k A_k^i x_k + C_k \sum_{j=0}^{i-1} A_k^j d_k + h_k.        (18)
It is easy to recognize that the predictive merged vector y given in (17) depends only on the future inputs p associated with the current horizon.
With the expression (17) of the obtained predictive outputs y_{k+i}, 1 ≤ i ≤ N, all tracking errors during the current control horizon are deduced as follows:

    e = y - w = F p + b_k - w        (19)

where:

    w = (w_{k+1}, w_{k+2}, ..., w_{k+N})^T        (20)

is the merged vector of desired output values during the same control horizon.
Next, according to the output tracking purpose y_k → w_k, or e → 0, associated with the current control horizon, the merged input vector p will be determined by minimizing the following objective function:

    J_k = e^T Q_k e + p^T R_k p → min        (21)

where Q_k, R_k are arbitrarily chosen symmetric positive definite matrices. This objective function is clearly equivalent to:

    J_k = (F p + w_d)^T Q_k (F p + w_d) + p^T R_k p   with   w_d = b_k - w,

which is obtained by substituting (19) into (21), or to:

    J̃_k = p^T (F^T Q_k F + R_k) p + 2 w_d^T Q_k F p        (22)

since the last term w_d^T Q_k w_d is independent of p.
It is easy to see that the obtained objective function (22), which is to be minimized, is quadratic. Hence, for solving this optimization problem subjected to the constraint p ∈ P with:

    P = { p = (u_k, u_{k+1}, ..., u_{k+N-1})^T | u_{k+i} ∈ U, i = 0, 1, ..., N-1 }        (23)

or:

    p* = arg min_{p ∈ P} J̃_k(p)        (24)

it is obvious [4] that:
the QP method can be used, if the constraint set U is linear (described by linear inequalities), or
the SQP method is the appropriate one, if the constraint set U is nonlinear.
Finally, the control value u k for the original perturbed nonlinear system (14) is then
getting from the received optimal solution p of the optimization problem (24) as follows: *
*
, , ,
k
and this control value u k, which is clearly dependent on current system states x and k
therefore will be denoted by u x k( k), is only valid during the short current sampling time
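Putting the pieces of this section together, the following Python sketch shows one receding horizon step consistent with the strategy above for the unconstrained case: take the linearized model (15)/(16), build F and b_k as in (17)/(18), and return the first block of the minimizer of (22). Function names are illustrative, and a QP/SQP call would replace the closed form solution when u_k ∈ U must be enforced.

```python
import numpy as np

def rh_control_step(A, B, C, d, h, x_k, w_future, Q, R):
    """One unconstrained receding horizon step on the current LTI model H_k of (16).

    A, B, C, d, h : linearization (15) around (x_{k-1}, u_{k-1})
    x_k           : measured (or EKF/UKF estimated) current state
    w_future      : desired outputs w_{k+1}, ..., w_{k+N} stacked into one vector
    Returns u_k, the first input block of p* minimizing the quadratic objective (22).
    """
    n, m = B.shape
    r = C.shape[0]
    N = len(w_future) // r
    # free response b_k and input-to-output matrix F of (17)/(18)
    F = np.zeros((N * r, N * m))
    b = np.zeros(N * r)
    x_free = x_k.copy()
    for i in range(1, N + 1):
        x_free = A @ x_free + d                       # propagate without future inputs
        b[(i - 1) * r:i * r] = C @ x_free + h
        for j in range(i):
            F[(i - 1) * r:i * r, j * m:(j + 1) * m] = C @ np.linalg.matrix_power(A, i - 1 - j) @ B
    w_d = b - np.asarray(w_future)
    p_star = np.linalg.solve(F.T @ Q @ F + R, -F.T @ Q @ w_d)
    return p_star[:m]                                  # u_k = (I_m, 0, ..., 0) p*
```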