
INTEGRATING THE RECEDING HORIZON LQG FOR NONLINEAR SYSTEMS INTO INTELLIGENT CONTROL SCHEME

Nguyen Doan Phuoc

Abstract: In this paper, the operating principle of a conventional output feedback receding horizon controller based on LQR is first presented briefly, and then some opportunities to improve its performance by integrating it into an intelligent control scheme are discussed. The paper ends with a simulative application of the intelligent receding horizon LQG for temperature tracking control inside a thick slab in a heating furnace. The simulation results confirm the effectiveness of integrating intelligent components into a conventional receding horizon controller.

Key words: Receding horizon controller; Intelligent control; GA; PSO; Fuzzy estimator

1 INTRODUCTION

Intelligent control is portrayed in [1] as the "control of tomorrow", exhibiting many characteristics of human intelligence and biological systems, such as planning under large uncertainty, adaptation and learning. This raises the question of whether intelligent control could one day completely replace conventional control. Based on a particular receding horizon controller, this paper will show that pure intelligent control cannot and will never supersede conventional control. Intelligent control is gainful only when integrated additionally into a conventional controller to improve its control performance.

The organization of this paper is as follows. The main results are presented in Section 2, including a short description of output feedback receding horizon controllers for nonlinear systems, a brief introduction of the intelligent control concept and its components, then some approaches to integrating intelligent components into a conventional receding horizon controller, and finally an example illustrating the performance of the intelligent receding horizon LQG. Some concluding remarks are given in Section 3.

2 MAIN CONTENT

2.1 Brief description of receding horizon LQG for nonlinear systems

The main content of a state feedback receding horizon LQR for output tracking control of time-variant, nonlinear continuous-time systems has already been presented in [2],[3]:

dx/dt = f(x, u, t) + θ_x,   y = g(x, t) + θ_y,                                (1)

where θ_x, θ_y, u and y are the state disturbances, measurement noise, vector of inputs and vector of outputs respectively, and the numbers of inputs and outputs are assumed to be equal. With this controller the output y of the closed-loop system converges asymptotically to a desired reference r(t).

An extension of this controller to the compatible output feedback receding horizon LQG based on the separation principle is illustrated in Figure 1. At every time instant t_k, k = 0, 1, ..., where t_k < t_{k+1} and the step δ = t_{k+1} − t_k is a sufficiently small positive value, and together with the measured system outputs y_k = y(t_k), the nonlinear system (1) can be replaced approximately during the short time interval t_k ≤ t < t_{k+1} by an LTI system:

H_k:  dx/dt = A_k x + B_k u,   y = C_k x,   k = 0, 1, ...,                    (2)

where the matrices A_k, B_k, C_k are determined from the linearization of (1) around the current operating point,

f(x, u, t) ≈ A_k x + B_k u,   g(x, t) ≈ C_k x,                                (3)

and x̂_k is an optimally estimated value of x(t) at the current time instant t_k, used in order to eliminate the effects of both the disturbance θ_x and the noise θ_y.
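As an illustration, the horizon-wise matrices in (2)-(3) can be obtained numerically, for instance by finite-difference Jacobians of f and g around the current estimate. The following sketch is illustrative only; the callables f(x, u, t) and g(x, t) and the step eps are assumptions, not part of the paper's method:

```python
import numpy as np

def linearize(f, g, x_hat, u_k, t_k, eps=1e-6):
    """Finite-difference Jacobians A_k, B_k, C_k of a nonlinear system
    dx/dt = f(x, u, t), y = g(x, t) around the point (x_hat, u_k, t_k)."""
    n, m = len(x_hat), len(u_k)
    f0, g0 = f(x_hat, u_k, t_k), g(x_hat, t_k)
    A = np.zeros((n, n)); B = np.zeros((n, m)); C = np.zeros((len(g0), n))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x_hat + dx, u_k, t_k) - f0) / eps   # column i of df/dx
        C[:, i] = (g(x_hat + dx, t_k) - g0) / eps        # column i of dg/dx
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x_hat, u_k + du, t_k) - f0) / eps   # column j of df/du
    return A, B, C
```

For an already linear system this recovers its A, B, C matrices exactly (up to the finite-difference step), which is a convenient sanity check.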

In the case that (1) has the particular nonlinear structure

dx/dt = A(x, u, y, t) x + B(x, u, y, t) u,   y = C(x, u, t) x,                (4)

the determination of the matrices A_k, B_k, C_k given in (3) is simplified, under θ_x = 0 and θ_y = 0, to

A_k = A(x̂_k, u_k, y_k, t_k),   B_k = B(x̂_k, u_k, y_k, t_k),   C_k = C(x̂_k, u_k, t_k).   (5)

After approximating the original nonlinear system (1) by the LTI system (2) for the short time interval t_k ≤ t < t_{k+1}, the design of a linear output feedback controller for (2) can be carried out. Obviously, the obtained controller is then valid for (1) only during the corresponding time interval t_k ≤ t < t_{k+1}.

Figure 1. Repeated operation of the receding horizon LQG [2],[3]

Based on the variation technique, the tracking controller u(t), which guarantees the asymptotic convergence of the output y of the LTI system (2) to the constant reference

w_k = r(t_k) + [r(t_{k-1}) − y_{k-1}],                                        (6)

is determined as follows [1],[3]:

u(t) = −K_k x̂(t) − T_k z(t),   K_k = R_k^{-1} B_k^T P_k,   z(t) = ∫_{t_k}^{t} [y(τ) − w_k] dτ,   (7)

where P_k is the symmetric positive definite solution of the Riccati equation

P_k B_k R_k^{-1} B_k^T P_k = A_k^T P_k + P_k A_k + Q_k

and Q_k = Q_k^T ≥ 0, R_k = R_k^T > 0 and T_k = T_k^T > 0 are chosen arbitrarily. The constant reference (6) for the LTI system (2) is established from the desired value r(t_k) of the original nonlinear system (1) at the current time instant t_k and the tracking error r(t_{k-1}) − y_{k-1} remaining from the previous horizon k−1, which is compensated during the current control horizon k, i.e. during the time interval t_k ≤ t < t_{k+1}.
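Numerically, the per-horizon gain K_k = R_k^{-1} B_k^T P_k of (7) can be obtained from any continuous-time algebraic Riccati solver. A minimal sketch using SciPy, whose solver handles the equivalent form A^T P + P A − P B R^{-1} B^T P + Q = 0:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """State-feedback gain K = R^{-1} B^T P for one receding horizon,
    with P the stabilizing solution of the continuous-time algebraic
    Riccati equation A^T P + P A - P B R^{-1} B^T P + Q = 0."""
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)   # avoids forming R^{-1} explicitly
    return K, P
```

For a stabilizable pair (A, B) with Q ≥ 0, R > 0, the closed-loop matrix A − B K has eigenvalues strictly in the left half-plane, which is the property exploited within each control horizon.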

2.2 Meaning of intelligent control and its components

As remarked in [1],[4]-[16], the intelligent control concept involves all unconventional control approaches in which the control performance imitates the behaviour of biological systems or human knowledge in solving a control problem. The unconventionality of intelligent control here means that, to design an intelligent controller, mathematical models of the controlled plants or processes are not required, as they usually are in conventional control.

More precisely, intelligent control is a class of control techniques in which various approaches of artificial intelligence are applied either separately or simultaneously, such as neural networks, fuzzy logic, evolutionary computing and machine learning [4]-[9].

1 Artificial neural network (ANN) is a computerized system of enormously connected artificial neurons, which attempts to copy the behaviour of biological neurons in nature and the message exchange between them. Every artificial neuron i in an ANN is essentially a sub-system with multiple inputs u_{i1}, ..., u_{in} and a single output y_i, which is modelled by an associated activation function:

y_i = f_i(u_{i1} + ... + u_{in}).

In an ANN, all neurons are arranged in different layers: an input layer, hidden layers and an output layer. Any artificial neuron i of one layer is connected permanently with a neuron j of another layer through a connection bond. Every connection bond ij is assigned a weight w_{ij} to perform the information flow u_{jk} = w_{ij} y_i from the single output signal y_i of neuron i to an input u_{jk} of another neuron j. The set of weighting factors w = (w_{ij}) plus a suitable (preferably optimal) updating strategy for them, together with the accordingly chosen connection topology (feedforward or backward) and the activation function f_j(·) in each neuron j, decide the essential operational performance of the ANN, whereby the weighting factors w = (w_{ij}) carry most of the information the ANN needs to solve a particular control problem [6],[7].

Figure 2. Multi-layer model with two hidden layers


Figure 2 illustrates the typical feedforward topology of an ANN with one input layer, one output layer and two hidden layers. Although this connection topology is quite simple, it has nevertheless been shown in [4],[6],[7] that such a feedforward ANN with suitably chosen nonlinear, continuous and differentiable activation functions f_j(·) for all neurons j is adequate for universal approximation and control through optimally adjusting the weighting factors w = (w_{ij}) in it.
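The layered forward computation described above can be sketched in a few lines. This is a generic illustration, not the paper's implementation; biases are omitted and tanh is an assumed choice of differentiable activation:

```python
import numpy as np

def forward(x, weights, f=np.tanh):
    """Forward pass of a feedforward ANN: each layer applies its weight
    matrix W to the previous layer's outputs and then the activation f,
    mirroring the neuron model y_i = f_i(sum of weighted inputs)."""
    for W in weights:        # one matrix per layer, input to output
        x = f(W @ x)
    return x
```

A network with weight matrices of shapes (h1, n), (h2, h1), (m, h2) realizes the two-hidden-layer topology of Figure 2; training would then adjust these matrices, e.g. by backpropagation.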

2 Fuzzy controller (FC) is a computerized system in which the human knowledge about how to control a plant is reproduced and implemented [6],[9].

To design an output feedback fuzzy controller, the human knowledge for controlling a plant with n inputs u_1, ..., u_n and m outputs y_1, ..., y_m is first reproduced in a so-called rule base as follows:

R_i: If u_1 = U_{i1} and ... and u_n = U_{in}, then y_1 = Y_{i1} and ... and y_m = Y_{im},   (8)

where U_{ij}, Y_{ik}, 1 ≤ i ≤ p, 1 ≤ j ≤ n, 1 ≤ k ≤ m are the linguistic values of the plant inputs and outputs respectively. These linguistic values are obtained from the numeric values of the plant via a so-called fuzzification process. Each item R_i of the rule base above is called an inference clause.

To implement the rule base, an inference mechanism is needed, which is established based on the axiomatic implication of fuzzy logic [9]. Since many inference mechanisms are available for an implementation, there are also many compatible fuzzy controllers available to represent a fixed human control knowledge (8).

Finally, since the inference mechanism returns the result of the rule base (8) as a linguistic output value, to be usable for controlling the plant it has to be converted correspondingly into a numeric value, which is realized by an appropriate so-called defuzzification process [9].

Hence, a fuzzy controller consists of three main components:

 Fuzzification for transforming the numeric values of plant inputs and outputs into linguistic values usable by the inference mechanism.

 The rule base and the inference mechanism for realizing the human control experience. Whereas the rule base is a set of rules reproducing the human knowledge about how to control, the inference mechanism is a tool to implement it.

 Defuzzification for converting the linguistic results of the rule base into numeric values usable for the plant control.

Figure 3 below illustrates the principal structure of a fuzzy controller.

Figure 3 Fuzzy controller
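The fuzzification, rule firing and defuzzification chain can be made concrete with a toy single-input controller. All membership functions, rule consequents and numeric ranges below are invented for illustration; a real design would encode actual operator knowledge:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical rule base: each rule pairs an input membership function
# (fuzzification of the error e) with a crisp singleton output value.
RULES = [
    (lambda e: tri(e, -2.0, -1.0, 0.0), -1.0),   # If e is Negative then u = -1
    (lambda e: tri(e, -1.0,  0.0, 1.0),  0.0),   # If e is Zero     then u =  0
    (lambda e: tri(e,  0.0,  1.0, 2.0),  1.0),   # If e is Positive then u = +1
]

def fuzzy_control(e):
    """Fire all rules on input e and defuzzify by weighted average."""
    w = [mu(e) for mu, _ in RULES]                       # firing strengths
    if sum(w) == 0.0:
        return 0.0                                       # no rule applies
    return sum(wi * u for wi, (_, u) in zip(w, RULES)) / sum(w)
```

Between the rule peaks the controller interpolates smoothly, e.g. an error halfway between Zero and Positive yields a control value halfway between their consequents.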

3 Evolutionary computing is a concept covering all unconventional approaches or algorithms inspired by biological evolution, such as natural inheritance, mutation, selection and crossover, or by the social behaviour of biological systems, such as bird flocking and fish schooling, to determine the global solution of an optimization problem.


In evolutionary computing, an initial set of candidate solutions is generated first and then updated or moved iteratively towards the optimal solution. Each new set of candidate solutions is produced by using artificial selection, the mutation principle or the flocking behaviour of biological systems to remove less desired solutions and supplement other solutions which increase the fitness (objective function). The optimal solution is finally picked from the last set of candidate solutions, obtained when either a maximum number of iterations has been exceeded or a satisfactory fitness level has been reached. The main advantage of evolutionary computation techniques is that applying them requires no particular assumptions about the problem being optimized, such as differentiability or convexity, which are commonly required by conventional techniques [7],[11],[12],[13].

Genetic algorithm (GA) and particle swarm optimization (PSO) are two representative examples of evolutionary computation techniques. Nowadays these two evolutionary techniques have become very popular, because they are not only suited for solving a wide range of optimization problems, but have also been applied successfully to the self-adaptive and self-organized control of various nonlinear systems [6],[7],[10].

Genetic algorithm (GA) is a computational search technique for finding the optimal parameters x* of:

x* = arg min_{x ∈ X} f(x),   subject to x ∈ X ⊂ R^n,                          (9)

where x is a vector (or a point) of n variables x_1, x_2, ..., x_n to be optimized.

In the language of evolutionary computation generally, the objective function value f(x) at a definite point x and the constraint X ⊂ R^n are often referred to as the fitness of the particle x and the search space for the optimal solution, respectively.

GA differs from conventional optimization methods in the sense that, first, it works with a set of parameter codes (i.e. bit strings of particles) instead of the parameters themselves; second, GA updates iteratively a set of particles (i.e. a population), not a single particle; and finally, GA uses a probabilistic updating rule instead of a deterministic one [6],[10].

GA begins with a selected population of N particles, denoted x_i, i = 1, ..., N (typically random). Each particle x_i is then assigned a bit string of length L, which means that each string can represent 2^L discrete values.

After that, GA updates this initial population of strings iteratively, step by step, to regenerate a new population in the evolutionary sense: based on the individual fitness of each string, the worse strings are discarded and some better ones are supplemented. The keys for the reproduction of a new population of strings are the selection, crossover and mutation rules.

Denote all strings in iteration k, i.e. in the kth population, by

G(k) = {x_1(k), x_2(k), ..., x_N(k)};

then each string in G(k) is assigned a selection probability

p_i = f(x_i(k)) / (N f̄),   where   f̄ = (1/N) Σ_{j=1}^{N} f(x_j(k)).

Correspondingly, the string x_i(k) has a chance p_i of being copied into a so-called intermediate population I(k). With this copying approach, fitter strings tend to enter I(k) more often.

After completing the copy of G(k) into I(k), crossover can occur over I(k): a pair of strings in I(k) is picked randomly and then recombined with a specified probability p_c (usually large) by swapping their bit fragments, as illustrated in Figure 4. The two newly obtained strings replace the original pair in the intermediate population I(k).

Finally, the new population G(k+1) for the next iteration k+1 is created from I(k) with a randomly chosen mutation probability p_m (often very small), such that all strings of I(k) with p_i ≥ p_m are copied directly into G(k+1), while the others have a bit flipped via the mutation rule before being copied into G(k+1).

Figure 4. Crossover occurs through swapping two fragments of strings

In summary, the procedure of GA is as follows: (a) Use a random generator to create an initial population of strings. (b) Use the fitness function to evaluate each string. (c) Use a suitable selection method to select the best strings. (d) Use the crossover and mutation rules to create the new population of strings. Go back to step (b), or exit if a stop condition is encountered.
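The steps (a)-(d) above can be sketched as a compact bit-string GA. This is a generic sketch, not the implementation used in the paper; the elitism line (remembering the best string ever seen) is an added convenience, and fitness is assumed positive so that roulette weights are valid:

```python
import random

def ga(fitness, L=16, N=30, pc=0.9, pm=0.01, iters=100, seed=1):
    """Bit-string GA: random initial strings, fitness-proportional
    (roulette) copying into an intermediate population, one-point
    crossover with probability pc, bitwise mutation with probability pm."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(L)] for _ in range(N)]
    best = max(pop, key=fitness)
    for _ in range(iters):
        f = [fitness(s) for s in pop]
        # roulette-wheel selection into the intermediate population I(k)
        inter = [pop[rng.choices(range(N), weights=f)[0]][:] for _ in range(N)]
        # one-point crossover on consecutive pairs
        for i in range(0, N - 1, 2):
            if rng.random() < pc:
                cut = rng.randrange(1, L)
                inter[i][cut:], inter[i + 1][cut:] = inter[i + 1][cut:], inter[i][cut:]
        # bitwise mutation
        for s in inter:
            for j in range(L):
                if rng.random() < pm:
                    s[j] ^= 1
        pop = inter
        best = max(pop + [best], key=fitness)   # keep the best string ever seen
    return best
```

On the classic "OneMax" fitness (number of one-bits, offset to stay positive) the search concentrates quickly on nearly all-ones strings, which is a standard smoke test for such an implementation.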

Particle swarm optimization (PSO) is an optimization technique for solving problem (9), in which a randomly chosen population of particles, also called the swarm, is moved iteratively around the search space, instead of being iteratively regenerated as in GA. The moving rule of PSO was inspired by the collective behaviour of bird flocking [7],[11]-[13].

Motivated by the social behaviour of bird flocking, PSO solves the optimization problem (9) by iteratively moving m randomly chosen particles within bounds (i.e. the swarm or population)

S = {x_i, i = 1, ..., m}

in the search space X according to formulas (11), over their own positions x_i and velocities v_i. The movement of each particle is influenced by its local best position p_i and the global best position g among all particles in the swarm, which are also updated iteratively together with the movement. At the end of the iterations, it is expected that the whole swarm will flock to the optimal solution x*.

Denote the position, velocity, local best position and global best position of each particle x_i ∈ S during iteration k by x_i^k, v_i^k, p_i^k, g^k respectively; then the basic formulas for moving them involve [12]:

v_i^{k+1} = w v_i^k + c_1 r_1 (p_i^k − x_i^k) + c_2 r_2 (g^k − x_i^k),
x_i^{k+1} = x_i^k + v_i^{k+1} ∈ X,                                            (11)
If f(x_i^{k+1}) < f(p_i^k) then set p_i^{k+1} = x_i^{k+1}, otherwise p_i^{k+1} = p_i^k,
g^{k+1} = arg min_{p_i^{k+1}} f(p_i^{k+1}).


In formulas (11) above, 0 < w < 1.2, 0 < c_1 ≤ 2, 0 < c_2 ≤ 2 are user-defined constants and 0 < r_1 < 1, 0 < r_2 < 1 are random values regenerated for each iterative update.

The following steps exhibit the iterative algorithm of PSO: (a) Create an initial swarm of particles within bounds and assign them initial velocities. (b) Evaluate the fitness of each particle. (c) Update the velocity and position of each particle. (d) Update the local and global best fitnesses and positions. Go back to step (b), or exit if a stop condition is encountered.
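Steps (a)-(d) together with the update rule (11) can be sketched as follows. The box constraint is handled here by simple clipping, and the parameter values are illustrative defaults, not values prescribed by the paper:

```python
import random

def pso(f, bounds, m=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Basic PSO for x* = argmin f(x) over a box: velocities follow
    v <- w*v + c1*r1*(p - x) + c2*r2*(g - x), positions follow x <- x + v
    clipped to the search space; p_i and g are the local/global bests."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    n = len(bounds)
    X = [[rng.uniform(lo[d], hi[d]) for d in range(n)] for _ in range(m)]
    V = [[0.0] * n for _ in range(m)]
    P = [x[:] for x in X]          # local best positions p_i
    g = min(P, key=f)[:]           # global best position
    for _ in range(iters):
        for i in range(m):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo[d]), hi[d])
            if f(X[i]) < f(P[i]):  # update local best p_i
                P[i] = X[i][:]
        g = min(P, key=f)[:]       # update global best g
    return g
```

On a smooth low-dimensional objective such as a shifted quadratic bowl, the swarm typically flocks close to the minimizer within a few dozen iterations.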

4 Machine learning, as mentioned in [14]-[18], is a field of computer science which provides computer systems with the ability to learn from data without being explicitly programmed. Machine learning is closely related to many different domains of computer science, such as pattern recognition, decision making, prediction, optimization from past data, data analysis and machine vision.

From the data analysis and optimization points of view, machine learning is very helpful for control engineering when the controlled systems are too complex to model mathematically, when the obtained model would be too complicated for a conventional synthesis of controllers, or when the systems are additionally affected by large environmental disturbances which are difficult to predict or eliminate. A subfield of machine learning that facilitates overcoming these circumstances, and additionally finding meaningful anomalies of systems, is learning control.

Because controllers provided by the learning control framework are established based on data analysis and intelligent optimization, they possess properties of human knowledge and could therefore be considered intelligent controllers. These controllers are also called learning controllers. Figure 5 exhibits a control configuration using learning controllers, which is nowadays often met in high-tech manufacturing, robot manipulator and flying machine control [17],[18].

Figure 5. Controlling a complex system with an additional learning controller

At a certain time instant t_k, the learning controller uses the past data of process inputs, system references and system errors, i.e. u_c(t), r(t), e(t), t ≤ t_k, to determine the correction signal u_l(t_k) for the controlled system, in order to minimize the predicted tracking error by using a so-called learning mechanism.

The most essential component of every learning controller is a function approximator f(·, x) in it, where x are parameters determined frequently (during control) by learning from the past data, so that the predicted tracking errors are minimized. In this sense, the learning controller could be considered a model predictive controller (MPC) or an adaptive controller, but it is applicable to a much larger class of systems, especially complex systems which are difficult or even impossible to model mathematically [18].

The design of a learning controller proceeds through the following steps:


 Select a suitable function approximator f(·, x). Suitableness here means that the application of the selected function approximator must guarantee the long-term stability of the system despite undesired changes of the plant. Furthermore, the convergence behaviour of the tracking error must be indicated. At present, many function approximators are available for selection, such as the Multi-Layer Perceptron network (MLP), the B-Spline Network (BSP) or the Cerebellar Model Articulation Controller network (CMAC) [18].

 Choose an appropriate learning mechanism. The learning mechanism denotes a computational algorithm for updating x frequently from the past data, so that during control the tracking error of the system is constantly minimized. Learning mechanisms are distinguished from each other mainly by their frequency of updating the parameters x. Some gainful mechanisms are iterative learning, repetitive learning and non-repetitive learning. The frequency of the applied learning algorithm and its characteristics obviously affect the stability of the system. Therefore, depending on the learning mechanism being used, the stability of the closed-loop system has to be investigated individually, which is usually carried out with theoretical approaches [18].
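The iterative learning mechanism mentioned above can be illustrated with the simplest P-type update, where the correction signal for the next trial is the previous correction plus a learning gain times the recorded tracking error. The toy plant, the gain and the P-type law below are all illustrative assumptions, not the paper's design:

```python
def ilc_update(u_l, e, gamma=0.4):
    """P-type iterative learning update over one trial:
    u_l^{(k+1)}(t) = u_l^{(k)}(t) + gamma * e^{(k)}(t)."""
    return [ui + gamma * ei for ui, ei in zip(u_l, e)]

# Toy repetitive task: a static plant y = 2u must track r over one trial.
r = [1.0, 2.0, 3.0]            # reference trajectory over the trial
u_l = [0.0, 0.0, 0.0]          # learned correction signal, trial 0
for _ in range(20):            # repeat the trial, learning in between
    y = [2.0 * u for u in u_l] # plant response to the current correction
    e = [ri - yi for ri, yi in zip(r, y)]
    u_l = ilc_update(u_l, e)
```

For this plant the trial-to-trial error contracts by the factor |1 − 2γ| per repetition, so after a few trials the correction signal converges to u = r/2 and the tracking error essentially vanishes; proving such contraction for a real plant is exactly the stability analysis the text refers to.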

2.3 Integrating receding horizon LQG into intelligent control scheme

It is readily recognisable from Subsection 2.1 above that the presented receding horizon LQG offers some opportunities to improve its control performance through intelligently changing parameters in it. Specifically:

1 Since all positive definite matrices Q_k, R_k, T_k in each control horizon k = 0, 1, ... are chosen arbitrarily, there arises an opportunity to select them so that the obtained controller additionally satisfies some further constraints, such as the input constraint u_k ∈ U, the state constraint x_k ∈ X, a constraint on the transient time, etc. However, an analytically unified objective function combining all these constraints would be too complicated for any conventional optimization method, and furthermore it is even impossible to construct in some theoretical situations. Hence, in these circumstances, the usage of an intelligent optimization method is a good remedy.
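In practice, evolutionary tuning of Q_k, R_k, T_k can sidestep the missing unified objective by scoring each candidate with the nominal tracking cost plus penalty terms for constraint violations. The quadratic penalty form, the weight rho and the function signature below are assumptions of this sketch, not taken from the paper:

```python
def penalized_fitness(tracking_cost, u_traj, x_traj, u_max, x_max, rho=1e3):
    """Fitness for evolutionary tuning of weighting matrices: the
    simulated tracking cost plus quadratic penalties for violating the
    input constraint |u| <= u_max and the state constraint |x| <= x_max."""
    pen = sum(max(abs(u) - u_max, 0.0) ** 2 for u in u_traj)
    pen += sum(max(abs(x) - x_max, 0.0) ** 2 for x in x_traj)
    return tracking_cost + rho * pen
```

A GA or PSO candidate (Q_k, R_k, T_k) would be simulated over one horizon, its input and state trajectories passed to this function, and the minimizing candidate kept; feasible candidates are scored purely by tracking cost, while violations are penalized heavily.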

Figure 6 Intelligent receding horizon LQG

The utilization of GA, PSO, DE (differential evolution) and ICA (imperialist competitive algorithm) to determine optimal weighting matrices Q, R for a multi-constrained LQR design on a flexible robot manipulator LTI model has been presented and compared in detail in [19], and could be employed gainfully here to determine the optimal weighting matrices Q_k, R_k, T_k for the receding horizon LQG (7) subject to the several constraints of each horizon k = 0, 1, ...

2 To convert a state feedback controller for nonlinear systems into an output feedback one based on the separation principle, a suitable state observer is needed. There are many intelligent state observers for nonlinear systems (1) available, among which the Kalman filter and the particle filter are known as representative examples. In spite of the fact that these observers often require system models or measurement models for their implementation, and hence are closely related to theoretical approaches, they can still be considered as intelligent components [https://en.wikipedia.org/wiki/Intelligent_control].

3 Although the receding horizon controller (7) was designed originally for the certain system (1), it can also be extended to an analytically uncertain one:

dx/dt = f(x, u, a, t),   y = g(x, a, t),                                      (12)

where a is a vector of all system parameters which depend on the system inputs and outputs u, y but cannot be determined analytically.

However, if the dependence of a on the system inputs u and outputs y can be captured by human experience, then together with a fuzzy estimator reproducing the human knowledge of how the vector a depends on u, y, the receding horizon controller (7) becomes applicable again to this analytically uncertain nonlinear system (12).

Figure 6 exhibits an example of the intelligent receding horizon LQG, which is obtained by integrating intelligent control components into the original receding horizon LQG (7).

2.4 An illustrative example

To illustrate the integration of intelligent components into a conventional receding horizon LQG for intelligently improving its control performance, an intelligent controller based on the receding horizon LQR will be considered here for the temperature tracking control at a particular position z inside a thick slab in a heating furnace, which was basically presented in [20].

Figure 7 Simulation configuration


The principle of integration for this LQG controller to become intelligent is depicted in Figure 7, which will be used here as the actual simulation configuration. This controller is called intelligent because it contains two intelligent control components. The first is the fuzzy estimator for determining the heat capacity and conductivity parameters c(T), λ(T) of the temperature transfer inside the slab respectively. The second is the evolutionary optimization algorithm PSO, which is applied to find all weighting matrices Q_k, R_k, k = 0, 1, ... appropriately so as to satisfy the constraints of the control problem. Furthermore, in this simulation configuration the Galerkin model is used for the state observation, instead of a usual state observer such as the Kalman or particle filter. Hence it could be called an internal model controller.

The simulation according to the configuration given above is carried out as closely as possible to real environments. Concretely, it is realized in the presence of both impulsive disturbances at the inputs and high-frequency measurement noise at the system outputs. The controlled object, i.e. the thick slab in a heating furnace, is modelled with Simscape libraries and the desired reference r(t) is the continuous-time function

r(t) = 289 + 711 [1 − exp(−0.005 t)].                                         (13)

Figure 8 Simulation results

As shown in the simulation results given in Figure 8, due to the additional use of a fuzzy estimator and the intelligent optimization algorithm PSO, the receding horizon LQG has tracked the slab temperature in the middle asymptotically to the desired value (13), and the required constraints |u| ≤ 2000 and T_{5%} ≤ 2500 s are satisfied as expected, without having to determine c(T), λ(T) analytically, and without a unified objective function.

3 CONCLUSIONS

It was shown in this paper that intelligent control alone, with all its components such as ANN, fuzzy controllers, evolutionary computing and machine learning, cannot entirely supersede the conventional output-feedback receding horizon controller for nonlinear systems. However, intelligent components can be integrated additionally into a conventional controller to improve its control performance when the systems are too complex or are affected by large environmental disturbances.

REFERENCES

[1] P.J. Antsaklis (1993): Defining intelligent control. Report of the IEEE CSS task force on intelligent control.
