Introduction to Intelligent Control
2.4 Neural Network Controller
2.5 Building Neural Network Controller with Matlab
2.6 Summary
3 Fuzzy Logic (FL)
3.1 Overview
3.2 Foundations of Fuzzy Logic
3.3 Fuzzy Inference System
3.4 Building systems with Fuzzy Logic Toolbox - Matlab
3.5 Sugeno-Type Fuzzy Inference
3.6 Summary
4 Conclusions
HCMC University of Technology - Intelligent Control
1 Introduction
The subject of automatic controls is enormous, covering the control of variables such as temperature, pressure, flow, level, and speed.
Example 1.1: Controlling the velocity of a motor
Fig 1.1 A simple control of the motor's velocity

Assume these are the velocity values of the motor:

[Table: control signal u from 0 to 10; set point from 500 to 2500; control value (assumed) from 0 to 3000]
The control signal is therefore the most important element of your control strategy.
Assume u(i) is the control signal at the present time; it is given by

u(i) = u(i − 1) + ∆u (1.1)

Here, ∆u is the value added, and it depends on your control method.
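As a minimal sketch, assuming a simple proportional rule for ∆u (equation 1.1 only fixes the accumulation; the choice of ∆u is left to the control method):

```python
# Incremental control law u(i) = u(i-1) + delta_u (equation 1.1).
# The proportional rule for delta_u below is an illustrative assumption;
# in practice delta_u comes from whatever control method is used.

def next_control(u_prev, error, gain=0.1):
    """Return u(i) given u(i-1) and the current error."""
    delta_u = gain * error      # assumed rule: delta_u proportional to error
    return u_prev + delta_u

u = 0.0
u = next_control(u, 1500.0 - 1000.0)   # set point 1500, measured value 1000
```

Any control strategy discussed later (PID, fuzzy, neural) can be seen as a different way of computing ∆u.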
Example 1.2: A simple control system (*)
Fig 1.2 Manual control of a simple process
In the process example shown (Figure 1.2), the operator manually varies the flow of water by opening or closing an inlet valve to ensure that the water runs out of the tank at a rate within a required range.
Initially, the outlet valve in the discharge pipe is fixed at a certain position. The operator has marked three lines on the side of the tank to help him manipulate the water supply via the inlet valve. The ideal level lies between level 1 and level 2.
The example (Figure 1.2) demonstrates the following terms:
o The flow valve is referred to as the Controlled Device
o The signal used to operate the controlled device is known as the Control Signal
o The water itself is known as the Control Agent
o The change in water level is known as the Controlled Variable
o The level of water trying to be maintained is known as the Set Point
o The water level at steady-state conditions is referred to as the Control Value
o The difference between the Set Point and the Control Value is Deviation
o If the inlet valve is closed to a new position, the water level will drop and the deviation will change. A sustained deviation is known as Offset
(*):http://www.spiraxsarco.com/resources/steam-engineering-tutorials/basic-control-theory/an-introduction-to-controls.asp
Elements of automatic control
Fig 1.3 Elements of automatic control
o The operator's eye detects movement of the water level against the marked scale indicator. His eye could be thought of as a Sensor
o The eye (sensor) signals this information back to the brain, which notices a deviation. The brain could be thought of as a Controller
o The brain (controller) sends a signal to the arm muscle and hand, which could be thought of as an Actuator
o The arm muscle and hand (actuator) turn the valve, which could be thought of as a Controlled Device
Example 1.3: A simple manual temperature control system (*)
The task is to admit sufficient steam (the heating medium) to heat the incoming water from a temperature of T1, ensuring that hot water leaves the tank at a required temperature of T2.
Fig 1.4 Simple manual temperature control
Fig 1.5 Typical mix of process control devices with system element
(*):http://www.spiraxsarco.com/resources/steam-engineering-tutorials/basic-control-theory/an-introduction-to-controls.asp
Note:
There are three major reasons why process plant or buildings require automatic controls:
o Safety - The plant or process must be safe to operate
o Stability - The plant or processes should work steadily, predictably and repeatably,
without fluctuations or unplanned shutdowns
o Accuracy - This is a primary requirement in factories and buildings to prevent spoilage, increase quality and production rates, and maintain comfort. These are the fundamentals of economic efficiency
Other desirable benefits such as economy, speed, and reliability are also important, but it
is against the three major parameters of safety, stability and accuracy that each control application will be measured
How to control the system?
A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g. voltage applied to an electric motor) have an effect on the process outputs (e.g. velocity or torque of the motor), which are measured with sensors and processed by the controller; the result (the control signal) is used as input to the process, closing the loop.
Closed-loop controllers have the following advantages over open-loop controllers:
o Disturbance rejection (such as unmeasured friction in a motor)
o Guaranteed performance even with model uncertainties, when the model structure does not match perfectly the real process and the model parameters are not exact
o Unstable processes can be stabilized
o Reduced sensitivity to parameter variations
o Improved reference tracking performance
In some systems, closed-loop and open-loop control are used simultaneously In such systems, the open-loop control is termed feed-forward and serves to further improve reference tracking performance
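The loop described above can be sketched with a proportional controller driving an assumed first-order plant (the plant model and all numbers are illustrative assumptions):

```python
# Closed-loop (feedback) control: the measured output y is fed back,
# the error drives the controller, and the control signal u drives the plant.
# The first-order plant tau*dy/dt = u - y and the gains are assumptions.

def simulate(setpoint=1.0, kp=2.0, tau=1.0, dt=0.01, steps=1000):
    y = 0.0                      # measured process output
    for _ in range(steps):
        error = setpoint - y     # sensor feedback closes the loop
        u = kp * error           # proportional controller
        y += dt * (u - y) / tau  # Euler step of the plant dynamics
    return y

final = simulate()               # settles near kp/(1+kp) of the set point
```

The residual steady-state error of a pure proportional controller is the sustained deviation (offset) named in Example 1.2; integral action removes it in practice.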
Classical Control Theory: (*)
o Classical control techniques can be broken up into frequency-domain techniques and time-domain techniques. Frequency-domain techniques are a suite of analysis and design tools that include Root Locus, Pole Placement, Bode plots, and Nyquist plots. State-space techniques are the only part of classical control done in the time domain.
o The limitations of classical control are: it assumes that the system we are trying to control is linear, that its model (the transfer function of the system) is given, and that the initial conditions are equal to zero.
Modern Control Theory: (*)
o In contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations
o To abstract from the number of inputs, outputs and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state-space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With p inputs and q outputs, we would otherwise have to write down q × p Laplace transforms to encode all the information about a system.
o Unlike the frequency domain approach, the use of the state space representation is not limited to systems with linear components and zero initial conditions "State space" refers to the space whose axes are the state variables The state of the system can be represented as a vector within that space
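As a sketch of the state-space idea, the first-order differential-equation form x' = Ax + Bu can be simulated directly; the mass-spring-damper matrices, input, and step size below are illustrative assumptions:

```python
# State-space simulation: x' = A x + B u, integrated with forward Euler.
# State vector x = [position, velocity] for an assumed mass-spring-damper.

def step(x, u, A, B, dt):
    dx0 = A[0][0]*x[0] + A[0][1]*x[1] + B[0]*u   # first state equation
    dx1 = A[1][0]*x[0] + A[1][1]*x[1] + B[1]*u   # second state equation
    return [x[0] + dt*dx0, x[1] + dt*dx1]

A = [[0.0, 1.0], [-1.0, -0.5]]   # assumed spring k/m = 1, damping c/m = 0.5
B = [0.0, 1.0]
x = [0.0, 0.0]                   # initial state: at rest at the origin
for _ in range(30000):           # 30 s of simulated time with dt = 0.001
    x = step(x, 1.0, A, B, 0.001)
# with constant u = 1, position -> 1 and velocity -> 0 at steady state
```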
Intelligent Control Theory: (**)
o Intelligent control is a class of control techniques that use various artificial-intelligence computing approaches such as neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms
(*):http://en.wikipedia.org/wiki/Control_theory#Classical_control_theory
(**):http://www.nd.edu/~pantsakl/control.html
o Intelligent control does not restrict itself only to those methodologies. In fact, according to some definitions of intelligent control, not all neural/fuzzy controllers would be considered intelligent. The fact is that there are problems of control which cannot be formulated and studied in the conventional differential/difference-equation mathematical framework. To address these problems in a systematic way, a number of methods have been developed that are collectively known as intelligent control methodologies.
o The area of intelligent control is in fact interdisciplinary, and it attempts to combine and extend theories and methods from areas such as control, computer science and operations research to attain demanding control goals in complex systems
o In intelligent control problems there may not be a clear separation of the plant and the controller; the control laws may be embedded in and form part of the system to be controlled. This opens new opportunities and challenges, as it may be possible to affect the design of processes in a more systematic way.
Conclusions
o Goal: a control system that behaves as we desire
o System: complex, uncertain and nonlinear, so that mathematical equations cannot be derived for it
o Control strategy: compute the value added (∆u) at each step that helps the system reach the desired goal
o Control method: intelligent control, which uses artificial-intelligence computing without regard to a mathematical model of the system
Fig 2.1 Biological neuron
A neuron receives input from other neurons (typically many thousands). Inputs sum (approximately). Once the input exceeds a critical level, the neuron discharges a spike - an electrical pulse that travels from the body, down the axon, to the next neurons or other receptors.
(*)http://www.willamette.edu/~gorr/classes/cs449/brain.html
The axon endings (output zone) almost touch the dendrites or cell body of the next neuron. Transmission of an electrical signal from one neuron to the next is effected by neurotransmitters, chemicals which are released from the first neuron and which bind to receptors in the second. This link is called a synapse. The extent to which the signal from one neuron is passed on to the next depends on many factors, e.g. the amount of neurotransmitter available, the number and arrangement of receptors, the amount of neurotransmitter reabsorbed, etc.
Synaptic Learning:
Brains learn. From what we know of neuronal structures, one way brains learn is by altering the strengths of connections between neurons, and by adding or deleting connections between neurons. Furthermore, they learn "on-line", based on experience. The efficacy of a synapse can change as a result of experience, providing both memory and learning through long-term potentiation. One way this happens is through the release of more neurotransmitter. Many other changes may also be involved.
Fig 2.2 Synaptic Learning
Artificial Neuron Models
When modeling an artificial functional model of the biological neuron, we must take into account three basic components. First, the synapses of the biological neuron are modeled as weights. Remember that the synapse of the biological neuron is what interconnects the neural network and gives the strength of the connection.
For an artificial neuron, the weight is a number that represents the synapse. A negative weight reflects an inhibitory connection, while positive values designate excitatory connections. The following components of the model represent the actual activity of the neuron cell. All inputs are summed together and modified by the weights; this activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output.
Fig 2.3 Neuron model. (a): Detail of the neuron model; (b): Shortened description of the neuron model
o The weighted sum is called the net input to unit i, often written net_i
o Note that w_i refers to the weight from unit i
o The function f is the unit's activation function. In the simplest case, f is the identity function, and the unit's output is just its net input. This is called a linear unit:

a = f(net)
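A minimal sketch of this neuron model (the weights, bias, and input values are illustrative assumptions):

```python
import math

# One artificial neuron: net = sum_j w_j * p_j + b, output a = f(net).
# Weights, bias, and inputs are illustrative assumptions.

def neuron(inputs, weights, bias, f=lambda n: n):
    net = sum(w * p for w, p in zip(weights, inputs)) + bias  # linear combination
    return f(net)                                             # activation function

linear_out = neuron([1.0, 2.0], [0.5, -0.25], 0.1)            # linear unit: a = net
sigmoid_out = neuron([1.0, 2.0], [0.5, -0.25], 0.1,
                     f=lambda n: 1.0 / (1.0 + math.exp(-n)))  # log-sigmoid unit
```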
Artificial Neural Network
A neural network consists of an interconnected group of artificial neurons, and it processes information using a connectionist approach to computation
Neural network with one layer (R inputs and S outputs)
Multi-layer neural network
Fig 2.5 Multi-layer neural network
Types of neural networks
Fig 2.6 A taxonomy of feed-forward and recurrent/feedback network architectures (*)
Multilayer perceptron
A multilayer perceptron (MLP) is a feed-forward artificial neural network model that maps sets of input data onto a set of appropriate outputs. An MLP consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next one. Except for the input nodes, each node is a neuron (or processing element) with a nonlinear activation function. An MLP utilizes a supervised learning technique called back-propagation for training the network.
Fig 2.7 Multilayer perceptron (**)
(*) Anil K. Jain, Michigan State University, USA
(**)http://www.mathworks.de/help/toolbox/nnet/
Radial basis function network
A radial basis function (RBF) network is an artificial neural network that uses radial basis functions, such as a Gaussian function, as activation functions. RBF networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function, and a linear output layer. The simplest choice of activation is a Gaussian, f(x) = exp(−a·x²) (*)
Fig 2.8 Gaussian activation function
Fig 2.9 Radial basis function network (**)
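A sketch of the Gaussian basis function and of an RBF network output as a weighted sum of such units (the centres, weights, and width parameter a are illustrative assumptions):

```python
import math

# Gaussian radial basis activation f(x) = exp(-a * x**2): it peaks at the
# centre of the basis function and decays with distance from it.

def gaussian_rbf(x, center=0.0, a=1.0):
    r = x - center                 # distance from the basis-function centre
    return math.exp(-a * r * r)

# An RBF network output is a weighted sum of such hidden units,
# followed by the linear output layer (weights are assumed values).
def rbf_net(x, centers, weights, a=1.0):
    return sum(w * gaussian_rbf(x, c, a) for c, w in zip(centers, weights))
```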
Competitive neural network
A competitive neural network is an artificial neural network that uses competitive functions as activation functions. Competitive learning is a process in which the output neurons of the network compete among themselves to be activated, with the result that only one output neuron is on at any time. The output neuron that wins the competition is called a winner-takes-all neuron; as a result of the competition, it becomes specialized to respond to specific features in the input data.
Fig 2.10 Competitive neural network (**)
(*)http://www.willamette.edu/~gorr/classes/cs449/Maple/ActivationFuncs/active.html
(**)http://www.mathworks.de/help/toolbox/nnet/
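The winner-takes-all competition can be sketched as follows (the weight vectors are illustrative assumptions):

```python
# Winner-takes-all competition: the output neuron whose weight vector best
# matches the input is the only one activated.

def compete(inputs, weight_rows):
    # score each output neuron by the inner product of its weights and the input
    scores = [sum(w * p for w, p in zip(row, inputs)) for row in weight_rows]
    winner = max(range(len(scores)), key=scores.__getitem__)
    return [1 if i == winner else 0 for i in range(len(scores))]  # one-hot output

out = compete([1.0, 0.0], [[0.9, 0.1], [0.2, 0.8]])  # neuron 0 matches best
```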
Kohonen's self-organizing maps (SOM)
A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map. Self-organizing maps differ from other artificial neural networks in that they use a neighborhood function to preserve the topological properties of the input space.
Fig 2.11 SOM topology. (a): Simple neural network topology; (b): SOM topology
Hopfield network
A Hopfield network is a form of recurrent artificial neural network invented by John Hopfield. A recurrent neural network (RNN) is a class of neural network where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feed-forward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs.
Fig 2.12 Hopfield network. (a): Hopfield network topology (*); (b): Hopfield network in general, with Matlab (**)
(*)http://en.wikipedia.org/wiki/Hopfield_network
ART model (Adaptive Resonance Theory Neural network)
The simplified ART1 model consists of two layers of binary neurons (with values 1 and 0), called F1 (the comparison layer) and F2 (the recognition layer). Each neuron in F1 is connected to all neurons in F2 via the continuous-valued forward long-term memory (LTM) Wf, and vice versa via the binary-valued backward LTM Wb. The other modules are gains 1 and 2 (G1 and G2) and a reset module. Each neuron in the comparison layer receives three inputs: a component of the input pattern, a component of the feedback pattern, and the gain G1. A neuron outputs a 1 if and only if at least two of these three inputs are high: the 'two-thirds rule'. The neurons in the recognition layer each compute the inner product of their incoming (continuous-valued) weights and the pattern sent over these connections. The winning neuron then inhibits all the other neurons via lateral inhibition. Gain 2 is the logical 'or' of all the elements in the input pattern x. Gain 1 equals gain 2, except when the feedback pattern from F2 contains any 1; then it is forced to zero. Finally, the reset signal is sent to the active neuron in F2 if the input vector x and the output of F1 differ by more than some vigilance level.
Fig 2.13 ART model neural network (*)
(*) http://www.learnartificialneuralnetworks.com/art.html
Neural network applications
Figure 2.14 summarizes various learning algorithms and their associated network architectures (this is not an exhaustive list). Both supervised and unsupervised learning paradigms employ learning rules based on error-correction, Hebbian, and competitive learning. Learning rules based on error-correction can be used for training feed-forward networks, while Hebbian learning rules have been used for all types of network architectures. However, each learning algorithm is designed for training a specific architecture. Therefore, when we discuss a learning algorithm, a particular network architecture association is implied. Each algorithm can perform only a few tasks well. The last column of Fig 2.14 lists the tasks that each algorithm can perform.
For control applications, the multilayer perceptron neural network is applied.
Fig 2.14 Neural network applications (*)
(*) Anil K. Jain, Michigan State University, USA
2.2 Neural Network Computation
Simple neuron computation
Fig 2.15 Simple neuron computation (*)
Multiple layers of neurons
Fig 2.18 Multiple layers of neurons (*)

The number of neurons in the hidden layer (S^m) depends on the complexity of the problem.
Example: Computation of a feed-forward neural network
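A sketch of such a two-layer feed-forward pass, a1 = logsig(W1·p + b1) followed by a linear output layer (the weights and biases are illustrative assumptions, not the values of the figure):

```python
import math

# Two-layer feed-forward pass: hidden layer a1 = logsig(W1 p + b1),
# linear output layer a2 = W2 a1 + b2. All numeric values are assumed.

def logsig(n):
    return 1.0 / (1.0 + math.exp(-n))

def forward(p, W1, b1, W2, b2):
    a1 = [logsig(sum(w * x for w, x in zip(row, p)) + b)   # hidden layer
          for row, b in zip(W1, b1)]
    a2 = [sum(w * a for w, a in zip(row, a1)) + b          # linear output layer
          for row, b in zip(W2, b2)]
    return a1, a2

a1, a2 = forward([1.0], W1=[[1.0], [-1.0]], b1=[0.0, 0.0],
                 W2=[[0.5, 0.5]], b2=[0.0])
```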
Fig 2.22 Computation of a feed-forward neural network (*)

2.3 Training
A simple training scheme for a neural network
Fig 2.23 Supervised training method for a neural network
(*)http://galaxy.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html
Example of training
(continuing the previous computation)
Training
Fig 2.24 Example of training (*)
(*)http://galaxy.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html
Back-propagation training algorithm
Theory of the back-propagation method
Fig 2.25 Multi-layer neural network (*)

Training set: {p1, t1}, {p2, t2}, ..., {pQ, tQ}

Mean square error: F(x) = E[e^2] = E[(t − a)^2]

Vector case: F(x) = E[e'e] = E[(t − a)'(t − a)]

Approximate mean square error (single sample): F̂(x) = e'(k)e(k)

Approximate steepest descent:
w(k+1) = w(k) − α·∂F̂/∂w,   b(k+1) = b(k) − α·∂F̂/∂b

Chain rule, illustrated with f(n(w)) = cos(n) and n = e^(2w):
d f(n(w))/dw = (d f(n)/dn)·(d n(w)/dw) = (−sin(n))·(2e^(2w)) = (−sin(e^(2w)))·(2e^(2w))

Application to gradient calculation

Gradient calculation: the weights and biases of layer m are updated as
w_ij^m(k+1) = w_ij^m(k) − α·s_i^m·a_j^(m−1)
b_i^m(k+1) = b_i^m(k) − α·s_i^m
Example: Function approximation (*)
Fig 2.26 Example function approximation

Here is a network for the task:
Fig 2.27 Proposed network for function approximation
(*):Martin T Hagan: Neural Network Design
The sensitivity of layer m is defined as s^m ≡ ∂F̂/∂n^m.

Fig 2.28 Structure of the proposed neural network

Initial conditions:
W1(0) = [−0.27; −0.41],   b1(0) = [−0.48; −0.13],   W2(0) = [0.09  −0.17],   b2(0) = [0.48]

Fig 2.29 Network response with the sine wave
For the input p = 1 and target t = 1 + sin(π·p/4) = 1.707, the forward pass gives:

a1 = logsig(W1(0)·p + b1(0)) = logsig([−0.75; −0.54]) = [0.321; 0.368]
a2 = W2(0)·a1 + b2(0) = 0.446
e = t − a2 = 1.261

The sensitivities are:

s2 = −2·(1)·e = −2.522
s1 = diag(0.218, 0.233)·(W2)'·s2 = diag(0.218, 0.233)·[−0.227; 0.429] = [−0.0495; 0.0997]

where 0.218 = a1_1·(1 − a1_1) and 0.233 = a1_2·(1 − a1_2). With learning rate α = 0.1, the updates are:

W2(1) = W2(0) − α·s2·(a1)' = [0.171  −0.0772],   b2(1) = b2(0) − α·s2 = 0.732
W1(1) = W1(0) − α·s1·p = [−0.265; −0.420],   b1(1) = b1(0) − α·s1 = [−0.475; −0.140]
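One back-propagation iteration of this 1-2-1 network (log-sigmoid hidden layer, linear output) can be sketched in Python; the initial weights follow the example's initial conditions, and the learning rate α = 0.1 is assumed:

```python
import math

def logsig(n):
    return 1.0 / (1.0 + math.exp(-n))

# One backpropagation iteration for a 1-2-1 network (logsig hidden layer,
# linear output neuron). alpha = 0.1 is an assumed learning rate.
W1, b1 = [-0.27, -0.41], [-0.48, -0.13]      # hidden layer (2 neurons, 1 input)
W2, b2 = [0.09, -0.17], 0.48                 # output layer
alpha, p = 0.1, 1.0
t = 1.0 + math.sin(math.pi / 4.0 * p)        # target: 1 + sin(pi*p/4)

# forward pass
n1 = [w * p + b for w, b in zip(W1, b1)]     # net inputs [-0.75, -0.54]
a1 = [logsig(n) for n in n1]                 # hidden outputs ~[0.321, 0.368]
a2 = sum(w * a for w, a in zip(W2, a1)) + b2 # network output ~0.446
e = t - a2                                   # error ~1.261

# backward pass: sensitivities
s2 = -2.0 * e                                # linear output neuron: f' = 1
s1 = [a * (1.0 - a) * w * s2 for a, w in zip(a1, W2)]  # ~[-0.0495, 0.0997]

# steepest-descent updates
W2 = [w - alpha * s2 * a for w, a in zip(W2, a1)]
b2 = b2 - alpha * s2
W1 = [w - alpha * s * p for w, s in zip(W1, s1)]
b1 = [b - alpha * s for b, s in zip(b1, s1)]
```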
Given a function, the trained network reproduces it; this demonstrates the function-approximation capacity of the neural network.
Fig 2.31 The function-approximation capacity of the neural network
2.4 Neural Network Controller
A simple self-tuning PID controller using a neural network
Fig 2.32 A simple self-tuning PID controller using a neural network
Fig 2.33 Block diagram of the neural network

The activation function is given as:

u = f(x) = Y*·(1 − e^(−2x/Yg)) / (1 + e^(−2x/Yg))
By tuning Yg, we obtain the function shapes shown in Figure 2.34.
Fig 2.34 The activation function shapes
How does the controller work?
The input signal of the activation function in the output layer, x, becomes:

x(k) = Kp·e(k) + Ki·T·Σ e(n) + Kd·(e(k) − e(k−1))/T
     = Kp·e(k) + Ki·T·Σ e(n) + Kd·(1 − z^(−1))·e(k)/T

where
e(k) = θr(k) − θ(k): error at step k
Σ e(n): sum of e(n) for n = 0 to k
T: sampling time
z: operator of the Z-transform
k: discrete-time sequence index
θr(k) and θ(k) are the set point and the output of the joint of the manipulator, respectively.
To tune the gains of the PID controller, the steepest descent method with the following equations was applied:
Kp(k+1) = Kp(k) − ηp·∂E(k)/∂Kp
Ki(k+1) = Ki(k) − ηi·∂E(k)/∂Ki
Kd(k+1) = Kd(k) − ηd·∂E(k)/∂Kd
where ηp, ηi, ηd are learning rates determining convergence speed, and E(k) is the
error defined by the following equation:
E(k) = (1/2)·e²(k)
Using the chain rule, we get the following equations:
∂E(k)/∂Kp = (∂E(k)/∂θ(k))·(∂θ(k)/∂u(k))·(∂u(k)/∂x(k))·(∂x(k)/∂Kp)
∂E(k)/∂Ki = (∂E(k)/∂θ(k))·(∂θ(k)/∂u(k))·(∂u(k)/∂x(k))·(∂x(k)/∂Ki)
∂E(k)/∂Kd = (∂E(k)/∂θ(k))·(∂θ(k)/∂u(k))·(∂u(k)/∂x(k))·(∂x(k)/∂Kd)
The following equations are derived:
∂E(k)/∂θ(k) = −e(k) = −(θr(k) − θ(k))
∂u(k)/∂x(k) = f'(x(k))
∂x(k)/∂Kp = e(k);   ∂x(k)/∂Ki = T·Σ e(n);   ∂x(k)/∂Kd = (e(k) − e(k−1))/T
∂E(k)/∂Kp = −e(k)·(∂θ(k)/∂u(k))·f'(x(k))·e(k) = −e²(k)·(∂θ(k)/∂u(k))·f'(x(k))
∂E(k)/∂Ki = −e(k)·(∂θ(k)/∂u(k))·f'(x(k))·T·Σ e(n)
∂E(k)/∂Kd = −e(k)·(∂θ(k)/∂u(k))·f'(x(k))·(e(k) − e(k−1))/T
Taking ∂θ(k)/∂u(k) ≈ 1 and using the derivative of the activation function,

f'(x) = 4·Y*·e^(−2x/Yg) / (Yg·(1 + e^(−2x/Yg))²)

the final gain-update laws become:

Kp(k+1) = Kp(k) + ηp·e(k)·e(k)·f'(x(k))
Ki(k+1) = Ki(k) + ηi·e(k)·(T·Σ e(n))·f'(x(k))
Kd(k+1) = Kd(k) + ηd·e(k)·((e(k) − e(k−1))/T)·f'(x(k))
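The whole self-tuning scheme can be sketched as follows. The first-order plant standing in for the manipulator joint, the values of Y*, Yg and the learning rates, and the approximation ∂θ/∂u ≈ 1 are all illustrative assumptions:

```python
import math

# Self-tuning PID sketch: gains follow steepest descent on E = 0.5*e^2
# through the bounded activation u = f(x). All numeric values are assumed.

Ystar, Yg = 1.0, 1.0
def f(x):                                   # bounded activation, Y* tanh-shaped
    return Ystar * (1.0 - math.exp(-2.0*x/Yg)) / (1.0 + math.exp(-2.0*x/Yg))

def fprime(x):                              # derivative of the activation
    ex = math.exp(-2.0*x/Yg)
    return 4.0 * Ystar * ex / (Yg * (1.0 + ex)**2)

Kp, Ki, Kd = 0.5, 0.1, 0.05                 # initial PID gains (assumed)
eta_p = eta_i = eta_d = 0.01                # learning rates (assumed)
T, theta, theta_r = 0.1, 0.0, 0.8           # sampling time, output, set point
e_prev, e_sum = 0.0, 0.0
for k in range(300):
    e = theta_r - theta                     # e(k) = theta_r(k) - theta(k)
    e_sum += e
    ep, ei, ed = e, T*e_sum, (e - e_prev)/T # dx/dKp, dx/dKi, dx/dKd
    x = Kp*ep + Ki*ei + Kd*ed               # net input of the output neuron
    u = f(x)                                # bounded control signal
    g = e * fprime(x)                       # common factor, with dtheta/du ~ 1
    Kp += eta_p * g * ep                    # steepest-descent gain updates
    Ki += eta_i * g * ei
    Kd += eta_d * g * ed
    theta += T * (u - theta)                # assumed first-order plant
    e_prev = e
```

With these assumptions the error shrinks over the run while the gains adapt online, which is the behavior the derivation above aims for.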