EVOLUTION OF ARTIFICIAL NEURAL NETWORK CONTROLLER FOR A BOOST CONVERTER
VASANTH SUBRAMANYAM (B.E., Anna University, India)
A THESIS SUBMITTED FOR THE DEGREE OF MASTER OF ENGINEERING
DEPARTMENT OF ELECTRICAL & COMPUTER ENGINEERING
NATIONAL UNIVERSITY OF SINGAPORE
2007
Acknowledgements
I would like to thank all the people who have helped me during my study at the National University of Singapore. First and foremost, I would like to thank my supervisors, Assoc. Prof. Dipti Srinivasan and Assoc. Prof. Ramesh Oruganti. They have been extremely enthusiastic and supportive of this research, which has helped me learn many new aspects of artificial intelligence and control. I appreciate both of them for their innovative ideas and profound knowledge of artificial intelligence and control. Without their encouragement and support, this study would not have been possible.
I had the pleasure of interacting with many research students from the Power Systems Laboratory, Power Electronics Laboratory and Electrical Machines Laboratory
My sincere thanks to all of them for the wonderful time we had working and helping each other in the laboratories
My warmest thanks and regards to the Laboratory Officer, Mr Seow Heng Cheng, for his helpful nature and his dedication to making the laboratory such a pleasant place to work. Without his support, it would have been impossible to carry out the research in the laboratory.
And finally, no words suffice to express my heartfelt gratitude to my parents. I would never have come so far in life without their constant love, support and encouragement.
Contents
ACKNOWLEDGEMENTS
CONTENTS
SUMMARY
LIST OF FIGURES
LIST OF TABLES
LIST OF SYMBOLS AND ABBREVIATIONS

CHAPTER 1: INTRODUCTION
1.1 Motivation
1.2 Thesis Objectives and Achievements
1.3 Problem Statement
1.4 Structure of the Thesis

CHAPTER 2: BACKGROUND
2.1 Artificial Neural Networks
2.2 Particle Swarm Optimization Algorithm
2.2.1 Adaptive PSO

CHAPTER 3: LITERATURE SURVEY
3.1 Literature Survey
3.2 Summary

CHAPTER 4: OVERVIEW OF A BOOST CONVERTER AND ITS CONTROL DIFFICULTIES
4.1 Introduction to DC-DC Power Converters
4.2 Operating Principle of a Boost Converter
4.2.1 Continuous Conduction Mode
4.2.2 Discontinuous Conduction Mode
4.2.3 Limit between Continuous and Discontinuous Modes
4.2.4 Effect of Parasitic Resistances
4.3 Control Difficulties with the Boost Converter
4.4 Summary

CHAPTER 5: PROPOSED DESIGN APPROACH FOR ARTIFICIAL NEURAL NETWORK CONTROLLER
5.1 Control Scheme of the ANN Controller
5.2 Configuration of Neuron and Neural Network
5.3 Mathematical Modeling and Stability Analysis of the Feed-Forward ANN Controller
5.4 Neural Network Design Using a Dual-Layered Particle Swarm Optimization Algorithm
5.4.1 Principle of the Dual-Layer Particle Swarm Optimization Algorithm
5.4.2 Implementation of the DLPSO Algorithm
5.5 Benchmark PI Controller Design for a Boost Converter
5.6 Weight Optimization Using a GA- and PSO-Based Hybrid Approach
5.7 Summary

CHAPTER 6: SIMULATION RESULTS AND ANALYSIS
6.1 Optimization of the Parameters of the DLPSO Algorithm
6.2 Simulation Results
6.3 Tests Conducted on the ANN Controller Designed Using the DLPSO Algorithm at Various Operating Points
6.4 Comparison of a Benchmark PI Controller and the ANN Controller Designed by the DLPSO Algorithm
6.5 Comparison of the GA Used in the Hybrid Approach and the PSO Used in the DLPSO Algorithm for Learning the Weights of the ANN Controller
6.6 Discussion on the Performance of the Proposed Approach
6.7 Summary

CHAPTER 7: CONCLUSIONS AND FUTURE WORK
7.1 Conclusions
7.2 Future Work

BIBLIOGRAPHY
LIST OF PUBLICATIONS
Conference Paper

APPENDIX: SOFTWARE CODES FOR THE DLPSO ALGORITHM
A. First Layer of PSO for Structural Optimization
B. Second Layer of DLPSO for Weight Optimization
C. Fitness Evaluation of the ANN Controller for the DLPSO Algorithm
D. Neural Network Architecture Being Created in Simulink
E. Neuron Configuration in an ANN Controller Optimized by the DLPSO Algorithm
Summary
In recent years, artificial intelligence techniques such as neural networks and biologically inspired algorithms have gained immense popularity owing to their ability to solve complex problems in unconventional ways. Applying such techniques to complex control engineering problems opens a new dimension in control engineering. This thesis focuses on the design of controllers, one such complex problem, where the application of artificial intelligence techniques is justified.
The complexity of the controller design process lies in the optimization of the following decision parameters:
· the total number of signal processing blocks to be employed in the controller;
· the type of each block (e.g., lead, lag, gain, integrator, differentiator, adder, inverter, subtractor, multiplier);
· the tuning of the parameters of all the blocks and the topological interconnections between them;
· whether or not to employ internal feedback (i.e., feedback inside the controller).
The optimization of these decision parameters, combined with expertise and knowledge of the system to be controlled, constitutes the design of the controller. The emphasis of this thesis is on automating the controller design process using artificial intelligence techniques, without prior knowledge of the system to be controlled.
This thesis investigates the feasibility of applying a hybrid approach to the automated design of controllers, one that combines the concepts of artificial neural networks (ANN) and swarm intelligence. The problem that must be addressed in order to automate the design of the ANN controller is the design of the ANN itself, which involves the following decisions:
· The structure of the ANN
· The tuning of the weights of the ANN
Thus, this thesis proposes an algorithm, the "Dual-Layered Particle Swarm Optimization (DLPSO) algorithm", for effectively designing the structure and tuning the weights of the ANN. The algorithm consists of two layers of operation: one designs the architecture and the other tunes the weights. The two-layer concept is the key feature because, for every configuration developed by the algorithm, the weights are tuned to the optimum. Hence, the ANN controller designed by this method can be considered to have an optimal structure for the particular application. The advantage of this approach is that it requires minimal human intervention and minimal domain knowledge of the system being controlled. Moreover, the neural network learning used here is classified as unsupervised, since the outputs vary with the inputs and there is therefore no fixed input-output training data set. This justifies the application of the DLPSO algorithm for designing both the structure and the weights of the ANN controller.
In this thesis, the ANN controller designed by the above method is tested on a classical boost converter, a non-minimum-phase, non-linear system that is difficult to control. Generally, the non-minimum-phase characteristic is handled by controlling the output voltage indirectly, that is, by selecting a different measured variable so as to make the system minimum-phase; commonly, the inductor current is used as the minimum-phase output. This improves the dynamic response of the system, where dynamic response refers to how effectively the actual output voltage tracks the reference output voltage. To improve the dynamic response and to stabilize and control the output voltage at different operating points, the advantages of ANNs, namely their interconnectivity and learning capability, are exploited. The ANN controller takes feedback from the output of the system and computes the error between the reference input and the output, which serves as the input to the ANN. The control signal to the boost converter is its duty-cycle ratio; by controlling the duty-cycle ratio, the output voltage of the converter is regulated. The performance of the controller is evaluated on the basis of its transient response. The transient analysis of the boost converter is carried out by applying a step change to the reference output voltage and determining the dynamic performance of the ANN controller from the actual output voltage. The dynamic performance indicators used are the overshoot voltage and the settling time. This is carried out for various reference voltages, i.e., at different operating points. The performance is compared with that of a conventional PI controller, which serves as a benchmark for evaluating the ANN controller.
Thus, in this thesis, the proposed DLPSO algorithm is benchmarked against a conventional PI controller. The simulation results show that the ANN controller is faster in terms of settling time and better than or comparable to the PI controller in terms of overshoot. Hence, for this application, the dynamic performance of the ANN controller designed using the DLPSO algorithm is better than that of the conventional PI controller. Moreover, the performance of the PSO algorithm in training the weights is shown to be better than that of a conventional genetic algorithm, the parameters for comparison being the computational time and the number of generations needed to obtain the resulting weights of the ANN. Detailed analysis and simulation studies are presented to substantiate the novelty of this controller design strategy.
List of Figures
Figure 1.1: Block Diagram of the DLPSO Algorithm
Figure 2.1: Schematic of a Biological Neuron
Figure 2.2: Multi-Layer Perceptron
Figure 4.1: Schematic of the Boost Converter
Figure 4.2: Voltage and Current Waveforms for CCM
Figure 4.3: Voltage and Current Waveforms for DCM
Figure 4.4: Evolution of the Normalized Output Voltage of an Ideal Boost Converter with the Normalized Output Current
Figure 4.5: Evolution of the Output Voltage of a Boost Converter with the Duty Cycle as the Parasitic Resistance of the Inductor Increases
Figure 5.1: Control Scheme Block Diagram
Figure 5.2: Simulink Diagram of the State-Space Averaged Model of the Boost Converter
Figure 5.3: Principle of the PWM Generator
Figure 5.4: Structure of a Perceptron
Figure 5.5: Structure of a Feed-Forward ANN with One Hidden Layer and an Output Neuron
Figure 5.6: System Representation
Figure 5.7: Configuration of the ANN Controller
Figure 5.8: Flowchart of the DLPSO Algorithm
Figure 5.11: Bode Plot of the Control-to-Output Transfer Function of a Classical Boost Converter at VS = 12.5 V, V0 = 25 V and R = 12.5 Ω
Figure 5.12: Flowchart of the Hybrid Algorithm
Figure 6.1: Simulink Diagram Showing the Control Scheme of the Boost Converter
Figure 6.2: Best Fitness vs. Number of Generations for the Architecture Optimization, Case 1
Figure 6.3: Best Fitness vs. Number of Generations for the Architecture Optimization, Case 2
Figure 6.4: Best Fitness vs. Number of Generations for the Architecture Optimization, Case 3
Figure 6.5: Best Fitness vs. Number of Generations for the Weights Optimization
Figure 6.6: Output Voltage (top) and Inductor Current (bottom) for the Structure- and Weight-Optimized ANN
Figure 6.7: Waveform of Output Voltage at an Input Voltage of 12.5 V, a Load Resistance of 13 Ω and an Output Current of 2 A
Figure 6.8: Waveform of Inductor Current at an Input Voltage of 12.5 V, a Load Resistance of 13 Ω and an Output Current of 2 A
Figure 6.9: Waveform of Output Voltage at an Input Voltage of 12.5 V, a Load Resistance of 52 Ω and an Output Current of 0.5 A
Figure 6.10: Waveform of Inductor Current at an Input Voltage of 12.5 V, a Load Resistance of 52 Ω and an Output Current of 0.5 A
Figure 6.11: Waveform of Output Voltage at an Input Voltage of 15 V, a Load Resistance of 26 Ω and an Output Current of 1 A
Figure 6.12: Waveform of Inductor Current at an Input Voltage of 12.5 V, a Load Resistance of 26 Ω and an Output Current of 1 A
Figure 6.13: Large Step Change of 4 V: Waveform of Output Voltage at an Input Voltage of 12.5 V, a Load Resistance of 13.5 Ω and an Output Current of 2 A
Figure 6.14: Large Step Change of 4 V: Waveform of Inductor Current at an Input Voltage of 12.5 V, a Load Resistance of 13.5 Ω and an Output Current of 2 A
Figure 6.15: Large Step Change of 4 V: Waveform of Output Voltage at an Input Voltage of 12.5 V, a Load Resistance of 54 Ω and an Output Current of 0.5 A
Figure 6.16: Large Step Change of 4 V: Waveform of Inductor Current at an Input Voltage of 12.5 V, a Load Resistance of 54 Ω and an Output Current of 0.5 A
Figure 6.17: Large Step Change of 4 V: Waveform of Output Voltage at an Input Voltage of 15 V, a Load Resistance of 27 Ω and an Output Current of 1 A
Figure 6.18: Large Step Change of 4 V: Waveform of Inductor Current at an Input Voltage of 15 V, a Load Resistance of 27 Ω and an Output Current of 1 A
Figure 6.19: Best Fitness vs. Number of Generations for the Structural Optimization Phase
Figure 6.20: Best Fitness vs. Number of Generations for the Second Layer of the DLPSO Algorithm (Learning of the ANN Weights)
Figure 6.21: Simulink Diagram Showing a Sample Architecture of the Optimized ANN Used in the Proposed Controller Design Approach
Figure 6.22: Output Voltage and Load Current Waveforms for the Proposed Approach
Figure 6.23: Final Architecture of the DLPSO-Based ANN Controller
Figure 6.24: Output Voltage and Load Current Waveforms Showing Both Overshoot and Undershoot for the Proposed Approach
Figure 6.25: Waveform of Output Voltage at an Input Voltage of 15 V, a Load Resistance of 13 Ω and an Output Current of 2 A
Figure 6.26: Waveform of Inductor Current at an Input Voltage of 15 V, a Load Resistance of 13 Ω and an Output Current of 2 A
Figure 6.27: Waveform of Output Voltage at an Input Voltage of 15 V, a Load Resistance of 52 Ω and an Output Current of 0.5 A
Figure 6.28: Waveform of Inductor Current at an Input Voltage of 15 V, a Load Resistance of 52 Ω and an Output Current of 0.5 A
Figure 6.29: Waveform of Output Voltage at an Input Voltage of 17 V, a Load Resistance of 13 Ω and an Output Current of 2 A
Figure 6.30: Waveform of Inductor Current at an Input Voltage of 17 V, a Load Resistance of 13 Ω and an Output Current of 2 A
Figure 6.31: Waveform of Output Voltage at an Input Voltage of 17 V, a Load Resistance of 52 Ω and an Output Current of 0.5 A
Figure 6.32: Waveform of Inductor Current at an Input Voltage of 17 V, a Load Resistance of 52 Ω and an Output Current of 0.5 A
Figure 6.33: Waveform of Output Voltage at an Input Voltage of 17 V, a Load Resistance of 26 Ω and an Output Current of 1 A
Figure 6.34: Waveform of Inductor Current at an Input Voltage of 17 V, a Load Resistance of 26 Ω and an Output Current of 1 A
Figure 6.35: Best Fitness vs. Number of Generations for Learning by PSO
Figure 6.36: Best Fitness vs. Number of Generations for Learning by GA
List of Tables
Table 5.1: Parameters of the Boost Converter
Table 5.2: Benchmark PI Controller Optimization
Table 6.1: Overshoot Values and Settling Times of the ANN Controller Designed Using the DLPSO at the 3 Operating Points
Table 6.2: Performance Results and Comparison of an ANN Controller Optimized Using the DLPSO Algorithm at 5 Different Operating Points
Table 6.3: Comparison between the DLPSO-Based ANN and the Conventional PI Controller
Table 6.4: Comparison of the Real-Valued GA and PSO for Training of the Neural Network
List of Symbols and Abbreviations
Symbol
AI Artificial Intelligence
DLPSO Dual Layered Particle Swarm Optimization
V ref Reference Voltage of the Boost Converter (V)
V S Supply Voltage to the Boost Converter (V)
V 0 Output Voltage of the Boost Converter (V)
I 0 Load Current of the Boost Converter (A)
Chapter 1

Introduction
Artificial intelligence techniques have started replacing conventional techniques in many applications. In control engineering, artificial neural network (ANN) based controllers bring several unique advantages, such as high interconnectivity, high parallelism, reduced human intervention and faster dynamic response. Moreover, optimization methods such as genetic algorithms (GA) and swarm intelligence have become widely acclaimed owing to their unconventional ideology and their efficiency in optimizing different parameters. A brief review of the artificial intelligence techniques used in this thesis, viz. ANN, GA and swarm intelligence, is provided in Chapter 2. It is therefore logical to combine two or more of these methodologies, each with distinct salient features, to produce a very robust control technique. In this thesis, the strengths of combining neural networks, genetic algorithms and swarm intelligence are investigated for controller design.
In this chapter, the motivation for the work is presented. The objectives and achievements are then explained, and finally the structure of the thesis is summarized.
1.1 Motivation
Much research has been carried out on neural network controllers owing to their attractive features and potential applications. Most of these applications have used the following salient features of ANNs:
Ø the learning capability of neural networks, achieved by training the weights on a training data set;
Ø different architectures of the ANN (such as support vector machines), chosen according to the application.
Though the ANN controllers so designed worked well for certain applications, the designers needed domain knowledge of the system to be controlled. Moreover, in the case of the multi-layer perceptron (MLP) architecture, the configuration (the number of hidden layers and the number of neurons in each layer) is chosen arbitrarily by trial and error, and there is no rule of thumb for designing the optimal configuration.
Conventional controllers are generally linear, and their dynamic performance is measured by the settling time and overshoot of the actual output of the system when a step change is applied to the desired output. Although linear controllers designed for non-minimum-phase, non-linear systems work well in the linearized regions around particular operating points, their dynamic performance may not be satisfactory at other operating points. Additionally, their parameters must be tuned, for example the proportional and integral constants of a PI controller; this is another limitation of such conventional controllers, since the tuning is again done by trial and error.
In this thesis, emphasis is laid on the design of the neural network controller, which consists of finding the optimal configuration of the MLP and the learning scheme used for training the weights. The design scheme uses biologically inspired algorithms for the optimization of the configuration and for training the weights. Since the application of such biologically inspired algorithms requires minimal domain knowledge and human intervention, it can be claimed that by using these techniques the design of the controller is automated.
In this thesis, a classical boost converter, a non-linear, non-minimum-phase system, is used as a test vehicle for observing and comparing the performance of the designed ANN controller. The state-space averaging technique is used to linearize the boost converter at particular operating points. The choice of the boost converter is justified by the fact that controlling a non-minimum-phase system is quite challenging compared with other non-linear systems.
1.2 Thesis Objectives and Achievements
This thesis deals with automating the design process of an ANN controller using biologically inspired algorithms in general, and the particle swarm optimization (PSO) algorithm in particular. The major objectives and achievements of this thesis are as follows:
Ø The ANN controller used here is designed using the PSO algorithm, which is based on the movement of bird swarms.
Ø The weights and the configuration of the MLP-based ANN are optimized using a two-layer algorithm called the Dual-Layered Particle Swarm Optimization (DLPSO) algorithm, in which the first layer decides the number of layers of the MLP and the number of neurons in each layer, while the second layer trains the connection weights of the neural network. Fig. 1.1 shows the block diagram of the DLPSO algorithm, which is self-explanatory.
Fig 1.1: Block Diagram of the DLPSO algorithm
Ø The proposed controller ensures that, at different operating points, the actual output voltage of the boost converter follows the reference output voltage efficiently without deterioration in performance.
Ø The dynamic performance of the boost converter is evaluated by applying a step change to the reference output voltage and observing the response of the actual output voltage, thereby determining the dynamic performance of the ANN controller. The parameters used to measure the dynamic performance are the overshoot voltage and the settling time at various operating points.
Ø The performance of the ANN controller designed using the DLPSO algorithm is benchmarked against a conventional PI controller, the most commonly used industrial controller owing to its ease of implementation and design.
Ø The performance of the DLPSO algorithm is compared with that of a generic genetic algorithm.
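As a concrete illustration of the two performance metrics used throughout this work, the sketch below computes the percentage overshoot and the settling time from a sampled step response. The sample response, tolerance band and time step are hypothetical values for illustration only, not data from this thesis.

```python
def overshoot_and_settling(response, final_value, dt, tol=0.02):
    """Percentage overshoot and settling time of a sampled step response.

    response    : list of sampled output values after the step
    final_value : steady-state (reference) value the output should reach
    dt          : sampling interval in seconds
    tol         : settling band as a fraction of the final value (2% here)
    """
    # Overshoot: how far the peak exceeds the final value, in percent.
    peak = max(response)
    overshoot_pct = max(0.0, (peak - final_value) / final_value * 100.0)

    # Settling time: first instant after which the output stays
    # within +/- tol * final_value of the final value.
    band = tol * abs(final_value)
    settling_time = len(response) * dt  # default: never settles in the window
    for k in range(len(response)):
        if all(abs(v - final_value) <= band for v in response[k:]):
            settling_time = k * dt
            break
    return overshoot_pct, settling_time

# Hypothetical step response sampled every 1 ms, settling towards 1.0.
resp = [0.0, 0.6, 1.1, 1.25, 1.15, 1.01, 0.99, 1.0, 1.0, 1.0]
os_pct, ts = overshoot_and_settling(resp, final_value=1.0, dt=1e-3)
```

The same two numbers, extracted from the simulated output-voltage waveform of the boost converter, are the basis of the comparisons reported in Chapter 6.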
1.3 Problem Statement
Designing controllers for non-minimum-phase systems is a challenging task, and it is even more challenging when domain knowledge of the system is inadequate. Additionally, when designing controllers for such systems, parameters are optimized by manual trial and error. Such systems are also operating-point dependent; hence a controller optimized at one operating point may not perform adequately at other operating points.

This thesis deals with the automated design of an ANN controller for such non-minimum-phase systems using the proposed swarm-intelligence-based approach, in order to overcome the above design difficulties as well as to improve the dynamic performance.

1.4 Structure of the Thesis
This thesis is organized into seven chapters, described as follows.
Chapter 1 provides a brief introduction, the motivation behind the thesis, its objectives and achievements, and the organization of the thesis.
Chapter 2 provides basic background and a brief review of the different artificial intelligence techniques used, viz. Artificial Neural Networks, Particle Swarm Optimization and Genetic Algorithms.
Chapter 3 reviews the state of the art in the area, to clarify the importance of the proposed contribution.
Chapter 4 provides an overview of the boost converter, its control difficulties and conventional control methods, typically the PI controller, for a better understanding of the system to be controlled. The choice of the boost converter as a test vehicle and the difficulties of controller design are also elucidated, and the problem statement is defined in this chapter.
Chapter 5 presents the implementation of the proposed neural network controller and the Dual-Layered Particle Swarm Optimization (DLPSO) algorithm proposed to address the problem statement of the previous chapter. The first sections discuss the control scheme of the ANN and the configuration of the neural network. The following sections detail the mathematical modeling, the neural network design, the principle of the DLPSO and its implementation. The final sections discuss the basic design of the benchmark PI controller and weight optimization using a GA- and PSO-based hybrid approach.
Chapter 6 presents detailed optimization, simulation results and comparisons for the proposed algorithm, which designs and optimizes both the configuration and the connection weights. The first section explains the optimization of the parameters of the DLPSO, and the following section describes the simulation results. The performance of the proposed algorithm is then compared with that of a conventional PI controller and its advantages are explained. Finally, the proposed PSO-based approach to designing the ANN architecture is compared with a basic genetic algorithm to demonstrate its effectiveness for this application; the comparison is based on parameters such as computational time and number of generations.
Finally, Chapter 7 concludes the thesis, highlighting the major contributions of this research. A brief discussion of future research directions based on this thesis is also included.
Chapter 2

Background
This chapter provides basic background and a brief review of the different artificial intelligence techniques used in this thesis, viz. Artificial Neural Networks, the Particle Swarm Optimization Algorithm and Genetic Algorithms.
2.1 Artificial Neural Networks
Artificial neural networks are computational networks that attempt to simulate the network of biological nerve cells (neurons) in the biological central nervous system. The human brain is made up of billions of individual processing elements that are highly interconnected. A schematic of a single biological neuron is shown in Figure 2.1.
Fig 2.1: Schematic of a Biological Neuron [8]
Information from the outputs of other neurons, in the form of electrical pulses, is received by the cell at connections called synapses. This signal flow across the synapse is not via direct electrical contact but through the diffusion of ions. The synapses connect to the cell inputs, or dendrites, and the single output of the neuron appears at the axon.
Artificial neural networks are made up of individual models of the biological neuron connected together to form a network. These neuron models are simplified versions of the behavior of a real neuron. By simulating a biological neural network, artificial neural networks allow simple computational operations to be used to solve complex, mathematically ill-defined and non-linear problems.
One aspect of artificial neural networks that distinguishes them from conventional computation, and works to their advantage, is their high degree of parallelism. It is this parallelism that makes artificial neural networks insensitive to partial hardware failure.
Another important feature of artificial neural networks is their learning capability. Learning is usually achieved by appropriate adjustment of the weights in the synapses of the artificial neuron models, either online or offline. Training amounts to non-linear mapping or pattern recognition: if an input data set corresponds to a definite signal pattern, the network can be 'trained' to produce a corresponding desired pattern at the output. This capability to learn derives from the distributed 'intelligence' contributed by the weights. A properly trained neural network is able to generalize, providing sensible outputs when presented with input data to which it has not previously been exposed.
The simplest artificial neural network model is based on the McCulloch-Pitts neuron defined by Warren S. McCulloch and Walter Pitts in 1943. This neuron was static and did not include changing input weights: variable inputs were multiplied by fixed synaptic weights and the products summed. If the sum exceeded the neuron's threshold, the neuron turned on or stayed on; if the sum was below the threshold or an inhibitory pulse was received, the neuron turned off or stayed off. The weighted sum driving the output y of the neuron is

y = \sum_{i=1}^{n} w_i x_i        (2.1)

where w_i is the weight value, x_i is the input and n is the number of inputs.
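A threshold unit of this kind is straightforward to sketch in code; the weights and threshold below are illustrative values chosen for this example, not parameters from the thesis.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Static threshold neuron: fire (1) if and only if the weighted
    input sum of equation (2.1) reaches the threshold.
    The weights are fixed; there is no learning in this model."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# Illustrative example: with weights (1, 1) and threshold 2 the unit
# behaves as a two-input logical AND gate.
and_gate = [mcculloch_pitts([a, b], [1, 1], 2) for a in (0, 1) for b in (0, 1)]
# and_gate == [0, 0, 0, 1]
```

The choice of weights and threshold fully determines the behavior of the unit, which is exactly why later models introduced weight adaptation.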
In 1958, Frank Rosenblatt put together a learning machine, the perceptron, by modifying the McCulloch-Pitts and Hebb models. It merged the concept of synapse changes as a function of activity with the effect of combining multiple inputs at a single neuron. The perceptron is the simplest form of neural network, consisting of a single neuron with adjustable synaptic weights and bias; it is limited to pattern classification with only two linearly separable classes. The perceptron forms the basis of the adaline (adaptive linear neuron) proposed by B. Widrow in 1960, a single-neuron model in which the weights are trained according to the least-squares-error algorithm:

W_{new} = W + \eta \, e(i) \, x(i)        (2.2)

where W_{new} is the updated (desired) weight, W is the current weight, e(i) is the error term calculated as the difference between the desired and actual output, x(i) is the input to the neuron and \eta is the learning rate. These models can be generalized under a class known as the single-layer perceptron (SLP). Another popular artificial neural network architecture is the multi-layer perceptron (MLP), which consists of an input layer, a number of hidden layers and an output layer, as shown in Figure 2.2.
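Returning to the adaline update of equation (2.2), the rule can be sketched as a simple training loop. The training data below (generated from a noiseless linear target) and the learning rate are hypothetical illustrative choices, not values used elsewhere in this thesis.

```python
def lms_train(samples, targets, eta=0.1, epochs=50):
    """Train a single linear neuron with the least-squares (delta) rule:
    each weight moves by eta * error * input, as in equation (2.2)."""
    n = len(samples[0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, d in zip(samples, targets):
            y = sum(wi * xi for wi, xi in zip(w, x))   # neuron output
            e = d - y                                  # error term e(i)
            w = [wi + eta * e * xi for wi, xi in zip(w, x)]
    return w

# Hypothetical data generated by the linear rule d = 2*x1 - 1*x2;
# the trained weights should approach (2, -1).
xs = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, -0.5)]
ds = [2 * x1 - 1 * x2 for x1, x2 in xs]
w = lms_train(xs, ds)
```

Because the target here is itself linear, the rule converges to the generating weights; on data that is not linearly representable, the adaline converges to the least-squares compromise instead.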
Fig. 2.2: Multi-Layer Perceptron
The output of each node is connected to the inputs of all the nodes in the subsequent layer, and data flows through the network in one direction, from input to output. The network is trained in a supervised fashion involving both network inputs and target outputs. The algorithms proposed in this thesis for training the MLP and designing its architecture are biologically inspired, namely genetic algorithms and swarm optimization algorithms; a detailed description of these algorithms is given in the next section.
Back-propagation (BP) is a supervised learning technique used for training artificial neural networks. It was first described by Paul Werbos in 1974 and further developed by David E. Rumelhart, Geoffrey E. Hinton and Ronald J. Williams in 1986. As the algorithm's name implies, the errors (and therefore the learning) propagate backwards from the output nodes to the inner nodes. Technically speaking, BP is used to calculate the gradient of the network's error with respect to its modifiable weights; this gradient is almost always then used in a simple stochastic gradient descent procedure to find weights that minimize the error. Often the term "back-propagation" is used in a more general sense, to refer to the entire procedure encompassing both the calculation of the gradient and its use in stochastic gradient descent. BP usually allows quick convergence to satisfactory local minima of the error in the kinds of networks to which it is suited, such as the multi-layer perceptron.
It is important to note that BP networks are necessarily multilayer (usually with one input, one hidden and one output layer). For the hidden layer to serve any useful purpose, multilayer networks must use non-linear activation functions, since a multilayer network using only linear activation functions is equivalent to a single-layer linear network. Commonly used non-linear activation functions include the logistic function, the softmax function and Gaussian functions.
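To make the gradient calculation concrete, the sketch below implements back-propagation for a one-hidden-layer network with logistic hidden units and a linear output, and checks one weight's gradient against a finite-difference estimate. The network sizes, initial values and input are arbitrary illustrations, not the networks used later in this thesis.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, b1, W2, b2):
    """One hidden layer of logistic units feeding one linear output."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = sum(w * hi for w, hi in zip(W2, h)) + b2
    return h, y

def backprop(x, target, W1, b1, W2, b2):
    """Gradient of the squared error E = 0.5*(y - target)^2 with respect
    to every weight, obtained by propagating the error backwards."""
    h, y = forward(x, W1, b1, W2, b2)
    dy = y - target                          # dE/dy at the output node
    gW2 = [dy * hi for hi in h]              # output-layer weight gradients
    gb2 = dy
    gW1, gb1 = [], []
    for j, hj in enumerate(h):
        dj = dy * W2[j] * hj * (1.0 - hj)    # error propagated to hidden node j
        gW1.append([dj * xi for xi in x])
        gb1.append(dj)
    return gW1, gb1, gW2, gb2

# Compare the analytic gradient of one weight with a numerical estimate.
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.1, -0.2, 0.3]
W2 = [0.5, -0.4, 0.2]
b2 = 0.05
x, t = [0.7, -0.3], 1.0

gW1, gb1, gW2, gb2 = backprop(x, t, W1, b1, W2, b2)
eps = 1e-6
W1[0][0] += eps
_, y_plus = forward(x, W1, b1, W2, b2)
W1[0][0] -= 2 * eps
_, y_minus = forward(x, W1, b1, W2, b2)
W1[0][0] += eps
numeric = (0.5 * (y_plus - t) ** 2 - 0.5 * (y_minus - t) ** 2) / (2 * eps)
# numeric and gW1[0][0] agree to several decimal places
```

The finite-difference check is a standard way to validate a hand-written back-propagation routine before using the gradients in a descent loop.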
2.2 Particle Swarm Optimization Algorithm
The particle swarm approach was first reported in 1995 by James Kennedy, a social psychologist, and Russell C. Eberhart, an engineer. The technique has evolved greatly since then, and the original version of the algorithm is barely recognizable in its current form. Particle swarm optimization (PSO) is a stochastic, population-based problem-solving algorithm. It is a form of swarm intelligence modeled on the behavior of swarms such as flocks of birds or schools of fish searching for food; the socio-psychological principles derived from these swarms provide insights into social behavior, as well as contributing to engineering applications.
Social influence and social learning enable a person to maintain cognitive consistency. People solve problems by talking with other people about them, and as they interact, their beliefs, attitudes, and behaviors change.
As stated above, the particle swarm optimization algorithm is derived from the behavior of a flock of birds. Suppose the following scenario: a group of birds is randomly searching for food in an area, and there is only one piece of food in the area being searched. None of the birds knows the location of the food, but each bird knows its distance from the food. An effective strategy for finding the food is to follow the bird that is nearest to it. This is the basic idea behind the PSO algorithm.
A PSO algorithm learns from this scenario and uses the knowledge to solve optimization problems. In PSO, each single solution is a "bird" in the search space, known as a "particle". Every particle has a fitness value, evaluated by the fitness function to be optimized, and a velocity, which directs its flight through the search space. The particles fly through the problem space by following the current optimum particles.
PSO is initialized with a group of random particles (solutions), and the problem space is then searched for optima by updating generations. In every iteration, each particle is updated by following two "best" values. The first is the best solution (fitness) achieved so far by that particular particle; this value is the personal best, called pbest. The other "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population; this is the global best, called gbest. After calculating the two best values, the particle updates its velocity and position using equations (2.3) and (2.4):
v[] = v[] + c1 * rand() * (pbest[] - present[]) + c2 * rand() * (gbest[] - present[]) (2.3)
present[] = present[] + v[] (2.4)
Here v[] is the particle velocity and present[] is the current particle position (solution); pbest[] and gbest[] are as defined above. rand() is a random number in (0,1), and c1, c2 are learning factors, usually c1 = c2 = 2.
The pseudo code of the procedure is as follows:
For each particle
    Initialize particle
End
Do
    For each particle
        Calculate fitness value
        If the fitness value is better than the best fitness value in history (pBest)
            Set current value as the new pBest
        End
    End
    Choose the particle with the best fitness value of all the particles as the gBest
    For each particle
        Calculate particle velocity according to equation (2.3)
        Update particle position according to equation (2.4)
    End
While maximum iterations or minimum error criterion is not attained
Particles' velocities on each dimension are clamped to a maximum velocity Vmax, which is a parameter specified by the user. If the sum of the accelerations would cause the velocity on a dimension to exceed Vmax, then the velocity on that dimension is limited to Vmax.
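The pseudo code above, together with the update equations (2.3) and (2.4) and the Vmax clamp, can be sketched as follows; the test function (the sphere function), swarm size, and parameter values are illustrative choices:

```python
import random

def pso(fitness, dim, n_particles=20, iters=200,
        c1=2.0, c2=2.0, vmax=1.0, span=(-5.0, 5.0)):
    """Minimise `fitness` following the pseudo code above."""
    lo, hi = span
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # eq. (2.3): velocity update, cognitive + social terms
                vel[i][d] += (c1 * random.random() * (pbest[i][d] - pos[i][d])
                              + c2 * random.random() * (gbest[d] - pos[i][d]))
                # clamp to Vmax, as described in the text
                vel[i][d] = max(-vmax, min(vmax, vel[i][d]))
                # eq. (2.4): position update
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:                  # new personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                 # new global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(1)
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=2)
assert best_val < 0.5   # near the sphere minimum of 0 at the origin
```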
2.2.1 Adaptive PSO
The adaptive PSO is a variation of the PSO in which the swarm size and the coefficients vary adaptively as the iterations proceed, as a function of pBest and gBest. It can be described as given in equation (2.5):

v(t+1) = a * v(t) + b * (p(t) - x(t)),    x(t+1) = x(t) + v(t+1)    (2.5)

in which a = 0.5, b is adaptive, the swarm size N is adaptive in [3, ∞[, and the neighborhood size is adaptive in [3, N].
At each time step, the adaptive variables of a given particle are:
Ø its current position x(t) and the corresponding objective function value,
Ø its best position found so far, pi(t), and the corresponding objective function value
Ø its previous position (to estimate its improvement)
Ø its neighbourhood size,
Ø the swarm size (global information)
Thus, by using the adaptive PSO, convergence can be achieved more effectively and faster than with the generic PSO. This adaptive PSO is used in this thesis for optimization of the configuration and the weights of the ANN.
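A single-particle step of such an adaptive update, using the fixed a = 0.5 and an adaptive coefficient b, might be sketched as below; drawing b uniformly at random is an illustrative stand-in for the actual adaptation rule, and the adaptive swarm- and neighborhood-size mechanics are omitted:

```python
import random

def adaptive_step(x, v, p, a=0.5, b_range=(0.5, 1.0)):
    """One particle step: v(t+1) = a*v(t) + b*(p(t) - x(t)),
    x(t+1) = x(t) + v(t+1), with b drawn from b_range as a
    stand-in for the adaptive coefficient described in the text."""
    b = random.uniform(*b_range)
    v_new = [a * vi + b * (pi - xi) for vi, xi, pi in zip(v, x, p)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new

random.seed(3)
x, v = [1.0, 1.0], [0.0, 0.0]
x, v = adaptive_step(x, v, p=[0.0, 0.0])
assert all(abs(c) < 1.0 for c in x)   # the particle moved toward its best position p
```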
2.3 Genetic Algorithms
A genetic algorithm (GA) is a search technique used in computing to find exact or approximate solutions to optimization and search problems. Genetic algorithms are categorized under global search heuristics. They are a particular class of evolutionary
algorithms (EA) (also known as evolutionary computation) which use techniques inspired
by evolutionary biology, such as inheritance, mutation, selection, and crossover (also called recombination). A population of abstract representations (called chromosomes, or the genotype or genome) of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and proceeds in generations. In each generation, the fitness of every individual in the population is evaluated, and multiple individuals are stochastically selected from the current population (based on their fitness) and modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached for the population. If the algorithm has terminated due to reaching the maximum number of generations, a satisfactory solution may or may not have been reached [30].
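The generational loop described above can be sketched for binary strings; the selection scheme (tournament), the one-max fitness function, and the parameter values below are illustrative choices:

```python
import random

def ga(fitness, n_bits=16, pop_size=30, generations=60,
       crossover_rate=0.9, mutation_rate=0.02):
    """Generational GA over bit strings, following the loop described above."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        def select():     # tournament selection: fitter of two random individuals
            a, b = random.randrange(pop_size), random.randrange(pop_size)
            return pop[a] if scores[a] >= scores[b] else pop[b]
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            if random.random() < crossover_rate:         # single-point crossover
                cut = random.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                child = [1 - g if random.random() < mutation_rate else g
                         for g in child]                 # bit-flip mutation
                next_pop.append(child)
        pop = next_pop[:pop_size]                        # new generation
    return max(pop, key=fitness)

random.seed(2)
best = ga(sum)            # "one-max": maximise the number of 1 bits
assert sum(best) >= 13    # near the optimum of 16 after 60 generations
```

Note that termination here is simply a fixed generation count, which matches the drawback discussed below: the GA itself has no test for optimality.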
The following are the basic features of a genetic algorithm:
Ø Randomness: First, it relies in part on random sampling. This makes it a nondeterministic method, which may yield somewhat different solutions on different runs even if the model has not changed. In contrast, deterministic methods such as classical linear, nonlinear, and integer programming solvers always yield the same solution when started from the same values of the decision variables.
Ø Population: Second, where most classical optimization methods maintain a single best solution found so far, an evolutionary algorithm maintains a population of candidate solutions. Only one (or a few, with equivalent objectives) of these is "best", but the other members of the population are "sample points" in other regions of the search space, where a better solution may later be found. The use of a population of solutions helps the evolutionary algorithm avoid becoming "trapped" at a local optimum when an even better optimum may be found outside the vicinity of the current solution.
Ø Mutation: Third, inspired by the role of mutation of an organism's DNA in natural evolution, an evolutionary algorithm periodically makes random changes, or mutations, in one or more members of the current population, yielding a new candidate solution (which may be better or worse than existing population members).
Ø Crossover: Fourth, inspired by the role of sexual reproduction in the evolution of living things, an evolutionary algorithm attempts to combine elements of existing solutions to create a new solution with some of the features of each "parent". The elements (e.g., decision variable values) of existing solutions are combined in a "crossover" operation, inspired by the crossover of DNA strands that occurs in the reproduction of biological organisms. As with mutation, there are many possible ways to perform a crossover operation, some much better than others, and evolutionary solvers often employ multiple variations of different crossover strategies.
Ø Selection: Fifth, inspired by the role of natural selection in evolution, an evolutionary algorithm performs a selection process in which the "most fit" members of the population survive and the "least fit" members are eliminated. In a constrained optimization problem, the notion of "fitness" depends partly on whether a solution is feasible (i.e., whether it satisfies all of the constraints) and partly on its objective function value. The selection process is the step that guides the evolutionary algorithm towards ever-better solutions.
The drawback of a genetic algorithm is that a solution is "better" only in comparison to other, presently known solutions; such an algorithm has no concept of an "optimal solution", nor any way to test whether a solution is optimal. (For this reason, evolutionary algorithms are best employed on problems where it is difficult or impossible to test for optimality.) This also means that an evolutionary algorithm never knows for certain when to stop, aside from the length of time, or the number of iterations or candidate solutions, it explores.
In view of the explanations above, a genetic algorithm cannot guarantee that it will find an optimal result.
2.4 Summary
In this chapter, the artificial intelligence techniques used in this thesis, viz., artificial neural networks, the particle swarm optimization algorithm, and genetic algorithms, have been described to provide a complete review and background.
Chapter 3 Literature Survey
3.1 Literature Survey
The design of controllers for non-minimum phase systems is a challenging problem, and conventional controllers such as PI controllers designed for such systems are based on small-signal models. Hence, the performance of such controllers is dependent on the operating points for which they are designed, in addition to the load and line conditions.
A classical boost converter is used for evaluating the proposed design of the swarm-intelligence-based ANN controller. As mentioned previously, the boost converter is used only as a test vehicle in this thesis, to demonstrate that the controller can be used on non-minimum phase systems of similar or greater complexity. The boost converter is a form of dc-dc converter that converts lower voltages to higher voltages [1]. Conventional control methods have long been employed in boost converters. However, due to their non-minimum phase characteristics, the control of these converters is challenging. Hence, conventional controllers are designed for worst-case conditions, resulting in lower-bandwidth operation.
The dynamic performance of controllers can be improved through the use of artificial intelligence (AI) techniques such as ANNs for design and optimization. References [2] and [3] survey many state-of-the-art control techniques for power electronic devices, including, in the last part of the paper, microcomputer control, VLSI control, fuzzy control, and neural networks. The paper discusses neural network control in general and also several advancements:
Ø implementation of a neural network controller using a special-purpose analog computer or a cluster of DSPs
Ø the development of a neural network chip which can be used for feedback signal processing as well as control
Ø an example of the application of neural networks to a non-linear robotic manipulator, provided to explain the functionality of ANNs
ANNs are biologically inspired algorithms based on the principle of operation of neuron cells in the human brain [8]. This aspect has the advantage of requiring a minimal amount of human intervention during the control process. Research on the application of neural networks in the field of power electronics has been documented in papers [4]-[5]. A novel concept of applying a neural network to generate optimal switching patterns in voltage-controlled inverters is described in paper [4]. That paper also describes PWM based on a hardware analog neural network, which responds with high accuracy to any desired value of the modulation index; the idea pursued in that study is to use the neural network as a nonlinear signal converter producing output values of the primary switching angle corresponding to the reference value of the modulation index applied to the network input. Paper [5] presents a simple adaline controller using only one sensor for critical applications such as DVR or UPS. An important feature of the controller proposed in that paper is that it can be designed and implemented without knowing the exact parameters of the PWM inverter system. Although the applications in these papers are inclined towards inverters, they serve as motivation to apply neural networks to the field of power electronics.
More studies on the use of neural networks for switching power converters, especially the boost converter, are given in [6]-[7]. Paper [6] investigates the use of neural networks for the identification and control of power converters; in addition, the results obtained show the feasibility of implementing complex nonlinear control laws for dc-dc switching converters by means of neural networks. The analog implementation of ANNs is discussed in paper [7]. It also explains that very large, reconfigurable networks aimed at more complicated tasks, which require a large number of neurons and a high level of connectivity, are probably better served by mixed analog and digital implementations that take advantage of both.
The design of the artificial neural network (ANN) plays a crucial role in its information processing capability [8] and hence, in our application, in more effective control. There have been many attempts to design ANN architectures, such as constructive and pruning algorithms [9]. These algorithms start from a neural network that is larger than necessary, remove the unwanted parts, and then evaluate the performance to finally arrive at the best structure. However, such techniques are susceptible to being trapped in structural local optima, and the results depend on the initial network architecture. Moreover, there is a risk of overtraining the neural network, because the algorithm is based on a curve-fitting strategy. The design of the architecture can instead be formulated as a search problem in an architectural space where each point represents a particular architecture. Reference [11] indicates that evolutionary algorithms are better candidates for searching the architectural space than constructive and pruning algorithms, as both EAs and neural networks are biologically inspired. The evolution of ANNs using genetic algorithms with a connectionist learning strategy has been suggested to be more efficient than structural hill-climbing techniques such as constructive and pruning algorithms for designing optimal architectures [12]. References [11] and [12] state that evolutionary algorithms prove to be better than constructive and pruning algorithms for neural network architecture design because they are biologically inspired and because their search for globally optimal values is effective.
Reference [13] provides an architecture design process for a neural network using genetic algorithms for coin recognition. In that paper, the ANN is trained using a back-propagation algorithm, and the genetic algorithm then varies the architecture of the ANN to fit the environment. The major objective of the paper is to design the smallest possible ANN structure with 100% accuracy. A GA has also been used to modify the structure of a feed-forward network [14]. This evolutionary design of neural networks proves to be a milestone in the architecture design of the multilayer perceptron, making such networks more robust and noise tolerant. The paper also notes that designing through trial-and-error techniques or traditional engineering methods does not guarantee that an optimal set of parameters will be found, apart from being time consuming and costly. Many other evolutionary structure optimization procedures have been introduced since. In reference [15], an ensemble structure is presented in which different new modifications are made to the basic GA to develop faster and more efficient evolutionary methods for designing the ANN architecture. One important feature of all the above-mentioned methods is that the training algorithm used for tuning the weights of the ANN is the back-propagation algorithm.
Particle swarm optimization (PSO) has recently emerged as an important combinatorial meta-heuristic technique for both continuous-time and discrete-time optimization. Penn State University researchers have focused on particle swarms for the development of quantitative structure-activity relationship models used in drug design [16]. The models applied artificial neural networks, k-nearest neighbor, and kernel regression techniques; binary and niching particle swarms solved feature selection and feature weighting problems. Particle swarms have also influenced the computer animation field. Rather than scripting the path of each individual bird in a flock, reference [17] elaborates on a particle swarm using simulated birds as the particles. The simulated flock's aggregate motion behaves much like a real flock in nature: the dense interaction emerges from the relatively simple behavior of each simulated bird choosing its own path. One application of the PSO algorithm is the optimization of the architecture and the learning of the weights of artificial neural networks. Recently, the PSO algorithm has been used to optimize ANN connection weights [18]-[20]; these three papers also compare the results of PSO with both GA and BP to show the effectiveness of PSO over those methods. Paper [21] presents newly evolved ANNs in which both the architecture and the weights are evolved by a PSO: the network architecture is adaptively adjusted, and the PSO algorithm is then employed to evolve the nodes of ANNs with a given architecture. PSO-based algorithms for the optimization of both the connection weights and the architecture in a single algorithm have also been proposed [22].