Large scale structural identification by multi civilization genetic algorithm approach



• Assoc Prof Wang Quan (currently at University of Central Florida), and Prof Zhang Linmin (at Nanjing University of Aeronautics & Astronautics), for their valuable suggestions and guidance on system identification and control theory

• My fellow research students in the department of Civil Engineering of NUS, and in particular, Hong Bo, Chen Yuefeng, Zhao Shuliang, Zhang jing, Cui Zhe, Shen Lijun, Wang Shenying, etc, for their encouragement and helpful assistance when problems were encountered

• My friend Xu Zhijie for his righteous character and trust; and Yi Fan, Qiu Wenjie, Esther Chang, for their faithful prayer for my thesis

• My family, my wife, parents and sister, for their eternal love and support. Finally, the financial support from the National University of Singapore is highly appreciated.


CONTENT

ACKNOWLEDGMENTS I

CONTENT II

SUMMARY IV

LIST OF TABLES VI

LIST OF FIGURES VII

CHAPTER 1 INTRODUCTION 9

1.1 BACKGROUND 9

1.2 LITERATURE REVIEW 10

1.2.1 Conventional Methods 11

1.2.2 Unconventional Methods 13

1.3 OBJECTIVE AND SCOPE OF STUDY 18

1.4 ORGANIZATION OF THE THESIS 19

CHAPTER 2 GA, GA-LS AND PGA 22

2.1 INTRODUCTION 22

2.2 GENETIC ALGORITHM (GA) 22

2.3 GA-LS METHOD (GA-LS) 26

2.4 PARALLEL GENETIC ALGORITHM (PGA) 27

2.5 NUMERICAL STUDY 30

2.6 CONCLUDING REMARKS 33

CHAPTER 3 DISTRIBUTED COMPUTING FOR SINGLE-POPULATION AND MULTI-POPULATION PGA 47

3.1 INTRODUCTION 47

3.2 CHOICE OF COMPUTING PLATFORMS 47

3.3 SINGLE-POPULATION PGA 48

3.3.1 Server-Client Communication Model 49


3.3.2 Program Structure for Single-Population GA 52

3.3.3 Interfaces to link different languages 56

3.3.4 Data Security Mechanism 58

3.3.5 Resetting Mechanism 60

3.3.6 Load Balance Mechanism 61

3.4 MULTI-POPULATION PGA 63

3.4.1 Migration Server 63

3.4.2 Migration Client 65

3.5 NUMERICAL EXPERIMENTS 65

3.5.1 Computation and Communication Times 66

3.5.2 Relationship between Speed-up Rate and Computer Number 67

3.5.3 Single-Population PGA 69

3.5.4 Multi-Population PGA 71

3.6 CONCLUDING REMARKS 72

CHAPTER 4 MULTI-CIVILIZATION GENETIC ALGORITHM 80

4.1 INTRODUCTION 80

4.2 INNOVATIVE IDEAS IN MCGA 81

4.3 STRATEGIES OF MCGA 82

4.3.1 Population Strategy 83

4.3.2 Topology Strategy 85

4.3.3 Strategy for Crossover and Mutation 86

4.3.4 Strategy for Local Search 88

4.3.5 Multiple Criteria Strategy 90

4.3.6 Migration Strategy 92

4.4 NUMERICAL EXPERIMENTS 96

4.5 CONCLUDING REMARKS 99

CHAPTER 5 CONCLUSIONS AND RECOMMENDATIONS 112

5.1 CONCLUSIONS 112

5.2 RECOMMENDATION FOR FUTURE STUDY 114

REFERENCES 116


Summary

The genetic algorithm (GA) approach has been proven over the years to be a relatively robust search and optimization method. It has recently been applied to parameter identification of structural systems with encouraging results, in particular when it is combined with a suitable local search operator. However, for realistic problems, which involve a large number of unknown parameters and degrees of freedom, the total trial number increases substantially and the forward analysis becomes very time-consuming. Thus the total search time increases dramatically, and the expensive computational cost makes the GA approach infeasible. Meanwhile, the identified results deteriorate even when the trial number is greatly increased, since the search is more likely to fall into a local optimum when there are a large number of unknown parameters. To tackle this problem, the sequential algorithm should be converted into a parallel model.

In this study, a distributed computing method based on the JAVA language is proposed to meet the above requirements. A satisfactory speed-up rate is achieved because of the high computation-to-communication ratio, and there is virtually no limit on the number of computers that can run simultaneously via this method. The proposed distributed computing method reduces the simulation time from half a month to half a day by using available networked computers during off-peak hours.

In addition, inspired by the observation of the virtues of existing multi-culture societies in the world, a multi-civilization genetic algorithm (MCGA) is presented in this thesis. MCGA executes a set of special population strategies, selection strategies, migration strategies and local search strategies, etc. Numerical simulation results in this study show that this innovative approach significantly improves on the identified results of the common parallel genetic algorithm (PGA).

Keywords: System Identification (SI); Structural System Identification; Structural Health Monitoring; Genetic Algorithms (GA); Local Search (LS); Parallel Genetic Algorithm (PGA); Distributed Computing


List of Tables

Table 1-1 Comparison of Evolutionary Algorithms 20

Table 2-1 Control Parameters List 34

Table 2-2 Exact Parameters of the 10-DOF LMS 34

Table 2-3 Identification Results of Case 1 without I/O noise 35

Table 2-4 Identification Results of Case 2 & 3 36

Table 2-5 Case 4 Identification Result by GA-LS 37

Table 2-6 Sequential GA & GA-LS 38

Table 3-1 Speed-up rate by distributed computing method 73

Table 4-1 LS Control Parameters in MCGA 100

Table 4-2 Case 1: GA-LS Control Parameters in Common PGA 101

Table 4-3 Case 2, 3 and 4 GA-LS Parameter Used in MCGA 101


List of Figures

Figure 1-1 Procedure of System Identification 21

Figure 2-1 The Flow Chart of Sequential GA 39

Figure 2-2 Illustration of the Crossover Operator 40

Figure 2-3 Flow Chart of Local Search Operator 41

Figure 2-4 Classification of Parallel Genetic Algorithm 42

Figure 2-5 Results from Chen, Y.F (2001) M.Eng Thesis (Search Range: +30%) 43

Figure 2-6 Identification Result by GA+LS for 50-DOF LMS without Noise (Search Range: ±50%) 44

Figure 2-7 Identification Result by GA+LS for 50 DOF LMS With 5% Noise (Search Range: ±50%) 45

Figure 2-8 Identification Result by GA+LS for 50 DOF LMS With 10% Noise (Search Range: ±50%) 46

Figure 3-1 Flow Chart of Server-Client Model 74

Figure 3-2 Program Structure for Single-Population GA 75

Figure 3-3 Comparison of Active Server Method and Passive Server Method 76

Figure 3-4 Program Structure for Multi-Population GA 77

Figure 3-5 Proportion of Computer Times for Evaluation, Communication & Other GA Operations 78

Figure 3-6 Relationship between Speed-up Rate and No. of Participating Worker Machines 79


Figure 4-1 (a) Classification of Topology and (b) MCGA Topology 102

Figure 4-2 Jump-Out-Effect of Local Search 103

Figure 4-3 Illustration of Sensitivity of Evaluation Error to Individual Parameter 104

Figure 4-4 Identification Results Based on Common PGA 105

Figure 4-5 Identification Results Based on MCGA for 102 Unknowns without Noise (200 generations) 106

Figure 4-6 Identification Results Based on MCGA for 102 Unknowns without Noise (300 generations) 107

Figure 4-7 Identification Results Based on MCGA for 102 Unknowns without Noise (400 generations) 108

Figure 4-8 Identification Results Based on MCGA for 102 Unknowns with 5% Noise (200 generations) 109

Figure 4-9 Identification Results Based on MCGA for 102 Unknowns with 10% Noise (200 generations) 110

Figure 4-10 Improvement of Evolution Error of Best Individual due to Migration (migration rate 20% & interval 20 generations) 111


Chapter 1 Introduction

1.1 Background

In the field of civil engineering, the nature of analysis can be broadly categorized as direct analysis and inverse analysis (Bekey, 1970). In direct analysis, the system parameters (such as stiffness, mass and damping ratio) are known, and the responses (output) of the system under a certain excitation (input) can be calculated by various methods. In inverse analysis, the response is known, and the unknown system parameters are determined through many repetitions of the direct analysis (forward analysis). In general, the process of determining the parameters (e.g. stiffness, mass) of a structural system based on given input and output (I/O) information is called structural identification (Figure 1-1).

Structural identification (SI) can be applied to determine the actual values of parameters, which can then be used in structural analysis instead of assumed or estimated values. Consequently, a better prediction of the structural response under environmental excitations can be obtained. Structural identification is also the core process of Structural Health Monitoring, Damage Detection and Safety Assessment.

But for real world problems, three factors in particular make structural identification difficult:

• Generally speaking, the problem contains many unknown parameters, which makes the solution space large (with high dimensions). As a result, the computational time required for SI is normally too long for any practical purpose.

• Individual parameters collectively affect the response and there is no obvious rule that could be used to guide the search

• It is difficult to accurately measure the response. Input and output (I/O) data are always contaminated by noise because of electronic interference, the resolution of sensors and ambient vibration.

Correspondingly, a good optimization strategy should generally perform well in three aspects. Firstly, the computational strategy should minimize the execution time without losing optimization ability. Secondly, the objective function that measures the fitness between estimated and actual responses should truly reflect the influence of the parameters on the response. Thirdly, the identification strategy should not be too sensitive to I/O noise.

1.2 Literature Review

Generally speaking, structural identification methods can be classified into two categories, i.e. conventional methods and unconventional methods. The Least Square Method, Kalman Filter Method and Maximum Likelihood Method are representatives of the conventional methods. They perform a point-to-point search strategy, and hence often prematurely converge to local optima rather than the global optimum. These methods are not suitable for large-scale structural system identification problems due to the ill-conditioned nature of inverse analysis. Moreover, they normally involve gradients or higher-order derivatives of the objective function, and some of them even need a good initial guess of the unknown parameters in order to work. These drawbacks make it very difficult, if not impossible, for conventional methods to identify structural systems with a large number of unknowns. Therefore, unconventional methods, e.g. Neural Networks, Simulated Annealing and Evolutionary Algorithms, are more feasible alternatives for large-scale system identification problems.

1.2.1 Conventional Methods

Least Square Method

The least square method identifies the parameters of a given structural identification problem by minimizing the sum of squared deviations (the least square error) between the estimated and measured responses. Loh et al (2000) used the least square method to identify the parameters of an arch dam in a vibration test. This method needs the calculation of derivatives and is very sensitive to noise, and is thus not suitable in some cases.
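As a minimal illustration of the least-squares idea (not the formulation used by Loh et al), consider a single-DOF system whose restoring force p(t) = c·v(t) + k·u(t) is linear in the unknown damping c and stiffness k; the two parameters then follow directly from the over-determined linear system. All names and numerical values below are illustrative.

```python
import numpy as np

# Illustrative least-squares identification of c and k from a noisy
# "measured" force record p(t) = c*v(t) + k*u(t).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
u = np.sin(2.0 * t)            # displacement record
v = 2.0 * np.cos(2.0 * t)      # velocity record
c_true, k_true = 0.8, 25.0
p = c_true * v + k_true * u + rng.normal(0.0, 0.05, t.size)  # noisy force

# Least-squares solution of the over-determined system A x = p.
A = np.column_stack([v, u])
(c_hat, k_hat), *_ = np.linalg.lstsq(A, p, rcond=None)
print(c_hat, k_hat)  # close to 0.8 and 25.0
```

The sensitivity to noise mentioned above shows up directly: increasing the noise level in `p` degrades the estimates.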

Maximum Likelihood Method

The maximum likelihood method determines the parameter values that maximize the likelihood of the observed data for a given statistical model. It is expressed through the logarithm of a probability density function, which is used to match estimated and measured responses. Shinozuka et al (1982) applied the maximum likelihood method to identify the parameters of a two-dimensional model of a suspension bridge. Oguamanam et al (1995) used both the method of moments and the maximum likelihood method to indicate the condition of a gear. Stoica et al (1997) proposed a non-iterative maximum likelihood approach for detection and estimation problems. The maximum likelihood method is one of the successful unbiased approaches; however, it usually needs a good initial guess and spends a large amount of time on statistical calculations, and thus incurs higher computational costs.


Extended Kalman Filter

The Extended Kalman filter method is based on the state-space form of the differential equation of motion. Denoting the state vector by x(t), the observation equation can be written in the standard form

y(t) = h(x(t), t) + v(t)

where y(t) is the measured response, h(·) is the (generally nonlinear) observation function and v(t) is the measurement noise.

Generally speaking, the Extended Kalman filter method is relatively insensitive to I/O noise, and it can be used as an effective tool in non-linear optimization problems. However, it needs good initial guesses to converge to accurate results.
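The flavour of the method can be shown on a deliberately tiny toy example, not the structural formulation discussed above: the decay-rate parameter k of a forced first-order system x' = -k x + sin(t) is appended to the state vector, which makes the state equation nonlinear in the augmented state and calls for the Jacobian-based (extended) update. All values are illustrative, including the initial guesses the method needs.

```python
import numpy as np

# Illustrative joint state/parameter estimation with an EKF.
rng = np.random.default_rng(1)
dt, k_true, steps = 0.01, 2.0, 2000
x = 0.0
z = np.array([0.0, 0.5])        # augmented state [x, k]; k guessed at 0.5
P = np.diag([0.1, 1.0])         # initial state covariance (a rough guess)
Q = np.diag([1e-8, 1e-6])       # process noise covariance
R = np.array([[1e-4]])          # measurement noise variance
H = np.array([[1.0, 0.0]])      # we observe x directly

for n in range(steps):
    t = n * dt
    x += (-k_true * x + np.sin(t)) * dt            # true (simulated) system
    y = x + rng.normal(0.0, 1e-2)                  # noisy measurement
    # Predict: z -> f(z); F is the Jacobian of f at the current estimate.
    F = np.array([[1.0 - z[1] * dt, -z[0] * dt],
                  [0.0, 1.0]])
    z = np.array([z[0] + (-z[1] * z[0] + np.sin(t)) * dt, z[1]])
    P = F @ P @ F.T + Q
    # Update with the observation y = H z + v.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z + (K @ (np.array([[y]]) - H @ z.reshape(2, 1))).ravel()
    P = (np.eye(2) - K @ H) @ P

print(z[1])   # estimate of k; should move toward the true value 2.0
```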


1.2.2 Unconventional Methods

Neural Network

The neural network is a relatively new method that is able to reflect and identify complex input/output relationships. This method simulates the function of the human brain in the following two ways:

• A neural network accumulates knowledge through training & learning

• It stores knowledge in a series of parameters known as synaptic weights

During training or learning, the desired output is usually known. When the network is presented with any input data, the corresponding output is compared to the desired output. The error can then be calculated and fed back to the network, so that the weights are adjusted. The iteration of this trial and error makes the neural network learn the relationship between input and output, whether it is linear or non-linear.

From the description above, training and learning are the key steps of the neural network method. This method requires a huge input/output database, and it is also very difficult to model and train the network to fit a class of practical problems. Moreover, if the structural system is complex, a large amount of training data is required in order to obtain good identification results. This drawback makes neural networks difficult to apply in structural identification, where the large amount of training data required is very difficult to obtain owing to the difficulty of conducting experiments or field measurements on real structures.
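The train-by-error-feedback loop described above (present input, compare with the desired output, feed the error back, adjust the weights) can be sketched with a single linear neuron trained by the delta rule. This is a deliberately tiny illustration with made-up values, not a network one would use for structural identification.

```python
import numpy as np

# A single linear neuron learns the mapping y = 2*a + 3*b by the delta rule.
rng = np.random.default_rng(2)
w = np.zeros(2)                       # synaptic weights, initially zero
lr = 0.05                             # learning rate

for _ in range(2000):
    x = rng.uniform(-1.0, 1.0, 2)     # training input
    desired = 2.0 * x[0] + 3.0 * x[1] # known desired output
    output = w @ x                    # network's current output
    error = desired - output          # error fed back to the network ...
    w += lr * error * x               # ... adjusts the weights

print(w)  # approaches [2.0, 3.0]
```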

Simulated Annealing

Simulated annealing is a Monte Carlo approach for minimizing multivariate functions. It derives from an analogy with the physical process of heating and then slowly cooling a substance to obtain a strong crystalline structure. In the simulation, a cost function value indicates the energy of a state, and the minimum of the function value represents the frozen state of the substance. The simulated annealing process lowers the temperature of the system until the function value shows no further change. If the energy of a new state is lower than that of the previous one, the change is accepted unconditionally and the system is updated; if the energy is greater, the new configuration is accepted probabilistically according to the Boltzmann distribution.

In practice, the temperature is decreased in stages, and at each stage the temperature is kept constant until thermal equilibrium is reached. This is the fundamental procedure of simulated annealing. It allows the system to move consistently towards the lowest energy states, yet still "jump" out of local pitfalls due to the probabilistic acceptance of some upward moves.

Thus, thanks to its random searching nature, simulated annealing is often the first choice for optimization problems, in particular when the evaluation is simple and millions of trial-and-error evaluations are feasible. But for structural identification problems, because of the complex evaluation process, simulated annealing becomes infeasible as the time required grows steeply with the problem size.
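The staged-cooling procedure above can be sketched compactly. The example below minimizes a simple one-dimensional multi-minimum test function and is purely illustrative; the function, step size, cooling factor and stage length are all arbitrary choices.

```python
import math, random

# Minimize f(x) = x^2 + 10*(1 - cos(2*pi*x)), whose global minimum is x = 0,
# by simulated annealing with staged cooling and Boltzmann acceptance.
def f(x):
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

random.seed(3)
x = 4.3                    # start near a local, not the global, minimum
T = 10.0                   # initial "temperature"
while T > 1e-3:
    for _ in range(200):   # hold the temperature constant at each stage
        x_new = x + random.uniform(-1.0, 1.0)
        dE = f(x_new) - f(x)
        # Downhill moves are always accepted; uphill moves with
        # Boltzmann probability exp(-dE / T).
        if dE < 0 or random.random() < math.exp(-dE / T):
            x = x_new
    T *= 0.9               # lower the temperature in stages

print(x)  # near the global minimum x = 0
```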

Evolutionary Algorithms

Evolutionary Algorithms are a group of search methods inspired by natural evolution. Their philosophy is based on evolution in the natural world from a chaotic to an ordered state: better individuals are generated and subsequently survive to reproduce under the selection pressure, until the fittest take the dominant status of the group. Generally speaking, evolutionary algorithms can be categorized into three groups: evolutionary programming (EP), evolution strategies (ES) and genetic algorithms (GA).


Evolutionary Programming (EP)

EP was originally devised to achieve machine intelligence (Fogel et al 1965, 1966). The algorithm contains simulated procedures such as the evaluation of fitness to the environment, mutation of each parent combination, and selection determining the surviving combinations. These were the original forms of the genetic operations: evaluation, mutation and selection. Besides mutation, a mating operation between different combinations was also proposed but not implemented. Moreover, in order to enhance the algorithm's efficiency, self-adaptation, which evolves the mutation variances according to the evaluation result of the last generation, was added as a highly important part of the EP algorithms (Fogel et al 1991, 1992). In addition, the population size need not be kept constant and there can be a variable number of offspring per parent, much like the (μ+λ) methods offered in ES. In contrast to these methods, selection is often made probabilistic in EP, giving unfit individuals some chance of surviving to the next generation. In contrast to GA, no effort is made in EP to support schema processing, nor is the use of random variation constrained to emphasize specific mechanisms of genetic transfer. This feature perhaps provides greater versatility to tackle specific problem domains that are unsuitable for crossover. Nevertheless, for problems that require strong convergence, EP seems to need more time to find the results.

Evolution Strategies (ES)

Evolution strategies (ES) were a joint development of Bienert, Rechenberg and Schwefel (Rechenberg 1965). At the very beginning, only one parent and one descendant per generation ((1+1) ES) were tested, on a mechanical calculating machine, by Schwefel. Subsequently, Rechenberg and Schwefel analyzed and improved the (1+1) ES. Rechenberg developed a convergence rate theory for n >> 1, where n represents the dimension of the model functions (Rechenberg, 1973). Schwefel developed the (1+1) ES further into the (μ+λ) ES in his PhD thesis in 1974-1975, and the convergence theory of the (1+λ) ES was proposed subsequently. More importantly, the self-adaptation idea was incorporated, so that the mutation parameters were adapted with respect not only to the standard deviations of the mutation but also to the correlation coefficients. Consequently, the application of this strategy became wider. The parallelization of ES also developed very early: the idea of using several populations and niching mechanisms for global optimization was considered (Schwefel, 1977), but it could not be tested thoroughly at that time due to the lack of computing resources. Nowadays, ES and parallel ES are extensively used, in particular in parameter optimization.

Genetic Algorithm (GA)

GA derives from the investigation of adaptive systems, which are capable of self-modification in response to their interactions with the environment (Holland 1962). "Competition and innovation" was the key feature of robust natural systems that dynamically respond to unanticipated events and changing environments. Simple models of biological evolution appear to capture these ideas well via the notions of survival of the fittest and the continuous production of new offspring. These ideas distinguished Holland's GA from the contemporary ES for optimization (Rechenberg 1965) and EP for intelligent agents (Fogel et al 1964). In general, GA also contains the largest set of operators, such as mutation, crossover, selection and evaluation. It always carries out a group search from one generation to the next, and uses binary as well as floating-point encoding mechanisms to represent real solutions.


Compared to ES and EP, GA has been more widely employed in civil engineering problems in recent years. The major fields involved are construction scheduling, structural optimization and structural identification. Friswell (1998) applied GA to the problem of damage detection using vibration data, to identify the position of one or more damage sites in a structure and to estimate the extent of the damage at these sites. Leu et al (1999) employed GA to solve the construction-scheduling problem by using a multi-criteria optimal model. Li et al (2000) used GA to optimize the complicated design problem of integrating the number of actuators, the configuration of the actuators and the active control algorithms in buildings excited by strong wind forces. Gulati et al (2001) implemented a real-coded GA to find the optimal cross-sectional size, topology and configuration of 2-D and 3-D trusses and achieve minimum weight of truss structures. Chou et al (2001) used GA to identify changes in the characteristic properties of structural members, such as Young's modulus and cross-sectional area, which are indicated by the difference between measured and estimated responses.

It may be argued that GA is more general-purpose in comparison with EP and ES; Table 1-1 shows some of the differences between the three. As a powerful, robust search algorithm, GA is considered to have several advantages over other optimization or identification methods:

• GA searches the problem domain guided by the objective function, from population to population; thus it needs no mathematical information such as derivatives or gradients

• GA does not require a good initial guess to obtain accurate results


These advantages make the GA approach relatively simple to implement even for complicated non-linear structural problems. However, because of the stochastic nature of GA, it may sometimes fail to converge to an optimal solution due to its weakness in fine-tuning. Local search (LS) is therefore often incorporated to accelerate the convergence after GA enters the global optimum area. Shi et al (1999) developed a new optimization algorithm that combines the genetic algorithm with a recently proposed global optimization algorithm called the nested partitions method to maximize the market share of a producer. Koh et al (2003) proposed a multi-variate (MV) LS method (the GA-LS method) to identify a fairly large system with 52 unknown parameters. By means of many control rules and peak depots, Zhao S L (2002) proposed a LS algorithm that searches around several peak points, which increases the probability of obtaining the best global optimum solution. Jaszkiewicz (2002) proposed a LS algorithm that draws a utility function at random and constructs a temporary population composed of a number of best solutions among the previously generated solutions. Chelouah (2003) proposed a continuous hybrid algorithm, performing exploration with a GA and exploitation with a Nelder-Mead simplex search, for the global optimization of multi-minimum functions.

It can be concluded that the combination of GA and LS retains the benefits of both methods: it has a better fine-tuning ability without losing the global search capability. The single-population GA in this thesis adopts this GA-LS method.

1.3 Objective and Scope of Study

The thesis aims to formulate an effective genetic algorithm approach to identify large-scale systems with many unknown parameters, taking advantage of the intrinsic parallelism of GA


Firstly, the performance of sequential GA (including GA with local search) and single-population GA is examined. The purpose is to reveal their respective potential for large-scale system identification; however, the expensive computational cost limits GA's application in real-world situations.

Secondly, a simple and effective distributed computing method is developed to tackle the time-consuming nature of GA for large systems. Single-population PGA greatly reduces the computational time without changing the search result of the sequential single-population GA.

Finally, a multi-civilization genetic algorithm (MCGA) is proposed and tested. Numerical simulations illustrate its efficiency and accuracy in searching for solutions in a high-dimensional space.

Throughout the whole study, lumped mass systems (LMS) with many unknown parameters are studied as numerical examples (Figure 1-2)

1.4 Organization of the Thesis

The thesis is composed of five chapters. In Chapter 1, a literature review of relevant research works has been presented, together with an introduction to the objectives and scope of this study.

Chapter 2 starts with a preliminary exploration of the fundamentals of GA, GA with local search (GA-LS) and parallel GA (PGA). The built-in parallelism is revealed through analysis of the program structure. Numerical experiments are presented in the chapter showing the performance of GA and GA-LS; the results also illustrate that these approaches have to be parallelized due to the huge computational cost involved.


Chapter 3 explores the distributed computing method to speed up the GA search. Its applications in single-population PGA and multi-population PGA are introduced respectively. It overcomes the bottleneck that hinders the application of GA in most real-world problems. Its program structure and some network programming details are explained.

Chapter 4 proposes the MCGA approach. It applies the same distributed computing method as the multi-population PGA to expedite the search, and six integrated strategies are implemented in MCGA to improve the search algorithm.

Chapter 5 presents conclusions derived from the research and recommendations for future study.

Table 1-1 Comparison of Evolutionary Algorithms

                          | EP                        | ES                                                | GA
Representation            | Real-valued               | Real-valued                                       | Binary-valued & real-valued
Self-adaptation           | Only used in Meta-EP      | Standard deviations and co-variances              | NIL
Fitness                   | Objective function value  | Objective function value                          | Objective function value
Crossover (recombination) | NIL                       | Different variants, important for self-adaptation | Main operator
Mutation                  | Only operator             | Main operator                                     | Background operator
Selection                 | Probabilistic, extinctive | Deterministic, extinctive                         | Probabilistic, preservative



Figure 1-1 Procedure of System Identification


Chapter 2 GA, GA-LS and PGA

2.1 Introduction

As the basis of the entire research reported herein, the foundational theories of GA, GA-LS and PGA are discussed and reviewed in this chapter. The control parameters of the various operators of GA-LS are investigated through a series of numerical experiments. By analyzing the results, the limitations of the genetic algorithm are revealed; a local search operation is hence added to GA in order to strengthen its fine-tuning and convergence capabilities. This chapter provides several numerical experiment cases to demonstrate the robustness and effectiveness of GA and hybrid GA for structural identification problems.

2.2 Genetic Algorithm (GA)

The Genetic Algorithm is a probabilistic search technique. Unlike methods that randomly search points or follow a specific pattern, it explores the solution space from population to population; thus, no point-to-point information (derivative, gradient, etc.) is required. Generally speaking, it mimics the way species in the natural world evolve from one generation to another under a certain selection pressure. The "evaluation" operator gives every individual a fitness value indicating to what extent the individual fits the environment. Based on the fitness values, GA selects potentially good individuals to form the new generation. Due to this selection principle, some, or even only one, of the excellent individuals will gradually become dominant; these individuals are taken to be the solution of the search. The basic procedure of GA is shown in Figure 2-1. For convenience of description, the basic GA that follows a certain set of empirical parameters (Table 2-1) is called the "Standard GA" in this study.
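The basic procedure just described (evaluate, select, recombine, mutate, repeat) can be sketched in a few lines. The sketch below is a generic illustration, using the toy OneMax task (evolve an all-ones bit string) in place of a real identification problem; the parameter values are arbitrary, not the Table 2-1 settings.

```python
import random

# A minimal generational GA: roulette-wheel selection, one-point crossover,
# bitwise mutation, applied to the toy OneMax problem.
random.seed(4)
N_BITS, POP, GENS, CX_RATE, M_RATE = 32, 40, 60, 0.8, 0.01

def fitness(ch):
    return sum(ch)                                   # higher is fitter

def roulette(pop, fits):
    return random.choices(pop, weights=fits, k=1)[0]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    fits = [fitness(c) for c in pop]
    new_pop = []
    while len(new_pop) < POP:
        p1, p2 = roulette(pop, fits), roulette(pop, fits)
        if random.random() < CX_RATE:                # one-point crossover
            cut = random.randint(1, N_BITS - 1)
            p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
        for child in (p1, p2):
            child = [b ^ 1 if random.random() < M_RATE else b for b in child]
            new_pop.append(child)
    pop = new_pop[:POP]

best = max(pop, key=fitness)
print(fitness(best))  # best fitness found (the optimum is 32)
```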

Evaluation

For structural identification, every point in the search space represents a trial set of unknown parameters. GA evaluates a trial set by carrying out a direct analysis of the following equation of motion:

M ü + C u̇ + K u = P

where the damping matrix is taken as C = αM + βK, and α and β are the coefficients related to the damping ratios of two selected modes.

Given any trial set of parameters, the above equation is solved. The error ε between the estimated and the measured acceleration values is defined as

ε = (1 / (N L)) Σ_{i=1}^{N} Σ_{j=1}^{L} [ü_m(i, j) − ü_e(i, j)]²

where

ü_e = acceleration solution according to the estimated M, K values;
ü_m = measured acceleration;
N = number of measurement locations, and L = number of time steps.

A smaller error value indicates that the estimated M, K parameters are closer to their actual values: the smaller the error, the better the fitness value. By minimizing the error, or equivalently maximizing the fitness value, GA searches the unknown solution space.
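The evaluation step can be sketched as follows: a hypothetical single-DOF system stands in for the lumped mass model, a forward time-stepping run plays the role of the direct analysis, and the mean squared acceleration error scores the trial. All names and values are illustrative, not the thesis's implementation.

```python
import numpy as np

# Forward (direct) analysis of m*u'' + c*u' + k*u = p(t) by semi-implicit
# Euler time stepping; returns the acceleration record.
def simulate_acc(k, m=1.0, c=0.1, dt=0.01, L=1000):
    u = v = 0.0
    acc = np.empty(L)
    for j in range(L):
        p = np.sin(2.0 * np.pi * 1.5 * j * dt)      # known excitation
        a = (p - c * v - k * u) / m                 # equation of motion
        v += a * dt
        u += v * dt
        acc[j] = a
    return acc

measured = simulate_acc(k=50.0)                     # pretend-measured record

# Score a trial stiffness by the mean squared acceleration error
# (toy stand-in for the N-location, L-step error defined above).
def error(k_trial):
    est = simulate_acc(k_trial)
    return np.mean((measured - est) ** 2)

print(error(50.0), error(40.0))  # the true stiffness gives zero error
```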

Selection

Selection is the process of deciding which individuals are fit enough to survive and possibly reproduce offspring in the next generation. In a way, it is the most critical operator of GA, for the competitive pressure caused by selection controls the direction and speed of evolution. The selection pressure is closely related to the expanding ratio of the population, the selection region, the selection strategy, etc.

In most cases, the individuals of higher fitness are selected from the whole generation. But in the case of PGA with migration, selection may happen in (a) a relatively independent sub-population, or (b) the ensemble of all the sub-populations. The selection may also be realized asynchronously or synchronously: for instance, for rank-proportional mating selection, the population has to be sorted and individuals are chosen sequentially, whereas for tournament mating selection, selection happens concurrently. Selection can also happen within different sub-populations at the same time.

From the algorithmic point of view, selection methods can be classified roughly into two groups: fitness-proportionate and rank-based selection. The former selects individuals probabilistically depending on the ratio of their fitness to the average fitness of the population; examples are roulette-wheel selection (Holland, 1975), stochastic remainder selection (Booker, 1982), and stochastic universal selection (Baker, 1987). The latter first sorts all the individuals, and subsequently assigns each a probability proportional to its rank in the population; examples include linear ranking (Baker, 1985), tournament selection (Brindle, 1981) and truncation selection (Muhlenbein & Schlierkamp-Voosen, 1993).

In the procedure of the GA search, the individuals with higher fitness generate more copies (or offspring) than those with low fitness. Intuitively, the more copies an individual makes, the higher the selection pressure. A selection scheme with higher pressure makes GA converge faster than one with low pressure, but it obviously sacrifices diversity, which is critical to finding a global solution and avoiding premature convergence.

In this study, roulette-wheel selection is used in most cases; but in the cases of multi-population PGA, both roulette-wheel and ranking selection are used concurrently.
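Roulette-wheel selection can be sketched from scratch: each individual receives a slice of the wheel proportional to its fitness, and a uniform random spin picks the survivor. The implementation below is a generic illustration, not the code used in this study.

```python
import random

# Spin a fitness-proportionate wheel and return the chosen index.
def roulette_select(fitnesses):
    total = sum(fitnesses)
    spin = random.uniform(0.0, total)
    running = 0.0
    for idx, f in enumerate(fitnesses):
        running += f
        if running >= spin:
            return idx
    return len(fitnesses) - 1        # guard against rounding at the wheel's end

random.seed(5)
fits = [1.0, 1.0, 8.0]              # the third individual holds 80% of the wheel
picks = [roulette_select(fits) for _ in range(10000)]
print(picks.count(2) / len(picks))  # close to 0.8
```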

Mutation

The mutation operator adds a random disturbance to one bit (or gene) of the individuals in the new generation. Each position is given a chance (= M_Rate) of undergoing mutation; if mutation does occur, a random value is chosen from {0, 1} for that position. Whenever the mutated structure differs from the original structure, the structure is marked for evaluation. A more adaptive variant is self-adaptive mutation, where the adaptive parameters themselves are modified based on the outcome of the disturbance. Besides the mutation rate, which decides how many bits or genes are going to be modified, there is the mutation step size, which decides the value added in each mutation.
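Bitwise mutation as described can be sketched as follows: every position gets an independent chance M_Rate of being flipped, and a chromosome that changed is marked for re-evaluation. Function and variable names are illustrative.

```python
import random

# Flip each bit independently with probability m_rate; report whether the
# chromosome changed (and so needs re-evaluation).
def mutate(chromosome, m_rate):
    mutated = [bit ^ 1 if random.random() < m_rate else bit
               for bit in chromosome]
    needs_evaluation = mutated != chromosome
    return mutated, needs_evaluation

random.seed(6)
child, dirty = mutate([0, 1, 1, 0, 1, 0, 0, 1], m_rate=0.25)
print(child, dirty)
```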

Crossover

The crossover operator exchanges bits between two parent chromosomes to produce new offspring. Common variants include:

• One-point crossover: crossover occurs at a single randomly chosen point.

• Multi-point (two-point) crossover: crossover occurs at two or more points.

• Mask crossover: a mask chromosome is generated to indicate the locations where the swap of bits between the parent chromosomes takes place.

• Inverse crossover: based on two-point crossover, an additional inverse operation reverses the sequence of the exchanged parts of the chromosomes.

An important parameter of crossover is the crossover rate, which indicates how many chromosomes of the generation are chosen for crossover.
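The one-point and mask variants, for example, can be sketched as follows (Python; chromosomes represented as bit lists, function names mine):

```python
import random

def one_point_crossover(p1, p2):
    """Swap the tails of two parents after a randomly chosen cut point."""
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mask_crossover(p1, p2, mask):
    """Swap bits between the parents wherever the mask chromosome is 1."""
    c1 = [q if m else p for p, q, m in zip(p1, p2, mask)]
    c2 = [p if m else q for p, q, m in zip(p1, p2, mask)]
    return c1, c2
```

Two-point and inverse crossover follow the same pattern, with two cut points and an extra reversal of the exchanged segment, respectively.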

2.3 GA-LS Method (GA-LS)

The random nature of the genetic algorithm results in poor search efficiency, or worse, the search may leave the correct region. This is because a common genetic algorithm may still evolve too slowly to find the right value even though the search has reached the correct region. To avoid this problem and enhance search efficiency, the local search (LS) strategy proposed by Koh et al (2003) is adopted. In this thesis, the GA-LS method is integrated through a parallel and distributed model. An interface is created to link LS as if it were an operator. Figure 2-3 shows the flow chart of the local search operator.
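The LS operator itself is not reproduced here; as an illustration of the idea only, the following hypothetical hill-climbing refinement perturbs one parameter of a candidate at a time and keeps any move that lowers the identification error (the actual operator of Koh et al (2003) differs in detail):

```python
import random

def local_search(candidate, objective, step=0.05, n_trials=50):
    """Hill-climbing refinement: perturb one parameter at a time and
    accept the move only if it reduces the objective (here standing in
    for the error between measured and simulated responses)."""
    best = list(candidate)
    best_err = objective(best)
    for _ in range(n_trials):
        trial = list(best)
        i = random.randrange(len(trial))
        trial[i] *= 1.0 + random.uniform(-step, step)
        err = objective(trial)
        if err < best_err:
            best, best_err = trial, err
    return best, best_err
```

Applied to the fittest individuals of a generation, such a refinement step supplies the fine convergence that the coarse genetic operators lack.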

2.4 Parallel Genetic Algorithm (PGA)

The reasons for choosing parallel genetic algorithm are as follows. Firstly, the evaluation and local search operators are independent. Secondly, they are very time-consuming, so that, relatively speaking, the communication time can be neglected. Thirdly, a sequential GA may get trapped in a local optimal region of the search space and thus fail to find the correct solution. With PGA, additional diversity can be re-introduced into a prematurely converged sub-population, which prevents the search from being trapped in a local pitfall. Finally, by dividing one big initial population into several small sub-populations, PGA can search a bigger initial space.

PGA can be classified mainly into single-population and multi-population GA. Figure 2-4 outlines a classification of PGA; the main classes are introduced below.

Single Population Genetic Algorithm

Single-population GA does not divide the initial population into several sub-populations. The simplest parallel scheme is the "master-slave model", which merely increases computational speed by parallelizing evaluation and local search. This method does not change the nature of the algorithm. When the fitness evaluations are expensive relative to the GA operations, linear speed-up can be obtained. The distributed computing method introduced in the next chapter obtains nearly linear speed-up; with this method, Koh et al (2001) reduced the GA search time from nearly one week to 10 hours.

A synchronous master-slave model is chosen in this study rather than asynchronous parallelism, because a global selection strategy can easily be implemented after the whole evaluation process has finished. A significant improvement in execution speed can also be expected through this model. However, the whole process has to wait for the slowest processor to finish its parallel operation before proceeding to the next generation. The asynchronous method overcomes this, but the algorithm is too complicated to control and implement. By correctly handling the load balance between master and slaves, the synchronous master-slave model can also avoid such time wasting. The detailed implementation of the proposed method is discussed in the next chapter.
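The synchronous step can be sketched as below (Python threads stand in for the slave processors here; the thesis implementation distributes the work over a network of computers, as described in the next chapter):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_population(population, fitness_fn, n_workers=4):
    """Synchronous master-slave evaluation: the master farms out the
    expensive fitness evaluations to the slaves and blocks until all
    have returned, so a global selection can follow immediately."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(fitness_fn, population))
```

Because `map` blocks until every evaluation is done and preserves order, the master sees the whole generation's fitness values before selection, exactly the property that makes global selection straightforward in the synchronous model.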

Fine-grained Genetic Algorithm

Fine-grained GA consists of a single spatially structured population. It is, in a way, a single population with a special "in-population" topology. Ideally, each individual has its own processor, so that the fitness evaluation can be performed simultaneously on multiple processors for all individuals. Selection and mating are restricted to a small neighborhood around each individual. The neighborhoods overlap, so that the good traits of a superior individual can eventually spread to the entire population. Robertson (1987) parallelized all the components of the algorithm: the selection of parents, crossover, and mutation. He suggested that its execution time was independent of the population size if a sufficient number of processors was used, which allowed users to experiment with much larger populations than usual. This type of algorithm better simulates the local selection and mating that occur in natural populations (Manderick and Spiessens, 1989).

Trang 29

Coarse-grained Genetic Algorithm

Coarse-grained GA divides the initial population into several sub-populations. At selected time intervals, single individuals may move from one sub-population to another. This multi-population migration method is inspired by the biological phenomenon that isolated environments, such as islands, often produce animal species that are more specifically adapted to the peculiarities of their environments; hence the method is also named the island model. Pettey, Leuze and Grefenstette (1987) published one of the first works on this method, proposing several sub-populations that evolve concurrently.

Hierarchical Hybrids Genetic Algorithm

Hierarchical hybrid GA parallelizes the multi-population method at a higher level with single-population PGA at a lower level. Manderick and Spiessens (1989) proposed this algorithm as an extension to the conventional fine-grained algorithms that they examined.

Stepping Stone Model Genetic Algorithm

While the island model allows individuals to migrate to arbitrary sub-populations, the stepping stone model, another migration genetic algorithm, restricts movement to neighboring sub-populations only. Tanese (1987, 1989b) implemented this method on an NCube machine to solve function optimization problems. Keeping the size of the overall population constant, an almost linear speed-up was achieved on up to 64 processors.
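A minimal sketch of stepping-stone-style migration on a ring of sub-populations (the function name and ring topology are my illustration; the island model generalizes this by allowing arbitrary destinations):

```python
def migrate_ring(islands, fitness_fn, k=1):
    """Stepping-stone migration on a ring: island i receives the best k
    individuals of island i-1 in place of its own worst k."""
    n = len(islands)
    emigrants = [sorted(isl, key=fitness_fn, reverse=True)[:k]
                 for isl in islands]
    new_islands = []
    for i, isl in enumerate(islands):
        survivors = sorted(isl, key=fitness_fn, reverse=True)[:-k]
        new_islands.append(survivors + emigrants[(i - 1) % n])
    return new_islands
```

Called every few generations, such a step injects fresh genetic material into each sub-population, which is what re-introduces diversity into prematurely converged islands.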


Diffusion Model Genetic Algorithm

Different from the migration methods with their migration intervals, the diffusion model randomly accesses all individuals, which are settled within a specified neighborhood. These individuals can be considered as moving around within their neighborhood, and genetic information is transferred through the overlapping neighborhoods. The degree of overlap determines the degree of isolation of the sub-populations. The group of Muhlenbein (1991, 1993, 1996) investigated this diffusion model extensively and introduced a selection mechanism that allows individuals to select a recombination partner from a two-dimensional neighborhood.

2.5 Numerical Study

A set of standard control parameters is used in the following cases. These parameters are empirically deduced from previous experiments; the details are listed in Table 2-1.

The objectives of the following numerical examples are to:

• Show the efficiency and feasibility of GA in system identification.

• Examine the search capability and limitations of GA.

• Show the improvement in search capability achieved by GA-LS.

Case 1: 10-DOF Lumped Mass System (LMS)

A 10-DOF LMS with 12 unknown parameters (10 stiffness coefficients and 2 damping ratios) is studied first. The mass values are assumed to be known. The true values of the structural parameters are given in Table 2-2. The damping ratios are assumed to be 5% of critical damping for the first two modes. Random excitations of Gaussian white noise are applied at certain levels. Instead of recording measurements from sensors, Newmark's constant acceleration method (Newmark, 1959) is adopted to generate the responses, as if they were the values measured by sensors. Two seconds of dynamic response are generated at every 0.002 s. The evaluation error is then obtained through the forward analysis, i.e. the "evaluation operator": the exact values are used to calculate the time response of the structure, which replaces the measured response of an actual project. If the responses of all levels are used in the forward analysis, the numerical experiment is termed "complete response measurement"; otherwise, it is termed "incomplete response measurement". To better simulate real-world situations, incomplete response measurement cases are studied by taking 50% or 25% of the responses. Table 2-3 compares the results of complete and 50% incomplete measurement.

Moreover, the measurements are assumed to be free of noise. The search ranges from –50% to +50% of the "true" value; the mean error of a blind search would therefore be around 25%. The result shows a mean error of 6.6% under the complete measurement condition and 10.5% under incomplete measurement. Thus, the standard GA is effective for such small system identification problems.
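For a single degree of freedom, the constant-average-acceleration Newmark scheme (β = 1/4, γ = 1/2, unconditionally stable) used to simulate the "measured" responses can be sketched as follows; the multi-DOF case follows the same recursion with matrices M, C, K in place of the scalars:

```python
def newmark_sdof(m, c, k, f, dt, u0=0.0, v0=0.0, beta=0.25, gamma=0.5):
    """Newmark time integration of m*u'' + c*u' + k*u = f(t) for a
    single-DOF oscillator; returns the displacement history.  The
    defaults give the constant (average) acceleration scheme."""
    n = len(f)
    u, v, a = [0.0] * n, [0.0] * n, [0.0] * n
    u[0], v[0] = u0, v0
    a[0] = (f[0] - c * v0 - k * u0) / m
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    for i in range(n - 1):
        # effective load assembled from the state at step i
        p_eff = (f[i + 1]
                 + m * (u[i] / (beta * dt ** 2) + v[i] / (beta * dt)
                        + (0.5 / beta - 1.0) * a[i])
                 + c * (gamma * u[i] / (beta * dt)
                        + (gamma / beta - 1.0) * v[i]
                        + dt * (gamma / (2.0 * beta) - 1.0) * a[i]))
        u[i + 1] = p_eff / k_eff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt ** 2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1.0) * a[i])
        v[i + 1] = v[i] + dt * ((1.0 - gamma) * a[i] + gamma * a[i + 1])
    return u
```

Running this forward model with trial stiffness and damping values, and comparing the result against the 2 s of "measured" response sampled at 0.002 s, is precisely the role of the evaluation operator described above.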

Case 2: 10-DOF LMS with 5% noise

In this study, zero-mean Gaussian white noise is used to artificially contaminate the random excitation (input) and response (output) time histories. To account for the effect of input/output (I/O) noise, randomly generated noise of 5% is added to the simulated result of the forward analysis, which is obtained from the true values of stiffness and damping ratio. The guiding effect of the forward analysis is therefore weakened.


Case 3: 10-DOF LMS with 10% noise

In this case, a higher level of noise is imposed on the reference response to test whether GA can cope with such misleading information. The results in Table 2-4 show that under 5% or 10% I/O noise, with 25% incomplete measurement, GA can identify the LMS with 12 unknown parameters with mean errors of 9.4% and 11.4%, respectively.

Case 4: 10-DOF LMS by GA-LS (with 0%, 5%, 10% noise)

In these cases, the condition of 25% incomplete measurement is retained. To check the effect of GA-LS on the 10-DOF LMS, three trials with 0%, 5% and 10% noise are executed respectively. The results are shown in Table 2-5.

Without noise, GA-LS finds the answer with a mean error of 1.3% and a maximum absolute error of 2.5%. With 5% and 10% noise, GA-LS yields mean errors of 5.4% and 8.8%, respectively. The results show that it is difficult for pure GA to refine the result: without the help of the LS operation, pure GA cannot effectively search a system with 10 DOFs. According to Figure 2-5 (a), taken from Chen's experimental results, the maximum error even reaches 27.7% and the mean error 8.62% when the number of unknowns grows to 52. Figure 2-5 (b) illustrates that GA-LS finds the answer with a maximum error of 6.72% and a mean error of 1.94% without I/O noise (search range: +30%).

Case 5: 50-DOF LMS by GA-LS (with 0%, 5%, 10% noise)

Figures 2-6, 2-7 and 2-8 show a series of results with 0%, 5% and 10% noise, respectively (search range: +50%). Unless otherwise mentioned, the simulations use 25% incomplete response measurement as the default. The results demonstrate that GA-LS is still effective for systems with up to 52 unknown parameters, in particular at the 0% noise level. Moreover, the method tolerates a certain level of I/O noise.

In this case study, the efficiency of the GA-LS search becomes increasingly critical despite its satisfactory accuracy. With the number of unknown parameters increased to 52, the total time consumed on a single computer is as long as one week. Table 2-6 shows that the computational effort of the sequential algorithm is enormous; speed becomes the bottleneck of the GA-LS method.

2.6 Concluding Remarks

This chapter examines the effectiveness and search capability of GA, with the accuracy of the search as the main concern. The numerical experiments show that the standard GA can find the answers for relatively small systems in which the number of unknown parameters is less than 12. GA-LS greatly improves the search results and is able to identify larger structural systems: it successfully identified a lumped mass system with 52 unknown parameters while keeping the mean error below 3%.

The experiments show that 25% measurement is sufficient for the proposed GA, even though incomplete measurement loses much information that would be useful for identifying the system.

A certain level of I/O noise, which always exists in real experiments, can be tolerated when GA or GA-LS is used to solve lumped mass system identification. In this thesis, problems with 5% or 10% noise are studied as well; the results show that the identification error remains around the level of the I/O noise.


Table 2-1 Control Parameters List

Table 2-2 Exact Parameters of the 10-DOF LMS


Table 2-3 Identification Results of Case 1 without I/O noise
(Columns: Parameters; Value (kN/m) and Error under complete response measurement; Value (kN/m) and Error under 50% incomplete response measurement)


Table 2-4 Identification Results of Cases 2 & 3
(Columns: Parameters; Value (kN/m) and Error with 5% noise; Value (kN/m) and Error with 10% noise)


Table 2-5 Case 4 Identification Results by GA-LS
(Columns: Parameters; Value (kN/m) and Error with 0%, 5% and 10% noise)


(Figure: flow chart of GA — initial population → evaluation of fitness value → selection → … → terminal condition reached? yes → stop)


(Figure: mask crossover example showing Parent 1, Parent 2 and mask chromosomes, with bit strings 1111010101000100 and 1000011110111101)
