
APPLICATIONS OF SOFT COMPUTING

IN GEOPHYSICAL EXPLORATION

OF OIL AND GEOTHERMAL FIELDS

A DOCTORAL DISSERTATION

HO Trong Long


APPLICATIONS OF SOFT COMPUTING IN GEOPHYSICAL EXPLORATION OF OIL AND GEOTHERMAL FIELDS

by

HO Trong Long

B.Sc. in Information Technology (Computer Science), University of Natural Sciences, VN (1998)

B.Eng. in Petroleum Engineering (Geology & Petroleum), University of Technology, VN (1999)

M.Sc. in Solid Earth Geophysics (Physics), University of Natural Sciences, VN (2002)

Supervised by

Prof. Dr. Sachio EHARA

Department of Earth Resources Engineering

Graduate School of Engineering, KYUSHU UNIVERSITY

JAPAN

2007


Geophysical data analysis can be seen as a statistical problem whose variables contain imprecision, partial truth and uncertainty. It also suffers from partially structured models, incomplete information and large quantities of complex data, which lead to high computation costs.

Soft computing (Zadeh, 1992) represents a significant paradigm shift in the aims of computing, a shift that reflects the fact that the human mind, unlike present-day computers, possesses a remarkable ability to store and process information that is tolerant of imprecision, partial truth and uncertainty for a particular problem. The three main components of soft computing are Artificial Neural Networks (ANN), Genetic Algorithms (GA) and Fuzzy Logic (FL).

This thesis presents applications of soft computing to geophysical data analysis during exploration of petroleum and geothermal reservoirs. The two soft computing techniques studied are GA for automatic generation of an ANN architecture, applied to three-dimensional (3-D) inversion of mise-à-la-masse (MAM) resistivity data at a geothermal field; and FL for the selection of ANN inputs, applied to porosity and permeability estimation at an oil field.

This study has four objectives:

1. The first objective is to study the practical use of soft computing in oil and geothermal exploration. Two case studies, at the Takigami geothermal field in Japan and the A2-VD oil prospect in Vietnam, will be presented.

2. As different practical cases require different soft computing adoptions, the second objective is to develop an appropriate method for each problem at hand (e.g., by modifying standard techniques).

3. The third objective is to advance the practical utility of geophysical techniques by reducing their time cost, such as that of the four-dimensional geo-electrical technique "fluid-flow tomography" developed by the Geophysics Laboratory of Kyushu University.

4. The last objective is to study the oil potential of the A2-VD prospect to support the development of oil fields in Vietnam.

In the first case study, a three-layered feed-forward neural network using the Resilient PROPagation (RPROP) learning algorithm was found to be the most efficient among the various available algorithms. The number of neurons in the single hidden layer of the network was either selected heuristically or generated automatically by a GA. In this case, not a normal GA but a modified micro-genetic algorithm (micro-GA) incorporating a "hill climbing" technique was developed in FORTRAN, in order to reduce the number of fitness-function evaluations. The fitness function was defined from the test error of the network after a specified number of training iterations. Both search methods (heuristic and modified micro-GA) show that a hidden layer with 45 neurons is the most effective, which gives confidence that the obtained optimum is a global one. The adopted network was then used for quick 3-D inversion of MAM data measured by the "fluid-flow tomography" method at the Takigami geothermal field. This approach reduces the data processing time and advances the practical utility of the "fluid-flow tomography" method.
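As an illustration of how such a micro-GA search for the hidden-layer size can be organised, the sketch below encodes the neuron count as a bit string, restarts the tiny population around the elite when it converges, and finishes with a hill-climbing step. It is only an outline of the idea, not the FORTRAN program of this thesis; the candidate range of 1-64 neurons, the population size of 5 and the placeholder fitness curve are assumptions for the example.

```python
import random

def test_error(n_hidden):
    """Placeholder fitness: in the real case this trains the RPROP network for a
    fixed number of iterations and returns its test error; faked here with a
    smooth curve whose minimum lies near 45 hidden neurons."""
    return (n_hidden - 45) ** 2 / 1000.0

def micro_ga(bits=6, pop_size=5, generations=40):
    decode = lambda s: 1 + int(s, 2)                         # 1..64 hidden neurons
    rand_ind = lambda: ''.join(random.choice('01') for _ in range(bits))
    pop = [rand_ind() for _ in range(pop_size)]
    best = min(pop, key=lambda s: test_error(decode(s)))
    for _ in range(generations):
        # elitism + tournament selection + single-point crossover (micro-GAs omit mutation)
        new_pop = [best]
        while len(new_pop) < pop_size:
            p1 = min(random.sample(pop, 2), key=lambda s: test_error(decode(s)))
            p2 = min(random.sample(pop, 2), key=lambda s: test_error(decode(s)))
            cut = random.randint(1, bits - 1)
            new_pop.append(p1[:cut] + p2[cut:])
        pop = new_pop
        best = min(pop, key=lambda s: test_error(decode(s)))
        if all(s == best for s in pop):                      # converged: restart around the elite
            pop = [best] + [rand_ind() for _ in range(pop_size - 1)]
    # final hill climbing around the best candidate
    n = decode(best)
    while True:
        neighbours = [m for m in (n - 1, n + 1) if 1 <= m <= 2 ** bits]
        better = min(neighbours, key=test_error)
        if test_error(better) < test_error(n):
            n = better
        else:
            return n

print("selected hidden neurons:", micro_ga())
```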

In the second case study, after many trials of various algorithms, a three-layered feed-forward neural network using the Batch Back-Propagation learning algorithm was chosen for the porosity network, while the Quick Propagation learning algorithm was more effective for the permeability network. Because the training data set was small, the network architectures were selected heuristically. The inputs of these networks were well log data; eight logging curves were used: gamma ray, calliper, neutron, density, sonic and three resistivity logs (LLS, LLD, MSFL). A fuzzy ranking technique was applied to identify noisy well logs by correlating them with core measurements.

The noise probably arose from complex borehole conditions in the fractured and vuggy basement rock. The fuzzy ranking found that the calliper and gamma ray logs were highly noisy and should not be used as inputs for the porosity network, while the calliper and MSFL logs were unreliable for training the permeability network because of their high noise level. After filtering the noisy inputs, the neural networks predicted the reservoir parameters (porosity and permeability) with high accuracy at the A2-VD oil prospect. To demonstrate the practical value of the soft computing technique, the results were combined with other data to build a more comprehensive three-dimensional structural model of the whole prospect. The porosity was combined with reservoir dynamic data to map the porosity distribution; the map delineates fractured/vuggy zones with a high possibility of oil accumulation. Both the porosity and the permeability were used to adjust faults and fractures interpreted from seismic data. This adjustment was necessary because mapping faults and fractures in basement rock is much more difficult: the seismic energy is strongly attenuated and the resulting wave field consists mainly of low-frequency reflections. Moreover, strong reflections (fault/fracture surfaces) in the basement coincide with good porosity and high permeability. Finally, a model for hydrocarbon potential assessment of the prospect was presented. The model describes a play concept that explains the generation, migration and accumulation of hydrocarbons. This research provides a good understanding of the A2-VD prospect and can potentially play an important role in the future development of oil fields in Vietnam.

In summary, the thesis demonstrates the capacity of soft computing to solve problems not only in geophysical exploration of oil and geothermal fields, but also in various similar problems of geophysical data analysis.

The layout of the thesis is as follows:

Chapter 1, Introduction, presents the main ideas of the study, including general information on soft computing and the background of the thesis. This chapter also gives an overview of previous works during the last few years and states the objectives of the study.

Chapter 2, Soft computing, presents basic concepts of the soft computing components. The chapter also presents various forms of their fusion, based on the advantages and disadvantages of each method, and discusses the arguments that lead to the selection of the two fusion methods used throughout the thesis.

Chapter 3, Reservoir monitoring in a geothermal field using neural networks & genetic algorithm, focuses on the fusion of neural networks and a genetic algorithm. In this chapter, a modified micro-genetic algorithm is developed as a FORTRAN program to speed up the search procedure of a normal genetic algorithm. The modified micro-genetic algorithm is used for automatic generation of the neural network architecture that is then applied to quick three-dimensional inversion of mise-à-la-masse resistivity data. The aim of applying the neural network is to reduce the data processing time and to advance the practical utility of a four-dimensional geo-electrical technique, "fluid-flow tomography", developed by the Geophysics Laboratory of Kyushu University. To illustrate the capacity of the fusion method, a case study at the Takigami geothermal field in Japan is presented.

Chapter 4, Oil field study from reservoir properties estimation by fuzzy-neural networks, focuses on the fusion of fuzzy logic and neural networks, the so-called fuzzy-neural networks. This chapter deals with the application of a fuzzy ranking technique to select variables for the neural network inputs by filtering noise out of the dataset. Results are shown for a case study of reservoir properties (porosity, permeability) estimation by fuzzy-neural networks at the A2-VD oil prospect in Vietnam. To give a practical view of the computed results, the porosity and permeability are combined with reservoir dynamic data and seismic data to yield highly reliable maps of porosity distribution and fault systems. In conclusion, a general hydrocarbon potential assessment of the prospect is given, including the three-dimensional structural model, the fractured/vuggy zones prone to oil accumulation, and a play concept of the prospect that explains where the hydrocarbon comes from (mature source rock), the direction of oil migration and the region of oil accumulation.

Chapter 5, Summary and conclusions, summarizes the studies, discusses the most important results and concludes the thesis.


Acknowledgements

I would like to express my sincere gratitude to the Japan International Cooperation Agency (JICA) for their financial support during my study at Kyushu University through the doctoral scholarship award from September 2004 to September 2007.

I am deeply indebted to Prof. Dr. Koichiro Watanabe for his hearty help in introducing me to the JICA scholarship, and for his motivation and moral support throughout my stay at Kyushu University.

I wish to express my deepest thanks to my former supervisor, Prof. Dr. Keisuke Ushijima, for his invaluable help and guidance throughout my thesis, and for his continuous support of my attendance at several international conferences and research meetings. He also kindly nominated me as President of the Kyushu University Geophysical Society for the period 2005-2007.

I would also like to express my warmest thanks to my supervisor, Prof. Dr. Sachio Ehara, for his supervision of my thesis work, for his outstanding advice throughout my entire studies and for his encouragement in every aspect of this work. I am deeply thankful for his great help, support and motivation during the final stage of completing the thesis.

I wish to extend my deepest gratitude to Prof. Hideki Mizunaga for his help in implementing the Artificial Neural Networks used with the "fluid-flow tomography" method, and for his valuable advice and expertise in computing facilities.

My sincere thanks go to Prof. Dr. Ryuichi Itoi, Department of Earth Resources Engineering of Kyushu University, and Prof. Dr. Kenji Jinno, Department of Urban and Environmental Engineering of Kyushu University, for their constructive criticisms, suggestions and comments, which greatly helped to improve the quality of this dissertation to its present form.

I express my sincere appreciation to the Idemitsu Oita Geothermal Co., Ltd., the Japan Vietnam Petroleum Co. (JVPC) and the Vietnam Petroleum Corporation for providing the data for my research.

I am thankful to Dr. Yutaka Sasaki and Dr. Toshiaki Tanaka, Department of Earth Resources Engineering of Kyushu University, for many helpful and interesting discussions.

I especially thank Dr. Louise Pellerin, President of the Society of Exploration Geophysicists - Near Surface Geophysics (SEG-NSG), for kindly presenting me with the 2006 Student Travel Grant to attend and present this research at the SEG Annual Meeting in New Orleans, USA. I also thank Dr. Bui T. Huyen, Stanford University; Dr. Gad El-Qady, National Research Institute of Astronomy and Geophysics (NRIAG) of Egypt; Dr. W. Al-Nuaimy, University of Liverpool; Dr. Viacheslav Spichak, Geoelectromagnetic Research Institute RAS of Russia; Dr. Lindsay Thomas, University of Melbourne; and Dr. Colin Farquharson, Memorial University of Newfoundland, for their contributions and discussions during our meetings at international conferences.

My sincere gratitude goes to Mr. Hiroshi Iwadate, Mr. Nihei Naoki, Mr. Yasuhiro Suhara, Ms. Chikako Yoshino and all the JICA/JICE staff for taking care of me during my three years in Japan. I sincerely thank all of my friends in the Exploration Geophysics Laboratory of Kyushu University, who have made my studies in Japan so enjoyable.

Finally, I would like to express my profound gratitude to my beloved wife, my dearest son, my sister, my brothers, and especially my parents, who have always been an immense source of inspiration to finish this project.

Fukuoka, July 2007

HO Trong Long


Chapter 1

Introduction


This chapter introduces the background of the study, an overview of previous research, the objectives of the research and the thesis organization.

1.1 Background

In recent years, "soft computing" has become a well-known term not just in the academic world but across industry. Attention has focused on this type of approach, and experience has shown how soft computing techniques are advantageous and can be rapidly implemented in numerous areas, such as pattern recognition, classification, complex process control, and signal processing.

Lotfi A. Zadeh (born in 1921) was the first to systematically raise the problem of integrating techniques peculiar to fuzzy logic with neural and evolutionary techniques, and in 1992 he named that integration "soft computing".

Soft computing is thus a methodology that tends to fuse synergistically the different aspects of fuzzy logic, neural networks, evolutionary computation, and non-linear distributed systems, so as to define and implement hybrid systems (neuro-fuzzy, fuzzy-genetic, fuzzy cellular neural networks, etc.). The basic principle of soft computing is the combined use of these computation techniques to achieve a higher tolerance of imprecision and approximation, and thereby to obtain new software and hardware products at lower cost that are robust and better integrated into the real world. The hybrid systems deriving from this combination of soft computing techniques are considered to be the new frontier of artificial intelligence. Admittedly, intelligence itself is not easy to define, but we can say that a system is intelligent if it is able to improve its performance, or maintain an acceptable level of performance, in the presence of uncertainty. In effect, the role model for soft computing is the human mind.

The successful applications of soft computing and the rapid growth of its theory suggest that the impact of soft computing will be felt increasingly in coming years. Soft computing is likely to play an especially important role in science and engineering, but eventually its influence may extend much farther.

For many years, geophysical data analysis has been known as a difficult task because of its uncertainties, imprecision, partial truth, partially structured models, incomplete information, and large quantities of complex data leading to high computation costs. Although a few researchers have studied applications of soft computing to several geophysical problems, such work remains modest beside the great undiscovered advantages of soft computing.

With the rapid growth of soft computing and the improvement of state-of-the-art computational techniques, soft computing is now widely accessible to all geophysical scientists. This thesis therefore aims to study the capacity of soft computing components in particular cases that are meaningful for economic development and sustainable energy development (i.e., the oil and geothermal industries), and to enhance the practical utility of geophysical methods in the future. The study offers a full view of soft computing, including neural networks, evolutionary computing, fuzzy logic and their fusions.

1.2 Overview of previous research

Among the geophysical research using soft computing during the last several years, some noticeable studies can be outlined as follows:

In general studies, Aminzadeh produced some key work on the application of soft computing in the oil industry (Aminzadeh and Groot, 2006), while Stephen and Carlos specialized in fuzzy logic and neural networks for solving geophysical problems such as missing data in geoscience databases or parameter estimation (Stephen, 2006; Carlos et al., 2000).

Inversion of geo-electrical data using soft computing has been studied for many years. One-dimensional inversion of geo-electrical resistivity sounding data using artificial neural networks (Singh et al., 2005) and two-dimensional inversion of direct current resistivity data using neural networks (El-Qady and Ushijima, 2001) or genetic algorithms (Christoph et al., 2005; Ahmet et al., 2007) have been investigated. Inversion of potential-field data using a hybrid-encoding genetic algorithm was implemented by Chen (Chen et al., 2006). For three-dimensional inversion of magnetotelluric data, a case study using neural networks was presented by Spichak (Spichak and Popova, 2000), and Manoj applied artificial neural networks to the analysis of magnetotelluric time series (Manoj and Nagarajan, 2003). In addition, Marco and Adam (2002), and Jinlian and Yongji (2005), applied genetic algorithms to 2-D inversion of magnetotelluric data.

In the petroleum industry, well logs and seismic data are the two most important kinds of geophysical data. Since soft computing is applicable and efficient for solving many problems, applications of soft computing to the analysis of well log and seismic data have been considered. Some researchers proposed hybrid neural networks, or the integration of neural networks and fuzzy logic, for improved reservoir property prediction from well log data (Aminzadeh et al., 2000; Lim, 2005; Aminzadeh and Brouwer, 2006), while Ouenes (2000) applied fuzzy logic and neural networks to fractured reservoir characterization. Other researchers applied soft computing to lithology identification (Hsieh et al., 2005; Bursika and Rogova, 2006; Maiti et al., 2007).

The seismic method plays a very important role in petroleum exploration. Studies of seismic attributes using neural networks and fuzzy logic were carried out by Aminzadeh and Groot (2004), Aminzadeh and Wilkinson (2004) and Klose (2002). Abdelkader and Kristofer applied neural networks to detect faults from seismic data (Abdelkader et al., 2005; Kristofer and Matthijs, 2005). Meanwhile, Patricia used a genetic annealing (GAN) technique to model seismic attributes (Patricia et al., 2002). In another work, Matos presented a method of unsupervised seismic facies analysis using self-organizing neural networks (Matos et al., 2007).

In short, soft computing has been successfully applied to various geophysical problems whose attributes are uncertainty, imprecision, partial truth, partially structured models, incomplete information and large quantities of complex data leading to high computation costs.

1.3 Objectives of the study

From the background mentioned above, this thesis has four objectives:


1. To study the practical use of soft computing in oil and geothermal exploration, based on two case studies at the Takigami geothermal field in Japan and the A2-VD oil prospect in Vietnam.

2. As different practical cases require different soft computing adoptions, to develop the methods appropriately for the problems at hand (e.g., by modifying standard techniques).

3. To advance the practical utility of geophysical techniques by reducing their computing time, such as that of the four-dimensional geo-electrical technique "fluid-flow tomography" developed by the Geophysics Laboratory of Kyushu University.

4. To use soft computing to study the oil potential of the A2-VD prospect and thereby support the development of oil fields in Vietnam.

1.4 Thesis Organization

The chapter layout is as follows:

- Chapter 1, Introduction, presents the main ideas of the study, including general information on soft computing and the background of the thesis. This chapter also gives an overview of previous works during the last few years and states the objectives of the study.

- Chapter 2, Soft computing, presents basic concepts of the soft computing components. The chapter also presents various forms of their fusion, based on the advantages and disadvantages of each method, and discusses the arguments that lead to the selection of the two fusion methods used throughout the thesis.

- Chapter 3, Reservoir monitoring in a geothermal field using neural networks & genetic algorithm, focuses on the fusion of neural networks and a genetic algorithm. In this chapter, a modified micro-genetic algorithm is developed as a FORTRAN program to speed up the search procedure of a normal genetic algorithm. The modified micro-genetic algorithm is used for automatic generation of the neural network architecture that is then applied to quick three-dimensional inversion of mise-à-la-masse resistivity data. The aim of applying the neural network is to reduce the data processing time and to advance the practical utility of a four-dimensional geo-electrical technique, "fluid-flow tomography", developed by the Geophysics Laboratory of Kyushu University. To illustrate the capacity of the fusion method, a case study at the Takigami geothermal field in Japan is presented.

- Chapter 4, Oil field study from reservoir properties estimation by fuzzy-neural networks, focuses on the fusion of fuzzy logic and neural networks, the so-called fuzzy-neural networks. This chapter deals with the application of a fuzzy ranking technique to select variables for the neural network inputs by filtering noise out of the dataset. Results are shown for a case study of reservoir properties (porosity, permeability) estimation by fuzzy-neural networks at the A2-VD oil prospect in Vietnam. To give a practical view of the computed results, the porosity and permeability are combined with reservoir dynamic data and seismic data to yield highly reliable maps of porosity distribution and fault systems. In conclusion, a general hydrocarbon potential assessment of the prospect is given, including the three-dimensional structural model, the fractured/vuggy zones prone to oil accumulation, and a play concept of the prospect that explains where the hydrocarbon comes from (mature source rock), the direction of oil migration and the region of oil accumulation.

- Chapter 5, Summary and conclusions, summarizes the studies, discusses the most important results and concludes the thesis.

Chapter 2

Soft computing

Since soft computing has attracted wide interest, many books and presentations on soft computing have been published. A basic understanding of soft computing and its applications can be found in "Soft computing: new trends and applications" (Fortuna, 2001) or "Soft computing: methodologies and applications" (Hoffmann, 2005). For a short introduction to Neural Networks, Fuzzy Systems, Genetic Algorithms and their fusion, the recommended book is "Fusion of Neural Networks, Fuzzy Systems and Genetic Algorithms: Industrial Applications" (Lakhmi and Martin, 1998).

In this chapter, only a brief introduction to soft computing is given, based on Lakhmi and Martin (1998). In the case studies of Chapter 3 and Chapter 4, the respective soft computing approaches are explained in detail.

Soft computing was first envisaged by Lotfi A. Zadeh in the 1990s (Zadeh, 1994). In general, soft computing refers to a collection of computational techniques in computer science, artificial intelligence, machine learning and some engineering disciplines that attempt to study, model and analyze very complex phenomena: those for which more conventional methods have not yielded low-cost, analytic and complete solutions. The principal constituents of Soft Computing (SC) are Artificial Neural Networks (ANNs), Evolutionary Computation (EC), Fuzzy Logic (FL), Machine Learning (ML) and Probabilistic Reasoning (PR), with the latter subsuming belief networks, chaos theory and parts of learning theory. Below, only the three main components of SC that are used successfully in this study, i.e., ANNs, EC and FL, are described.

2.1 Artificial Neural Networks

Artificial Neural Networks (ANNs) mimic biological information processing mechanisms. They are typically designed to perform a nonlinear mapping from a set of inputs to a set of outputs. ANNs attempt to achieve biological-system-like performance using a dense interconnection of simple processing elements analogous to biological neurons. ANNs are information driven rather than data driven. They are non-programmed, adaptive information processing systems that can autonomously develop operational capabilities in response to an information environment. ANNs learn from experience and generalize from previous examples. They modify their behavior in response to the environment, and are ideal in cases where the required mapping algorithm is not known but tolerance to faulty input information is required.

ANNs contain electronic processing elements (PEs) connected in a particular fashion. The behavior of the trained ANN depends on the weights, which are also referred to as the strengths of the connections between the PEs. ANNs offer certain advantages over conventional electronic processing techniques: generalization capability, parallelism, distributed memory, redundancy, and learning.

Artificial neural networks are applied to a wide variety of automation problems. Pattern recognition has, however, emerged as a major application, because the network structure is suited to tasks that biological systems perform well, and pattern recognition is a good example of where biological systems outperform traditional rule-based artificial intelligence approaches.

The first significant paper on artificial neural networks is generally considered to be that of McCulloch and Pitts (1943). This paper outlined some concepts concerning how biological neurons could be expected to operate. The proposed neuron models were simple arrangements of hardware that attempted to mimic the performance of the single neural cell. Hebb (1949) formed the basis of 'Hebbian learning', which is now regarded as an important part of ANN theory. The basic concept underlying 'Hebbian learning' is the principle that every time a neural connection is used, the pathway is strengthened. Around this time of neural network development, the digital computer became more widely available, and its availability proved to be of great practical value in the further investigation of ANN performance. In 1958, Neumann proposed modeling brain performance using items of computer hardware available at that time. Rosenblatt (1959) constructed neuron models in hardware during 1957; these models ultimately resulted in the concept of the Perceptron. This was an important development, and the underlying concept is still in wide use today. Widrow and Hoff (1960) were responsible for simplified artificial neuron development.


In 1969, Minsky and Papert published an influential book, "Perceptrons" (Minsky and Papert, 1969), which showed that the Perceptron developed by Rosenblatt had serious limitations. The essence of the book was the assumption that the inability of the perceptron to handle the 'exclusive or' function was a common feature shared by all neural networks. As a result of this assumption, interest in neural networks was greatly reduced; the overall effect of the book was to reduce the amount of research work on neural networks for the next 10 years. The book served to dampen the unrealistically high expectations previously held for ANNs. Despite the reduction in ANN research funding, a number of people persisted in ANN research work.

John Hopfield (1982) produced a paper that showed that the ANN had potential for successful operation, and proposed how it could be developed. This paper was timely, as it marked a second beginning for the ANN. While Hopfield is the name most frequently associated with the resurgence of interest in ANNs, the resurgence probably represented the culmination of the work of many people in the field. From this time onward the field of neural computing began to expand, and now there is worldwide enthusiasm as well as a growing number of important practical applications.

Today there are two classes of ANN paradigm: supervised and unsupervised. The multilayer back-propagation network (MLBPN) is the most popular example of a supervised network. It results from work carried out in the mid-eighties, largely by David Rumelhart (Rumelhart and McClelland, 1986) and David Parker (Parker, 1985). It is a very powerful technique for constructing nonlinear transfer functions between several continuously valued inputs and one or more continuously valued outputs. The network basically uses a multilayer perceptron architecture and gets its name from the manner in which it processes errors during training.

Adaptive Resonance Theory (ART) is an example of an unsupervised or self-organizing network and was proposed by Carpenter and Grossberg (1987). Its architecture is highly adaptive and evolved from the simpler adaptive pattern recognition networks known as competitive learning models. Kohonen's learning vector quantiser (Kohonen, 1989) is another popular unsupervised neural network; it learns to form activity bubbles through the actions of competition and cooperation when feature vectors are presented to the network. A feature of biological neurons, such as those in the central nervous system, is their rich interconnections and abundance of recurrent signal paths. The collective behavior of such networks is highly dependent upon the activity of each individual component. This is in contrast to feed-forward networks, where each neuron essentially operates independently of the other neurons in the network.

The underlying reason for using an artificial neural network in preference to other likely methods of solution is the expectation that it will be able to provide a rapid solution to a non-trivial problem. Depending on the type of problem being considered, there are often satisfactory alternative proven methods capable of providing a fast assessment of the situation.

Artificial Neural Networks are not universal panaceas for all problems. They are really just an alternative mathematical device for rapidly processing information and data. It can be argued that animal and human intelligence is only a huge extension of this process. Biological systems learn, and then interpolate and extrapolate, using slowly propagated (about 100 m/s) signals, compared with the propagation speed (3×10^8 m/s) of a signal in an electronic system. Despite this low signal propagation speed, the brain is able to perform splendid feats of computation in everyday tasks. The reason for this enigmatic feat is the degree of parallelism that exists within the biological brain.

2.2 Evolutionary Computing

Evolutionary computation is the name given to a collection of algorithms based on the evolution of a population toward a solution of a certain problem. These algorithms can be used successfully in many applications requiring the optimization of a multi-dimensional function. The population of possible solutions evolves from one generation to the next, ultimately arriving at a satisfactory solution to the problem. The algorithms differ in the way a new population is generated from the present one, and in the way the members are represented within the algorithm. Three types of evolutionary computing techniques have been widely reported recently: Genetic Algorithms (GAs), Genetic Programming (GP) and Evolutionary Algorithms (EAs). The EAs can be divided into Evolutionary Strategies (ES) and Evolutionary Programming (EP). All three of these techniques are modeled in some way after the evolutionary processes occurring in nature.

Genetic Algorithms were envisaged by Holland (1975) in the 1970s as an algorithmic concept based on a Darwinian-type survival-of-the-fittest strategy with sexual reproduction, where stronger individuals in the population have a higher chance of creating offspring. A genetic algorithm is implemented as a computerized search and optimization procedure that uses principles of natural genetics and natural selection. The basic approach is to model the possible solutions to the search problem as strings of ones and zeros; various portions of these bit-strings represent parameters of the search problem. If a problem-solving mechanism can be represented in a reasonably compact form, then GA techniques can be applied using procedures that maintain a population of knowledge structures representing candidate solutions, and then let that population evolve over time through competition (survival of the fittest and controlled variation). The GA generally includes the three fundamental genetic operations of selection, crossover and mutation. These operations are used to modify the chosen solutions and select the most appropriate offspring to pass on to succeeding generations. GAs consider many points in the search space simultaneously and have been found to provide rapid convergence to a near-optimum solution in many types of problems; in other words, they usually exhibit a reduced chance of converging to local minima. GAs show promise, but suffer from the problem of excessive complexity if used on problems that are too large.

Genetic algorithms are an iterative procedure that operates on a constant-sized population of individuals, each one represented by a finite linear string of symbols, known as the genome, encoding a possible solution in a given problem space. This space, referred to as the search space, comprises all possible solutions to the optimization problem at hand. In standard genetic algorithms the initial population of individuals is generated at random. At every evolutionary step, also known as a generation, the individuals in the current population are decoded and evaluated according to a fitness function set for the given problem. The expected number of times an individual is chosen is approximately proportional to its relative performance in the population. Crossover is performed between two selected individuals by exchanging parts of their genomes to form new individuals. The mutation operator is introduced to prevent premature convergence.

Every member of a population has a certain fitness value associated with it, which represents the degree of correctness, or quality, of the solution it represents. The initial population of strings is randomly chosen. The strings are manipulated by the GA using genetic operators to finally arrive at a quality solution to the given problem. GAs converge rapidly to quality solutions. Although they do not guarantee convergence to the single best solution to the problem, the processing leverage associated with GAs makes them efficient search techniques. The main advantage of a GA is that it is able to manipulate numerous strings simultaneously, where each string represents a different solution to the given problem. Thus, the possibility of the GA getting stuck in local minima is greatly reduced, because the whole space of possible solutions can be searched simultaneously. A basic genetic algorithm comprises three genetic operators: selection, crossover, and mutation.

Starting from an initial population of strings (representing possible solutions), the GA uses these operators to calculate successive generations. First, pairs of individuals of the current population are selected to mate with each other to form the offspring, which then form the next generation. Selection is based on the survival-of-the-fittest strategy, and the key idea is to select the better individuals of the population, as in tournament selection, where the participants compete with each other to remain in the population. The most commonly used strategy for selecting pairs of individuals is roulette-wheel selection, in which every string is assigned a slot in a simulated wheel sized in proportion to the string's relative fitness. This ensures that highly fit strings have a greater probability of being selected to form the next generation through crossover and mutation. After selection of the pairs of parent strings, the crossover operator is applied to each of these pairs.


The crossover operator involves the swapping of genetic material (bit values) between the two parent strings. In single-point crossover, a bit position along the two strings is selected at random and the two parent strings exchange their genetic material beyond that position, as illustrated in the sketch below.

The mutation operator alters one or more bit values at randomly selected locations in randomly selected strings. Mutation takes place with a certain probability which, in accordance with its biological equivalent, is typically very low. The mutation operator enhances the ability of the GA to find a near-optimal solution to a given problem by maintaining a sufficient level of genetic variety in the population, which is needed to make sure that the entire solution space is used in the search for the best solution. In a sense, it serves as an insurance policy; it helps prevent the loss of genetic material.
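A minimal sketch of one GA generation, putting together the roulette-wheel selection, single-point crossover and bit-flip mutation described above; the one-counting fitness function and the parameter values are placeholders chosen only for illustration.

```python
import random

def fitness(bits):
    # placeholder fitness: simply count the ones in the bit string
    return sum(bits)

def roulette_select(pop):
    # each string gets a wheel slot proportional to its relative fitness
    total = sum(fitness(ind) for ind in pop)
    pick, running = random.uniform(0, total), 0.0
    for ind in pop:
        running += fitness(ind)
        if running >= pick:
            return ind
    return pop[-1]

def crossover(p1, p2):
    # single-point crossover: swap the tails beyond a random bit position
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(ind, rate=0.01):
    # flip each bit with a small probability
    return [1 - b if random.random() < rate else b for b in ind]

def next_generation(pop):
    new_pop = []
    while len(new_pop) < len(pop):
        c1, c2 = crossover(roulette_select(pop), roulette_select(pop))
        new_pop += [mutate(c1), mutate(c2)]
    return new_pop[:len(pop)]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for _ in range(50):
    population = next_generation(population)
print("best fitness:", max(fitness(ind) for ind in population))
```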

Genetic algorithms are most appropriate for optimization-type problems, and have been applied successfully in a number of automation applications and in the automated design of fuzzy logic controllers and ANNs.

John Koza of Stanford University developed genetic programming (GP) techniques in the 1990s (Koza, 1990) and later in the 2000s (Koza et al., 2003). Genetic programming is a special implementation of GAs. It uses hierarchical genetic material that is not limited in size. The members of a population, or chromosomes, are tree-structured programs, and the genetic operators work on the branches of these trees.


Evolutionary algorithms do not require a separation between a recombination space and an evaluation space. The genetic operators work directly on the actual structure. The structures used in EAs are problem-dependent representations that are more natural for the task than the general representations used in GAs.

Evolutionary programming is currently experiencing a dramatic increase in popularity. Several examples have been successfully completed that indicate EP is full of potential. Koza and his students have used EP to solve problems in various domains, including process control, data analysis, and computer modeling. Although at the present time the complexity of the problems being solved with EP lags behind the complexity of applications of various other evolutionary computing algorithms, the technique is promising. Because EP actually manipulates entire computer programs, the technique can potentially produce effective solutions to very large-scale problems. To reach its full potential, EP will likely require dramatic improvements in computer hardware.

2.3 Fuzzy Logic

Fuzzy logic was first developed by Zadeh (1988) in the mid-1960s for representing uncertain and imprecise knowledge. It provides an approximate but effective means of describing the behavior of systems that are too complex, ill-defined, or not easily analyzed mathematically. Fuzzy variables are processed using a system called a fuzzy logic controller, which involves fuzzification, fuzzy inference, and defuzzification. The fuzzification process converts a crisp input value to a fuzzy value, the fuzzy inference is responsible for drawing conclusions from the knowledge base, and the defuzzification process converts the fuzzy control actions into a crisp control action.
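A toy sketch of these three stages, assuming a single input ("temperature") with triangular membership functions, two "if-then" rules and a weighted-average defuzzification; the variable names and rules are invented purely for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function that peaks at b and is zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_fan_speed(temp_c):
    # 1) fuzzification: crisp temperature -> degrees of membership
    cool = tri(temp_c, 0, 15, 25)
    hot = tri(temp_c, 20, 35, 50)
    # 2) inference: "if cool then slow", "if hot then fast";
    #    each rule's firing strength weights the output it points to
    rules = [(cool, 20.0),   # slow fan, about 20 %
             (hot, 90.0)]    # fast fan, about 90 %
    # 3) defuzzification: weighted average (centroid of singleton outputs)
    total = sum(strength for strength, _ in rules)
    if total == 0:
        return 0.0
    return sum(strength * value for strength, value in rules) / total

print(fuzzy_fan_speed(28.0))  # crisp fan-speed command between 20 and 90
```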

Zadeh argues that attempts to automate various types of activities, from assembling hardware to medical diagnosis, have been impeded by the gap between the way human beings reason and the way computers are programmed. Fuzzy logic uses graded statements rather than ones that are strictly true or false. It attempts to incorporate the "rule of thumb" approach generally used by human beings for decision making. Thus, fuzzy logic provides an approximate but effective way of describing the behavior of systems that are not easy to describe precisely. Fuzzy logic controllers, for example, are extensions of the common expert systems that use production rules like "if-then". With fuzzy controllers, however, linguistic variables like "tall" and "very tall" might be incorporated in a traditional expert system. The result is that fuzzy logic can be used in controllers that are capable of making intelligent control decisions in sometimes volatile and rapidly changing problem environments. Fuzzy logic techniques have been successfully applied in a number of areas, including computer vision, decision making, and system design, including ANN training.

2.4 Fusions

Neural networks, fuzzy logic and evolutionary computing have shown capability on many problems, but have not yet been able to solve the really complex problems that their biological counterparts can. It is useful to fuse neural networks, fuzzy systems and evolutionary computing techniques, offsetting the demerits of one technique with the merits of another. Some of these techniques are fused as follows:

1. Neural networks for designing fuzzy systems

2. Neural networks for input to evolutionary computing

3. Fuzzy systems for designing neural networks

4. Evolutionary computing for the design of fuzzy systems

5. Evolutionary computing for automatically training and generating neural network architectures

In this research, fusions (3) and (5) will be implemented.


Pattern recognition is one of the most successful applications of ANNs: Chapter 3 will present a neural approach to apparent resistivity analysis for monitoring fluid flow in a geothermal reservoir, and Chapter 4 will present an application of ANNs to the estimation of porosity and permeability in an oil field.

As a powerful optimization tool, the GA will be used in Chapter 3 to generate the neural network architecture (optimizing the network components) before the network is applied to fast analysis of resistivity data. Fuzzy logic ranking is known as a very good tool for noise removal, so it will be used in Chapter 4 to select the inputs of the neural network.
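A rough sketch of the kind of fuzzy ranking used for input selection: each candidate log is scored by how well a fuzzy curve built from its samples explains the core measurement, and poorly scoring (noisy) logs are dropped. The Gaussian membership width, the acceptance threshold and the synthetic data are assumptions for the illustration, not values from the thesis.

```python
import numpy as np

def fuzzy_curve_score(x, y, width=0.1):
    """Score an input log x against a target y (e.g. core porosity) with a fuzzy
    curve; a larger score means the input carries more information about y."""
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)          # normalise to [0, 1]
    grid = np.linspace(0.0, 1.0, 50)
    mu = np.exp(-((grid[:, None] - x[None, :]) / width) ** 2)  # Gaussian memberships
    curve = (mu * y[None, :]).sum(axis=1) / (mu.sum(axis=1) + 1e-12)
    return curve.max() - curve.min()                          # range of the fuzzy curve

# hypothetical well-log inputs scored against a core porosity vector
rng = np.random.default_rng(0)
core_phi = rng.uniform(0.02, 0.20, 200)
logs = {
    "density": 2.7 - 1.8 * core_phi + rng.normal(0, 0.01, 200),   # informative log
    "calliper": rng.normal(8.5, 0.5, 200),                        # mostly noise
}
scores = {name: fuzzy_curve_score(v, core_phi) for name, v in logs.items()}
selected = [name for name, s in scores.items() if s > 0.5 * max(scores.values())]
print(scores, "-> keep:", selected)
```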

Chapter 3

Reservoir monitoring in a geothermal field using neural networks & genetic algorithm

"Fluid-flow tomography" is an advanced geoelectrical survey technique developed by the Exploration Geophysics Laboratory of Kyushu University, based on the conventional mise-à-la-masse measurement. The method is proposed for monitoring fluid-flow behavior during water injection or production operations in a geothermal field. Because of the high data processing cost, a time-efficient solution using a neural network has been adopted. This chapter deals with the application of a neural network to quick 3-D inversion of mise-à-la-masse data measured by the "fluid-flow tomography" method. A case study from the Takigami geothermal field in Japan will be presented, including the construction of the neural network architecture, the generation of training and testing data, and the selection of learning algorithms. The accuracy of the solution is then verified using the root mean square misfit error as an indicator.

The resulting neural network is a three-layered, feed-forward network. The input layer has 168 neurons, representing 168 apparent resistivities over the surveyed area. The output layer, which holds the result of the 3-D mise-à-la-masse inversion, has 720 neurons, and a hidden layer with 45 neurons was found to be the most convenient for network training. The most successful learning algorithm in this work is the Resilient Propagation algorithm (RPROP). The study consequently advances the practical utility of the "fluid-flow tomography" method, which can be widely used in geothermal fields.
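For orientation, the 168-45-720 architecture can be written down in a few lines with a modern library. This is only an illustrative sketch: the thesis work used its own implementation, and RPROP is not a built-in Keras optimizer, so a stock optimizer stands in here.

```python
import tensorflow as tf

# Illustrative 168-45-720 feed-forward network for the MAM inversion mapping:
# 168 surface apparent resistivities in, 720 model resistivity cells out.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(45, activation="tanh", input_shape=(168,)),  # single hidden layer
    tf.keras.layers.Dense(720, activation="linear"),                   # inverted model cells
])
# RPROP is unavailable in Keras, so a standard optimizer is used for the sketch.
model.compile(optimizer="adam", loss="mse")
model.summary()
```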

3.1 Introduction

The mise-à-la-masse (MAM) method is a kind of borehole-to-surface DC resistivity method that has been recognized as a tool for mapping low resistivity areas. Such areas are often encountered in highly fractured reservoir zones, where upstream flows of geothermal fluid are found in hot water-dominated geothermal areas (Kauahikaua et al., 1980; Tagomori et al., 1984; Ushijima, 1989).

The Exploration Geophysics Laboratory of Kyushu University has developed an advanced technique, modified from the conventional MAM measurement, called the "fluid-flow tomography" method. The method was first implemented at the Sumikawa geothermal field, Akita, Japan, in 1999 (Ushijima et al., 1999). The advantages of this method are that it measures not only the charged potential but also the streaming potential, and that the data can be recorded continuously as a function of time.

The Takigami geothermal field, situated on Kyushu Island, is one of the most important geothermal fields in Japan; the Takigami geothermal power station, with an installed capacity of 25 MWe, has operated there since 1996. To automatically monitor fluid flow behavior during water injection and production operations, the "fluid-flow tomography" method has been proposed for this geothermal field. However, the main problem with the existing method is the enormous data processing time, which leads to high field costs in exploration and production operations.

An artificial neural network (ANN) can be viewed as a fast computational technique that makes use of organizational principles found in biological nervous systems (Patterson, 1996). ANN processing encompasses a broad range of computer algorithms that solve several types of problems, including classification, parameter estimation, parameter prediction, pattern recognition, completion, association, filtering and optimization (Brown and Poulton, 1996). There are numerous kinds of neural networks; however, the main aim of any study is usually to find the network that performs best on the presented data. In this research, I address the application of an ANN to 3-D inversion of MAM data recorded by the "fluid-flow tomography" method, and present a case study at the Takigami geothermal field. The aim of this application is to reduce the data processing time and to advance the practical utility of the "fluid-flow tomography" method.

3.2 The Takigami geothermal field

The Takigami geothermal field is located in the Hohi geothermal region in the northeast of Kyushu Island (Fig. 3.1). This is one of the most active geothermal areas in Japan. The Hohi geothermal region has an outstanding level of geothermal resources, with many volcanoes and geothermal manifestations such as hot springs, fumaroles and hydrothermal alteration halos at the surface. Although the Takigami field lies within the promising Hohi geothermal region, there are no surface manifestations in its vicinity; the nearest hot springs are located about 2 km north and east of the area (Fig. 3.1). Thus, it is regarded as a "concealed geothermal system". Since 1979, various studies in geology, geochemistry, hydrogeology and geophysics have been conducted by Idemitsu Geothermal Co., Ltd.

Figure 3.1 Map of northeast Kyushu, showing the Takigami geothermal field

Geological settings

The subsurface geology of the area was studied from drill cores and cuttings by Furuya et al. (2000). Central Kyushu Island is cut by a volcano-tectonic depression that has developed within a tensile stress field since Neogene time, resulting in volcanism since Pliocene time. A schematic cross-sectional model of the geothermal system along a southwest-northeast line through the Takigami area is shown in Fig. 3.2. The area is mainly covered by Quaternary volcanic rocks overlying the Tertiary Usa Group. The oldest volcanic rock is the Mizuwake andesite (Pliocene), exposed in the northern part of the area. The formations developed in the Takigami area consist of thick Pliocene to Pleistocene dacitic to andesitic lavas and pyroclastics, which are interfingered with lacustrine sediments.


Figure 3.2 Geological cross-section of the Takigami geothermal system (after Takenaka & Furuya, 1991)

Resistivity structures of the area were obtained by two-dimensional (2-D) inversion of MT data (Phoenix, 1987) and CSAMT data (West Japan Engineering Consultants, 1994). The resistivity model is characterized by a low resistivity reservoir (from 10 to 12 Ωm) beneath an impermeable and rather high resistivity layer (from 100 Ωm to 105 Ωm) (Fig. 3.3).


Figure 3.3 Three-dimensional structural model of the Takigami area, constructed from 2-D inversion of MT and CSAMT data

The conceptual geological model of the area is illustrated in Fig. 3.3. There are two types of fault/fracture system, i.e., east-west and north-south striking faults, identified mainly from studies of lineaments using remote sensing data and from correlations of subsurface stratigraphy. The east-west striking faults are estimated to have smaller vertical displacements than the north-south striking faults. The north-south trending Noine fault zone is not observed at the surface, but is important because it divides the area stratigraphically into eastern and western parts. The Takigami geothermal system is best described as having two parts, the eastern and western reservoirs. The eastern part of the reservoir is shallower (700-1100 m in depth) and is dominated by fractures with a high permeability (50-100 millidarcy); the strata temperature varies from 160°C to 210°C. On the other hand, the western part of the reservoir is deeper (1500-2000 m in depth) and has a lower permeability (5-10 millidarcy) and a higher temperature (230°C-250°C).

The Takigami geothermal power station was jointly developed by Idemitsu Geothermal Co., Ltd. and Kyushu Electric Power Co., Ltd., and has operated since November 1996 with a capacity of 25 megawatts (MW). To keep the Takigami power plant stable, it is important to have a proper understanding of the subsurface structure, the reservoir characteristics and the possibility of its extension, and the proper location of the re-injection area; reservoir monitoring is also necessary during water injection and steam production operations.

3.3 Monitoring methodology

The "fluid-flow tomography" method has been developed by the Geophysics Laboratory of Kyushu University since 1999 (Ushijima et al., 1999). The method is used to monitor the transient phenomena of dynamic fluid-flow behavior during water injection and production operations in a geothermal field. In this method, two potentials (charged and streaming) are measured on the ground surface as a function of time. By multiplying by the geometric factor of the MAM method, the charged potentials can be converted to apparent resistivity (Ushijima, 1989). The resistivity changes express the fluid distribution, while the self potentials (SP) due to streaming potentials indicate the anisotropic permeability.
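In code form, the conversion from measured charged potential to apparent resistivity is just a multiplication by the geometric factor. The pole-pole style factor below (K = 2πr for a surface potential electrode at distance r from the charged well, with the return electrodes effectively at infinity) is an assumed simplification for illustration, not the exact factor used with the casing source in this survey.

```python
import math

def apparent_resistivity(delta_v, current, distance):
    """Convert a measured charged potential (V) to apparent resistivity (ohm-m).

    Assumes the simple half-space point-source geometric factor K = 2*pi*r;
    the factor for a real mise-a-la-masse casing source is more involved.
    """
    k = 2.0 * math.pi * distance          # geometric factor (m)
    return k * delta_v / current

# e.g. 12 mV measured 300 m from the charged well while injecting 2 A
print(apparent_resistivity(0.012, 2.0, 300.0), "ohm-m")
```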

Figure 3.4 Field survey of the "fluid-flow tomography" method (Ushijima et al., 1999)


Analysis of the charged potential obtained from the "fluid-flow tomography" method can also reveal the presence and boundary of an anomalous body. If the conductivity of the anomalous body is very high, there is relatively little potential drop across the body, and the buried body can therefore be mapped.

Fig. 3.4 illustrates the field layout of the "fluid-flow tomography" method. A current electrode (C1) is connected to the conductor casing of the well. A distant current electrode (C2) is fixed 3 km away from the charged well. A base potential electrode (P2) is also fixed 3 km from the well, on the opposite side of the C2 cable line, to minimize electromagnetic coupling effects. A potential electrode (P1) is moved along a traverse line for the conventional MAM survey, while multiple potential electrodes are used for the "fluid-flow tomography" survey. An electric current of 1-5 A at a frequency of 0.1 Hz is introduced into the earth by a transmitter of the kind used for a conventional electrical resistivity survey. Potential distributions on the ground surface are continuously measured by a digital recording system controlled by a personal computer at the survey site, and the time series data are stored in the computer's memory at a sampling rate of 1,800 records/hour. In this way, fluid-flow fronts and the distribution of fluid flow can be continuously imaged as a function of time at the survey site. Practical images of fluid-flow behavior are obtained by making contour maps and comparing two data sets, one at an arbitrary time lag and one at an initial time (a base time). These contour maps can be obtained continuously before and during stimulations of the reservoir such as hydraulic fracturing, production and re-injection operations.

I carried out the "fluid-flow tomography" survey in the northern part of the Takigami area by utilizing the anchor casing of the re-injection well TT-19 (Fig. 3.5).


Figure 3.5 Location of the mise-à-la-masse survey in the Takigami area

The current electrode C1 was connected to the wellhead of the casing pipe of TT-19 (1500 m in length), and 160 potential electrodes were set radially along survey lines A to P with an interval of 150 m, as shown in Fig. 3.5. The data processing included 3-D inversion of MAM resistivity data and 3-D inversion of spontaneous potential (SP) data. Because of the long period of the monitoring operation and the high sampling rate, a large quantity of measured data was obtained. Unfortunately, the data processing of the "fluid-flow tomography" method suffered from limitations in computational speed and high data processing costs. Therefore, I introduce the use of a neural network to solve this problem. Within the scope of this study, I present an application of an ANN to quick 3-D inversion of MAM resistivity data in the Takigami geothermal field.

3.4 MAM 3-D forward problem

The 3-D forward problem of the resistivity method has been a common topic in recent years. In this study, I used the algorithm proposed by Mizunaga et al. (1991), which employs singularity removal to provide high accuracy in numerical modeling. A full description of the program can be found in Mizunaga et al. (1991).

The forward numerical modeling was carried out according to the scheme illustrated in Fig. 3.6. The mathematical basis of the method is as follows:

The electric current density $\vec{J}$ is related to the conductivity $\sigma$ and the electric potential $\phi$ by Ohm's law,

$$\vec{J} = \sigma \vec{E} = -\sigma \nabla \phi \qquad (3.1)$$

and conservation of charge around a point current source requires

$$\nabla \cdot \vec{J} = I\,\delta(x - x_s)\,\delta(y - y_s)\,\delta(z - z_s) \qquad (3.2)$$

Combining Equations (3.1) and (3.2) gives

$$\nabla \cdot \left( \sigma \nabla \phi \right) = -I\,\delta(x - x_s)\,\delta(y - y_s)\,\delta(z - z_s) \qquad (3.3)$$

where $x_s, y_s, z_s$ are the coordinates of the source and $I$ is the current in amperes. The volume integration of Equation (3.3) over each grid cell was carried out, with the electric potential $\phi_{i,j,k}$ at the cell nodes as the unknown:

$$\int_V \nabla \cdot \left( \sigma \nabla \phi \right)\, dv = -\int_V I\,\delta(x - x_s)\,\delta(y - y_s)\,\delta(z - z_s)\, dv \qquad (3.4)$$


Figure 3.6 (a) 3-D numerical modeling of the forward problem, (b) apparent resistivity at the grid points on the surface (except the wellhead point), (c) grid discretization around the current source

Using Green's theorem, the volume integral on the left-hand side of Equation (3.4) becomes a surface integral over the cell boundary,

$$\int_V \nabla \cdot \left( \sigma \nabla \phi \right)\, dv = \oint_S \sigma \frac{\partial \phi}{\partial n}\, ds \qquad (3.5)$$

From Equations (3.4) and (3.5), with a finite-difference approximation of $\partial \phi / \partial n$ between neighbouring nodes, the set of simultaneous equations is given as

$$\mathbf{C}\,\boldsymbol{\Phi} = \mathbf{S} \qquad (3.6)$$

where $\mathbf{C}$ is an $LMN \times LMN$ matrix that is a function of the medium geometry and the physical property distribution in the grid, $\boldsymbol{\Phi}$ is the vector of unknown total potentials at all nodes, and $\mathbf{S}$ is the source term of the charge injection.
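In practice Equation (3.6) is a large sparse linear solve. The snippet below only sketches that step on a stand-in 1-D finite-difference matrix with an assumed uniform conductivity; the real matrix in this study comes from the 3-D grid of Mizunaga et al. (1991).

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Stand-in for C*Phi = S: a 1-D finite-difference analogue with a unit current
# injected at one node, just to show the sparse assemble-and-solve step.
n = 720                                   # number of potential nodes
sigma = 0.01                              # uniform conductivity (S/m), assumed
C = diags([2 * sigma * np.ones(n),
           -sigma * np.ones(n - 1),
           -sigma * np.ones(n - 1)],
          [0, -1, 1], format="csc")       # sparse system matrix
S = np.zeros(n)
S[n // 2] = 1.0                           # unit current source at the central node
phi = spsolve(C, S)                       # total potential at every node
print(phi[n // 2 - 2: n // 2 + 3])
```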


3.5 Neural network recognition

The basics of the ANN technique can be found in Hu and Hwang (2002).

In recent years, ANNs have been successfully applied to many problems in geophysics (Taylor and Vasco, 1990; Wiener et al., 1991; Poulton et al., 1992; Winkler, 1994; Ashida, 1996; Spichak and Popova, 2000; Salem et al., 2000; El-Qady and Ushijima, 2001; Aristodemou et al., 2005). In particular, most of the successful studies of geophysical problems used the multilayer perceptron (MLP) neural network model, which is the most popular of all the existing techniques. The MLP is a variant of the original perceptron model proposed by Rosenblatt in the 1950s (Rosenblatt, 1958). The model consists of a feed-forward, layered network of McCulloch and Pitts' neurons (McCulloch and Pitts, 1943). Each neuron in the MLP model has a nonlinear activation function that is often continuously differentiable. Some of the most frequently used activation functions for the MLP model are the sigmoid and the hyperbolic tangent functions.

A key step in applying the MLP model is to determine the weight matrices. Assuming a layered MLP structure, the weights feeding into each layer of neurons form the weight matrix of that layer. Values of these weights are found using the error back-propagation method. This leads to a nonlinear least-squares optimization problem for minimizing the error. There are numerous nonlinear optimization algorithms available to solve this problem; some of the basic ones are the steepest descent gradient method, Newton's method and the conjugate-gradient method.
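As a concrete picture of these weight matrices, a two-layer MLP forward pass with hyperbolic tangent hidden units takes only a few lines; the layer sizes and random values here are arbitrary placeholders for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 8, 5, 3            # arbitrary layer sizes

# one weight matrix (plus bias vector) per layer, as described above
W1, b1 = rng.normal(0, 0.1, (n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(0, 0.1, (n_out, n_hidden)), np.zeros(n_out)

def forward(x):
    hidden = np.tanh(W1 @ x + b1)          # nonlinear hidden-layer activation
    return W2 @ hidden + b2                # linear output layer

print(forward(rng.normal(size=n_in)))
```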

There are two separate modes in which the gradient descent algorithm can be implemented: the incremental mode and the batch mode (Battiti, 1992). In the incremental mode, the gradient is computed and the weights are updated after each input is applied to the network. In the batch mode, all the inputs are applied to the network before the weights are updated. Gradient descent with momentum converges faster than gradient descent without momentum. Two powerful variants of gradient descent with momentum, in the incremental and batch modes, are online back-propagation and batch back-propagation, respectively.


Some studies have proposed high-performance algorithms that can converge ten to one hundred times faster than the conventional gradient descent algorithm, for example the Quasi-Newton algorithm (Battiti, 1992), the Resilient Propagation algorithm (RPROP; Riedmiller and Braun, 1993), the Levenberg-Marquardt algorithm (Hagan and Menhaj, 1999) and the Quick Propagation algorithm (Ramasubramanian and Kannan, 2004). The disadvantage of the Quasi-Newton and Levenberg-Marquardt algorithms is long training time, due to the complex calculations needed to compute the approximation of the Hessian matrix (Battiti, 1992; Hagan and Menhaj, 1999).

In this study, I have considered the mathematical basis of four training techniques: online back-propagation, batch back-propagation, RPROP and Quick Propagation, as they are the most suitable techniques. The main difference among these techniques lies in the method of calculating and updating the weights (Werbos, 1994).

The training process starts by initializing all weights with small non-zero values, often generated randomly. A subset of training samples (called patterns) is then presented to the network, one at a time. A measure of the error incurred by the network is made, and the weights are updated in such a way that the error is reduced. One pass through this cycle is called an epoch. This process is repeated as required until the global minimum is reached.

In the online back-propagation algorithm the weights are updated after each pattern is presented to the network, whereas in batch back-propagation the weight updates occur after each epoch (Battiti, 1992). The weights are updated as follows:

$$\Delta w_{ij}(t+1) = -\eta\,\frac{\partial MSE}{\partial w_{ij}}(t) + \alpha\,\Delta w_{ij}(t)$$

where $\Delta w_{ij}$ is the change of the synaptic connection weight from neuron $i$ to neuron $j$ in the next layer, $MSE$ is the least mean square error, $\eta$ is the learning rate, $t$ is the training step and $\alpha$ is the momentum term that pulls the network out of small local minima. The weights at the next step are then

$$w_{ij}(t+1) = w_{ij}(t) + \Delta w_{ij}(t+1)$$
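A minimal sketch of this momentum update for a single weight matrix in both modes; the learning rate, momentum value and the stub gradient function are placeholders standing in for the full back-propagation calculation.

```python
import numpy as np

eta, alpha = 0.05, 0.9          # learning rate and momentum (placeholder values)

def grad_mse(w, x, t):
    """Stub for back-propagation: gradient of 0.5*||w@x - t||^2 for one pattern."""
    return np.outer(w @ x - t, x)

def train(w, patterns, epochs=100, batch=False):
    dw = np.zeros_like(w)
    for _ in range(epochs):
        if batch:   # batch mode: accumulate the gradient over all patterns first
            g = sum(grad_mse(w, x, t) for x, t in patterns) / len(patterns)
            dw = -eta * g + alpha * dw
            w = w + dw
        else:       # online (incremental) mode: update after every pattern
            for x, t in patterns:
                dw = -eta * grad_mse(w, x, t) + alpha * dw
                w = w + dw
    return w

rng = np.random.default_rng(2)
true_w = rng.normal(size=(2, 3))
data = [(x, true_w @ x) for x in rng.normal(size=(20, 3))]
print(train(np.zeros((2, 3)), data, batch=True))
```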
