

APPLICATION OF A NEURO-FUZZY TECHNIQUE IN STREAMFLOW

FORECASTING

by

Chau Nguyen Xuan Quang

A thesis submitted in partial fulfillment of the requirements for the degree of Master of

Previous Degree : Bachelor of Civil Engineering

Ho Chi Minh City University of Technology, Ho Chi Minh City, Vietnam

Scholarship Donor : Government of the Netherlands

Asian Institute of Technology School of Civil Engineering

Thailand April 2004


ACKNOWLEDGEMENT

First of all, the author wishes to express his deepest gratitude to his advisor, Professor Tawatchai Tingsanchali, for his invaluable advice, continuous guidance and encouragement in the accomplishment of this thesis.

Grateful acknowledgements are extended to Dr. Roberto S. Clemente and Dr. Manukid Parnichkun, who served as Examination Committee Members, for their valuable suggestions, comments, guidance and inspiration during the period of this study.

The author also would like to express his appreciation to the Royal Irrigation Department (RID) for their help in providing data.

The financial support of the Government of the Netherlands through a scholarship award is gratefully acknowledged. His study at the Asian Institute of Technology (AIT) would have been impossible without this financial support.

Finally, grateful acknowledgements are extended to his family, for their eternal love, dedication, constant encouragement, and moral support throughout his education and study at AIT.


ABSTRACT

The neuro-fuzzy technique (NFT), called the Adaptive Neuro-Fuzzy Based Inference System (ANFIS), was employed to forecast daily and weekly streamflow for four gauging stations, namely Y17, Y4, Y6 and Y20, of the Yom River Basin in Thailand. The model is proposed to forecast streamflow one, two and three days and one week in advance. Various inputs of different types and lengths of training data were tried, using observed daily and weekly discharge, water level and mean areal rainfall series from 1990 to 1999 for calibration (training) and from 2000 to 2001 for verification (testing), to obtain the most accurate results. The accuracy of the flood forecasts is evaluated using the statistical efficiency index (EI) and the root mean square error (RMSE).
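The two evaluation metrics named above can be sketched as follows. This is an illustrative sketch, not the thesis's own code: the efficiency index is assumed here to take the Nash-Sutcliffe form (one minus the ratio of squared forecast errors to the variance of the observations about their mean), which is the usual definition in streamflow forecasting.

```python
import numpy as np

def rmse(obs, sim):
    # Root mean square error between observed and simulated series
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def efficiency_index(obs, sim):
    # Nash-Sutcliffe-style efficiency: 1 for a perfect forecast,
    # 0 when the forecast is no better than the observed mean
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))
```

A perfect forecast gives RMSE = 0 and EI = 1; EI can go negative for forecasts worse than the climatological mean.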

The results obtained from the NFT model were found to be very satisfactory in both daily and weekly streamflow forecasting. The model accuracy decreases as the forecast lead time increases. However, the accuracy of two- and three-day-ahead forecasts is much better when the successive day-to-day forecast of the previous day is used as the input to the next day's forecast. The NFT model results are very close to those obtained with the multilayer perceptron (MLP) model and are better than those obtained from the multi-variable regression (MVR) model.

This study presented the application of NFT to daily and weekly streamflow forecasting with promising results. The results also indicate that NFT performs slightly better than MLP and much better than MVR in flood forecasting in terms of accuracy. Both NFT and MLP are strongly recommended for streamflow forecasting due to their capacity for modeling nonlinear relationships.


3.2.2 Basic Relations and Logical Operations

Table of Contents (Continued)

5 RESULTS AND DISCUSSIONS

5.1 Relationship Function of Output and Input
5.4.2 Daily Water Level Forecasting Model
5.4.3 Mean Weekly Discharge Forecasting Model
5.5 Comparison of NFT Model to MLP and MVR Models


List of Abbreviations

ANN Artificial Neural Network

ANFIS Adaptive Neuro-Fuzzy Based Inference System

RID Royal Irrigation Department

RMSE Root Mean Square Error

RBF Radial Basis Function

SSARR Streamflow Synthesis and Reservoir Regulation

USGS U.S. Geological Survey


List of Figures

3.5 Commonly Used Fuzzy If-Then Rules and
4.1 The Distribution of Rainfall and Gauging Stations in Yom River Basin
4.2 Arrangement of Gauging Stations for Streamflow Forecasting
4.3 The Diagram for Selection of the Relationship Function of Input and Output
4.5 Overtraining Behavior of Training Process
5.1 RMSE vs Relationship Functions of Input and Output of
5.2 Comparison of Observed and 1 Day Ahead Forecasted Discharge at Station Y17 (Testing Period: 1/April/2000 – 31/March/2002)
5.3 Comparison of Observed and 1 Day Ahead Forecasted Discharge at Station Y4 (Testing Period: 1/April/1997 – 31/March/1999)
5.4 Comparison of Observed and 1 Day Ahead Forecasted Discharge at Station Y6 (Testing Period: 1/April/2000 – 31/March/2002)
5.5 Comparison of Observed and 1 Day Ahead Forecasted Discharges at Station Y20 (Testing Period: 1/April/2000 – 31/March/2002)
5.6 RMSE vs Relationship Functions of Input and Output of
5.7 Comparison of Observed and 1 Day Ahead Forecasted Water Level at Station Y17 (Testing Period: 1/April/2000 – 31/March/2002)
5.8 Comparison of Observed and 1 Day Ahead Forecasted Water Level at Station Y4 (Testing Period: 1/April/1997 – 31/March/1999)
5.9 Comparison of Observed and 1 Day Ahead Forecasted Water Level at Station Y6 (Testing Period: 1/April/2000 – 31/March/2002)
5.10 Comparison of Observed and 1 Day Ahead Forecasted Water Level at Station Y20 (Testing Period: 1/April/2000 – 31/March/2002)
5.11 Comparison of Mean Weekly Observed and 1 Week Ahead Forecasted Discharge at Station Y17 (Testing Period: 1/April/2000 – 31/March/2002)
5.12 Comparison of Mean Weekly Observed and 1 Week Ahead Forecasted Discharge at Station Y6 (Testing Period: 1/April/2000 – 31/March/2002)
5.13 Comparison of Mean Weekly Observed and 1 Week Ahead Forecasted Discharge at Station Y6 (1/April/2000 – 31/March/2002)
5.14 Comparison of Mean Weekly Observed and 1 Week Ahead Forecasted Discharge at Station Y20 (Testing Period: 1/April/2000 – 31/March/2002)
5.15 Comparison of Observed and 1 Day Ahead Forecasted Discharges Obtained from NFT, MLP and MVR at Station Y17 (Testing Period: 1/April/2001 –)
5.16 Comparison of Observed and 1 Day Ahead Forecasted Discharge Obtained from NFT, MLP and MVR at Station Y4 (Testing Period: 1/April/1998 –)
5.17 Comparison of Observed and 1 Day Ahead Forecasted Discharge Obtained from NFT, MLP and MVR at Station Y6 (Testing Period: 1/April/2001 –)
5.18 Comparison of Observed and 1 Day Ahead Forecasted Discharge Obtained from NFT, MLP and MVR at Station Y20 (Testing Period: 1/April/2001 –)
5.19 Comparison of Observed and 1 Day Ahead Forecasted Water Level Obtained from NFT, MLP and MVR at Station Y17 (Testing Period: 1/April/2001 –)
5.20 Comparison of Observed and 1 Day Ahead Forecasted Water Level Obtained from NFT, MLP and MVR at Station Y4 (Testing Period: 1/April/1997 –)
5.21 Comparison of Observed and 1 Day Ahead Forecasted Water Level Obtained from NFT, MLP and MVR at Station Y6 (Testing Period: 1/April/2001 –)
5.22 Comparison of Observed and 1 Day Ahead Forecasted Water Level Obtained from NFT, MLP and MVR at Station Y6 (Testing Period: 1/April/2001 –)
5.23 Comparison of Mean Weekly Observed and 1 Week Ahead Forecasted Discharge Obtained from NFT, MLP and MVR at Station Y17 (Testing Period: 1/April/2000 – 31/March/2002)
5.24 Comparison of Mean Weekly Observed and 1 Week Ahead Forecasted Discharge Obtained from NFT, MLP and MVR at Station Y4 (Testing Period: 1/April/1997 – 31/March/1999)
5.25 Comparison of Mean Weekly Observed and 1 Week Ahead Forecasted Discharge Obtained from NFT, MLP and MVR at Station Y6 (Testing Period: 1/April/2000 – 31/March/2002)
5.26 Comparison of Mean Weekly Observed and 1 Week Ahead Forecasted Discharge Obtained from NFT, MLP and MVR at Station Y20 (Testing Period: 1/April/2000 – 31/March/2002)


List of Tables

3.1 Two Passes in the Hybrid Learning Procedure for ANFIS
5.1 Results of Selecting the Training Pattern for Stations Y17, Y4, Y6 and Y20
5.2 Results of the Separation of Training Data into Two Sets: High Flow (July to November) and Low Flow (December to June of the Following Year)
5.3 Results of Daily Discharge Forecasting at Stations Y17, Y6, Y4 and Y20
5.4 Results of Daily Water Level Forecasting at Stations Y17, Y6, Y4 and Y20
5.5 Results of Mean Weekly Discharge Forecasting
5.6 Comparison of Results of NFT, MLP and MVR Models for Station Y17
5.7 Comparison of Results of NFT, MLP and MVR Models for Station Y4
5.8 Comparison of Results of NFT, MLP and MVR Models for Station Y6
5.9 Comparison of Results of NFT, MLP and MVR Models for Station Y20


CHAPTER 1 INTRODUCTION

1.1 General

Water is the most abundant substance on earth and the principal constituent of all living things, and it circulates in the hydrologic cycle. It is generally usable only in the form of river flow and groundwater flow, whose input is mainly rainfall. Thus, the distribution of water in space and time does not always correspond to our needs, which is a main cause of floods, droughts and unsatisfied water demand. This problem becomes more serious with the effects of climate change and the increase of human activities on the watershed, such as deforestation and erosion. Therefore, water resources development, including planning and management, often deals with streamflow forecasting problems.

Streamflow forecasting has long been one of the important problems for hydrologists, reservoir operators and flood protection engineers. It mainly takes the form of estimating or forecasting the discharge and water level of the river. Such forecasts are useful in many ways: they provide warnings for people to evacuate areas threatened by floods, and they help water management personnel optimize the operation of systems such as reservoirs and power plants. In addition, estimates of streamflow in space and time are often needed in irrigation, water supply, navigation and environmental conservation.

Streamflow forecasting with rainfall as input can be considered as forecasting runoff from rainfall. A variety of models have been developed for transforming rainfall into runoff, such as the Unit Hydrograph (Sherman, 1932), the Tank Model (Sugawara, 1961), the Stanford Watershed Model IV (Crawford and Linsley, 1966), the Streamflow Synthesis and Reservoir Regulation (SSARR) model (Rockwood, 1968), the MIKE 11 model, etc. Furthermore, the problem of rainfall-runoff modeling has perhaps received the most attention from ANN modelers in recent years. This problem lends itself admirably to ANN applications (Hsu et al., 1995). ANNs can be an important tool for modeling the nonlinear rainfall-runoff relationship due to their flexible mathematical structures. There are many successful applications of ANNs in rainfall-runoff forecasting, such as back propagation (Hjelmfelt and Wang, 1996), time-delayed (Karunanithi et al., 1994), recurrent (Carriere, Mobaghegh and Gaskari, 1996), radial-basis function (Fernando and Jayawardena, 1998), modular (Zhang and Govindaraju, 1998), and self-organizing (Hsu, Gupta and Sorooshian, 1998) networks.

1.2 The Study Area

The Yom River system (Figure 1.1) is a large river in the north of Thailand with a catchment area of about 23,616 km2; it originates from Doi Khunbuam in the Peepunnarn range in the region of Phayao province. The length of the river basin is about 400 kilometers and its width averages about 56 kilometers. To the east lies the basin of the Nan River, and to the west the Ping and its tributary the Wang River. These three rivers, together with the Mae Yom River, are the principal tributaries of the Chao Phraya, the main river of the Central Plain of Thailand. The northern part of the Yom Basin is separated from the Mekong Basin by mountains generally in excess of 1500 meters in elevation, and from the Nan and Wang Basins by ridges ranging in height from 900 to 1200 meters. These ridges enclose the flat alluvial Phrae plain through which the river flows; after the Yom River breaks through a range of hills, it flows through the broad Sukhothai plain, meets the Nan River and flows into the Chao Phraya River at Nakorn Sawan.


The flood period of the Yom River is from July to November. The Yom River regularly experiences flooding due to excessive rainfall and limited channel flow capacity.

[Figure 1.1: Map of the Yom River Basin showing the locations of rainfall and gauging stations]


1.3 Statement of the Problem

Flood forecasting is very important for flood control and mitigation. It can provide advance information for flood warnings to people living in flood-prone areas, and it can help in the operation of flood control structures, reducing the impact of flood disasters and flood damage. As discussed above, there are many physically based hydrological models and conceptual models that can be used for flood forecasting; however, these models require geometrical, physical and hydrological input data that in many circumstances are not readily available. To overcome these problems, one of the early approaches is to use multi-variable regression (MVR) to obtain an approximate function relating the input and output variables. This function is then used to forecast future streamflow. The advantages of the MVR model are that it is quite simple and easy to develop. However, the MVR model does not give highly accurate results, because the approximating function of MVR is just a linear function.
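The linear limitation described above can be made concrete with a small sketch. This is an illustrative ordinary-least-squares MVR forecaster, not the thesis's own implementation; the choice of predictors (e.g. current discharge and rainfall to predict tomorrow's discharge) is a hypothetical example.

```python
import numpy as np

def fit_mvr(X, y):
    # Append a column of ones for the intercept, then solve least squares:
    # y ≈ a0 + a1*x1 + a2*x2 + ...  (purely linear in the predictors)
    A = np.column_stack([np.ones(len(X)), np.asarray(X, float)])
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    return coef

def predict_mvr(coef, X):
    A = np.column_stack([np.ones(len(X)), np.asarray(X, float)])
    return A @ coef
```

Because the fitted function is a hyperplane, no choice of coefficients can capture the nonlinear shape of a rainfall-runoff relationship, which is the shortcoming the NFT approach addresses.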

Recently, another approach, artificial neural networks (ANNs), has been widely and successfully applied to flood forecasting, with much smaller input data requirements. The main advantages of ANNs are their flexibility and ability to model nonlinear relationships between input and output variables. However, despite these advantages, ANNs have often been criticized for acting as "black boxes": the knowledge contained in an ANN model is kept in the form of a weight matrix that is hard to interpret and can be misleading at times (Tarek Sayed, Arash and Abdolmehdi, 2001).

In this study, a neuro-fuzzy technique (NFT), which combines the learning capacity of ANNs with the flexible knowledge representation capability of fuzzy logic in order to overcome many of the shortcomings of ANNs, is applied to forecasting the daily and weekly streamflow (discharge and water level) of the Yom River in Thailand. The results obtained from NFT are compared with the results obtained from MLP (multilayer perceptron, a type of ANN), MVR (multi-variable regression) and the measured flow data to analyze the accuracy and sensitivity of the models.

1.4 Objectives of the Study

The main objectives of this study are to develop daily and weekly streamflow forecasting models using the neuro-fuzzy technique and to compare the results obtained from NFT with those of other models, namely MLP (ANN) and MVR.

The specific objectives can be given as follows:

1 To develop the daily and weekly streamflow forecasting model using the neuro-fuzzy technique (NFT), based on the Adaptive Network Based Fuzzy Inference System (ANFIS), at various gauging stations of the Yom River, North Thailand

2 To develop the daily and/or weekly streamflow forecasting model with a Multilayer Perceptron (MLP) neural network at various gauging stations of the Yom River

3 To develop the daily and/or weekly streamflow forecasting model with Multi-Variable Regression (MVR) at various gauging stations of the Yom River

4 To compare the achievements and limitations of each model to obtain the most suitable model for streamflow forecasting of the Yom River

5 To forecast the daily (up to three days ahead) and weekly streamflow (discharges and/or water levels) at various stations of the Yom River


1.5 Scope of the Study

The scope of the study will focus on the following points:

1 Collection of hydro-meteorological data (discharge, water level and rainfall) from the Royal Irrigation Department (RID) of Thailand

2 Use of recent data, from 1990 to 1999 for calibration (training) and from 2000 to 2001 for verification (testing) of the NFT, MLP and MVR models

3 Development of an appropriate streamflow forecasting model for four stations along the Yom River, namely Y20, Y6, Y4 and Y17 (see Figure 4.1), using NFT, MLP and MVR

4 Application of the NFT, MLP and MVR models to forecast the daily (up to three days ahead) and weekly discharges and/or water levels at these stations

5 Comparison of results from NFT, MLP and MVR in terms of accuracy


CHAPTER 2 LITERATURE REVIEW

2.1 Neural Network Approach

In recent years, artificial neural networks (ANNs) have been used for forecasting in many areas of science and engineering. The main advantage of the ANN approach over traditional modeling methods is that it does not require the complex nature of the underlying process to be explicitly described in mathematical terms. In the field of water resources, especially in rainfall-runoff modeling, the application of ANNs has been widely investigated.

Halff et al. (1993) designed a three-layer feedforward ANN using the observed rainfall hyetograph as inputs and hydrographs recorded by the U.S. Geological Survey (USGS) at Bellevue, Washington, as outputs. The authors decided to use five nodes in the hidden layer. A total of five storm events were considered. On a rotation basis, data from four storms were used for training, while data from the fifth storm were used for testing network performance. A sequence of 25 normalized 5-min rainfalls was applied as input to predict the runoff. This study opened up several possibilities for rainfall-runoff applications using neural networks.

Hjelmfelt and Wang (1993) developed a neural network based on unit hydrograph theory. Using linear superposition, a composite runoff hydrograph for a watershed was developed by appropriate summation of unit hydrograph ordinates and runoff excesses. To implement this in a neural network framework, the number of units in the input and hidden layers was kept the same. Connections only existed between corresponding pairs in the first two layers, i.e., the ith node in the first layer connects only to the ith node in the second layer, with the weights set to unity. The nodes in the hidden layer were fully connected to the single output node representing runoff. The inputs to the ANN were sequences of rainfall. Instead of the threshold function, a ramp transfer function corresponding to the rainfall Φ-index was used for the hidden layer. The hidden layer served to extract the infiltration from rainfall, and its outputs were rainfall excesses. The output layer calculated a weighted sum of the rainfall excesses.

Zhu et al. (1994) predicted upper and lower bounds of the flood hydrograph in Butter Creek, New York, using two neural networks. Off-line predictions were made when present flood data were not available and estimates had to be based on rainfall data alone. On-line predictions were based on both rainfall and previous flood data. Data for ANN testing and validation were generated from a nonlinear storage model. Model performance was strongly influenced by the training data set.

Smith and Eli (1995) applied a back-propagation neural network model to predict peak discharge and time to peak over a hypothetical watershed. Data sets for training and validation were generated by either a linear or a nonlinear reservoir. By representing the watershed as a grid of cells, the authors were able to incorporate spatial and temporal rainfall distribution information into the ANN model.

Jayawardena and Fernando (1995, 1996 and 1998) also used RBF methods for flood forecasting. They illustrated the application of radial basis function (RBF) artificial neural networks using an orthogonal least squares (OLS) algorithm to model the rainfall-runoff process. Hourly rainfall and runoff data from a 3.12 km2 watershed were collected and used in developing the ANN. The input nodes contained three antecedent discharges and two rainfall values, that is, Q(t-1), Q(t-2), Q(t-3), R(t-2), and R(t). The output was the discharge at the current hour, Q(t).
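The lagged-input structure used in studies like the one above can be sketched as a small helper that assembles a training matrix from discharge and rainfall series. This is an illustrative reconstruction of the data preparation step, not the original authors' code.

```python
import numpy as np

def make_input_matrix(Q, R):
    """Build input rows [Q(t-1), Q(t-2), Q(t-3), R(t-2), R(t)]
    with target Q(t), following the lag structure described above."""
    Q, R = np.asarray(Q, float), np.asarray(R, float)
    rows, targets = [], []
    # Start at t = 3 so that Q(t-3) exists for every row
    for t in range(3, len(Q)):
        rows.append([Q[t - 1], Q[t - 2], Q[t - 3], R[t - 2], R[t]])
        targets.append(Q[t])
    return np.array(rows), np.array(targets)
```

The same pattern, with different lags, underlies the daily and weekly forecasting inputs used throughout this thesis: each training example pairs a vector of antecedent observations with the value to be forecast.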

Shamseldin (1997) compared ANNs with a simple linear model, a season-based linear perturbation model, and a nearest neighbor linear perturbation model. Daily average values of rainfall and runoff from six different watersheds around the world were collected for this study. Three different types of information were compiled from these data: weighted averages of recent rainfall measurements; seasonal information on the Φ-index and average discharges; and nearest neighbor information. Four different scenarios based on combinations of some or all of these types of input information were examined. A three-layer neural network was adopted by the author, and the conjugate gradient method was used for training. A two-parameter gamma function representation was chosen as the impulse response of the rainfall series, and its parameters were also estimated as part of the training procedure. The network output consisted of the runoff time series. The results suggested that the neural networks generally performed better than the other models during training and testing.

The issue of enhancing the training speed of a three-layer network was addressed by Hsu et al. (1995) and Gupta et al. (1997). These studies advocated the linear least squares simplex (LLSSIM) algorithm, which partitions the weight space to implement a synthesis of two training strategies. The input-hidden layer weights were estimated using a multistart downhill simplex nonlinear optimization algorithm, while the hidden-output layer weights were estimated using optimal linear least squares estimation. The nonlinear portion of the search was thereby confined to a smaller-dimensional space, accelerating the training process. The simplex search involves multiple starts initiated randomly in the search space, so the probability of being trapped in local minima is virtually eliminated. The authors applied this technique to daily rainfall-runoff modeling of the Leaf River basin near Collins, Mississippi.

The nonlinear ANN model approach was shown to provide a better representation of the rainfall-runoff relationship of the medium-size Leaf River basin near Collins, Mississippi, than the linear ARMAX (Auto Regressive Moving Average with eXogenous inputs) time series approach or the conceptual SAC-SMA (Sacramento Soil Moisture Accounting) model. Because the ANN approach presented here does not provide models that have physically realistic components and parameters, it is by no means a substitute for conceptual watershed modeling. However, the ANN approach does provide a viable and effective alternative to the ARMAX time-series approach for developing input-output simulation and forecasting models in situations that do not require modeling of the internal structure of the watershed.

The problem of rainfall-runoff modeling has perhaps received the most attention from ANN modelers. This problem lends itself admirably to ANN applications (Hsu et al., 1995). The nonlinear nature of the relationship, the availability of long historical records, and the complexity of physically-based models in this regard are some of the factors that have led researchers to look at alternative models, and ANNs have been a logical choice. Research activities in this area have been quite revealing, and they can be broadly classified into two categories. The first category consists of studies in which ANNs were trained and tested using data generated by existing models (e.g., Smith and Eli (1995); Shamseldin (1997)). The second category consists of studies in which the ANN model used observed rainfall-runoff data for training.


2.2 Fuzzy Sets Approach

In recent years, the application of fuzzy techniques in the water resources field has been widely investigated. However, their application in rainfall-runoff modeling is not yet widespread.

Kojiri (1989) applied fuzzy theory to real-time reservoir operation for flood prevention. The control rules of the reservoir were represented using fuzzy theory by combinations of fuzzy circumstances defined on the storage volume, the inflow, the incremental volume of inflow, the mass volume of inflow, or precipitation.

Kojiri (1993) applied fuzzy set theory to develop a fuzzy runoff model and a fuzzy reservoir operation model to cope with scarce monitoring data. The fuzzy runoff model makes it possible to calculate discharge under the condition of a designed fuzzy grade for the few hydrological data available in a mountain basin. Reservoir operation with the multiple objectives of water supply and hydropower, and multiple landslide constraints, was formulated through fuzzy dynamic programming (FDP) by combining uncertainty information on precipitation and knowledge-based landslide information. The max-min approach was employed to solve the FDP to obtain the optimal solution. In this case, the assumption was made that the minimum operator of Bellman and Zadeh (1970) is the proper representation of the decision maker's fuzzy preference. However, such situations seem to occur rarely, and consequently it becomes evident that an interaction with the decision maker is necessary (Sakawa, 1993).

Samuel O. Russell and Paul F. Campbell (1996) applied fuzzy logic to find operating procedures for a single-purpose hydroelectric project, where both the inflows and the selling price for energy can vary. Operation of the system was simulated using both fuzzy logic programming and fixed rules. The results were compared with those obtained by deterministic dynamic programming with hindsight. The use of fuzzy logic with flow forecasts was also presented. The results indicate that the fuzzy logic approach is promising, but it suffers from the "curse of dimensionality." It can be a useful supplement to other conventional optimization techniques, but probably not a replacement.

Yu and Chen (2000) applied gray and fuzzy methods to rainfall forecasting. Their study proposed a rainfall forecast model based on a gray model, in which the model parameters are estimated using a fuzzy regression method. The gray fuzzy model was proposed for 1-3 h advance rainfall forecasting and has the advantage of requiring only a small amount of data; normally, four past data events are enough to develop a model. The study found that the gray fuzzy model gave good results for 1 h advance forecasts and that the one-time-step forecasting technique could improve the forecasting ability for 2 and 3 h lead times.

2.3 Neuro – Fuzzy Approach

The application of neuro-fuzzy techniques in water resources is still limited, despite their advantages compared with neural networks and fuzzy techniques used alone. Some applications of neuro-fuzzy methods are discussed in the following.

Rosangela Ballini, Secundino Soares and Marinho Gomes Andrade (1998) presented an adaptive neural fuzzy network model for seasonal streamflow forecasting. The model is based on a constructive learning method that adds neurons to the network structure whenever new knowledge is necessary, so that it learns the fuzzy rules and membership functions essential for modeling a fuzzy system. The model was implemented to forecast monthly average inflow on a one-step-ahead basis. It was tested on three hydroelectric plants located in different river basins in Brazil. When the results were compared with those of a multilayer feedforward neural network model, the present model revealed at least a 50% decrease in the forecasting error.

Meuser Valenca and Teresa Ludermir (2000) presented a fuzzy neural network model for inflow forecasting at the Sobradinho hydroelectric power plant, part of the Chesf (Companhia Hidrelétrica do São Francisco, Brazil) system. The model was implemented to forecast monthly average inflow on a one-step-ahead basis. The fuzzy neural network model is shown to provide a better representation of monthly average water inflow than the models based on the Box-Jenkins method currently in use in the Brazilian electrical sector.

Lihua Xiong, Asaad Y. Shamseldin and Kieran M. O'Connor (2001) applied the first-order Takagi-Sugeno fuzzy system as a fourth combination method (besides three combination methods tested earlier, i.e. the simple average method (SAM), the weighted average method (WAM), and the neural network method (NNM)) to combine the simulation results of five different conceptual rainfall-runoff models in flood forecasting on eleven catchments. The comparison of forecast simulation efficiency shows that the first-order Takagi-Sugeno method is just as efficient as both the WAM and the NNM in enhancing flood forecasting accuracy. The first-order Takagi-Sugeno method is recommended for use as the combination system for flood forecasting due to its simplicity and efficiency.

Fi-John Chang and Yen-Chang Chen (2001) applied a counterpropagation fuzzy-neural network (CFNN), a fusion of a neural network and fuzzy arithmetic, to forecast streamflow of the Da-cha River in central Taiwan. The CFNN can automatically generate the rules used to cluster the input data. No parameter input is needed, because the parameters are systematically estimated by converging to an optimal solution. A comparison of the results obtained by the CFNN model and ARMAX indicates the superiority and reliability of the CFNN rainfall-runoff model.

Nils Thosten Lange (2001) combined the input parameters of a neural network with the help of fuzzy rules to improve runoff prediction. The advantage of this approach is that the different fuzzy sets can be combined using rules based on hydrologic knowledge. Another big advantage is the reduction of the number of input parameters of the neural network, which in turn reduces the number of connections in the network structure. The neural network of this study is a feedforward network with the backpropagation learning algorithm. One example of a new input parameter is the combination of a precipitation index and a seasonal factor; in this way the new parameter gives an idea of the soil moisture at the beginning of the rainfall event. With this new input parameter, together with the rainfall sum and a base flow height index, the neural network was able to give a very robust runoff prediction even with a low number of training and validation events.

R.M. Trigo et al. (2001) compared artificial neural network (ANN), neuro-fuzzy (ANFIS) and regression spline (MARS) models in nonlinear rainfall-runoff modeling. The study presents a comparison between a typical feed-forward ANN configuration and the less known ANFIS (adaptive network based fuzzy inference system) and MARS (multivariate adaptive regression spline) models. A simple multi-linear regression (MLR) model is also used as a benchmark. The aim of all four models is to forecast daily flow at the Aguieira dam, located in the Mondego river system (Portugal), using daily precipitation from a nearby rain gauge station. The catchment area is approximately 3000 km2 and the period considered spans 1984 to 1994. All models were cross-validated, using each time 9 years for calibration and one for validation.


Standard performance measures were used: the coefficient of determination (R2), the root mean square error (RMSE), and the coefficient of efficiency (CE). As expected, the results show a superior performance of all three nonlinear models when compared with the MLR model. Overall, the ANN and ANFIS approaches present the most promising results for all performance measures. However, it is not yet clear how significant some of these differences in performance measures really are.

B. Dixon, H. D. Scott, et al. (2001) applied a neuro-fuzzy technique to predict groundwater vulnerability in Northwest Arkansas. In this study, trapezoidal membership functions were used. All training datasets were reclassified according to the following groundwater vulnerability categories: high, moderately high, moderate and low potential. Application of the neuro-fuzzy techniques was a two-step process. First, the training data set was used with the neuro-fuzzy software to generate classifiers, fuzzy sets and rule bases. Second, once the software was trained, the application data set was used with the neuro-fuzzy models. The outputs from the models were then used to generate maps.

Nestor Sy (2003) applied neuro-fuzzy techniques in plot-scale rainfall-runoff modeling using rainfall simulator data. In this study, the Adaptive Neuro-Fuzzy Inference System (ANFIS) was used for simulation, and the results were compared with those of the ANN topologies used: Multilayer Perceptron (MLP), Radial Basis Function (RBF) and Generalized Feedforward Networks (GFF). Both the artificial neural network and neuro-fuzzy methods show satisfactory results and demonstrate that they are both viable methodologies for predicting runoff at the plot scale.


CHAPTER 3 THEORETICAL CONSIDERATIONS

3.1 Neuro-Fuzzy Systems

Neural networks and fuzzy systems can be combined in several ways:

§ Fuzzy systems are introduced into neural networks to enhance the knowledge-representation capability of conventional neural networks → fuzzy-neural system

§ Neural networks are used in fuzzy modeling to provide fuzzy systems with learning capability → neural-fuzzy system

§ Neural networks and fuzzy systems are combined into a homogeneous architecture and incorporated into a hybrid system → fuzzy-neural hybrid system

Currently, several neuro-fuzzy systems exist in the literature. The most notable ones are the Fuzzy Adaptive Learning Control Network (FALCON) developed by Lin and Lee (1991), GARIC developed by Berenji (1992), the Adaptive Network-based Fuzzy Inference System (ANFIS) developed by Jang (1993), Neuro-Fuzzy Control (NEFCON) proposed by Nauck, Klawonn, and Kruse (1997), and other variations of these developments. These neuro-fuzzy structures and systems establish the foundation of neuro-fuzzy computing.

In this study, ANFIS (Adaptive Network based Fuzzy Inference System), which is obtained by embedding the fuzzy inference system into the framework of adaptive networks, is used to develop the flood forecasting model. The ANFIS can serve as a basis for constructing a set of fuzzy if-then rules of the Takagi, Sugeno, and Kang (TSK) type, with appropriate membership functions to generate the stipulated input-output pairs. The concepts of fuzzy systems, adaptive networks and their learning algorithms, which are the components of ANFIS, as well as the ANFIS architecture, are introduced in the following sections.

3.2 Fuzzy Logic

3.2.1 Definition of Fuzzy Sets

The fuzzy set was introduced by Zadeh (1965) to represent and manipulate data and information that possess nonstatistical uncertainty. The mathematical statement of a fuzzy set is as follows.

Let X denote a collection of objects; then a fuzzy set A in X is a set of ordered pairs:

A = {[x, µA(x)] : x ∈ X}  (3.1)

where µA is the membership function that maps x to the unit interval [0, 1].

3.2.2 Basic Relations and Logical Operations

Let A and B be fuzzy sets in the universe of discourse U:

§ Complement: When µA(x) ∈ [0, 1], the complement of A, denoted as Ā, is defined by its membership function as

µĀ(x) = 1 − µA(x)  (3.2)

§ Intersection: The intersection of fuzzy sets A and B, denoted as A ∩ B, is defined by

µA∩B(x) = min[µA(x), µB(x)]  (3.3)

§ Union: The union of fuzzy sets A and B, denoted as A ∪ B, is defined by

µA∪B(x) = max[µA(x), µB(x)]  (3.4)

3.2.3 Membership Functions

§ Triangular membership function: The triangular MF is a function A(x; a, b, c) with three parameters a, b and c, defined by:

A(x; a, b, c) = max( min( (x − a)/(b − a), (c − x)/(c − b) ), 0 )  (3.5)

§ Trapezoidal membership function: The trapezoidal MF is a function B(x; a, b, c, d) with four parameters a, b, c and d, defined by:

B(x; a, b, c, d) = max( min( (x − a)/(b − a), 1, (d − x)/(d − c) ), 0 )  (3.6)

Figure 3.1 (a) Triangular Membership Function (b) Trapezoidal Membership Function
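The two piecewise-linear MFs above translate directly into code; the following is a minimal Python sketch (function names are illustrative):

```python
def tri_mf(x, a, b, c):
    """Triangular MF: rises from a, peaks at b, falls back to zero at c."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trap_mf(x, a, b, c, d):
    """Trapezoidal MF: rises from a to b, flat top from b to c, falls to d."""
    return max(min((x - a) / (b - a), 1.0, (d - x) / (d - c)), 0.0)
```

For example, `tri_mf(2, 0, 2, 4)` returns 1.0 at the peak, while points outside [a, c] return 0.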

§ Gaussian membership function: The Gaussian MF is a function C(x; σ, c) with two parameters, defined by

C(x; σ, c) = exp( −(x − c)² / (2σ²) )  (3.7)

where c is the center and σ is the width of the membership function.

§ Bell membership function: The bell MF is a function D(x; a, b, c) with three parameters, defined by

D(x; a, b, c) = 1 / (1 + |(x − c)/a|^(2b))  (3.8)

where c determines the center of the curve, and a and b determine its shape.

Figure 3.2 (a) Gaussian Membership Function (b) Bell Membership Function

§ Sigmoid membership function: The sigmoid MF is a function E(x; a, c) with two parameters, defined by

E(x; a, c) = 1 / (1 + exp(−a(x − c)))  (3.9)
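Likewise, the Gaussian, bell, and sigmoid MFs above can be sketched as short functions (a direct transcription of the formulas, with illustrative names):

```python
import math

def gauss_mf(x, sigma, c):
    """Gaussian MF with center c and width sigma."""
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def bell_mf(x, a, b, c):
    """Generalized bell MF; equals 0.5 at x = c ± a."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def sigmoid_mf(x, a, c):
    """Sigmoid MF; crosses 0.5 at x = c."""
    return 1.0 / (1.0 + math.exp(-a * (x - c)))
```

All three return 1.0 (or 0.5 for the sigmoid) at their characteristic points, which gives a quick sanity check.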

3.2.4 Fuzzy If-Then Rules

Fuzzy if-then rules or fuzzy conditional statements are expressions of the form:

If x is A then y is B

where A and B are labels of fuzzy sets characterized by appropriate membership functions.

An example that describes a simple fact is

If pressure is high then volume is small

where pressure and volume are linguistic variables, and high and small are linguistic values or labels that are characterized by membership functions.


3.2.5 Takagi-Sugeno Model

The Takagi-Sugeno fuzzy model was proposed by Takagi, Sugeno, and Kang in an effort to formalize a systematic approach to generating fuzzy rules from an input-output data set. A typical fuzzy rule in the Takagi-Sugeno fuzzy model has the format

If x is A and y is B then z = f(x, y)

where A and B are fuzzy sets in the antecedent, and z = f(x, y) is a crisp function in the consequent.

§ If f(x, y) is a constant → the zero-order Takagi-Sugeno fuzzy model

§ If f(x, y) is a first-order polynomial → the first-order Takagi-Sugeno fuzzy model

3.2.6 Fuzzy Inference Systems

Fuzzy inference systems are also known as fuzzy-rule-based systems, fuzzy models, fuzzy associative memories (FAM), or fuzzy controllers when used as controllers. Basically, a fuzzy inference system is composed of five functional blocks, as described in Figure 3.4:

§ Rule base: contains a number of fuzzy if-then rules

§ Database: defines the membership functions of the fuzzy sets used in the fuzzy rules

§ Decision-making unit: performs the inference operations on the rules

§ Fuzzification interface: transforms the crisp inputs into degrees of match with linguistic values

§ Defuzzification interface: transforms the fuzzy results of the inference into a crisp output

§ Knowledge base: the rule base and the database together

The steps of fuzzy reasoning (inference operations upon fuzzy if-then rules) performed by fuzzy inference systems are:

1. Compare the input variables with the membership functions in the antecedent part to obtain the membership values of each linguistic label (fuzzification).

2. Combine (through a specific t-norm operator, usually multiplication or min) the membership values in the premise part to get the firing strength (weight) of each rule.

3. Generate the qualified consequents (either fuzzy or crisp) of each rule depending on the firing strength.

4. Aggregate the qualified consequents to produce a crisp output (defuzzification).
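The four reasoning steps above can be sketched for a first-order Sugeno system with a product t-norm and weighted-average defuzzification (a minimal illustration; the rule representation is an assumption, not from the thesis):

```python
def sugeno_inference(x, y, rules):
    """First-order Sugeno inference.
    Each rule is ((mf_x, mf_y), (p, q, r)) with consequent f = p*x + q*y + r."""
    firing, outputs = [], []
    for (mf_x, mf_y), (p, q, r) in rules:
        # steps 1-2: fuzzify inputs and combine via product t-norm
        w = mf_x(x) * mf_y(y)
        firing.append(w)
        # step 3: crisp consequent qualified by the firing strength
        outputs.append(p * x + q * y + r)
    # step 4: weighted-average defuzzification
    total = sum(firing)
    return sum(w * f for w, f in zip(firing, outputs)) / total
```

With two rules of equal firing strength whose consequents are f1 = x + y and f2 = 0, the output is simply (x + y)/2.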

Figure 3.4 Block Diagram of a Fuzzy Inference System (crisp input, fuzzification interface, decision-making unit supported by the knowledge base consisting of the database and rule base, defuzzification interface, crisp output)


Depending on the types of fuzzy reasoning and fuzzy if-then rules employed, most fuzzy inference systems can be classified into three types (Figure 3.5):

Type 1: The overall output is the weighted average of each rule's crisp output, induced by the rule's firing strength (the product or minimum of the degrees of match with the premise part) and the output membership functions. The output membership functions used in this scheme must be monotonic functions.

Type 2: The overall fuzzy output is derived by applying the "max" operation to the qualified fuzzy outputs (each of which is equal to the minimum of the firing strength and the output membership function of each rule). Various schemes have been proposed to choose the final crisp output based on the overall fuzzy output; some of them are center of area, bisector of area, mean of maxima, maximum criterion, etc.

Type 3: Takagi and Sugeno's fuzzy if-then rules are used. The output of each rule is a linear combination of the input variables plus a constant term, and the final output is the weighted average of each rule's output.

Figure 3.5 Commonly Used Fuzzy If-Then Rules and Fuzzy Reasoning Mechanisms

Figure 3.5 uses a two-rule two-input fuzzy inference system to show the different types of fuzzy rules and fuzzy reasoning mentioned above. Note that most of the differences come from the specification of the consequent part (monotonically non-decreasing or bell-shaped membership functions, or a crisp function), and thus the defuzzification schemes (weighted average, centroid of area, etc.) are also different.

3.3 Adaptive Network

3.3.1 Architecture

An adaptive network (Figure 3.6) is a multi-layer feed-forward network in which each node performs a particular function (node function) on incoming signals, using a set of parameters pertaining to this node. The nature of the node functions may vary from node to node, and the choice of each node function depends on the overall input-output function which the adaptive network is required to carry out. Note that the links in an adaptive network only indicate the flow direction of signals between nodes; no weights are associated with the links. A square node (adaptive node) has parameters while a circle node (fixed node) has none.


Figure 3.6 An Adaptive Network

The parameter set of an adaptive network is the union of the parameter sets of each adaptive node. In order to achieve a desired input-output mapping, these parameters are updated according to given training data and the learning procedures described below.

3.3.2 Gradient Descent Learning Algorithm

Suppose that a given adaptive network has L layers and the k-th layer has #(k) nodes. The output O_i^k of the node in the i-th position of the k-th layer depends on its incoming signals and its parameter set:

O_i^k = O_i^k( O_1^(k−1), ..., O_#(k−1)^(k−1), a, b, c, ... )  (3.15)

where a, b, c, etc. are the parameters pertaining to this node.

Assuming the given training data set has P entries, the error measure (or energy function) for the p-th (1 ≤ p ≤ P) entry of the training data is defined as the sum of squared errors:

E_p = Σ_{m=1}^{#(L)} (T_{m,p} − O_{m,p}^L)²  (3.16)

where T_{m,p} is the m-th component of the p-th target output vector and O_{m,p}^L is the m-th component of the actual output vector produced by the p-th input vector.

Hence, the overall error measure is:

E = Σ_{p=1}^{P} E_p  (3.17)

For the output-layer node at position i, the error rate is

∂E_p/∂O_{i,p}^L = −2(T_{i,p} − O_{i,p}^L)  (3.18)

For an internal node at the i-th position of the k-th layer, the error rate can be derived by the chain rule:

∂E_p/∂O_{i,p}^k = Σ_{m=1}^{#(k+1)} (∂E_p/∂O_{m,p}^(k+1)) (∂O_{m,p}^(k+1)/∂O_{i,p}^k)  (3.19)


The error rate of an internal node can be expressed as a linear combination of the error rates of the nodes in the next layer; therefore, for all 1 ≤ k ≤ L and 1 ≤ i ≤ #(k), the error rate can be found from equations (3.18) and (3.19). Now if α is a parameter of the given adaptive network, we have

∂E_p/∂α = Σ_{O* ∈ S} (∂E_p/∂O*)(∂O*/∂α)  (3.20)

where S is the set of nodes whose outputs depend on α.

Then the derivative of the overall error measure E with respect to α is

∂E/∂α = Σ_{p=1}^{P} ∂E_p/∂α  (3.21)

Accordingly, the update formula for the generic parameter α is

Δα = −η (∂E/∂α)  (3.22)

in which η is the learning rate, expressed through the step size k as

η = k / sqrt( Σ_α (∂E/∂α)² )  (3.23)

where k is the step size, the length of each gradient transition in the parameter space. The value of k can be changed to vary the speed of convergence.

Actually, there are two learning paradigms for adaptive networks. With batch learning (or off-line learning), the update formula for the parameter α is based on equation (3.21) and the update action takes place only after the whole training data set has been presented, i.e., only after each epoch or sweep. On the other hand, if we want the parameters to be updated immediately after each input-output pair has been presented, then the update formula is based on equation (3.20) and the method is referred to as pattern learning (or on-line learning).
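As an illustration of the batch update Δα = −η ∂E/∂α, the sketch below fits a single-parameter linear model f(x) = αx by batch gradient descent on the squared-error measure (a toy example, not the thesis model; the fixed learning rate stands in for η):

```python
def batch_gradient_step(alpha, data, eta):
    """One batch update for f(x) = alpha*x with E = sum over pairs of (t - alpha*x)^2."""
    grad = sum(-2.0 * x * (t - alpha * x) for x, t in data)  # dE/dalpha over all pairs
    return alpha - eta * grad                                 # alpha <- alpha - eta * dE/dalpha

data = [(1.0, 2.0), (2.0, 4.0)]  # targets generated by the true slope alpha = 2
alpha = 0.0
for _ in range(200):             # each pass over `data` is one epoch (batch learning)
    alpha = batch_gradient_step(alpha, data, 0.05)
```

After enough epochs, alpha converges to the true slope 2; updating inside the loop per pair instead would correspond to pattern (on-line) learning.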

3.3.3 Hybrid Learning Rule (Off-line Learning)

Here, a hybrid learning rule which combines the gradient method and the least squares estimate (LSE) is used to identify the parameters.

For simplicity, assume that the adaptive network under consideration has only one output:

output = F(I, S)  (3.24)

where I is the set of input variables and S is the set of parameters.

If there exists a function H such that the composite function H(F) is linear in some of the elements of S, then these elements can be identified by the least squares method. More formally, the parameter set S can be decomposed into two sets:

S = S1 ⊕ S2  (3.25)

where ⊕ represents direct sum, such that H(F) is linear in the elements of S2.


Since H(F) is linear in the elements of S2, upon applying H to equation (3.24) we have

H(output) = H(F(I, S))  (3.26)

Given values of the elements of S1, plugging P training data pairs into equation (3.26) yields a matrix equation:

AX = B  (3.27)

where X is an unknown vector whose elements are the parameters in S2.

Let |S2| = M; then the dimensions of A, X and B are P×M, M×1 and P×1, respectively. Since P (the number of training data pairs) is usually greater than M (the number of linear parameters), this is an overdetermined problem and generally there is no exact solution to equation (3.27). Instead, a least squares estimate (LSE) of X, X*, is sought to minimize the squared error ‖AX − B‖². This is a standard problem that forms the basis for linear regression, adaptive filtering and signal processing. The most well-known formula for X* uses the pseudo-inverse of A:

X* = (AᵀA)⁻¹AᵀB  (3.28)

where Aᵀ is the transpose of A, and (AᵀA)⁻¹Aᵀ is the pseudo-inverse of A if AᵀA is nonsingular. While equation (3.28) is concise in notation, it is expensive in computation when dealing with the matrix inverse and, moreover, it becomes ill-defined if AᵀA is singular. As a result, sequential formulas are employed to compute the LSE of X. This sequential method of LSE is more efficient (especially when M is small) and can be easily modified to an on-line version (see below) for systems with changing characteristics. Specifically, let the i-th row vector of the matrix A defined in equation (3.27) be aᵢᵀ and the i-th element of B be bᵢ; then X can be calculated iteratively using the sequential formulas:

X_{i+1} = X_i + S_{i+1} a_{i+1} (b_{i+1} − a_{i+1}ᵀ X_i)
S_{i+1} = S_i − (S_i a_{i+1} a_{i+1}ᵀ S_i) / (1 + a_{i+1}ᵀ S_i a_{i+1}),  i = 0, 1, ..., P−1  (3.29)
X_0 = 0,  S_0 = ηI

where S_i is called the covariance matrix and the least squares estimate X* is equal to X_P; η is a large positive number and I is the identity matrix of dimension M×M. When dealing with multi-output adaptive networks (the output in equation (3.24) is a column vector), equation (3.29) still applies except that bᵢ is the i-th row of the matrix B.

Now the gradient method and the least squares estimate can be combined to update the parameters in an adaptive network. Each epoch of this hybrid learning procedure is composed of a forward pass and a backward pass. In the forward pass, we supply input data and functional signals go forward to calculate each node output until the matrices A and B in equation (3.27) are obtained, and the parameters in S2 are identified by the sequential least squares formulas in equation (3.29). After identifying the parameters in S2, the functional signals keep going forward until the error measure is calculated. In the backward pass, the error rates propagate from the output end toward the input end, and the parameters in S1 are updated by the gradient method in equation (3.22).
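The sequential formulas of equation (3.29) can be implemented directly; the NumPy sketch below follows the update order (S first, then X). The initialization constant η and the test data are illustrative:

```python
import numpy as np

def sequential_lse(A, B, eta=1e6):
    """Recursive least squares estimate of X in AX = B, per equation (3.29)."""
    P, M = A.shape
    X = np.zeros((M, 1))      # X_0 = 0
    S = eta * np.eye(M)       # S_0 = eta * I, eta a large positive number
    for i in range(P):
        a = A[i:i + 1].T      # column vector a_{i+1}
        b = B[i]
        Sa = S @ a
        S = S - (Sa @ Sa.T) / (1.0 + float(a.T @ Sa))   # covariance update
        X = X + (S @ a) * (b - float(a.T @ X))           # estimate update
    return X
```

On a small consistent system the recursion recovers the exact least squares solution (up to the 1/η initialization error).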


3.3.4 Hybrid Learning Rule (On-line Learning)

If the parameters are updated after each data presentation, we have the pattern learning or on-line learning paradigm. This learning paradigm is vital to on-line parameter identification for systems with changing characteristics. To modify the batch learning rule to its on-line version, it is obvious that the gradient descent should be based on Ep (see equation 3.20) instead of E. Strictly speaking, this is not a truly gradient search procedure to minimize E, yet it will approximate one if the learning rate is small.

For the sequential least squares formulas to account for the time-varying characteristics of the incoming data, we need to decay the effect of old data pairs as new data pairs become available. One simple method is to formulate the squared error measure as a weighted version that gives higher weighting factors to more recent data pairs. This amounts to the addition of a forgetting factor λ to the original sequential formula:

X_{i+1} = X_i + S_{i+1} a_{i+1} (b_{i+1} − a_{i+1}ᵀ X_i)
S_{i+1} = (1/λ) [ S_i − (S_i a_{i+1} a_{i+1}ᵀ S_i) / (λ + a_{i+1}ᵀ S_i a_{i+1}) ]  (3.30)

where the value of λ is between 0 and 1. The smaller λ is, the faster the effects of old data decay. But a small λ sometimes causes numerical instability and should be avoided.

3.4 ANFIS

The architecture and learning rules of adaptive networks have been described in the previous sections. Functionally, there are almost no constraints on the node functions of an adaptive network except piecewise differentiability. Structurally, the only limitation of the network configuration is that it should be of feedforward type. Due to these minimal restrictions, the adaptive network's applications are immediate and immense in various areas. This section presents a class of adaptive networks which are functionally equivalent to fuzzy inference systems.

3.4.1 ANFIS Architecture

For simplicity, we assume the fuzzy inference system under consideration has two inputs x and y and one output z. Suppose that the rule base contains two fuzzy if-then rules of Takagi and Sugeno's type:

Rule 1: If x is A1 and y is B1, then f1 = p1 x + q1 y + r1  (3.31)

Rule 2: If x is A2 and y is B2, then f2 = p2 x + q2 y + r2  (3.32)

Then the type-3 fuzzy reasoning is illustrated in Figure 3.7, and the corresponding equivalent ANFIS architecture (type-3 ANFIS) is shown in Figure 3.8. The node functions in the same layer are of the same function family, as described below.

Layer 1: Every node i in this layer is a square node with a node function

O_i^1 = µ_{Ai}(x)  (3.33)

where x is the input to node i and Ai is the linguistic label (small, large, etc.) associated with this node function. In other words, O_i^1 is the membership function of Ai and it specifies the degree to which the given x satisfies the quantifier Ai. Usually µ_{Ai}(x) is chosen to be bell-shaped with maximum equal to 1 and minimum equal to 0, such as the generalized bell function:

µ_{Ai}(x) = 1 / (1 + [((x − c_i)/a_i)²]^{b_i})  (3.34)

where {a_i, b_i, c_i} is the parameter set. Parameters in this layer are referred to as premise parameters.

Layer 2: Every node in this layer is a circle node labeled Π which multiplies the incoming signals and sends the product out. For instance,

w_i = µ_{Ai}(x) · µ_{Bi}(y),  i = 1, 2  (3.35)

Each node output represents the firing strength of a rule.

Layer 3: Every node in this layer is a circle node labeled N. The i-th node calculates the ratio of the i-th rule's firing strength to the sum of all rules' firing strengths:

w̄_i = w_i / (w_1 + w_2),  i = 1, 2  (3.36)

Figure 3.7 Type-3 Fuzzy Reasoning

Figure 3.8 Equivalent ANFIS Architecture (Type-3 ANFIS)


For convenience, the outputs of this layer are called normalized firing strengths.

Layer 4: Every node i in this layer is a square node with a node function

O_i^4 = w̄_i f_i = w̄_i (p_i x + q_i y + r_i)  (3.38)

where w̄_i is the output of layer 3 and {p_i, q_i, r_i} is the parameter set. Parameters in this layer will be referred to as consequent parameters.

Layer 5: The single node in this layer is a circle node labeled Σ that computes the overall output as the summation of all incoming signals, i.e.,

O_1^5 = overall output = Σ_i w̄_i f_i = (Σ_i w_i f_i) / (Σ_i w_i)  (3.39)
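Layers 1 to 5 above can be traced in a short forward-pass sketch for a two-rule, two-input type-3 ANFIS (the parameter layout is an assumption made for illustration):

```python
def bell(x, a, b, c):
    """Generalized bell MF of equation (3.34)."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y, premise, consequent):
    """premise: per rule (ax, bx, cx, ay, by, cy); consequent: per rule (p, q, r)."""
    # Layers 1-2: fuzzify each input, product t-norm gives firing strengths w_i
    w = [bell(x, ax, bx, cx) * bell(y, ay, by, cy)
         for (ax, bx, cx, ay, by, cy) in premise]
    # Layer 3: normalized firing strengths
    total = sum(w)
    wn = [wi / total for wi in w]
    # Layer 4: rule consequents f_i = p*x + q*y + r
    f = [p * x + q * y + r for (p, q, r) in consequent]
    # Layer 5: weighted sum of rule outputs
    return sum(wni * fi for wni, fi in zip(wn, f))
```

With identical premise MFs for both rules, the normalized strengths are 0.5 each, so the output is the plain average of the two rule consequents, which gives an easy check of the wiring.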

For the type-1 fuzzy inference system, the extension is quite straightforward and the type-1 ANFIS is shown in Figure 3.9, where the output of each rule is induced jointly by the output membership function and the firing strength. For type-2 fuzzy inference systems, if we replace the centroid defuzzification operator with a discrete version which calculates the approximate centroid of area, then a type-2 ANFIS can still be constructed accordingly. However, it will be more complicated than the type-3 and type-1 versions and thus not worth the effort.

Figure 3.9 (a) Type-1 Fuzzy Reasoning (b) Equivalent ANFIS (Type-1 ANFIS)

Figure 3.10 shows a two-input type-3 ANFIS with nine rules. Three membership functions are associated with each input, so the input space is partitioned into nine fuzzy subspaces, each of which is governed by a fuzzy if-then rule. The premise part of a rule delineates a fuzzy subspace, while the consequent part specifies the output within this fuzzy subspace.


Figure 3.10 (a) Two-Input Type-3 ANFIS with Nine Rules (b) Corresponding Fuzzy Subspaces

3.4.2 Hybrid Learning Algorithm for ANFIS

From the proposed type-3 ANFIS architecture (Figure 3.8), it is observed that, given the values of the premise parameters, the overall output can be expressed as a linear combination of the consequent parameters. More precisely, the output f in Figure 3.8 can be rewritten as

f = (w1/(w1 + w2)) f1 + (w2/(w1 + w2)) f2
  = w̄1 f1 + w̄2 f2
  = (w̄1 x) p1 + (w̄1 y) q1 + (w̄1) r1 + (w̄2 x) p2 + (w̄2 y) q2 + (w̄2) r2  (3.40)

which is linear in the consequent parameters (p1, q1, r1, p2, q2, r2). As a result, we have:

S = set of total parameters,

S1= set of premise parameters,

S2= set of consequent parameters,

The consequent parameters can be obtained by solving the following over-constrained simultaneous equations, one per training pair i = 1, ..., n:

(w̄1^(i) x^(i)) p1 + (w̄1^(i) y^(i)) q1 + (w̄1^(i)) r1 + (w̄2^(i) x^(i)) p2 + (w̄2^(i) y^(i)) q2 + (w̄2^(i)) r2 = d^(i)  (3.41)

where d^(i) is the desired output for the i-th training pair. In matrix form this is AX = B, with the unknown vector X = [p1, q1, r1, p2, q2, r2]ᵀ.
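Building the design matrix of equation (3.41) row by row and solving it by least squares can be sketched as follows (the input pairs and firing strengths here are made-up numbers for illustration):

```python
import numpy as np

def consequent_design_matrix(inputs, normalized_w):
    """One row [w1*x, w1*y, w1, w2*x, w2*y, w2] per training pair (eq. 3.41)."""
    return np.array([[w1 * x, w1 * y, w1, w2 * x, w2 * y, w2]
                     for (x, y), (w1, w2) in zip(inputs, normalized_w)])

# Illustrative data: 8 input pairs and their normalized firing strengths
inputs = [(1., 2.), (2., 1.), (3., 5.), (4., 2.),
          (5., 1.), (0., 3.), (2., 2.), (1., 4.)]
weights = [(x / (x + y + 1.), 1. - x / (x + y + 1.)) for x, y in inputs]

A = consequent_design_matrix(inputs, weights)
d = A @ np.array([1., 2., 3., 4., 5., 6.])      # targets from known parameters
theta, *_ = np.linalg.lstsq(A, d, rcond=None)   # LSE of the consequent parameters
```

Because the targets were generated from a known parameter vector, the fitted parameters reproduce them exactly, which is a simple consistency check of the construction.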


Equation (3.41) is the explicit form of equation (3.27), and it can be solved by applying equation (3.29). Therefore, the hybrid learning algorithm can be applied directly.

More specifically, in the forward pass of the hybrid learning algorithm, functional signals go forward up to layer 4 and the consequent parameters are identified by the least squares estimate. In the backward pass, the error rates propagate backward and the premise parameters are updated by gradient descent. Table 3.1 summarizes the activities in each pass.

Table 3.1 Two Passes in the Hybrid Learning Procedure for ANFIS

                         Forward pass              Backward pass
Premise parameters       Fixed                     Gradient descent
Consequent parameters    Least squares estimate    Fixed
Signals                  Node outputs              Error rates

3.4.3 Computation Complexity

It should be noted that the computational complexity of the least squares estimate is higher than that of gradient descent. In fact, there are four methods to update the parameters, listed below according to their computational complexity:

1. Gradient descent only: all parameters are updated by gradient descent.

2. Gradient descent and one pass of LSE: the LSE is applied only once at the very beginning to get the initial values of the consequent parameters, and then gradient descent takes over to update all parameters.

3. Gradient descent and LSE: this is the proposed hybrid learning rule.

4. Sequential (approximate) LSE only: the ANFIS is linearized with respect to the premise parameters and the extended Kalman filter algorithm is employed to update all parameters.

3.5 Performance Statistics

In order to evaluate the performance of the models, statistics are used to compare the observed and forecasted discharges. The following commonly used statistics are applied in this study.

§ Efficiency Index (EI)

According to Nash and Sutcliffe (1970), the following statistic can be used to measure the efficiency of a given model:

EI = [ Σ_{i=1}^{N} (Q_i − Q̄)² − Σ_{i=1}^{N} (Q_i − F_i)² ] / Σ_{i=1}^{N} (Q_i − Q̄)²  (3.42)

where: Q_i is the observed discharge at time i;
Q̄ is the mean value of the observed discharges, Q̄ = (1/N) Σ_{i=1}^{N} Q_i;
F_i is the calculated discharge at time i;
N is the number of data points.


§ Root Mean Square Error (RMSE)

RMSE = sqrt( (1/N) Σ_{i=1}^{N} (Q_i − F_i)² )  (3.43)

The RMSE is used to compare the performance of two or more models applied to the same data set.
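Both statistics can be computed in a few lines (a direct transcription of the formulas above):

```python
import math

def efficiency_index(observed, forecast):
    """Nash-Sutcliffe efficiency index: 1.0 means a perfect forecast."""
    q_mean = sum(observed) / len(observed)
    ss_tot = sum((q - q_mean) ** 2 for q in observed)
    ss_err = sum((q - f) ** 2 for q, f in zip(observed, forecast))
    return (ss_tot - ss_err) / ss_tot

def rmse(observed, forecast):
    """Root mean square error between observed and forecasted discharges."""
    n = len(observed)
    return math.sqrt(sum((q - f) ** 2 for q, f in zip(observed, forecast)) / n)
```

A perfect forecast gives EI = 1 and RMSE = 0, which is a handy sanity check before comparing models.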


CHAPTER 4 DATA COLLECTION AND MODELING PROCEDURE

4.1 Data Collection

The data of 15 rainfall stations and 4 gauging stations inside the Yom River Basin were collected from the Royal Irrigation Department (RID) of Thailand for calibration (training) and verification (testing) of the models. The data include daily observed rainfall, discharge and water level. The data are available from 1/4/1990 to 31/3/2002 (12 years), except for gauging station Y4, for which data are available from 1/4/1990 to 31/3/1998 (8 years). The names and locations of these stations are shown in Tables 4.1 and 4.2 and Figures 4.1 and 4.2.

Table 4.1 List of the Rainfall Stations

Table 4.2 List of the Gauging Stations

No.  At or Near    Station  Province  Latitude  Longitude  Elevation (m)
1    Ban Ngao Sak  Y20      PR        18 35 03  100 09 17  180.98

Province: Phrae (PR), Sukhothai (SK). Available data from 1/4/1990 to 31/3/2002 (12 years); (*) available data from 1/4/1990 to 31/3/1998 (8 years).


Figure 4.1 The Distribution of Rainfall and Gauging Stations in Yom River Basin

The gauging stations used for streamflow forecasting are arranged as shown in Figure 4.2.

Figure 4.2 Schematic Diagram of the Gauging Stations (Y20, Y17, Y6, Y4) and Rainfall Stations in the Yom River Basin, with Flow Direction


4.2 Data Processing

Raw data must be processed and converted to the required format for model application. The input data format required for training the NFT model is a matrix with all but the last column representing input data, and the last column representing the single output. The rows of the matrix represent the training patterns. Because the NFT model maps the input data to membership functions with degrees from 0 to 1, and the transfer function of the output node is a linear function, it is unnecessary to normalize the input data.

The input data are divided into two sets: training data and testing data. For the gauging stations Y17, Y6, and Y20, the training data are from 1/4/1990 to 31/3/2000 (10 years) and the testing data are from 1/4/2000 to 31/3/2002 (2 years). For gauging station Y4, the training data are from 1/4/1990 to 31/3/1996 (6 years) and the testing data are from 1/4/1997 to 31/3/1999 (2 years). Daily and weekly input data are prepared because the models are proposed for daily and weekly streamflow forecasting.

4.3 Input Selection

4.3.1 Selection of Input and Output Variables

The goal of NFT and ANN is to generalize a relationship function between input and output of the form:

Y_m = f(X_n)  (4.1)

where X_n is the n-dimensional input vector consisting of the variables x1, x2, …, xn, and Y_m is the m-dimensional output vector consisting of the resulting variables y1, y2, …, ym. The outputs are the forecasted discharge and/or water level at lead time τ. The inputs are the past observed rainfall, discharge, and water level at the forecast gauging station and its upstream stations, since these factors significantly influence streamflow.

Rainfall is a very important component of the model input because discharge and water level are significantly influenced by rainfall, so the selection of an appropriate rainfall input will give better results. Local rainfall between the forecast gauging station and its upstream stations, especially the stations on the main river and its tributaries, should be chosen for the model input.

4.3.2 Selection of the Relationship Function of Input and Output

With the input and output variables identified as above, the general forms of the relationship functions used for training and testing the discharge and water level forecasting models are shown in equations (4.2) and (4.3):

Q(t+τ) = f(R(t), R(t-1), …, R(t-k), W(t), W(t-1), …, W(t-m), Q(t), Q(t-1), …, Q(t-n))  (4.2)

W(t+τ) = f(R(t), R(t-1), …, R(t-k), W(t), W(t-1), …, W(t-m), Q(t), Q(t-1), …, Q(t-n))  (4.3)

where: τ = lead time, τ = 1, 2, 3, …

k, m, n = lag times, k, m, n = 0, 1, 2, 3, …

R(t) = daily observed rainfall at time t

Q(t) = daily observed discharge at time t

Q(t+τ) = daily forecasted discharge at lead time τ

W(t) = daily observed water level at time t

W(t+τ) = daily forecasted water level at lead time τ
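Assembling the training patterns of equation (4.2) from lagged series might look like the sketch below (rainfall and discharge only, with a single lead time; the function and argument names are assumptions):

```python
def build_patterns(rain, flow, k, n, lead):
    """Rows [R(t), ..., R(t-k), Q(t), ..., Q(t-n), Q(t+lead)] per equation (4.2)."""
    start = max(k, n)                 # earliest t with all lags available
    patterns = []
    for t in range(start, len(flow) - lead):
        row = [rain[t - j] for j in range(k + 1)]    # rainfall lags
        row += [flow[t - j] for j in range(n + 1)]   # discharge lags
        row.append(flow[t + lead])                   # target: Q(t + lead)
        patterns.append(row)
    return patterns
```

Each row matches the matrix format described in Section 4.2: all but the last column are inputs and the last column is the single output.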


Selecting an appropriate relationship function of input and output allows the NFT model to be applied successfully to streamflow forecasting. When the input variables are defined, the remaining task is to determine which factors (observed rainfall, discharge and water level) and which lag times should be used to obtain the most accurate results. Factors should be trimmed when they do not have a significant effect on the performance of the models. This is the most difficult and time-consuming task in NFT modeling. One method for solving this task is sensitivity analysis; another is trial and error. In this study, the following procedure is applied to select the relationship function of input and output that is expected to give the most accurate results.

• Firstly, train the model with the simplest relationship function.

• Secondly, add more input variables into the relationship function and continue training the model.

• Thirdly, compare the error of the latter function (using RMSE) with the error of the first one to assess the effect of the newly added variables on the performance of the model. The following formula is applied to evaluate this change:

ΔRMSE = (RMSE_i − RMSE_1) / RMSE_1  (4.4)

where: RMSE_1 is the error of the simplest relationship function;

RMSE_i is the error of relationship function number i;

ΔRMSE is the relative change in error between the functions.

- If ΔRMSE > 0, the error of the latter function is higher than that of the first one → the added variables should be trimmed.

- If ΔRMSE < 0, the error of the latter function is lower than that of the first one → the added variables should be kept.

• Continue step three until the best relationship function, with the lowest RMSE, is obtained.

This procedure is described in the diagram in Figure 4.3.

Figure 4.3 Diagram for Selection of the Relationship Function of Input and Output
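The ΔRMSE-based procedure above can be sketched as a greedy selection loop; the `evaluate` callback, which would train the model on a candidate input set and return its testing RMSE, is an assumption:

```python
def select_inputs(candidates, evaluate):
    """Greedy input selection: keep a candidate only if it lowers RMSE (dRMSE < 0)."""
    selected = [candidates[0]]           # start from the simplest relationship function
    best = evaluate(selected)            # RMSE_1
    for cand in candidates[1:]:
        trial = selected + [cand]
        r = evaluate(trial)
        if (r - best) / best < 0:        # equation (4.4): negative -> keep the variable
            selected, best = trial, r
    return selected, best
```

In the toy check below, a fake `evaluate` rewards adding rainfall and penalizes a noise variable, so only the useful input survives.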


4.3.3 Selection of Training Patterns

When the relationship function of input and output is defined, the next step is to determine an appropriate training data set. An optimal data set should be representative of the probable occurrence of an input vector and should facilitate the mapping of the underlying nonlinear process. Inclusion of unnecessary patterns could slow network learning; in contrast, an insufficient data set could lead to poor learning. In streamflow forecasting, the climate and the basin characteristics may change, e.g., through deforestation, land cover and land use change, reservoir construction, and channel erosion and deposition. The training data patterns are therefore proposed in three scenarios:

• Using all available data for training the model

• Using a part of the data for training the model

• Using two separate periods, dry and flood, for training the model

4.4 ANFIS Modeling

4.4.1 Membership Function

Several membership functions that can be used in NFT modeling have been defined in Chapter 3. However, from experiments, the bell MF was found to be the most suitable in NFT applications. The equation of the bell MF is recalled as:

µ_A(x) = 1 / (1 + |(x − c)/a|^(2b))  (4.6)

The shape of the bell MF is shown in Figure 4.4.

Figure 4.4 Bell Membership Function

4.4.2 Initial Parameters

The objective of training the NFT model is to find the optimal premise parameters (a, b, c) of the bell membership function defined in equation (4.6) and the consequent parameters (p, q) of the fuzzy if-then rules. The initialization of the MF parameters is determined by:

a = 0.5 × (input data range) / (number of MFs − 1)  (4.7)

c = evenly spaced along the range of each input variable  (4.9)


4.4.3 Training Algorithm

Two training algorithms, namely gradient descent and hybrid, can be applied to train the NFT model. However, experience showed that the gradient descent algorithm should be applied in discharge modeling and the hybrid algorithm in water level modeling.

The ANFIS model used for forecasting adopts the following considerations:

• The first-order Takagi, Sugeno, and Kang fuzzy model is used, so the consequent part of the fuzzy if-then rules is a linear equation.

• The t-norm operation is the algebraic product, which performs the fuzzy "AND".

• The type of MF used is the bell function defined in equation (4.6).

• The initial parameters are determined by equations (4.7)–(4.9).

• The algorithms used to update the MF parameters are gradient descent and hybrid.

• The number of fuzzy if-then rules is determined by the following equation:

Number of rules = A^(number of inputs)  (4.10)

where A is the number of membership functions on each input.

Figure 4.5 Overtraining Behavior of Training Process

The training process proceeds until the testing error rises to the extent that it exceeds a certain factor αstop of the lowest testing error obtained so far. L. Prechelt (1998) suggested that the difference between the first and the following local minima is usually not huge; αstop can be selected as 1.2 to assume that the minimum error has been passed.

As in some cases the error function oscillates around the global minimum for a very large number of training iterations, the training is also terminated after a user-specified maximum number of epochs to prevent being trapped in the training process.
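The stopping rule with αstop can be sketched as a small wrapper around the training loop; the `step` callback, standing in for one training epoch that returns the current testing error, is an assumption:

```python
def train_with_early_stopping(step, max_epochs, alpha_stop=1.2):
    """Run step(epoch) -> testing error; stop early when the error exceeds
    alpha_stop times the lowest testing error seen so far."""
    best, best_epoch = float('inf'), 0
    for epoch in range(1, max_epochs + 1):
        err = step(epoch)
        if err < best:
            best, best_epoch = err, epoch        # new minimum of the testing error
        elif err > alpha_stop * best:
            break                                # assume the minimum has been passed
    return best, best_epoch
```

The `max_epochs` argument implements the second safeguard mentioned above: training stops even if the error keeps oscillating and never triggers the αstop criterion.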


CHAPTER 5 RESULTS AND DISCUSSIONS

5.1 Relationship Function of Input and Output

Many relationship functions were tried using NFT to obtain the most accurate function, i.e., the one giving the highest value of the efficiency index (EI) and the lowest value of RMSE in both training and testing. The relationship functions were tried from the simplest to more complex ones by adding more input variables. Some input variables were trimmed if they did not have a significant effect on the performance of the model.

The training process used data of ten years from 1990 to 1999 (stations Y17, Y6, Y20) and of six years from 1990 to 1996 (station Y4). The testing process used data of two years from 2000 to 2001 (stations Y17, Y6, Y20) and from 1997 to 1998 (station Y4). The relationship functions and the results are shown in Tables A1 to A4 in Appendix A. Root mean square error (RMSE) versus relationship function is plotted in Figure 5.1.

Figure 5.1 RMSE (m³/s) versus Relationship Function for the Gauging Stations in Training and Testing
