
DOCUMENT INFORMATION

Basic information

Title: Evolutionary Algorithm for Training Compact Single Hidden Layer Feedforward Neural Networks
Authors: Hieu Trung Huynh, Yonggwan Won
Institution: IUH University
Field: Advanced Artificial Intelligence
Type: research paper
Year: Unknown
City: Vietnam
Pages: 20
Size: 377.23 KB


Contents


Slide 1

EVOLUTIONARY ALGORITHM FOR TRAINING COMPACT SINGLE HIDDEN LAYER FEEDFORWARD NEURAL NETWORKS

HIEU TRUNG HUYNH AND YONGGWAN WON, MEMBER, IEEE

Group members:

Mr Nhan

Mr Dien.Vo

Mr Tu

Advanced Artificial Intelligence

Slide 2

• The extreme learning machine (ELM) is an efficient training algorithm for single hidden layer feedforward neural networks (SLFNs).

• It determines the output weights by a simple matrix-inversion operation.

• However, ELM networks often require a large number of hidden units due to non-optimal input weights and hidden layer biases.

• This paper proposes a new algorithm, the evolutionary least-squares extreme learning machine (ELS-ELM), to determine the input weights and biases of hidden units using the differential evolution algorithm, in which the initial generation is generated not by random selection but by a least-squares scheme.

• Experimental results show good generalization performance with compact networks.

Slide 3

INTRODUCTION

• Feedforward neural networks have been widely used to approximate complex nonlinear mappings directly from input patterns.

• In gradient-descent-based algorithms such as backpropagation, the weights are tuned by error propagation from the output layer to the input layer.

• Such training avoids instability problems only if the learning rate is adequately small, which makes convergence slow.

Slide 4

PROBLEM SOLVING

• Several approaches have been proposed to improve the learning speed.

• A method for selecting initial weight vectors was proposed by Jim Y. F. Yam and Tommy W. S. Chow.

• Other improvements for speeding up training have also been proposed by some researchers.

• However, up to now, most of the training algorithms based on gradient descent are still slow due to the many iterative steps that are required in the learning process.

Slide 5

PROBLEM SOLVING

• Recently, Huang et al. showed that a single hidden-layer feedforward neural network (SLFN) can learn distinct observations with arbitrarily small error if the activation function is chosen properly.

• An effective training algorithm for SLFNs called the extreme learning machine (ELM) was also proposed by Huang et al.

• In ELM, the input weights and biases of hidden units are randomly chosen, and the output weights of SLFNs can be determined through the inverse operation of the output matrix of the hidden layer.

• This algorithm can avoid many problems that occur in gradient-descent-based learning methods, such as local minima, learning rate, and epochs. It can obtain better generalization performance at higher learning speed in many applications.

• However, it often requires a large number of hidden units and a long time for responding to new input patterns.

Slide 6

PROBLEM SOLVING

• LS-ELM is an approach to determine the input weights and hidden layer biases by using a linear model; the output weights are then calculated by the Moore-Penrose (MP) generalized inverse.

• The proposed method has two steps. In the first step, the initial generation of the population is generated based on the proposed linear model. In the second step, the input weights and hidden layer biases are estimated by the DE process, and the output weights are determined through the MP generalized inverse.

• It can obtain a compact SLFN as E-ELM and LS-ELM do, which results in the fast response of the trained network to new input patterns.

• However, this approach can take a longer time for the training process in comparison with the original ELM and LS-ELM.

Slide 7

DIFFERENTIAL EVOLUTION

• Mutation: the mutant vector is generated as v_{i,G+1} = θ_{r1,G} + F · (θ_{r2,G} − θ_{r3,G}), where r1, r2, r3 ∈ {1, 2, …, NP} are mutually different random indices and F ∈ [0, 2] is a constant factor used to control the amplification of the differential variation.

• Crossover: the trial vector u_{i,G+1} is formed so that u_{ji,G+1} = v_{ji,G+1} if randb(j) ≤ CR or j = rnbr(i), and u_{ji,G+1} = θ_{ji,G} otherwise, where randb(j) is the j-th evaluation of a uniform random number generator, CR is the crossover constant, and rnbr(i) is a randomly chosen index which ensures that the trial vector gets at least one parameter from v_{i,G+1}.

• Selection: the new generation is determined by keeping, for each i, whichever of the trial vector u_{i,G+1} and the target vector θ_{i,G} has the better fitness.
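These three operators can be sketched in Python. This is a minimal DE/rand/1/bin sketch; the population array shape, the fitness function, and the default F and CR values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def de_step(pop, fitness_fn, F=0.5, CR=0.9, rng=None):
    """One generation of DE/rand/1/bin: mutation, crossover, selection."""
    if rng is None:
        rng = np.random.default_rng(0)
    NP, D = pop.shape
    new_pop = pop.copy()
    for i in range(NP):
        # Mutation: v = theta_r1 + F * (theta_r2 - theta_r3), distinct indices
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Crossover: take v[j] where rand <= CR; one forced index (rnbr)
        # guarantees at least one parameter comes from the mutant vector
        mask = rng.random(D) <= CR
        mask[rng.integers(D)] = True
        u = np.where(mask, v, pop[i])
        # Selection: the trial vector survives only if its fitness is better
        if fitness_fn(u) < fitness_fn(pop[i]):
            new_pop[i] = u
    return new_pop
```

Because selection only ever replaces an individual with a better one, the best fitness in the population is non-increasing from generation to generation.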

Slide 8

SINGLE HIDDEN LAYER FEEDFORWARD NEURAL NETWORKS

An SLFN with N hidden units and C output units is depicted

Slide 9

SINGLE HIDDEN LAYER FEEDFORWARD NEURAL NETWORKS

The ELM algorithm can be described as follows:

Step 1: Randomly assign the input weights and hidden layer biases.
Step 2: Compute the hidden layer output matrix H.
Step 3: Calculate the output weights A.

This algorithm can obtain good generalization performance at high learning speed. However, it often requires a large number of hidden units and takes a long time for responding to new patterns.
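The three steps above can be sketched with NumPy. This is a minimal sketch assuming a sigmoid activation and uniform random weights; the function names and array shapes are illustrative, not from the paper:

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=None):
    """ELM training. X: (n_samples, n_features), T: (n_samples, n_outputs)."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Step 1: randomly assign input weights W and hidden layer biases b
    W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, size=n_hidden)
    # Step 2: compute the hidden layer output matrix H (sigmoid units)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Step 3: output weights A = H^+ T via the MP generalized inverse
    A = np.linalg.pinv(H) @ T
    return W, b, A

def elm_predict(X, W, b, A):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ A
```

Steps 1 and 2 are a single pass; all of the actual fitting happens in the one pseudoinverse of step 3, which is what makes ELM fast compared with iterative gradient descent.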

Slide 10

EVOLUTIONARY EXTREME LEARNING MACHINE (E-ELM)

• E-ELM uses the differential evolution process to optimize the input weights and hidden layer biases, and the MP generalized inverse is used to determine the output weights.

• First, the population of the initial generation is generated randomly. Each individual in the population is a set of the input weights and hidden layer biases, defined by θ = [w_1, …, w_N, b_1, …, b_N].

• For each individual, the output weights are computed by the MP generalized inverse. The three steps of the DE process are then applied; individuals with better fitness values are retained to the next generation.

• The fitness of each individual is chosen as the root-mean-squared error (RMSE) on the whole training set or the validation set.

Slide 11

EVOLUTIONARY EXTREME LEARNING MACHINE (E-ELM)

We can summarize the E-ELM algorithm as follows:

1. Initialization: randomly generate the initial generation G.
2. Mutation.
3. Crossover.
4. Determine the output weights for each individual.
5. Evaluate the fitness for each individual.
6. Selection; repeat from the mutation step until termination.
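Steps 4–5 of the summary above, evaluating one individual, can be sketched as follows. The decoding of θ into W and b, the sigmoid activation, and the function name are assumptions for illustration:

```python
import numpy as np

def eelm_fitness(theta, X, T, n_hidden):
    """Fitness of one E-ELM individual: unpack (W, b) from the flat vector
    theta, solve the output weights by the MP generalized inverse, and
    return the RMSE on the given set (training or validation)."""
    d = X.shape[1]
    W = theta[: d * n_hidden].reshape(d, n_hidden)    # input weights
    b = theta[d * n_hidden:]                          # hidden layer biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # hidden layer output
    A = np.linalg.pinv(H) @ T                         # output weights
    return float(np.sqrt(np.mean((H @ A - T) ** 2)))  # RMSE fitness
```

A function like this would be passed as the fitness callback to the DE step, so that every mutated/crossed-over individual is scored only after its output weights have been solved analytically.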

Slide 12

EVOLUTIONARY EXTREME LEARNING MACHINE (E-ELM)

• However, because the initial generation is generated randomly, E-ELM may not obtain small input weights and hidden layer biases.

Slide 13

EVOLUTIONARY LEAST-SQUARES EXTREME LEARNING MACHINE (ELS-ELM)

• In ELS-ELM, the initial generation is generated by a least-squares scheme instead of random selection.

• Following this initialization, the DE process is applied to find a further optimal set of the input weights and hidden layer biases.

• The DE scheme is used for tuning the input weights and hidden layer biases, and the MP generalized inverse operation is used for determining the output weights.

Slide 14

The ELS-ELM algorithm:

Initialization:
1. Randomly assign the values for the matrix B.
2. Estimate the input weights w_m and biases b_m of θ.
3. Calculate the hidden-layer output matrix H.
4. Determine the output weights A.
5. Evaluate the fitness for each individual.

DE loop:
6. Mutation and crossover.
7. Compute the hidden layer output matrix H and determine the output weights.
8. Evaluate the fitness for each individual; Selection.
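Steps 1–2 of the initialization can be sketched as below. The exact linear model relating the random matrix B to the input weights is not given on the slide, so this least-squares form (augmented inputs fitted to B via the MP generalized inverse) is an assumption:

```python
import numpy as np

def ls_init_individual(X, n_hidden, rng):
    """Least-squares initialization of one individual (W, b), under the
    assumed linear model [X, 1] @ [W; b] ≈ B for a random matrix B."""
    n, d = X.shape
    # Step 1: randomly assign the values for the matrix B
    B = rng.uniform(-1, 1, size=(n, n_hidden))
    # Step 2: estimate input weights and biases in the least-squares sense
    X_aug = np.hstack([X, np.ones((n, 1))])   # append a bias column
    Wb = np.linalg.pinv(X_aug) @ B            # shape (d + 1, n_hidden)
    W, b = Wb[:d], Wb[d]
    return W, b
```

Each individual of the first generation would be seeded this way (with its own random B), so the population starts from small, data-dependent weights rather than purely random ones.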

Slides 15-18

EXPERIMENTAL RESULTS

(result tables and figures omitted)

Slide 19

CONCLUSION

• A new training algorithm, called the evolutionary least-squares extreme learning machine (ELS-ELM), for training single hidden layer feedforward neural networks (SLFNs) was proposed.

• The input weights and hidden layer biases in our ELS-ELM were estimated by using the differential evolution (DE) process, while the output weights were determined by the MP generalized inverse.

• Unlike E-ELM, ELS-ELM initializes the first generation by the least-squares scheme. This method can obtain trained networks with a small number of hidden units, as E-ELM and LS-ELM do, while achieving good generalization performance.

Slide 20

Thank you!

👏👏👏

Posted: 27/11/2022, 00:16
