Nano- and Microelectromechanical Systems, S. E. Lyshevski, Part 13



For a single neuron, the output is

$$u = f(Wv + B_1),$$

where $u$ is the neuron output, $u \in \mathbb{R}^1$; $f$ is the nonlinear function (transfer function); $W$ is the weighting matrix, $W = [w_{11} \; w_{12} \; \cdots \; w_{1,k-1} \; w_{1k}] \in \mathbb{R}^{1 \times k}$; $v$ is the input vector (performance variables), $v \in \mathbb{R}^k$; and $B_1$ is the bias variable.

It should be emphasized that $W$ and $B_1$ are adjusted through the training (learning) mechanism.

For a single-layer neural network of z neurons, one has

$$u = f(Wv + B),$$

where the weighting matrix and bias vector are $W \in \mathbb{R}^{z \times k}$ and $B \in \mathbb{R}^{z}$.
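The single-layer mapping above can be sketched in NumPy (a minimal illustration; the tanh transfer function and the particular dimensions $z = 2$, $k = 3$ are assumptions for the example, not taken from the text):

```python
import numpy as np

def layer_forward(W, v, B, f=np.tanh):
    """Single-layer forward pass u = f(W v + B).

    W : (z, k) weighting matrix
    v : (k,)   input vector
    B : (z,)   bias vector
    f : nonlinear transfer function (tanh assumed here)
    """
    return f(W @ v + B)

# Example with z = 2 neurons and k = 3 inputs
W = np.array([[0.5, -0.2, 0.1],
              [0.3,  0.8, -0.5]])
B = np.array([0.1, -0.1])
v = np.array([1.0, 2.0, 3.0])
u = layer_forward(W, v, B)
print(u.shape)  # (2,)
```

Because $f$ acts componentwise, each of the $z$ neurons applies the same transfer function to its own weighted sum of the $k$ inputs.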

For a multi-layer neural network of z neurons, one can find the following expression for the $(i + 1)$ network outputs:

$$u_{i+1} = f_{i+1}\left(W_{i+1} u_i + B_{i+1}\right), \quad i = 0, 1, \ldots, M - 1,$$

where $M$ is the number of layers in the neural network.

For example, for a three-layer network, we have

$$u_3 = f_3(W_3 u_2 + B_3), \quad i = 2,$$
$$u_2 = f_2(W_2 u_1 + B_2), \quad i = 1,$$
$$u_1 = f_1(W_1 v + B_1), \quad i = 0.$$

Hence, one obtains

$$u_3 = f_3\!\left[W_3 f_2\!\left(W_2 f_1(W_1 v + B_1) + B_2\right) + B_3\right],$$

where the corresponding subscripts 1, 2 and 3 are used to denote the layer variables.
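The layer-by-layer recursion amounts to a simple loop, which reproduces the nested three-layer composition exactly. In the sketch below the layer sizes, random weights, and tanh transfer functions are illustrative assumptions:

```python
import numpy as np

def multilayer_forward(weights, biases, v, f=np.tanh):
    """Evaluate u_{i+1} = f(W_{i+1} u_i + B_{i+1}) with u_0 = v,
    for i = 0, 1, ..., M-1 (M = number of layers)."""
    u = v
    for W, B in zip(weights, biases):
        u = f(W @ u + B)
    return u

# Three-layer example (M = 3): 3 inputs -> 4 -> 3 -> 2 outputs
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 3)),
      rng.standard_normal((3, 4)),
      rng.standard_normal((2, 3))]
Bs = [np.zeros(4), np.zeros(3), np.zeros(2)]
v = np.array([0.5, -1.0, 0.25])

u3 = multilayer_forward(Ws, Bs, v)
# Identical to the nested form f3[W3 f2(W2 f1(W1 v + B1) + B2) + B3]
u3_nested = np.tanh(Ws[2] @ np.tanh(Ws[1] @ np.tanh(Ws[0] @ v + Bs[0]) + Bs[1]) + Bs[2])
```

The loop and the nested expression are the same computation; the loop form simply generalizes to any number of layers $M$.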

To approximate the unknown functions, the weighting matrix $W$ and the bias vector $B$ must be determined, and the procedure for selecting $W$ and $B$ is called the network training. Many concepts are available to attain training, and backpropagation, which is based upon the gradient descent optimization methods, is commonly used. Applying the gradient descent optimization procedure, one minimizes a mean square error performance index using the end-to-end neural network behavior. That is, using the input vector $v$ and the output vector $c$, $c \in \mathbb{R}^k$, the quadratic performance functional is given as

$$J = \frac{1}{2}\sum_{j=1}^{p} e_j^T Q e_j = \frac{1}{2}\sum_{j=1}^{p} \left(c_j - u_j\right)^T Q \left(c_j - u_j\right),$$

where $e_j = c_j - u_j$ is the error vector; $Q \in \mathbb{R}^{p \times p}$ is the diagonal weighting matrix.

The steepest descent algorithm is applied to minimize the mean square error, and the learning rate and sensitivity have been widely studied for the quadratic performance indexes.
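A minimal steepest-descent training loop for a single tanh layer, minimizing the quadratic index $J = \frac{1}{2}\sum_j e_j^T Q e_j$ with $Q$ taken as the identity, might look as follows. The toy data, learning rate, and layer sizes are assumptions for illustration, not values from the text:

```python
import numpy as np

def train_single_layer(samples, z, k, lr=0.1, epochs=1000, seed=0):
    """Per-sample steepest-descent training of u = tanh(W v + B),
    minimizing J = 1/2 * sum_j e_j^T Q e_j with e_j = c_j - u_j.
    Q is assumed to be the identity (a diagonal weighting matrix)."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((z, k))
    B = np.zeros(z)
    for _ in range(epochs):
        for v, c in samples:
            u = np.tanh(W @ v + B)
            e = c - u                      # error vector e_j = c_j - u_j
            delta = -e * (1.0 - u ** 2)    # dJ/d(Wv+B) for tanh, with Q = I
            W -= lr * np.outer(delta, v)   # gradient descent steps
            B -= lr * delta
    return W, B

# Toy data: targets tanh(x + y) and tanh(x - y), exactly realizable
# by one tanh layer with W = [[1, 1], [1, -1]], B = 0
samples = [(np.array([x, y]), np.tanh(np.array([x + y, x - y])))
           for x in (-0.5, 0.0, 0.5) for y in (-0.5, 0.0, 0.5)]
W, B = train_single_layer(samples, z=2, k=2)
```

Backpropagation extends this single-layer gradient to the multi-layer case by propagating the `delta` term backward through each $W_i$.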


© 2001 by CRC Press LLC
