
Improving learning rule for fuzzy associative memory with combination of content and association

The Duy Bui, Thi Hoa Nong, Trung Kien Dang

Human Machine Interaction Laboratory, University of Engineering and Technology, Vietnam National University, Hanoi, Vietnam

⁎ Corresponding author. E-mail address: duybt@vnu.edu.vn (T.D. Bui).

Article info

Article history:
Received 4 July 2013
Received in revised form 14 September 2013
Accepted 8 January 2014
Available online 4 August 2014

Keywords:

Fuzzy associative memory

Noise tolerance

Pattern associations

Abstract

FAM is an associative memory that uses operators of fuzzy logic and mathematical morphology (MM). FAMs possess important advantages, including noise tolerance, unlimited storage, and one-pass convergence. An important property that decides FAM performance is the ability to capture the content of each pattern and the association between patterns. Existing FAMs capture either content or association of patterns well, but not both. They are designed to handle either erosive or dilative noise in distorted inputs, but not both. Therefore, they cannot recall distorted input patterns very well when both erosive and dilative noises are present. In this paper, we propose a new FAM called the association-content associative memory (ACAM) that stores both content and association of patterns. The weight matrix is formed as the weighted sum of the output pattern and the difference between the input and output patterns. Our ACAM can handle inputs with both erosive and dilative noises better than existing models.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

Associative memories (AMs) store pattern associations and can retrieve the desired output pattern upon presentation of a possibly noisy or incomplete version of an input pattern. They are categorized as auto-associative memories and hetero-associative memories. A memory is said to be auto-associative if the output is the same as the input. On the other hand, the memory is considered hetero-associative if the output is different from the input. The Hopfield network [1] is probably the most widely known auto-associative memory at present, with many variations and generalizations. Among different kinds of associative memories, fuzzy associative memories (FAMs) belong to the class of fuzzy neural networks, which combine fuzzy concepts and fuzzy inference rules with the architecture and learning of neural networks. Input patterns, output patterns, and/or connection weights of FAMs are fuzzy-valued. The ability to work with uncertain data is the reason why FAMs have been used in many fields such as pattern recognition, control, estimation, inference, and prediction. For example, Sussner and Valle used implicative FAMs for face recognition [2]. Kim et al. predicted the Korea stock price index [3]. Shahir and Chen inspected the quality of soaps on-line [4]. Wang and Zhang detected pedestrian abnormal behaviour [5]. Sussner and Valle predicted the Furnas reservoir from 1991 to 1998 [2].

Kosko's FAM [6] in the early 1990s initiated research on FAMs. For each pair of input X and output Y, Kosko's FAM stores their association as the fuzzy rule "If x is X then y is Y" in a separate weight matrix called a FAM matrix. Thus, Kosko's overall fuzzy system comprises several FAM matrices, and the main disadvantage of Kosko's FAM is its very low storage capacity. In order to overcome this limitation, different improved FAM versions have been developed that store multiple pattern associations in a single FAM matrix [7–10]. In Chung and Lee's model [7], which generalizes Kosko's, FAM matrices are combined with a max-t composition into a single matrix. It is shown that all outputs can be recalled perfectly with the single combined matrix if the input patterns satisfy certain orthogonality conditions. The fuzzy implication operator is used to represent associations by Junbo et al. [9], which improves the learning algorithm of Kosko's max–min FAM model. By adding a threshold at the recall phase, Liu modified Junbo's FAM in order to improve the storage capacity [10]. Recently, Sussner and Valle established implicative fuzzy associative memories (IFAMs) [2] with implicative fuzzy learning. These can be considered a class of associative memories that grew out of morphological associative memories [11], because each node performs a morphological operation. Sussner and Valle's models work quite well in auto-associative mode with perfect input patterns, similar to other improvements of Kosko's model. However, these models suffer much from the presence of both erosive and dilative noises.

In binary mode, many associative memory models show their noise tolerance capability on distorted inputs based on their own mathematical characteristics [10,2]. For example, models



using the maximum operation when forming the weight matrix are excellent in the presence of erosive noise (1 to 0), while models using the minimum operation are ideal for dilative noise (0 to 1) [11]. On the other hand, models with the maximum operation cannot recover patterns with dilative noise well, and models with the minimum operation cannot recover patterns with erosive noise well. In grey-scale or fuzzy-valued mode, even though existing models can recover the main parts of the output pattern, noisy parts of the input pattern seriously affect the recalled output pattern. Thresholding is probably the most effective mechanism so far to deal with this problem. However, the incorrectly recalled parts in the output are normally fixed with some pre-calculated value based on the training input and output pairs. Clearly, there are two main ways to increase the noise tolerance capability of associative memory models: recovering from the noise and reducing the effect of the noise. Existing models concentrate on the first way. The work in this paper is motivated by the second way, that is, how to reduce the effect of noisy input patterns on the recalled output patterns. We base our work on the implicative fuzzy associative memories [2], which also belong to the class of morphological associative memories [11]. Instead of using only rules to store the associations of the input and output patterns, we also add a certain portion of the output patterns themselves to the weight matrix. Depending on the ratio of association and content of output patterns in the weight matrix, the effect of noise in the distorted input patterns on the recalled output patterns can be reduced. Obviously, incorporating the content of the output patterns influences the output selection in the recall phase. However, the advantages from the trade-off are worth considering. We have conducted experiments in recalling images from the number dataset and the Corel dataset with both erosive and dilative noises to confirm the effectiveness of our model when dealing with noise.

The rest of the paper is organized as follows. Section 2 presents background on fuzzy associative memory models, together with the motivational analysis for our work. In Section 3, we describe our model in detail. Section 4 presents analysis of the properties of the proposed model and experiments to illustrate these properties.

2. Background and motivation

2.1. Fuzzy associative memory models

The objective of associative memories is to recall a predefined output pattern given the presentation of a predefined input pattern. Mathematically, an associative memory can be defined as a mapping $G$ such that, for a finite number of pairs $\{(A^\xi, B^\xi) : \xi = 1, \ldots, k\}$:

$$G(A^\xi) = B^\xi, \quad \xi = 1, \ldots, k. \tag{1}$$

The mapping $G$ is considered to have the ability of noise tolerance if $G(\tilde{A}^\xi)$ is equal to $B^\xi$ for a noisy or incomplete version $\tilde{A}^\xi$ of $A^\xi$. The memory is called auto-associative if the pattern pairs are of the form $\{(A^\xi, A^\xi) : \xi = 1, \ldots, k\}$. The memory is hetero-associative if the output $B^\xi$ is different from the input $A^\xi$. The process of determining $G$ is called the learning phase, and the process of recalling $B^\xi$ using $G$ upon the presentation of $A^\xi$ is called the recall phase. When $G$ is described by a fuzzy neural network, and the patterns $A^\xi$ and $B^\xi$ are fuzzy sets for every $\xi = 1, \ldots, k$, the memory is called a fuzzy associative memory (FAM).

The very early FAM models were developed by Kosko in the early 1990s [6]; they are usually referred to as the max–min FAM and the max-product FAM. Both are single-layer feed-forward artificial neural networks. If $W \in [0,1]^{m \times n}$ is the synaptic weight matrix of a max–min FAM and $A \in [0,1]^n$ is the input pattern, then the output pattern $B \in [0,1]^m$ is computed as follows:

$$B = W \circ_M A, \tag{2}$$

or

$$B_j = \bigvee_{i=1}^{n} (W_{ij} \wedge A_i) \quad (j = 1, \ldots, m). \tag{3}$$

Similarly, the max-product FAM produces the output

$$B = W \circ_P A, \tag{4}$$

or

$$B_j = \bigvee_{i=1}^{n} (W_{ij} \cdot A_i) \quad (j = 1, \ldots, m). \tag{5}$$

For a set of pattern pairs $\{(A^\xi, B^\xi) : \xi = 1, \ldots, k\}$, the learning rule used to store the pairs in a max–min FAM, called correlation-minimum encoding, is given by the following equation:

$$W = B \circ_M A^T, \tag{6}$$

or

$$W_{ij} = \bigvee_{\xi=1}^{k} (A_i^\xi \wedge B_j^\xi). \tag{7}$$

Similarly, the learning rule for the max-product FAM, called correlation-product encoding, is given by $W = B \circ_P A^T$.

Chung and Lee generalized Kosko's model by substituting the max–min or the max-product composition with a more general max-t composition [7]. The resulting model, called the generalized FAM (GFAM), can be described in terms of the following relationship between an input pattern $A$ and the corresponding output pattern $B$:

$$B = W \circ_T A \quad \text{where} \quad W = B \circ_T A^T, \tag{8}$$

and the symbol $\circ_T$ denotes the max-T product, $T$ being a t-norm. This learning rule is referred to as correlation-t encoding.

For these learning rules to guarantee the perfect recall of all stored patterns, the patterns $A^1, \ldots, A^k$ must constitute an orthonormal set. Fuzzy patterns $A, B \in [0,1]^n$ are said to be max-t orthogonal if and only if $A^T \circ_T B = 0$, i.e., $T(A_j, B_j) = 0$ for all $j = 1, \ldots, n$. Consequently, $A^1, \ldots, A^k$ is a max-t orthonormal set if and only if the patterns $A^\xi$ and $A^\eta$ are max-t orthogonal for every $\xi \neq \eta$ and $A^\xi$ is a normal fuzzy set for every $\xi = 1, \ldots, k$. Some works that focus on the stability of FAMs and on the conditions for perfectly recalling stored patterns are [11–14].

Based on Kosko's max–min FAM, Junbo et al. [9] introduced a new learning rule for FAM which allows for the storage of multiple input pattern pairs. The synaptic weight matrix is computed as follows:

$$W = B \circledast_M A^T, \tag{9}$$

where the symbol $\circledast_M$ denotes the min-$I_M$ product.

Liu proposed a model which is also known as the max–min FAM with threshold [10]. The recall phase is described by the following equation:

$$B = (W \circ_M (A \vee c)) \vee \theta. \tag{10}$$

The weight matrix $W \in [0,1]^{m \times n}$ is given in terms of implicative learning, and the thresholds $\theta \in [0,1]^m$ and $c = [c_1, \ldots, c_n]^T \in [0,1]^n$ are of the following form:

$$\theta = \bigwedge_{\xi=1}^{k} B^\xi, \tag{11}$$


$$c_j = \begin{cases} \bigwedge_{i \in D_j} \bigwedge_{\xi \in LE_{ij}} B_i^\xi & \text{if } D_j \neq \emptyset, \\ 0 & \text{if } D_j = \emptyset, \end{cases} \tag{12}$$

where $LE_{ij} = \{\xi : A_j^\xi \le B_i^\xi\}$ and $D_j = \{i : LE_{ij} \neq \emptyset\}$.

With implicative fuzzy learning, Sussner and Valle established implicative fuzzy associative memories (IFAMs) [2]. IFAMs are quite similar to the GFAM model of Chung and Lee in that the model is given by a single-layer feed-forward artificial neural network with max-T neurons, where $T$ is a continuous t-norm. Different from GFAM, the IFAM model includes a bias term $\theta \in [0,1]^m$ and uses the R-implicative fuzzy learning rule. Given an input pattern $A \in [0,1]^n$, the IFAM model produces the following output pattern $B \in [0,1]^m$:

$$B = (W \circ_T A) \vee \theta, \tag{13}$$

where

$$W_{ij} = \bigwedge_{\xi=1}^{k} I_T(A_i^\xi, B_j^\xi) \tag{14}$$

and

$$\theta = \bigwedge_{\xi=1}^{k} B^\xi. \tag{15}$$

The fuzzy implication $I_T$ is determined by the following equation:

$$I_T(x, y) = \bigvee \{z \in [0,1] : T(x, z) \le y\} \quad \forall x, y \in [0,1]. \tag{16}$$
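The Lukasiewicz instance of Eqs. (13)–(16) is short enough to write out. The sketch below is our own hedged illustration: for the Lukasiewicz t-norm $T(x,z) = 0 \vee (x+z-1)$, the R-implication of Eq. (16) reduces to the closed form $I_T(x,y) = 1 \wedge (y-x+1)$ used later in Section 2.2.

```python
import numpy as np

def i_lukasiewicz(x, y):
    # R-implication of the Lukasiewicz t-norm, Eq. (16) in closed form
    return np.minimum(1.0, y - x + 1.0)

def ifam_learn(A, B):
    """Implicative fuzzy learning: W_ij = min_xi I_T(A_i^xi, B_j^xi), Eq. (14),
    and bias theta = min_xi B^xi, Eq. (15).  A: (k, n), B: (k, m)."""
    W = np.min(i_lukasiewicz(A[:, :, None], B[:, None, :]), axis=0)
    theta = np.min(B, axis=0)
    return W, theta

def ifam_recall(W, theta, A):
    """IFAM recall, Eq. (13): B = (W o_T A) v theta, with T = Lukasiewicz."""
    conj = np.maximum(0.0, W + A[:, None] - 1.0)   # T(W_ij, A_i) per component
    return np.maximum(np.max(conj, axis=0), theta)
```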

Xiao et al. [15] designed a model that applies the ratio of input to output patterns for the associations:

$$W_{ij} = \bigwedge_{\xi=1}^{k} \frac{\min(A_i^\xi, B_j^\xi)}{\max(A_i^\xi, B_j^\xi)}. \tag{17}$$

Wang and Lu [16] proposed a set of FAMs that use a division operator to describe the associations and erosions/dilations to generalize the associations.

2.2. Motivation

Our study is motivated by the question of how to reduce the effect of noise in distorted patterns on the recalled output patterns. We base our work on IFAMs [2], particularly with the Lukasiewicz fuzzy conjunction, disjunction and implication [17]. We now present how to incorporate both association and output content into the weight matrix based on IFAMs.

Recall that the Lukasiewicz conjunction is defined as

$$C_L(x, y) = 0 \vee (x + y - 1), \tag{18}$$

the Lukasiewicz disjunction is defined as

$$D_L(x, y) = 1 \wedge (x + y), \tag{19}$$

and the Lukasiewicz implication is defined as

$$I_L(x, y) = 1 \wedge (y - x + 1). \tag{20}$$

If $A \in [0,1]^n$ is the input pattern and $B \in [0,1]^m$ is the output pattern, the learning rule to store the pair in a Lukasiewicz IFAM using the implication is given by the following equation:

$$W_{ij} = I_L(A_i, B_j) \quad (i = 1, \ldots, n; \; j = 1, \ldots, m). \tag{21}$$

The recall phase using the conjunction is described by the following equation:

$$Y_j = \bigvee_{i=1}^{n} C_L(W_{ij}, A_i) = \bigvee_{i=1}^{n} \big(0 \vee (W_{ij} + A_i - 1)\big) \quad (j = 1, \ldots, m). \tag{22}$$
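As a single-pair sanity check of Eqs. (18)–(22) (an illustrative numeric example of ours, not from the paper), the stored output is recovered exactly here because $\bigvee_i A_i \ge B_j$ for both components:

```python
import numpy as np

def C_L(x, y): return np.maximum(0.0, x + y - 1.0)   # Eq. (18)
def D_L(x, y): return np.minimum(1.0, x + y)         # Eq. (19)
def I_L(x, y): return np.minimum(1.0, y - x + 1.0)   # Eq. (20)

A = np.array([0.9, 0.4, 0.7])                # input pattern, n = 3
B = np.array([0.6, 0.8])                     # output pattern, m = 2
W = I_L(A[:, None], B[None, :])              # Eq. (21): W_ij = I_L(A_i, B_j)
Y = np.max(C_L(W, A[:, None]), axis=0)       # Eq. (22): Y_j = max_i C_L(W_ij, A_i)
print(Y)                                     # [0.6 0.8] -- equal to B
```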

In order to store both association and output content in the weight matrix, we modify the learning rule of the Lukasiewicz IFAM using both the disjunction and the implication as follows:

$$\begin{aligned} W_{ij} &= D_L(B_j, I_L(A_i, B_j)) \quad (i = 1, \ldots, n; \; j = 1, \ldots, m) \\ &= 1 \wedge \big(B_j + 1 \wedge (B_j - A_i + 1)\big) \\ &= 1 \wedge \big(1 \wedge (B_j - A_i + 1) + B_j\big) \\ &= 1 \wedge \big((1 + B_j) \wedge (2B_j - A_i + 1)\big) \\ &= 1 \wedge (1 + B_j) \wedge (2B_j - A_i + 1) \\ &= 1 \wedge (2B_j - A_i + 1). \end{aligned} \tag{23}$$

In the recall phase, the conjunction is used with a multiplication factor of $\frac{1}{2}$:

$$\begin{aligned} Y_j &= \frac{1}{2} \bigvee_{i=1}^{n} C_L(W_{ij}, A_i) \quad (j = 1, \ldots, m) \\ &= \frac{1}{2} \bigvee_{i=1}^{n} \big(0 \vee (W_{ij} + A_i - 1)\big) \\ &= \frac{1}{2} \bigvee_{i=1}^{n} \Big(0 \vee \big(1 \wedge (2B_j - A_i + 1) + A_i - 1\big)\Big) \\ &= \frac{1}{2} \bigvee_{i=1}^{n} \big(0 \vee (A_i \wedge 2B_j)\big) \\ &= \frac{1}{2} \bigvee_{i=1}^{n} (A_i \wedge 2B_j) \\ &= B_j \wedge \Big(\frac{1}{2} \bigvee_{i=1}^{n} A_i\Big). \end{aligned} \tag{24}$$

We name our associative memory the association-content associative memory (ACAM). It is easy to see that the condition for $W$ to satisfy the equation $WA = B$ is that

$$B_j \le \frac{1}{2} \bigvee_{i=1}^{n} A_i. \tag{25}$$

If we relax the condition $W_{ij} \in [0,1]$, then

$$W_{ij} = 2B_j - A_i + 1, \tag{26}$$

and as a result

$$Y_j = \frac{1}{2} \bigvee_{i=1}^{n} \Big(0 \vee \big((2B_j - A_i + 1) + A_i - 1\big)\Big) = \frac{1}{2} \bigvee_{i=1}^{n} (0 \vee 2B_j) = B_j, \tag{27}$$

so the equation $WA = B$ is satisfied naturally.

For a set of pattern pairs $\{(A^\xi, B^\xi) : \xi = 1, \ldots, k\}$, the weight matrix is constructed with an erosion operator as follows:

$$W_{ij} = \bigwedge_{\xi=1}^{k} (2B_j^\xi - A_i^\xi + 1). \tag{28}$$
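A minimal sketch of the ACAM rules as we read Eqs. (24), (26) and (28), using the relaxed (unclipped) weights of Eq. (26); the names and array layout are our own assumptions.

```python
import numpy as np

def acam_learn(A, B):
    """ACAM weights, Eq. (28): W_ij = min_xi (2*B_j^xi - A_i^xi + 1),
    in the relaxed form of Eq. (26) (W_ij not clipped to [0, 1]).
    A: (k, n), B: (k, m); returns W with shape (n, m)."""
    return np.min(2.0 * B[:, None, :] - A[:, :, None] + 1.0, axis=0)

def acam_recall(W, A):
    """ACAM recall, Eq. (24): Y_j = (1/2) * max_i (0 v (W_ij + A_i - 1))."""
    return 0.5 * np.max(np.maximum(0.0, W + A[:, None] - 1.0), axis=0)
```

For a single stored pair, $W_{ij} + A_i - 1 = 2B_j$, so the recall returns exactly $B_j$, matching Eq. (27).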

3. Generalized association-content associative memory for grey-scale patterns

It is clear that we can remove the $+1$ in the formula of $W$ and the $-1$ in the formula of $Y$ without any effect on our model. It can also be seen that our association-content learning rule is similar to that of morphological associative memories, except for the multiplication factor of 2 applied to $B_j$. This factor actually represents the portion of output content added to the weight matrix besides the association represented by $B_j - A_i$. More generally, the weight matrix can be constructed as follows:

$$W_{ij} = \bigwedge_{\xi=1}^{k} \big((1-\eta)B_j^\xi + \eta(B_j^\xi - A_i^\xi)\big) = \bigwedge_{\xi=1}^{k} (B_j^\xi - \eta A_i^\xi), \tag{29}$$


where $\eta$ is a factor that controls the ratio between the content and the association to be stored. With the $\eta$ factor, when the input is noisy, the noise has less effect on the recalled patterns.

For an input $X$, the output $Y$ is recalled from $W$ with the equation:

$$Y_j = \bigvee_{i=1}^{n} (\eta X_i + W_{ij}). \tag{30}$$

In order to maintain the ability to store an unlimited number of patterns in the auto-associative case, we keep $W_{ii}$ the same as in MAMs [11]:

$$W_{ij} = \begin{cases} \bigwedge_{\xi=1}^{k} (B_j^\xi - A_i^\xi) & \text{if } i = j, \\ \bigwedge_{\xi=1}^{k} (B_j^\xi - \eta A_i^\xi) & \text{if } i \neq j. \end{cases} \tag{31}$$

The equation for recalling is then modified as follows:

$$Y_j = \Big(\bigvee_{i \neq j} (\eta X_i + W_{ij})\Big) \vee (X_j + W_{jj}). \tag{32}$$
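The generalized rules of Eqs. (31) and (32) translate into the following sketch (our reading of the reconstructed equations; in particular, we assume the special diagonal treatment applies only when the weight matrix is square, i.e., in the auto-associative case):

```python
import numpy as np

def gacam_learn(A, B, eta):
    """Eq. (31): W_ij = min_xi (B_j^xi - eta*A_i^xi) for i != j,
    and W_jj = min_xi (B_j^xi - A_j^xi) on the diagonal.
    A: (k, n), B: (k, m); returns W with shape (n, m)."""
    W = np.min(B[:, None, :] - eta * A[:, :, None], axis=0)
    n, m = W.shape
    if n == m:                                   # MAM-style diagonal of Eq. (31)
        W[np.arange(n), np.arange(n)] = np.min(B - A, axis=0)
    return W

def gacam_recall(W, X, eta):
    """Eq. (32): Y_j = (max_{i != j} (eta*X_i + W_ij)) v (X_j + W_jj)."""
    n, m = W.shape
    S = eta * X[:, None] + W                     # off-diagonal terms
    if n == m:
        S[np.arange(n), np.arange(n)] = X + np.diag(W)  # diagonal uses full X_j
    return np.max(S, axis=0)
```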

Theorem 1. $W$ in Eq. (31) recalls perfectly for all pairs $(A^\xi, B^\xi)$ if and only if, for each $\xi = 1, \ldots, k$, each column of the matrix $W^\xi - W$ contains a zero entry, where $W^\xi$ denotes the weight matrix of Eq. (31) constructed from the single pair $(A^\xi, B^\xi)$.

Proof. $W$ recalls perfectly for all pairs $(A^\xi, B^\xi)$

$$\Leftrightarrow \Big(\bigvee_{i \neq j} (\eta A_i^\xi + W_{ij})\Big) \vee (A_j^\xi + W_{jj}) = B_j^\xi \quad \forall j = 1, \ldots, m$$

$$\Leftrightarrow B_j^\xi - \Big(\Big(\bigvee_{i \neq j} (\eta A_i^\xi + W_{ij})\Big) \vee (A_j^\xi + W_{jj})\Big) = 0 \quad \forall \xi = 1, \ldots, k \text{ and } \forall j = 1, \ldots, m$$

$$\Leftrightarrow B_j^\xi + \Big(\Big(\bigwedge_{i \neq j} (-\eta A_i^\xi - W_{ij})\Big) \wedge (-A_j^\xi - W_{jj})\Big) = 0 \quad \forall \xi, j$$

$$\Leftrightarrow \Big(\bigwedge_{i \neq j} (B_j^\xi - \eta A_i^\xi - W_{ij})\Big) \wedge (B_j^\xi - A_j^\xi - W_{jj}) = 0 \quad \forall \xi, j$$

$$\Leftrightarrow \bigwedge_{i=1}^{n} (W_{ij}^\xi - W_{ij}) = 0 \quad \forall \xi = 1, \ldots, k \text{ and } \forall j = 1, \ldots, m. \tag{33}$$

This last set of equations is true if and only if, for each $\xi = 1, \ldots, k$ and each integer $j = 1, \ldots, m$, the $j$-th column of $W^\xi - W$ contains at least one zero entry. □

4. Properties of association-content associative memory

Similar to MAMs [11] and IFAMs [2], our ACAM converges in one step. Moreover, ACAM has unlimited storage capacity, which is given by the next theorem.

Theorem 2. $W$ in Eq. (31) recalls perfectly for all pairs $(A^\xi, A^\xi)$ ($\xi = 1, \ldots, k$).

Proof. We have $W_{jj}^\xi = A_j^\xi - A_j^\xi = 0$ for each $j = 1, \ldots, m$ and all $\xi = 1, \ldots, k$. Hence, for each $\xi = 1, \ldots, k$, each column of $W^\xi - W$ contains a zero entry. According to Theorem 1, $W$ recalls perfectly for all pairs $(A^\xi, A^\xi)$. □
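A quick numeric check of Theorem 2, reusing the hypothetical `gacam_learn`/`gacam_recall` sketch from Section 3: randomly generated fuzzy patterns stored auto-associatively are recalled without error.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((5, 25))                     # k = 5 patterns of length n = 25
W = gacam_learn(A, A, eta=0.5)              # auto-associative storage, Eq. (31)
perfect = all(np.allclose(gacam_recall(W, a, eta=0.5), a) for a in A)
print("perfect recall:", perfect)           # expected output: True
```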

Similar to MAMs and IFAMs, our ACAM can handle erosive noise effectively with the dilation operation in the recalling equation. However, for MAMs and IFAMs, the noise tolerance capability is good only when the number of stored patterns is much smaller than the length of the input vector, which determines the size of the weight matrix $W$. This means that MAMs and IFAMs can correct errors only when a large part of the storage space is wasted, which reduces their practical usability. Our ACAM can compensate for the errors caused by distorted inputs better than MAMs and IFAMs.

To compare the effectiveness of our ACAM in handling noise against other well-known associative memories, we have conducted several experiments. The five models we compare with are those proposed by Junbo et al. [18], Kosko [6], Xiao et al. [15], Sussner and Valle (IFAMs) [2], and Ritter et al. (MAMs) [11].

4.1. Experiments with the number dataset

This dataset consists of five 5 × 5 images of the numbers 0 to 4. Using the standard row-scan method, each pattern image is converted into a vector of size 25. With this dataset, the size of the weight matrix $W$ is 25 × 25, which is used to store 5 patterns of size 25. We perform experiments on distorted input images in both auto-associative and hetero-associative modes. The distorted images contain both erosive and dilative noises (salt and pepper noise). All models are implemented with the dilation operator in the recalling function where applicable. The distorted images can be seen in Fig. 1. The criterion used to evaluate the results is the normalized error, which is calculated as follows:

$$E(\tilde{B}, B) = \frac{\|\tilde{B} - B\|}{\|B\|}, \tag{34}$$

where $B$ is the expected output pattern, $\tilde{B}$ is the recovered pattern, and $\|\cdot\|$ is the L2 norm of a vector.
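The evaluation pipeline is easy to reproduce; below is a hedged sketch of the normalized error of Eq. (34) together with a salt-and-pepper distortion routine matching the description above (the exact noise-injection details are our assumption).

```python
import numpy as np

def normalized_error(B_rec, B):
    """Eq. (34): E = ||B_rec - B|| / ||B||, with the L2 vector norm."""
    return np.linalg.norm(B_rec - B) / np.linalg.norm(B)

def salt_and_pepper(X, ratio, rng):
    """Distort a fuzzy-valued pattern: a `ratio` fraction of the entries is
    set to 1 (dilative, 'salt') or 0 (erosive, 'pepper') with equal chance."""
    Y = X.copy()
    idx = rng.choice(X.size, size=int(ratio * X.size), replace=False)
    Y.flat[idx] = rng.integers(0, 2, size=idx.size).astype(float)
    return Y
```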

Table 1 shows the total error of the different models when recalling distorted input images in auto-associative mode. As can be seen from the table, our ACAM produces the least total error, while Junbo et al.'s model, Xiao et al.'s model, Sussner and Valle's IFAM and Ritter et al.'s MAM produce a similar amount of total error.

Fig. 1. Auto-associative memory experiments with the number dataset: the first row contains the original training images; the second row contains the distorted input images; the third, fourth, fifth and sixth rows contain output images from Junbo et al.'s model, Xiao et al.'s model, Sussner and Valle's IFAM, and our ACAM, respectively.

Table 1. Auto-associative memory experiment results on the number dataset with Junbo et al.'s model, Kosko's model, Xiao et al.'s model, Sussner and Valle's IFAM, Ritter et al.'s MAM and our ACAM.


Kosko's model produces the most total error. This agrees with what we mentioned before: Kosko's model cannot even produce perfect results for perfect inputs in many cases. The reason the other models produce larger total error than our ACAM is that they cannot work well with both erosive and dilative noises, while our ACAM has a mechanism to reduce the effect of noise. This can be seen more clearly in Fig. 1. In hetero-associative mode, the pairs of images to remember are images of 0 and 1, 1 and 2, etc. Table 2 shows the total error of the different models in this case. From the table we can see that our ACAM also produces the least total error. It should be noted that when there is no noise or only erosive noise, our model performs slightly worse than IFAMs and MAMs because of the mechanism to reduce the effect of noise. In the presence of only dilative noise, Xiao et al.'s model also performs better than our ACAM. However, this trade-off is worth considering because, in practice, perfect inputs or inputs distorted by erosive noise only are not common.

4.2. Experiments with the Corel dataset

This dataset includes images selected from the Corel database (Fig. 2). The test patterns are generated from the input patterns by degrading them with salt and pepper noise, both at 25 percent of the number of pixels. Fig. 3 shows some generated test patterns.

In auto-association mode, 10 images are used. The result in auto-association mode, presented in Table 3, shows our ACAM's effectiveness in handling salt and pepper noise. Fig. 4 shows samples in which our method visually improves the input pattern more than the others do. Hetero-association mode is tested with 10 pairs of images, in which the input image pattern is different from the output image pattern. As in the previous test, input patterns are degraded by salt and pepper noise. Table 4 also shows how well our ACAM performs compared to the other models in the presence of both erosive and dilative noise. Fig. 5 visually compares the results of our model to the others.
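An end-to-end run in the spirit of these experiments (a sketch of ours reusing the hypothetical helpers defined earlier; the Corel images themselves are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((10, 64))                     # stand-in for 10 vectorized images
W = gacam_learn(A, A, eta=0.5)               # auto-associative ACAM storage

total_error = 0.0
for a in A:
    noisy = salt_and_pepper(a, ratio=0.25, rng=rng)   # 25% salt and pepper
    recalled = gacam_recall(W, noisy, eta=0.5)
    total_error += normalized_error(recalled, a)
print("total normalized error:", total_error)
```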

Table 2. Hetero-associative memory experiment results on the number dataset with Junbo et al.'s model, Kosko's model, Xiao et al.'s model, Sussner and Valle's IFAM, Ritter et al.'s MAM and our ACAM.

Fig. 2. Some images from the dataset used for the experiments.

Fig. 3. Test patterns generated from input patterns with salt and pepper noise.

Table 3. Auto-associative memory experiment results on the Corel dataset with Junbo et al.'s model, Kosko's model, Xiao et al.'s model, Sussner and Valle's IFAM, Ritter et al.'s MAM and our ACAM.

Fig. 4. Samples from the Corel dataset for which the proposed model visually recovers patterns better than the other methods in auto-associative mode. From left to right are patterns with salt and pepper noise recovered by Junbo et al.'s model [18], Kosko's model [6], Xiao et al.'s model [15], Sussner and Valle's IFAM [2], our ACAM model, and the expected result.

Table 4. Hetero-associative memory experiment results on the Corel dataset with Junbo et al.'s model, Kosko's model, Xiao et al.'s model, Sussner and Valle's IFAM, Ritter et al.'s MAM and our ACAM.


5. Conclusion

In this paper, we proposed a new FAM that captures both content and association of patterns. While still possessing the vital advantages of existing FAMs, our model has better noise tolerance when both erosive and dilative noises are present. This is achieved by accepting a reduction of performance in special cases (no noise, or only erosive or only dilative noise). We have conducted experiments on different datasets to demonstrate the efficiency of the proposed FAM. The obtained results suggest that the improvement from capturing both pattern content and associations can be effective.

It should be noted that this paper is only the first step in showing a way to reduce the effect of noise in FAMs. There are many directions in which the work can be extended. First of all, a mathematical analysis of how the effect of noise is reduced is an interesting problem to solve. Secondly, besides combining the content of the output with the association based on IFAMs and MAMs, applying this approach to other existing FAMs would be worth exploring. Finally, it is worth comparing with and integrating into other associative memories besides FAMs, such as associative memories based on discrete-time recurrent neural networks [19–21].

It is noted that the paper is only thefirst step to show a way of

reducing the effect of noise in FAMs There are many ways in

which the paper can be extended First of all, mathematical

analysis of how the effect of noise is reduced is an interesting

problem to solve Secondly, besides combining content of output

with association based on IFAMs and MAMs, using this approach

with other existing FAMs would be a nice try Finally, it is worth

to compare to and integrate with other associative memories

besides FAMs, such as associative memories based on

discrete-time recurrent neural networks[19–21]

Acknowledgements

This work is supported by Nafosted Research Project no. 102.02-2011.13.

References

[1] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. U.S.A. 79 (1982) 2554–2558.
[2] P. Sussner, M.E. Valle, Implicative fuzzy associative memories, IEEE Trans. Fuzzy Syst. 14 (6) (2006) 793–807.
[3] M.-J. Kim, I. Han, K.C. Lee, Fuzzy associative memory-driven approach to knowledge integration, in: 1999 IEEE International Fuzzy Systems Conference Proceedings, 1999, pp. 298–303.
[4] S. Shahir, X. Chen, Adaptive fuzzy associative memory for on-line quality control, in: Proceedings of the 35th South-Eastern Symposium on System Theory, 2003, pp. 357–361.
[5] Z. Wang, J. Zhang, Detecting pedestrian abnormal behavior based on fuzzy associative memory, in: Fourth International Conference on Natural Computation, 2008, pp. 143–147. http://dx.doi.org/10.1109/ICNC.2008.396.
[6] B. Kosko, Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence, Prentice Hall, Englewood Cliffs, NJ, 1992.
[7] F. Chung, T. Lee, On fuzzy associative memories with multiple-rule storage capacity, IEEE Trans. Fuzzy Syst. 4 (3) (1996) 375–384.
[8] A. Blanco, M. Delgado, I. Requena, Identification of fuzzy relational equations by fuzzy neural networks, Fuzzy Sets Syst. 71 (1995) 215–226.
[9] F. Junbo, J. Fan, S. Yan, A learning rule for FAM, in: 1994 IEEE International Conference on Neural Networks, 1994, pp. 4273–4277.
[10] P. Liu, The fuzzy associative memory of max–min fuzzy neural networks with threshold, Fuzzy Sets Syst. 107 (1999) 147–157.
[11] G. Ritter, P. Sussner, J.D. de Leon, Morphological associative memories, IEEE Trans. Neural Netw. 9 (1998) 281–293.
[12] Z. Zhang, W. Zhou, D. Yang, Global exponential stability of fuzzy logical BAM neural networks with Markovian jumping parameters, in: 2011 Seventh International Conference on Natural Computation, 2011, pp. 411–415.
[13] S. Zeng, W. Xu, J. Yang, Research on properties of max-product fuzzy associative memory networks, in: Eighth International Conference on Intelligent Systems Design and Applications, 2008, pp. 438–443.
[14] Q. Cheng, Z.-T. Fan, The stability problem for fuzzy bidirectional associative memories, Fuzzy Sets Syst. 132 (1) (2002) 83–90.
[15] P. Xiao, F. Yang, Y. Yu, Max–min encoding learning algorithm for fuzzy max-multiplication associative memory networks, in: 1997 IEEE International Conference on Systems, Man, and Cybernetics, 1997, pp. 3674–3679.
[16] S.T. Wang, H.J. Lu, On new fuzzy morphological associative memories, IEEE Trans. Fuzzy Syst. 12 (3) (2004) 316–323.
[17] W. Pedrycz, F. Gomide, An Introduction to Fuzzy Sets: Analysis and Design, Complex Adaptive Systems, MIT Press, 1998.
[18] F. Junbo, J. Fan, S. Yan, An encoding rule of FAM, in: Singapore ICCS/ISITA '92, 1992, pp. 1415–1418.
[19] Z. Zeng, J. Wang, Analysis and design of associative memories based on recurrent neural networks with linear saturation activation functions and time-varying delays, Neural Comput. 19 (8) (2007) 2149–2182.
[20] Z. Zeng, J. Wang, Design and analysis of high-capacity associative memories based on a class of discrete-time recurrent neural networks, IEEE Trans. Syst. Man Cybern. Part B: Cybern. 38 (6) (2008) 1525–1536.
[21] Z. Zeng, J. Wang, Associative memories based on continuous-time cellular neural networks designed using space-invariant cloning templates, Neural Netw. 22 (2009) 651–657.

The Duy Bui received his Bachelor degree in Computer Science from the University of Wollongong, Australia, in 2000 and his Ph.D. in Computer Science from the University of Twente, the Netherlands, in 2004. He is now working at the Human–Machine Interaction Laboratory, University of Engineering and Technology, Vietnam National University, Hanoi. His research includes machine learning, human–computer interaction, computer graphics and image processing.

Thi Hoa Nong received her Master of Science in Information Technology from Thainguyen University in 2006. She is pursuing a Ph.D. degree in Computer Science at the University of Engineering and Technology, Vietnam National University, Hanoi, Vietnam. She is now a lecturer at Thainguyen University, Vietnam. Her current research interests include artificial intelligence and machine learning.

Trung Kien Dang received his M.Sc. in Telematics from the University of Twente, the Netherlands, in 2003 and his Ph.D. in Computer Science from the University of Amsterdam, the Netherlands, in 2013. His research includes machine learning, 3D model reconstruction and video log analysis.

Fig. 5. From left to right are patterns from the Corel dataset recovered from salt and pepper noise in hetero-associative mode by Junbo et al.'s model [18], Kosko's model [6], Xiao et al.'s model [15], Sussner and Valle's IFAM [2], our ACAM model, and the expected result.
