A new effective learning rule of Fuzzy ART
Nong Thi Hoa, The Duy Bui
Human Machine Interaction Laboratory, University of Engineering and Technology, Vietnam National University, Hanoi
Abstract—Unsupervised neural networks are known for their ability to cluster inputs into categories based on the similarity among inputs. Fuzzy Adaptive Resonance Theory (Fuzzy ART) is a kind of unsupervised neural network that learns training data until a given condition is satisfied. In the learning process, the weights of categories are changed to adapt to noisy inputs. In other words, the learning process decides the quality of clustering. Thus, updating the weights of categories is an important step of the learning process. We propose a new effective learning rule for Fuzzy ART to improve clustering. Our learning rule modifies the weights of categories based on the ratio of the input to the weight of the chosen category and on a learning rate. The learning rate represents the speed of increasing/decreasing the weight of the chosen category. It is changed by the following rule: the larger the number of inputs, the smaller its value. We have conducted experiments on ten typical datasets to prove the effectiveness of our novel model. Results from the experiments show that our novel model clusters better than existing models, including Original Fuzzy ART, Complement Fuzzy ART, the K-means algorithm, and Euclidean ART.
Index Terms—Fuzzy Adaptive Resonance Theory; Clustering; Learning rule.
I. INTRODUCTION
Clustering is an important tool in data mining and knowledge discovery because it uncovers hidden similarity and key concepts by grouping similar items together. Moreover, clustering summarizes a large amount of data into a small number of groups and is therefore useful for comprehending large datasets. Fuzzy ART is an artificial neural network that clusters data into categories by using the AND operator of fuzzy logic. The most important advantage of Fuzzy ART is that it learns training data until given conditions are reached; that is, the weights of categories are updated until they completely adapt to the training data. As a result, the learning process decides the quality of clustering. Thus, a learning rule that allows Fuzzy ART to learn various types of datasets and to cluster data better is always in demand.
Studies of the learning process of Fuzzy ART models usually focus on designing new effective learning rules. Carpenter's model maximized code generalization by training the system several times with different orderings of the input set [1]. Simpson incorporated new data and new clusters without retraining [2]. Tan presented the Adaptive Resonance Associative Map with the ability of hetero-associative learning [3]. Lin addressed on-line learning algorithms for realizing a controller [4]. Isawa proposed an additional step, Group Learning, to represent connections between similar categories [5]. Yousuf proposed an algorithm that allows updating multiple matching clusters [6]. Moreover, Fuzzy ART has been applied in many applications such as document clustering [7] [8], classification of multivariate chemical data [9], and analysis of gene expression [10]. Therefore, developing a new effective Fuzzy ART is essential for clustering applications.
In this paper, we propose a new effective learning rule for Fuzzy ART that learns many types of datasets and clusters data better than previous models. Our learning rule updates the weights of categories based on the ratio of the input to the weight of the chosen category and on a learning rate. The learning rate represents the speed of increasing/decreasing the weight of the chosen category. It is changed by the following rule: the larger the number of inputs, the smaller its value. We have conducted experiments with ten typical datasets to prove the effectiveness of our novel model. Results of the experiments show that our novel model clusters better than existing models, including Original Fuzzy ART, Complement Fuzzy ART, the K-means algorithm, and Euclidean ART.
The rest of the paper is organized as follows. Related works are presented in Section II. Section III provides background. Sections IV and V review the K-means and Euclidean ART algorithms. In Section VI, we present our learning rule and a discussion. Section VII shows experiments with the ten datasets.
II. RELATED WORKS
Studies on the theory of Fuzzy ART can be divided into three categories: developing new models of Fuzzy ART, studying properties of Fuzzy ART, and optimizing the performance of Fuzzy ART. In the first category, new models of Fuzzy ART were proposed to improve the clustering/classification of data into categories.
Carpenter proposed Fuzzy ARTMAP for incremental supervised learning of recognition categories and multidimensional maps from arbitrary sequences of inputs [1]. This model minimized predictive error and maximized code generalization by training the system several times with different orderings of the input set.
Simpson presented a fuzzy min-max clustering neural network with unsupervised learning [2]. Pattern clusters were fuzzy sets associated with a membership function. Simpson's model has three advantages: stabilizing into pattern clusters in only a few passes, reducing to hard cluster boundaries, and incorporating new data and adding new clusters without retraining.
Tan presented a neural architecture termed Adaptive Resonance Associative Map that extends unsupervised ART systems for rapid, yet stable, hetero-associative learning [3].
2012 Conference on Technologies and Applications of Artificial Intelligence
Pattern pairs were coded explicitly and recalled perfectly. Moreover, this model produced stronger noise immunity.
Lin addressed the structure and the associated on-line learning algorithms of a feed-forward network for realizing the basic elements and functions of a traditional fuzzy logic controller [4]. The input and output spaces were partitioned on-line based on the training data by tuning membership functions and finding proper fuzzy logic rules.
Isawa proposed an additional step, called Group Learning, for Fuzzy ART in order to obtain more effective categorization [5]. The important feature of group learning was creating connections between similar categories.
Kenaya employed the Euclidean neighbourhood to decide category pertinence and the mean value of patterns for category training [11]. This model calculated the Euclidean distance and decided whether a new pattern belongs to an existing category or forms a new category.
Isawa proposed a Fuzzy ART that combines overlapped categories through connections to avoid the category proliferation problem [12]. The important feature of this study was arranging vigilance parameters for every category and varying them during the learning process.
Yousuf proposed an algorithm that compares all weights to the input and allows updating multiple matching clusters [6]. This model mitigated the effects of updating clusters for the wrong class and the supervision required to correct it.
In the second category, important properties of Fuzzy ART were studied in order to choose suitable parameters for new Fuzzy ART models. Huang presented a number of important properties of Fuzzy ART, organized into several categories [13]. The properties include template, access, reset, and other properties concerning weight stabilization. Moreover, the effects of the choice parameter and the vigilance parameter on the functionality of Fuzzy ART were presented clearly.
Georgiopoulos provided a geometrical and clearer understanding of why, and in what order, categories are chosen for various ranges of the choice parameter of Fuzzy ART [14]. This study was useful for developing properties of learning that pertain to the architecture of neural networks. Moreover, he commented on the orders in which categories were chosen.
Anagnostopoulos introduced novel geometric concepts, namely category regions, in the original framework of Fuzzy ART and Fuzzy ARTMAP. These regions had the same geometrical shape and shared many common and interesting properties [15]. He proved properties of learning and showed that the training and performance phases do not depend on the particular choices of the vigilance parameter in one special state of the vigilance-choice parameter space.
In the third category, studies focused on ways to increase the performance of Fuzzy ART. Cano generated function identifiers for noisy data [16]; thus, Fuzzy ARTs trained on noisy data without changing the structure or preprocessing the data.
Burwick discussed implementing ART with a non-recursive algorithm to decrease the algorithmic complexity of Fuzzy ART [17]. Therefore, the complexity dropped from O(N^2) + O(MN) down to O(NM), where N is the number of categories and M is the input dimension.

Figure 1. Architecture of an ART network
Dagher introduced an ordering algorithm that identified a fixed order of training pattern presentation based on the max-min clustering method to improve the generalization performance of Fuzzy ART [18].

Kobayashi proposed a new reinforcement learning system that used Fuzzy ART to classify observed information and construct an effective state space [19]. This system was then used to solve partially observable Markov decision process problems.
Fuzzy ART has been applied in many applications such as document clustering [7] [8], classification of multivariate chemical data [9], analysis of gene expression [10], quality control of manufacturing processes [20], and classification with missing data in a wireless sensor network [21].
III. BACKGROUND [22]
A. ART Network
Adaptive Resonance Theory (ART) neural networks were developed by Grossberg to address the stability-plasticity dilemma. The general structure of an ART network is shown in Figure 1.

A typical ART network consists of two layers: an input layer (F1) and an output layer (F2). The input layer contains one node per component of the input pattern, while the number of nodes in the output layer is decided dynamically. Every node in the output layer has a corresponding prototype vector. The network's dynamics are governed by two subsystems: an attention subsystem and an orienting subsystem. The attention subsystem proposes a winning neuron (or category), and the orienting subsystem decides whether to accept it or not. The network is in a resonant state when the orienting subsystem accepts a winning category; that is, when the winning prototype vector matches the current input pattern closely enough.
B. Fuzzy ART Algorithm [22]
Input vector: Each input I is an M-dimensional vector (I_1, ..., I_M), where each component I_i is in the interval [0, 1].
Weight vector: Each category j corresponds to a vector w_j = (w_j1, ..., w_jM) of adaptive weights, or LTM traces. The number of potential categories N (j = 1, ..., N) is arbitrary. Initially

w_j1 = ... = w_jM = 1 (1)

and each category is said to be uncommitted. Alternatively, initial weights w_ji may be taken greater than 1. Larger weights bias the system against the selection of uncommitted nodes, leading to deeper searches of previously coded categories. After a category is selected for coding, it becomes committed. As shown below, each LTM trace w_ji is monotone non-increasing through time and hence converges to a limit.
Parameters: Fuzzy ART dynamics are determined by a choice parameter α > 0, a learning rate parameter β ∈ [0, 1], and a vigilance parameter ρ ∈ [0, 1].
Category choice: For each input I and category j, the choice function T_j is defined by

T_j(I) = |I ∧ w_j| / (α + |w_j|) (2)

where the fuzzy AND (Zadeh, 1965) operator ∧ is defined by

(x ∧ y)_i = min(x_i, y_i) (3)

and where the norm |·| is defined by

|x| = Σ_{i=1}^{M} |x_i| (4)

For notational simplicity, T_j(I) in Equation 2 is often written as T_j when the input I is fixed. The category choice is indexed by J, where

T_J = max{T_j : j = 1, ..., N} (5)

If more than one T_j is maximal, the category j with the smallest index is chosen. In particular, nodes become committed in order j = 1, 2, 3, ...
Resonance or reset: Resonance occurs if the match function of the chosen category meets the vigilance criterion; that is, if

|I ∧ w_J| / |I| ≥ ρ (6)

Learning then ensues, as defined below. Mismatch reset occurs if

|I ∧ w_J| / |I| < ρ (7)

Then the value of the choice function T_J is reset to −1 for the duration of the input presentation to prevent its persistent selection during search. A new index J is chosen by Equation 5. The search process continues until the chosen J satisfies Equation 6.
Learning: The weight vector w_J is updated according to the equation

w_J^(new) = β(I ∧ w_J^(old)) + (1 − β) w_J^(old) (8)

Fast-commit slow-recode option: For efficient coding of noisy input sets, it is useful to set β = 1 when J is an uncommitted node, and then to take β < 1 after the category is committed. Then w_J^(new) = I the first time category J becomes active.
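The category choice, vigilance test, and learning steps above can be sketched in a few lines of code. The following is a minimal illustrative implementation, not Carpenter's reference code; the function name, parameter defaults, and the list-of-weight-vectors representation are our own choices.

```python
import numpy as np

def fuzzy_art_step(I, W, alpha=0.001, beta=1.0, rho=0.5):
    """Present one input I (components in [0, 1]) to a Fuzzy ART network.

    W is a list of committed weight vectors; returns (category index, W).
    """
    if not W:                              # no committed categories yet
        W.append(I.copy())                 # fast commit: w = I
        return 0, W
    # Choice function T_j = |I ^ w_j| / (alpha + |w_j|)   (Equation 2)
    T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in W]
    # Search in decreasing T_j; stable sort keeps the smallest index on ties
    order = sorted(range(len(W)), key=lambda j: -T[j])
    for j in order:
        match = np.minimum(I, W[j]).sum() / I.sum()     # Equation 6
        if match >= rho:                   # resonance: apply the learning rule
            W[j] = beta * np.minimum(I, W[j]) + (1 - beta) * W[j]
            return j, W
    W.append(I.copy())                     # every category reset: commit a new one
    return len(W) - 1, W
```

With beta = 1 for the first commit and beta < 1 afterwards, this reproduces the fast-commit slow-recode behaviour described above.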
C. Fuzzy ART with Complement Coding [22]
Moore [23] described a category proliferation problem that can occur in some analog ART systems when a large number of inputs erode the norm of weight vectors. Proliferation of categories is avoided in Fuzzy ART if inputs are normalized; that is, for some γ > 0,

|I| = γ (9)

for all inputs I. Normalization can be achieved by preprocessing each incoming vector a. A normalization rule, called complement coding, achieves normalization while preserving amplitude information. Complement coding represents both the on-response and the off-response to a. To define this operation in its simplest form, let a itself represent the on-response. The complement of a, denoted by a^c, represents the off-response, where

a_i^c = 1 − a_i (10)

The complement coded input I to the recognition system is the 2M-dimensional vector

I = (a, a^c) = (a_1, ..., a_M, a_1^c, ..., a_M^c) (11)

After normalization, |I| = M, so inputs preprocessed into complement coding form are automatically normalized. Where complement coding is used, the initial condition (1) is replaced by

w_j1 = ... = w_j,2M = 1 (12)
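Complement coding is a one-line preprocessing step. The sketch below (our own helper, not from [22]) shows that the norm of the coded input is always M:

```python
import numpy as np

def complement_code(a):
    """Complement coding (Equation 11): I = (a, 1 - a), so |I| = M for any a."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

I = complement_code([0.3, 0.9])   # 2M-dimensional coded input
```

For this example, I = (0.3, 0.9, 0.7, 0.1) and |I| = 0.3 + 0.9 + 0.7 + 0.1 = 2 = M.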
IV. K-MEANS CLUSTERING [24]
K-means is one of the simplest unsupervised learning algorithms that solve the clustering problem. The procedure follows a simple and easy way to classify a given data set through a certain number of clusters (assume k clusters). This algorithm aims at minimizing the squared error function

J = Σ_{j=1}^{k} Σ_{i=1}^{N} ||x_i^(j) − C_j||^2 (13)

where N is the number of points in cluster j. In other words, ||x_i^(j) − C_j||^2 is a chosen distance measure between a data point x_i^(j) and the cluster centre C_j, and J is an indicator of the distance of the data points from their respective cluster centres.
The algorithm is composed of the following steps:
• Step 1: Place k points into the space represented by the objects that are being clustered These points represent initial group centroids
• Step 2: Assign each object to the group that has the closest centroid
• Step 3: When all objects have been assigned, recalculate
the positions of the k centroids
• Step 4: Repeat Steps 2 and 3 until the centroids no
longer move This produces a separation of the objects
into groups from which the metric to be minimized can
be calculated
Although the procedure will always terminate, the K-means algorithm does not necessarily find the optimal configuration corresponding to the global minimum of the objective function.
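Steps 1-4 above can be sketched as Lloyd's iteration. This is an illustrative implementation with our own function name and a random-point initialization; it is not the reference code of [24].

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Steps 1-4: pick centroids, assign, recompute, repeat until stable."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]          # Step 1: initial centroids
    for _ in range(iters):
        # Step 2: assign each point to the nearest centroid
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 3: recompute each centroid as the mean of its members
        newC = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                         else C[j] for j in range(k)])
        if np.allclose(newC, C):                         # Step 4: converged
            break
        C = newC
    return labels, C
```

As the text notes, the result depends on the initial centroids; in practice the algorithm is restarted several times and the run with the smallest J in Equation 13 is kept.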
V. EUCLIDEAN ART ALGORITHM [11]
Euclidean ART is a clustering technique that evaluates the Euclidean distance between patterns and cluster centers to decide the cluster membership of patterns. Pattern membership depends on the parameter R_th, the Euclidean threshold. The Euclidean ART algorithm consists of the following steps:
• Step 1: Present a normalized and complement coded
pattern to Euclidean ART module
• Step 2: Calculate the Euclidean distance between this pattern and all existing cluster centers by Equation 14. These Euclidean distances are considered the activation values of the cluster centers with respect to the presented pattern. If there is no cluster center yet, consider this pattern to be the first one.

d(j) = Σ_{i=1}^{N} (x_i − w_ji)^2 (14)

where j is the category index found in the Euclidean ART network and i is the index of the current presented pattern.
• Step 3: Find d(J), where d(J) = min(d)
• Step 4: If d(J) ≤ R_th then
– Include the presented pattern x_k in the winning cluster whose center is w_J.
– Start the learning process; calculate the new cluster center according to the learning equation

w_J^(new) = (1/L) Σ_{k=1}^{L} x_Jk (15)

where x_Jk is pattern member k of cluster J and L is the number of cluster members.
Else x_k becomes a new category w_{N+1}.
• Step 5: Jump back to Step 1 to accept a new pattern if there are more patterns to test. Otherwise, training is over and the resulting Euclidean ART matrix is the trained Euclidean ART network.
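Steps 1-5 can be sketched as a single training loop. This is our own illustrative reading of [11], with Equation 15 taken to recompute each center as the mean of its member patterns; the function name and data structures are our choices.

```python
import numpy as np

def euclidean_art(patterns, r_th):
    """Euclidean ART training loop (Steps 1-5) with threshold r_th."""
    centers, members = [], []
    for x in patterns:
        x = np.asarray(x, dtype=float)
        if not centers:                    # Step 2: first pattern seeds a center
            centers.append(x); members.append([x]); continue
        d = [np.linalg.norm(x - c) for c in centers]   # Euclidean distance (Eq. 14)
        J = int(np.argmin(d))              # Step 3: winning center
        if d[J] <= r_th:                   # Step 4: accept and relearn the center
            members[J].append(x)
            centers[J] = np.mean(members[J], axis=0)   # Eq. 15: mean of members
        else:                              # otherwise the pattern opens a new category
            centers.append(x); members.append([x])
    return centers
```

The threshold r_th plays the role the vigilance parameter plays in Fuzzy ART: a smaller r_th yields more, tighter categories.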
VI. OUR APPROACH
A. Our Novel Model
Our goal is to create a new Fuzzy ART that clusters better. We propose a new effective rule for updating the weights of categories. Our novel model with the new learning rule clusters substantially better than existing models.

Our novel model consists of the following steps:
• Step 1: Set up the connection weights W_j, the choice parameter α, and the vigilance parameter ρ.
• Step 2: Choose a suitable category for the input according to Equations 2-5.
• Step 3: Test whether the current state is resonance or reset by Equations 6 and 7.
• Step 4: Learning is performed by our learning rule:

w_j^(new) = w_j^(old) + β(I − w_j^(old)) (16)

where β is the learning rate. The learning rate changes based on the number of patterns in the dataset: the larger the number of patterns, the smaller β. In other words, when an input is added to a category, the weight vector of this category is increased/decreased to adapt to the new input. In Fuzzy ART, w_j is in [0, 1]; therefore, if w_j < 0 then assign w_j = 0, and if w_j > 1 then assign w_j = 1.
• The fast-commit slow-recode option is the same as in Carpenter's original Fuzzy ART.
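The update in Step 4 and the clipping to [0, 1] can be sketched as follows. This is our own reading of the rule; in particular, the schedule that shrinks β with dataset size is an illustrative assumption (the constant c is hypothetical), since the paper states only that larger datasets get a smaller β.

```python
import numpy as np

def new_learning_rule(w, I, beta):
    """Proposed update: move the weight toward the input at rate beta,
    then clip back into [0, 1] as required by Fuzzy ART."""
    w_new = w + beta * (I - w)
    return np.clip(w_new, 0.0, 1.0)

def beta_for(n_patterns, c=15.0):
    """Illustrative schedule (our assumption, not from the paper):
    the larger the dataset, the smaller the learning rate."""
    return min(1.0, c / n_patterns)
```

Unlike the fuzzy-AND update of Equation 8, this rule can move weights up as well as down, which is why the clipping step is needed.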
B. Discussion
In previous studies, learning rules are similar, combining two terms: a fraction of the old weight of the chosen category and a fraction of the combination of the input with the old weight of the chosen category. The learning parameter β is used to set the fractions of these terms. In our learning rule, the learning parameter instead controls whether the learning process is quick or slow. Therefore, our learning rule is different from previous studies.

Our novel model can be implemented as two variants: Original New Fuzzy ART without normalizing inputs and Complement New Fuzzy ART with normalizing inputs. Therefore, we have two models in the experiments, similarly to the models of Carpenter [22].

With the Complement Fuzzy ARTs, the category proliferation problem does not occur because inputs are normalized. Moreover, by choosing a suitable (small enough) value of β, the Original Fuzzy ARTs avoid the category proliferation problem. Thus, we do not address this problem in the experiments.
VII. EXPERIMENTS
We select 10 datasets from the UCI database [25] and the Shape database [26]: Iris, Wine, Jain, Flame, R15, Glass, Pathbased, Compound, Aggregation, and Spiral. These datasets differ from each other in the number of attributes, the number of categories, the number of patterns, and the distribution of categories. Table I shows the parameters of the selected datasets.

Our novel models are compared with Fuzzy ARTs [22], K-means [24], and Euclidean ART [11]. Six models are implemented to assess the effectiveness of our novel models: Original Fuzzy ART (OriFART), Complement Fuzzy ART (ComFART), Original New Fuzzy ART (OriNewFART), Complement New Fuzzy ART (ComNewFART), K-means (Kmean), and Euclidean ART (EucART).
Table I
FEATURES OF DATASETS
Index Dataset Name #Categories #Attributes #Records
Table II
THE DISTRIBUTION OF CATEGORIES IN IRIS DATASET
Distribution 1–50 51–100 101–150
Data of each dataset are normalized to values in [0, 1]. In all experiments, we choose a random vector from each category as the initial weight vector. Parameter values are chosen to reach the highest performance of the models. For most datasets and most models, α = 0.8, β = 0.1, ρ = 0.5. For Euclidean ART with all datasets, ρ = 0.4.
A. Experiment 1: Testing with Iris dataset
Table II shows that the distribution of categories is uniform, with 3 categories.

Table III shows the number of successfully clustered patterns in the Iris dataset. Data are sorted by the number of testing patterns. Table III shows that Complement New Fuzzy ART is better in all sub-tests.
B. Experiment 2: Testing with Glass dataset
Table IV shows that the distribution of categories is not uniform, with 7 categories; in particular, the distribution of the fourth category is 0.

Table V shows that our novel model is better in some sub-tests with a large number of testing patterns and not better in some sub-tests with a small one.
Table III
THE NUMBER OF SUCCESSFUL CLUSTERING PATTERNS IN IRIS DATASET
#Records OriNewFART ComNewFART OriFART ComFART EucART K-mean
% 100.0 100.0 100.0 100.0 100.0 100.0
Table IV
THE DISTRIBUTION OF CATEGORIES IN GLASS DATASET
Distribution 1–70 71–146 147–163 164–176 177–185 186–214
Table V
THE NUMBER OF SUCCESSFUL CLUSTERING PATTERNS IN GLASS DATASET
#Records OriNewFART ComNewFART OriFART ComFART EucART K-mean
Table VI
THE DISTRIBUTION OF CATEGORIES IN SPIRAL DATASET
Distribution 1–101 102–206 207–312
C. Experiment 3: Testing with Spiral dataset
Table VI shows that the distribution of categories is uniform, with 3 categories.

Table VII shows that Original New Fuzzy ART is greatly better than the other models in all sub-tests, except the Euclidean ART model in the sub-test with 312 patterns.
D. Experiment 4: Testing with Flame dataset
Table VIII shows that the distribution of categories is not uniform, with 2 categories.

Table IX shows that Original New Fuzzy ART is greatly better than the other models in all sub-tests, except the Original Fuzzy ART model in the sub-test with 240 patterns.
E. Experiment 5: Testing with Aggregation dataset
Table X shows that the distribution of categories is not uniform, with 7 categories.

Table XI shows that Complement New Fuzzy ART is greatly better than the other models in all sub-tests, except the Euclidean ART model in the first three sub-tests. However, the Euclidean ART model is greatly worse than Complement New Fuzzy ART in the sub-test with 788 patterns.
Table VII
THE NUMBER OF SUCCESSFUL CLUSTERING PATTERNS IN SPIRAL DATASET
#Records OriNewFART ComNewFART OriFART ComFART EucART K-mean
Table VIII
THE DISTRIBUTION OF CATEGORIES IN FLAME DATASET
Category Index 1 2
Distribution 1–87 88–240
Table IX
THE NUMBER OF SUCCESSFUL CLUSTERING PATTERNS IN FLAME DATASET
#Records OriNewFART ComNewFART OriFART ComFART EucART K-mean
Table X
THE DISTRIBUTION OF CATEGORIES IN AGGREGATION DATASET
Distribution 1–45 46–215 216–317 318–590 591–624 625–754 755–788
Experiment 6: Testing with Wine dataset
Table XII shows that the distribution of categories is uniform, with 3 categories.

Table XIII shows that Complement New Fuzzy ART is approximately equal to K-means and better than the other models.
F. Experiment 7: Testing with R15 dataset
Table XIV shows that the distribution of categories is uniform, with 15 categories.

Table XV shows that Complement New Fuzzy ART is approximately equal to Euclidean ART, equal to Complement Fuzzy ART, and better than K-means and Original Fuzzy ART.
G. Experiment 8: Testing with Compound dataset
Table XVI shows that the distribution of categories is not uniform, with 6 categories.

Table XVII shows that Original New Fuzzy ART is better than the other models, except Original Fuzzy ART. In the first two sub-tests, Original New Fuzzy ART is better than Original Fuzzy ART but worse in the final two sub-tests.
H. Experiment 9: Testing with Pathbased dataset
Table XVIII shows that the distribution of categories is uniform, with 3 categories.
Table XI
THE NUMBER OF SUCCESSFUL CLUSTERING PATTERNS IN AGGREGATION DATASET
#Records OriNewFART ComNewFART OriFART ComFART EucART K-mean
Table XII
THE DISTRIBUTION OF CATEGORIES IN WINE DATASET
Distribution 1–59 60–130 131–178
Table XIII
THE NUMBER OF SUCCESSFUL CLUSTERING PATTERNS IN WINE DATASET
#Records OriNewFART ComNewFART OriFART ComFART EucART K-mean
% 100.0 100.0 100.0 73.3 100.0 100.0
Table XIV
THE DISTRIBUTION OF CATEGORIES IN R15 DATASET
Distribution 1–40 41–80 81–120 121–160 161–200 201–240 241–280 281–320 321–360 361–400 401–440 441–480 481–520 521–560 561–600
Table XIX shows that Complement New Fuzzy ART is worse than K-means in all sub-tests and worse than Euclidean ART in the first two sub-tests, but better than the other models.
I. Experiment 10: Testing with Jain dataset
Table XX shows that the distribution of categories is not uniform, with 2 categories.

Data from Table XXI show that Complement New Fuzzy ART is better than Original Fuzzy ART and Euclidean ART. However, Complement New Fuzzy ART is slightly worse than K-means and Complement Fuzzy ART in the final two sub-tests.

In summary, although other models are better than our novel model in several sub-tests, our novel model is better than existing models in many sub-tests and in most datasets.
VIII. CONCLUSION
In this paper, we proposed a new effective learning rule for Fuzzy ART. Our novel model updates the weights of categories based on the ratio of the input to the weight of the chosen category and on a learning rate. The learning parameter controls whether the learning process is quick or slow. The learning rate is changed by the following rule: the larger the number of inputs,
Table XV
THE NUMBER OF SUCCESSFUL CLUSTERING PATTERNS IN R15 DATASET
#Records OriNewFART ComNewFART OriFART ComFART EucART K-mean
Table XVI
THE DISTRIBUTION OF CATEGORIES IN COMPOUND DATASET
Distribution 1–50 51–142 143–180 181–225 226–383 384–399
Table XVII
THE NUMBER OF SUCCESSFUL CLUSTERING PATTERNS IN COMPOUND DATASET
#Records OriNewFART ComNewFART OriFART ComFART EucART K-mean
Table XVIII
THE DISTRIBUTION OF CATEGORIES IN PATHBASED DATASET
Distribution 1–110 111–207 208–300
the smaller this parameter. We proposed our novel model with the new learning rule and compared it to existing models.
Moreover, we have conducted experiments with ten datasets to prove the effectiveness of our novel model. The experiments show that our novel model is the best with four datasets (Iris, Glass, Spiral, Flame) and better than or equal to the other models with four datasets (Aggregation, Wine, R15, Compound). However, K-means is better than our models with two datasets (Pathbased, Jain), and Complement Fuzzy ART is better with the Jain dataset.

From the experimental data, we draw two important conclusions: (i) our novel model clusters correctly from 80% to 100% on fairly small datasets whose categories are distributed uniformly, and (ii) from 50% to 80% on fairly small datasets whose categories are distributed non-uniformly and consist of many categories.
ACKNOWLEDGEMENTS
This work is supported by Nafosted research project No. 102.02-2011.13.
REFERENCES
[1] G. A. Carpenter, S. Grossberg, and N. Markuzon, "Fuzzy ARTMAP: an adaptive resonance architecture for incremental learning of analog maps," 1992.
Table XIX
THE NUMBER OF SUCCESSFUL CLUSTERING PATTERNS IN PATHBASED DATASET
#Records OriNewFART ComNewFART OriFART ComFART EucART K-mean
Table XX
THE DISTRIBUTION OF CATEGORIES IN JAIN DATASET
Category Index 1 2
Distribution 1–276 277–373
Table XXI
THE NUMBER OF SUCCESSFUL CLUSTERING PATTERNS IN JAIN DATASET
#Records OriNewFART ComNewFART OriFART ComFART EucART K-mean
% 99.0 100.0 99.0 100.0 100.0 100.0
[2] P Simpson, “Fuzzy min-max neural networks - part 2: Clustering,” IEEE
Transactions on Fuzzy Systems, vol 1, no 1, p 32, 1993.
[3] A.-H Tan, “Adaptive resonance associative map,” Elsevier Science,
Neural Network, vol 8, no 3, pp 437–446, 1995.
[4] C.-T. Lin, C.-J. Lin, and C. Lee, "Fuzzy adaptive learning control network with on-line neural learning," Elsevier Science-Fuzzy Sets and Systems, vol. 71, pp. 25–45, 1995.
[5] H. Isawa, M. Tomita, H. Matsushita, and Y. Nishio, "Fuzzy adaptive resonance theory with group learning and its applications," 2007 International Symposium on Nonlinear Theory and its Applications, no. 1, pp. 292–295, 2007.
[6] A. Yousuf and Y. L. Murphey, "A supervised fuzzy adaptive resonance theory with distributed weight update," Springer-Verlag Berlin Heidelberg, vol. Part I, LN, no. 6063, pp. 430–435, 2010.
[7] P. Pardhasaradhi, R. P. Kartheek, and C. Srinivas, "An evolutionary fuzzy art computation for the document clustering," International Conference on Computing and Control Engineering, 2012.
[8] R Kondadadi and R Kozma, “A modified fuzzy art for soft document
clustering,” Proceedings of the 2002 International Joint Conference on
Neural Networks IJCNN’02, pp 2545–2549, 2002.
[9] X.-H. Song, P. K. Hopke, M. Bruns, D. A. Bossio, and K. M. Scow, "A fuzzy adaptive resonance theory supervised predictive mapping neural network applied to the classification of multivariate chemical data," Chemometrics and Intelligent Laboratory Systems, vol. 41, no. 2, pp. 161–170, 1998.
[10] S Tomida, T Hanai, H Honda, and T Kobayashi, “Gene expression
analysis using fuzzy art,” Genome Informatic, vol 12, pp 245–246,
2001.
[11] R. Kenaya and K. C. Cheok, "Euclidean ART neural networks," Proceedings of the World Congress on Engineering and Computer Science 2008, no. 1, 2008.
[12] H Isawa, H Matsushita, and Y Nishio, “Improved fuzzy adaptive resonance theory combining overlapped category in consideration of
connections,” IEEE Workshop on Nonlinear Circuit Networks, pp 8–
11, 2008.
[13] J Huang, M Georgiopoulos, and G L Heileman, “Fuzzy art properties,”
Elsevier Science, Neural Network, vol 8, no 2, pp 203–213, 1995.
[14] M. Georgiopoulos, H. Fernlund, G. Bebis, and G. Heileman, "Fart and fartmap-effects of the choice parameter," pp. 1541–1559, 1996.
[15] G. C. Anagnostopoulos and M. Georgiopoulos, "Category regions as
new geometrical concepts in fuzzy-art and fuzzy-artmap,” Elsevier
Science, Neural Network, vol 15, pp 1205–1221, 2002.
[16] M Cano, Y Dimitriadis, E Gomez, and J Coronado, “Learning from
noisy information in fasart and fasback neuro-fuzzy systems,” Elsevier
Science, Neural Network, vol 14, pp 407–425, 2001.
[17] T Burwick and F Joublin, “Optimal algorithmic complexity of fuzzy
art,” Kluwer Academic Publisher-Neural Processing Letters, vol 7, pp.
37–41, 1998.
[18] I Dagher, M Georgiopoulos, G L Heileman, and G Bebis, “An ordering algorithm for pattern presentation in fuzzy artmap that tends
to improve generalization performance.” IEEE transactions on neural
networks, vol 10, no 4, pp 768–78, 1999.
[19] K. Kobayashi, S. Mizuno, T. Kuremoto, and M. Obayashi, "A reinforcement learning system based on state space construction using fuzzy art," Proceedings of SICE Annual Conference, vol. 2005, no. 1, pp. 3653–3658, 2005.
[20] M. Pacella, Q. Semeraro, and A. Anglani, "Adaptive resonance theory-based neural algorithms for manufacturing process quality control," International Journal of Production Research, no. 21, pp. 4581–4607.
[21] L E Parker, “Classification with missing data in a wireless sensor
network,” IEEE SoutheastCon 2008, pp 533–538, Apr 2008.
[22] G. Carpenter, S. Grossberg, and D. B. Rosen, "Fuzzy ART: Fast stable learning and categorization of analog patterns by an adaptive resonance system," Pergamon Press-Neural Networks, vol. 4, pp. 759–771, 1991.
[23] B. Moore, "ART 1 and pattern clustering," Proceedings of the 1988
Connectionist Models Summer School, Morgan Kaufmann Publishers, pp. 174–185, 1989.
[24] J. B. MacQueen, "Some methods for classification and analysis of multivariate observations," Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability, no. 1, pp. 281–297, 1967.
[25] "UCI database," Available at: http://archive.ics.uci.edu/ml/datasets
[26] "Shape database," Available at: http://cs.joensuu.fi/sipu/datasets/.