Springer LNCS 4234: Neural Information Processing, 13th International Conference, ICONIP 2006, Hong Kong, Proceedings, Part III (I. King, J. Wang, L. Chan, D. Wang, Eds.)


Commenced Publication in 1973

Founding and Former Series Editors:

Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen


Irwin King, Jun Wang, Laiwan Chan, DeLiang Wang (Eds.)


Irwin King

Laiwan Chan

Chinese University of Hong Kong

Department of Computer Science and Engineering

Shatin, New Territories, Hong Kong

E-mail: {king,lwchan}@cse.cuhk.edu.hk

Jun Wang

Chinese University of Hong Kong

Department of Automation and Computer-Aided Engineering

Shatin, New Territories, Hong Kong

E-mail: jwang@acae.cuhk.edu.hk

DeLiang Wang

Ohio State University

Department of Computer Science and Engineering

Columbus, Ohio, USA

E-mail: dwang@cse.ohio-state.edu

Library of Congress Control Number: 2006933758

CR Subject Classification (1998): F.1, I.2, I.5, I.4, G.3, J.3, C.2.1, C.1.3, C.3

LNCS Sublibrary: SL 1 – Theoretical Computer Science and General Issues

ISBN-10 3-540-46484-0 Springer Berlin Heidelberg New York

ISBN-13 978-3-540-46484-6 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media


Preface

This book and its companion volumes constitute the Proceedings of the 13th International Conference on Neural Information Processing (ICONIP 2006), held in Hong Kong during October 3–6, 2006. ICONIP is the annual flagship conference of the Asia Pacific Neural Network Assembly (APNNA), with past events held in Seoul (1994), Beijing (1995), Hong Kong (1996), Dunedin (1997), Kitakyushu (1998), Perth (1999), Taejon (2000), Shanghai (2001), Singapore (2002), Istanbul (2003), Calcutta (2004), and Taipei (2005). Over the years, ICONIP has matured into a well-established series of international conferences on neural information processing and related fields in the Asia and Pacific regions. Following the tradition, ICONIP 2006 provided an academic forum for the participants to disseminate their new research findings and discuss emerging areas of research. It also created a stimulating environment for the participants to interact and exchange information on future challenges and opportunities of neural network research.

ICONIP 2006 received 1,175 submissions from about 2,000 authors in 42 countries and regions (Argentina, Australia, Austria, Bangladesh, Belgium, Brazil, Canada, China, Hong Kong, Macao, Taiwan, Colombia, Costa Rica, Croatia, Egypt, Finland, France, Germany, Greece, India, Iran, Ireland, Israel, Italy, Japan, South Korea, Malaysia, Mexico, New Zealand, Poland, Portugal, Qatar, Romania, Russian Federation, Singapore, South Africa, Spain, Sweden, Thailand, Turkey, UK, and USA) across six continents (Asia, Europe, North America, South America, Africa, and Oceania). Based on rigorous reviews by the Program Committee members and reviewers, 386 high-quality papers were selected for publication in the proceedings, with an acceptance rate of less than 33%. The papers are organized in 22 cohesive sections covering all major topics of neural network research and development.

In addition to the contributed papers, the ICONIP 2006 technical program included two plenary speeches, by Shun-ichi Amari and Russell Eberhart. The program also included invited talks by the leaders of the technical co-sponsors: Wlodzislaw Duch (President of the European Neural Network Society), Vincenzo Piuri (President of the IEEE Computational Intelligence Society), Shiro Usui (President of the Japanese Neural Network Society), DeLiang Wang (President of the International Neural Network Society), and Shoujue Wang (President of the China Neural Networks Council). In addition, ICONIP 2006 launched the APNNA Presidential Lecture Series, with invited talks by past APNNA Presidents, and the K.C. Wong Distinguished Lecture Series, with invited talks by eminent Chinese scholars. Furthermore, the program included six excellent tutorials, open to all conference delegates, by Amir Atiya, Russell Eberhart, Mahesan Niranjan, Alex Smola, Koji Tsuda, and Xuegong Zhang. Besides the regular sessions, ICONIP 2006 also featured ten special sessions focusing on some emerging topics.


ICONIP 2006 would not have achieved its success without the generous contributions of many volunteers and organizations. The ICONIP 2006 organizers would like to express sincere thanks to APNNA for the sponsorship; to the China Neural Networks Council, European Neural Network Society, IEEE Computational Intelligence Society, IEEE Hong Kong Section, International Neural Network Society, and Japanese Neural Network Society for their technical co-sponsorship; to the Chinese University of Hong Kong for its financial and logistic support; and to the K.C. Wong Education Foundation of Hong Kong for its financial support. The organizers would also like to thank the members of the Advisory Committee for their guidance, the members of the International Program Committee and additional reviewers for reviewing the papers, and the members of the Publications Committee for checking the accepted papers in a short period of time. Particularly, the organizers would like to thank the proceedings publisher, Springer, for publishing the proceedings in the prestigious series of Lecture Notes in Computer Science. Special mention must be made of a group of dedicated students and associates, Haixuan Yang, Zhenjiang Lin, Zenglin Xu, Xiang Peng, Po Shan Cheng, and Terence Wong, who worked tirelessly and relentlessly behind the scene to make the mission possible. There are still many more colleagues, associates, friends, and supporters who helped us in immeasurable ways; we express our sincere thanks to them all. Last but not least, the organizers would like to thank all the speakers and authors for their active participation at ICONIP 2006, which made it a great success.

Jun Wang
Laiwan Chan
DeLiang Wang


Organization

Technical Co-sponsors

IEEE Computational Intelligence Society

International Neural Network Society

European Neural Network Society

Japanese Neural Network Society

China Neural Networks Council

IEEE Hong Kong Section

Honorary Chair and Co-chair

Advisory Board

Walter J Freeman, USA

Toshio Fukuda, Japan

Kunihiko Fukushima, Japan

Tom Gedeon, Australia

Zhen-ya He, China

Nik Kasabov, New Zealand

Okyay Kaynak, Turkey

Anthony Kuh, USA

Sun-Yuan Kung, USA

Soo-Young Lee, Korea

Chin-Teng Lin, Taiwan

Erkki Oja, Finland

Nikhil R Pal, India
Marios M Polycarpou, USA
Shiro Usui, Japan
Benjamin W Wah, USA
Lipo Wang, Singapore
Shoujue Wang, China
Paul J Werbos, USA
You-Shou Wu, China
Donald C Wunsch II, USA
Xin Yao, UK
Yixin Zhong, China
Jacek M Zurada, USA


General Chair and Co-chair

Organizing Chair

Man-Wai Mak, Hong Kong

Finance and Registration Chair

Kai-Pui Lam, Hong Kong

Workshops and Tutorials Chair

James Kwok, Hong Kong

Publications and Special Sessions Chair and Co-chair

Frank H Leung, Hong Kong
Jianwei Zhang, Germany

Publicity Chair and Co-chairs

Jeffrey Xu Yu, Hong Kong

Chris C Yang, Hong Kong

Derong Liu, USA
Wlodzislaw Duch, Poland

Local Arrangements Chair and Co-chair

Andrew Chi-Sing Leung, Hong Kong
Eric Yu, Hong Kong

Secretary

Haixuan Yang, Hong Kong

Program Chair and Co-chair


Program Committee

Shigeo Abe, Japan

Peter Andras, UK

Sabri Arik, Turkey

Abdesselam Bouzerdoum, Australia

Ke Chen, UK

Liang Chen, Canada

Luonan Chen, Japan

Zheru Chi, Hong Kong

Sung-Bae Cho, Korea

Sungzoon Cho, Korea

Seungjin Choi, Korea

Andrzej Cichocki, Japan

Chuangyin Dang, Hong Kong

Wai-Keung Fung, Canada

Takeshi Furuhashi, Japan

Artur d'Avila Garcez, UK

Daniel W.C Ho, Hong Kong

Edward Ho, Hong Kong

Sanqing Hu, USA

Guang-Bin Huang, Singapore

Kaizhu Huang, China

Malik Magdon Ismail, USA

Takashi Kanamaru, Japan

James Kwok, Hong Kong

James Lam, Hong Kong

Kai-Pui Lam, Hong Kong

Doheon Lee, Korea

Minho Lee, Korea

Andrew Leung, Hong Kong

Frank Leung, Hong Kong

Yangmin Li, Macau

Xun Liang, China
Yanchun Liang, China
Xiaofeng Liao, China
Chih-Jen Lin, Taiwan
Xiuwen Liu, USA
Bao-Liang Lu, China
Wenlian Lu, China
Jinwen Ma, China
Man-Wai Mak, Hong Kong
Sushmita Mitra, India
Paul Pang, New Zealand
Jagath C Rajapakse, Singapore
Bertram Shi, Hong Kong
Daming Shi, Singapore
Michael Small, Hong Kong
Michael Stiber, USA
Ponnuthurai N Suganthan, Singapore
Fuchun Sun, China
Ron Sun, USA
Johan A.K Suykens, Belgium
Norikazu Takahashi, Japan
Michel Verleysen, Belgium
Si Wu, UK
Chris Yang, Hong Kong
Hujun Yin, UK
Eric Yu, Hong Kong
Jeffrey Yu, Hong Kong
Gerson Zaverucha, Brazil
Byoung-Tak Zhang, Korea
Liqing Zhang, China

Reviewers

M.H Chu
Sven Crone
Bruce Curry
Rohit Dhawan
Deniz Erdogmus
Ken Ferens
Robert Fildes
Tetsuo Furukawa
John Q Gan


Ju H Park
Mario Pavone
Renzo Perfetti
Dinh-Tuan Pham
Tu-Minh Phuong
Libin Rong
Akihiro Sato
Xizhong Shen
Jinhua Sheng
Qiang Sheng
Xizhi Shi
Noritaka Shigei
Hyunjung Shin
Vimal Singh
Vladimir Spinko
Robert Stahlbock
Hiromichi Suetant
Jun Sun
Yanfeng Sun
Takashi Takenouchi
Yin Tang
Thomas Trappenberg
Chueh-Yung Tsao
Satoki Uchiyama
Feng Wan
Dan Wang
Rubin Wang
Ruiqi Wang
Yong Wang
Hua Wen
Michael K.Y Wong
Chunguo Wu
Guoding Wu
Qingxiang Wu
Wei Wu
Cheng Xiang
Botong Xu
Xu Xu
Lin Yan
Shaoze Yan
Simon X Yang
Michael Yiu
Junichiro Yoshimoto
Enzhe Yu
Fenghua Yuan
Huaguang Zhang
Jianyu Zhang
Kun Zhang
Liqing Zhang
Peter G Zhang
Ya Zhang
Ding-Xuan Zhou
Jian Zhou
Jin Zhou
Jianke Zhu


Table of Contents – Part III

Bioinformatics and Biomedical Applications

DRFE: Dynamic Recursive Feature Elimination for Gene Identification Based on Random Forest 1
Ha-Nam Nguyen, Syng-Yup Ohn

Gene Feature Extraction Using T-Test Statistics and Kernel Partial Least Squares 11
Shutao Li, Chen Liao, James T Kwok

An Empirical Analysis of Under-Sampling Techniques to Balance a Protein Structural Class Dataset 21
Marcilio C.P de Souto, Valnaide G Bittencourt, Jose A.F Costa

Prediction of Protein Interaction with Neural Network-Based Feature Association Rule Mining 30
Jae-Hong Eom, Byoung-Tak Zhang

Prediction of Protein Secondary Structure Using Nonlinear Method 40
Silvia Botelho, Gisele Simas, Patricia Silveira

Clustering Analysis for Bacillus Genus Using Fourier Transform and Self-Organizing Map 48
Cheng-Chang Jeng, I-Ching Yang, Kun-Lin Hsieh, Chun-Nan Lin

Recurrence Quantification Analysis of EEG Predicts Responses to Incision During Anesthesia 58
Liyu Huang, Weirong Wang, Sekou Singare

Wavelet Spectral Entropy for Indication of Epileptic Seizure in Extracranial EEG 66
Xiaoli Li

The Study of Classification of Motor Imaginaries Based on Kurtosis of EEG 74
Xiaopei Wu, Zhongfu Ye

Automatic Detection of Critical Epochs in coma-EEG Using Independent Component Analysis and Higher Order Statistics 82
G Inuso, F La Foresta, N Mammone, F.C Morabito

Sparse Bump Sonification: A New Tool for Multichannel EEG Diagnosis of Mental Disorders; Application to the Detection of the Early Stage of Alzheimer's Disease 92
François B Vialatte, Andrzej Cichocki

Effect of Diffusion Weighting and Number of Sensitizing Directions on Fiber Tracking in DTI 102
Bo Zheng, Jagath C Rajapakse

3-D Reconstruction of Blood Vessels Skeleton Based on Neural Network 110
Zhiguo Cao, Bo Peng

Design of a Fuzzy Takagi-Sugeno Controller to Vary the Joint Knee Angle of Paraplegic Patients 118
Marcelo C.M Teixeira, Grace S Deaecto, Ruberlei Gaino, Edvaldo Assunção, Aparecido A Carvalho, Uender C Farias

Characterization of Breast Abnormality Patterns in Digital Mammograms Using Auto-associator Neural Network 127
Rinku Panchal, Brijesh Verma

Evolving Hierarchical RBF Neural Networks for Breast Cancer Detection 137
Yuehui Chen, Yan Wang, Bo Yang

Ovarian Cancer Prognosis by Hemostasis and Complementary Learning 145
T.Z Tan, G.S Ng, C Quek, Stephen C.L Koh

Multi-class Cancer Classification with OVR-Support Vector Machines Selected by Naïve Bayes Classifier 155
Jin-Hyuk Hong, Sung-Bae Cho

Breast Cancer Diagnosis Using Neural-Based Linear Fusion Strategies 165
Yunfeng Wu, Cong Wang, S.C Ng, Anant Madabhushi, Yixin Zhong

A Quantitative Diagnostic Method Based on Bayesian Networks in Traditional Chinese Medicine 176
Huiyan Wang, Jie Wang

Information Security

High-Order Markov Kernels for Network Intrusion Detection 184
Shengfeng Tian, Chuanhuan Yin, Shaomin Mu

Improved Realtime Intrusion Detection System 192
Byung-Joo Kim, Il Kon Kim

A Distributed Neural Network Learning Algorithm for Network Intrusion Detection System 201
Yanheng Liu, Daxin Tian, Xuegang Yu, Jian Wang

A DGC-Based Data Classification Method Used for Abnormal Network Intrusion Detection 209
Bo Yang, Lizhi Peng, Yuehui Chen, Hanxing Liu, Runzhang Yuan

Intrusion Alert Analysis Based on PCA and the LVQ Neural Network 217
Jing-Xin Wang, Zhi-Ying Wang, Kui-Dai

A Novel Color Image Watermarking Method Based on Genetic Algorithm and Neural Networks 225
Jialing Han, Jun Kong, Yinghua Lu, Yulong Yang, Gang Hou

Color Image Watermarking Algorithm Using BPN Neural Networks 234
Cheng-Ri Piao, Sehyeong Cho, Seung-Soo Han

A Novel Blind Digital Watermark Algorithm Based on Neural Network and Chaotic Map 243
Pengcheng Wei, Wei Zhang, Huaqian Yang, Degang Yang

Data and Text Processing

Stimulus Related Data Analysis by Structured Neural Networks 251
Bernd Brückner

Scalable Dynamic Self-Organising Maps for Mining Massive Textual Data 260
Yu Zheng Zhai, Arthur Hsu, Saman K Halgamuge

Maximum-Minimum Similarity Training for Text Extraction 268
Hui Fu, Xiabi Liu, Yunde Jia

Visualization of Depending Patterns in Metabonomics 278
Stefan Roeder, Ulrike Rolle-Kampczyk, Olf Herbarth

A RBF Network for Chinese Text Classification Based on Concept Feature Extraction 285
Minghu Jiang, Lin Wang, Yinghua Lu, Shasha Liao

Ontology Learning from Text: A Soft Computing Paradigm 295
Rowena Chau, Kate Smith-Miles, Chung-Hsing Yeh

Text Categorization Based on Artificial Neural Networks 302
Cheng Hua Li, Soon Choel Park

Knowledge as Basis Broker — The Research of Matching Customers Problems and Professionals Métiers 312
Ruey-Ming Chao, Chi-Shun Wang

A Numerical Simulation Study of Structural Damage Based on RBF Neural Network 322
Xu-dong Yuan, Hou-bin Fan, Cao Gao, Shao-xia Gao

Word Frequency Effect and Word Similarity Effect in Korean Lexical Decision Task and Their Computational Model 331
YouAn Kwon, KiNam Park, HeuiSeok Lim, KiChun Nam, Soonyoung Jung

Content-Based 3D Graphic Information Retrieval 341
Soochan Hwang, Yonghwan Kim

Performance Improvement in Collaborative Recommendation Using
Hung-Ching (Justin) Chen, Malik Magdon-Ismail

A Brain-Inspired Cerebellar Associative Memory Approach to Option Pricing and Arbitrage Trading 370
S.D Teddy, E.M.-K Lai, C Quek

A Reliability-Based RBF Network Ensemble Model for Foreign Exchange Rates Predication 380
Lean Yu, Wei Huang, Kin Keung Lai, Shouyang Wang

Combining Time-Scale Feature Extractions with SVMs for Stock Index Forecasting 390
Shian-Chang Huang, Hsing-Wen Wang

Extensions of ICA for Causality Discovery in the Hong Kong Stock Market 400
Kun Zhang, Lai-Wan Chan

Pricing Options in Hong Kong Market Based on Neural Networks 410
Xun Liang, Haisheng Zhang, Jian Yang

Global Optimization of Support Vector Machines Using Genetic Algorithms for Bankruptcy Prediction 420
Hyunchul Ahn, Kichun Lee, Kyoung-jae Kim

Neural Networks, Fuzzy Inference Systems and Adaptive-Neuro Fuzzy Inference Systems for Financial Decision Making 430
Pretesh B Patel, Tshilidzi Marwala

Online Forecasting of Stock Market Movement Direction Using the Improved Incremental Algorithm 440
Dalton Lunga, Tshilidzi Marwala

Pretests for Genetic-Programming Evolved Trading Programs: “zero-intelligence” Strategies and Lottery Trading 450
Shu-Heng Chen, Nicolas Navet

Currency Options Volatility Forecasting with Shift-Invariant Wavelet Transform and Neural Networks 461
Fan-Yong Liu, Fan-Xin Liu

Trend-Weighted Fuzzy Time-Series Model for TAIEX Forecasting 469
Ching-Hsue Cheng, Tai-Liang Chen, Chen-Han Chiang

Intelligence-Based Model to Timing Problem of Resources Exploration in the Behavior of Firm 478
Hsiu Fen Tsai, Bao Rong Chang

Manufacturing Systems

Application of ICA in On-Line Verification of the Phase Difference of the Current Sensor 488
Xiaoyan Ma, Huaxiang Lu

Neural Networks Based Automated Test Oracle for Software Testing 498
Mao Ye, Boqin Feng, Li Zhu, Yao Lin

Tool Wear Condition Monitoring in Drilling Processes Using Fuzzy Logic 508
Onder Yumak, H Metin Ertunc

Fault Diagnosis in Nonlinear Circuit Based on Volterra Series and Recurrent Neural Network 518
Haiying Yuan, Guangju Chen

Manufacturing Yield Improvement by Clustering 526
M.A Karim, S Halgamuge, A.J.R Smith, A.L Hsu

Gear Crack Detection Using Kernel Function Approximation 535
Weihua Li, Tielin Shi, Kang Ding

The Design of Data-Link Equipment Redundant Strategy 545
Qian-Mu Li, Man-Wu Xu, Hong Zhang, Feng-Yu Liu

Minimizing Makespan on Identical Parallel Machines Using Neural Networks 553
Derya Eren Akyol, G Mirac Bayhan

Ensemble of Competitive Associative Nets for Stable Learning Performance in Temperature Control of RCA Cleaning Solutions 563
Shuichi Kurogi, Daisuke Kuwahara, Hiroaki Tomisaki, Takeshi Nishida, Mitsuru Mimata, Katsuyoshi Itoh

Predication of Properties of Welding Joints Based on Uniform Designed Neural Network 572
Shi Yu, Li Jianjun, Fan Ding, Chen Jianhong

Applying an Intelligent Neural System to Predicting Lot Output Time in a Semiconductor Fabrication Factory 581
Toly Chen

Multi-degree Prosthetic Hand Control Using a New BP Neural Network 589
R.C Wang, F Li, M Wu, J.Z Wang, L Jiang, H Liu

Control and Robotics

Neural-Network-Based Sliding Mode Control for Missile Electro-Hydraulic Servo Mechanism 596
Fei Cao, Yunfeng Liu, Xiaogang Yang, Yunhui Peng, Dong Miao

Turbulence Encountered Landing Control Using Hybrid Intelligent System 605
Jih-Gau Juang, Hou-Kai Chiou

An AND-OR Fuzzy Neural Network Ship Controller Design 616
Jianghua Sui, Guang Ren

RBF ANN Nonlinear Prediction Model Based Adaptive PID Control of Switched Reluctance Motor Drive 626
Chang-Liang Xia, Jie Xiu

Hierarchical Multiple Models Neural Network Decoupling Controller for a Nonlinear System 636
Xin Wang, Hui Yang

Sensorless Control of Switched Reluctance Motor Based on ANFIS 645
Chang-Liang Xia, Jie Xiu

Hybrid Intelligent PID Control for MIMO System 654
Jih-Gau Juang, Kai-Ti Tu, Wen-Kai Liu

H∞ Neural Networks Control for Uncertain Nonlinear Switched Impulsive Systems 664
Fei Long, Shumin Fei, Zhumu Fu, Shiyou Zheng

Reliable Robust Controller Design for Nonlinear State-Delayed Systems Based on Neural Networks 674
Yanjun Shen, Hui Yu, Jigui Jian

Neural Network Applications in Advanced Aircraft Flight Control System, a Hybrid System, a Flight Test Demonstration 684
Fola Soares, John Burken, Tshilidzi Marwala

Vague Neural Network Based Reinforcement Learning Control System for Inverted Pendulum 692
Yibiao Zhao, Siwei Luo, Liang Wang, Aidong Ma, Rui Fang

Neural-Network Inverse Dynamic Online Learning Control
Zhuohua Duan, Zixing Cai, Jinxia Yu

Tracking Control of a Mobile Robot with Kinematic Uncertainty Using Neural Networks 721
An-Min Zou, Zeng-Guang Hou, Min Tan, Xi-Jun Chen, Yun-Chu Zhang

Movement Control of a Mobile Manipulator Based on Cost Optimization 731
Kwan-Houng Lee, Tae-jun Cho

Evolutionary Algorithms and Systems

Synthesis of Desired Binary Cellular Automata Through the Genetic Algorithm 738
Satoshi Suzuki, Toshimichi Saito

On Properties of Genetic Operators from a Network Analytical Viewpoint 746
Hiroyuki Funaya, Kazushi Ikeda

SDMOGA: A New Multi-objective Genetic Algorithm Based on Objective Space Divided 754
Wangshu Yao, Shifu Chen, Zhaoqian Chen

Hamming Sphere Solution Space Based Genetic Multi-user Detection 763
Lili Lin

UEAS: A Novel United Evolutionary Algorithm Scheme 772
Fei Gao, Hengqing Tong

Implicit Elitism in Genetic Search 781
A.K Bhatia, S.K Basu

The Improved Initialization Method of Genetic Algorithm for Solving the Optimization Problem 789
Rae-Goo Kang, Chai-Yeoung Jung

Optimized Fuzzy Decision Tree Using Genetic Algorithm 797
Myung Won Kim, Joung Woo Ryu

A Genetic-Inspired Multicast Routing Optimization Algorithm with Bandwidth and End-to-End Delay Constraints 807
Sanghoun Oh, ChangWook Ahn, R.S Ramakrishna

Integration of Genetic Algorithm and Cultural Algorithms for Constrained Optimization 817
Fang Gao, Gang Cui, Hongwei Liu

Neuro-genetic Approach for Solving Constrained Nonlinear Optimization Problems 826
Fabiana Cristina Bertoni, Ivan Nunes da Silva

An Improved Primal-Dual Genetic Algorithm for Optimization in Dynamic Environments 836
Hongfeng Wang, Dingwei Wang

Multiobjective Optimization Design of a Hybrid Actuator with Genetic Algorithm 845
Ke Zhang

Human Hierarchical Behavior Based Mobile Agent Control in ISpace with Distributed Network Sensors 856
SangJoo Kim, TaeSeok Jin, Hideki Hashimoto

Evolvable Viral Agent Modeling and Exploration 866
Jingbo Hao, Jianping Yin, Boyun Zhang

Mobile Robot Control Using Fuzzy-Neural-Network for Learning Human Behavior 874
TaeSeok Jin, YoungDae Son, Hideki Hashimoto

EFuNN Ensembles Construction Using a Clustering Method and a Coevolutionary Multi-objective Genetic Algorithm 884
Fernanda L Minku, Teresa B Ludermir

Language Learning for the Autonomous Mental Development of Conversational Agents 892
Jin-Hyuk Hong, Sungsoo Lim, Sung-Bae Cho

A Multi-objective Evolutionary Algorithm for Multi-UAV Cooperative Reconnaissance Problem 900
Jing Tian, Lincheng Shen

Global and Local Contrast Enhancement for Image by Genetic Algorithm and Wavelet Neural Network 910
Changjiang Zhang, Xiaodong Wang

A Novel Constrained Genetic Algorithm for the Optimization of Active Bar Placement and Feedback Gains in Intelligent Truss Structures 920
Wenying Chen, Shaoze Yan, Keyun Wang, Fulei Chu

A Double-Stage Genetic Optimization Algorithm for Portfolio Selection 928
Kin Keung Lai, Lean Yu, Shouyang Wang, Chengxiong Zhou

Image Reconstruction Using Genetic Algorithm in Electrical Impedance Tomography 938
Ho-Chan Kim, Chang-Jin Boo, Min-Jae Kang

Mitigating Deception in Genetic Search Through Suitable Coding 946
S.K Basu, A.K Bhatia

The Hybrid Genetic Algorithm for Blind Signal Separation 954
Wen-Jye Shyr

Genetic Algorithm for Satellite Customer Assignment 964
S.S Kim, H.J Kim, V Mani, C.H Kim

Fuzzy Systems

A Look-Ahead Fuzzy Back Propagation Network for Lot Output Time Series Prediction in a Wafer Fab 974
Toly Chen

Extraction of Fuzzy Features for Detecting Brain Activation from Functional MR Time-Series 983
Juan Zhou, Jagath C Rajapakse

An Advanced Design Methodology of Fuzzy Set-Based Polynomial Neural Networks with the Aid of Symbolic Gene Type Genetic Algorithms and Information Granulation 993
Seok-Beom Roh, Hyung-Soo Hwang, Tae-Chon Ahn

A Hybrid Self-learning Approach for Generating Fuzzy Inference Systems 1002
Yi Zhou, Meng Joo Er

A Fuzzy Clustering Algorithm for Symbolic Interval Data Based on a Single Adaptive Euclidean Distance 1012
Francisco de A.T de Carvalho

Approximation Accuracy of Table Look-Up Scheme for Fuzzy-Neural Networks with Bell Membership Function 1022
Weimin Ma

Prototype-Based Threshold Rules 1028
Marcin Blachnik, Wlodzislaw Duch

A Fuzzy LMS Neural Network Method for Evaluation of Importance of Indices in MADM 1038
Feng Kong, Hongyan Liu

Fuzzy RBF Neural Network Model for Multiple Attribute Decision Making 1046
Feng Kong, Hongyan Liu

A Study on Decision Model of Bottleneck Capacity Expansion with Fuzzy Demand 1055
Bo He, Chao Yang, Mingming Ren, Yunfeng Ma

Workpiece Recognition by the Combination of Multiple Simplified Fuzzy ARTMAP 1063
Zhanhui Yuan, Gang Wang, Jihua Yang

Stability of Periodic Solution in Fuzzy BAM Neural Networks with Finite Distributed Delays 1070
Tingwen Huang

Design Methodology of Optimized IG gHSOFPNN and Its Application to pH Neutralization Process 1079
Ho-Sung Park, Kyung-Won Jang, Sung-Kwun Oh, Tae-Chon Ahn

Neuro-fuzzy Modeling and Fuzzy Rule Extraction Applied to Conflict Management 1087
Thando Tettey, Tshilidzi Marwala

Hardware Implementations

Hardware Implementation of a Wavelet Neural Network Using FPGAs 1095
Ali Karabıyık, Aydoğan Savran

Neural Network Implementation in Hardware Using FPGAs 1105
Suhap Sahin, Yasar Becerikli, Suleyman Yazici

FPGA Discrete Wavelet Transform Encoder/Decoder Implementation 1113
Pedro Henrique Cox, Aparecido Augusto de Carvalho

Randomized Algorithm in Embedded Crypto Module 1122
Jin Keun Hong

Hardware Implementation of an Analog Neural Nonderivative Optimizer 1131
Rodrigo Cardim, Marcelo C.M Teixeira, Edvaldo Assunção, Nobuo Oki, Aparecido A de Carvalho, Márcio R Covacic

Synchronization Via Multiplex Spike-Trains in Digital Pulse Coupled Networks 1141
Takahiro Kabe, Hiroyuki Torikai, Toshimichi Saito

A Bit-Stream Pulse-Based Digital Neuron Model for Neural Networks 1150
César Torres-Huitzil

From Hopfield Nets to Pulsed Neural Networks 1160
Ana M.G Guerreiro, Carlos A Paz de Araujo

A Digital Hardware Architecture of Self-Organizing Relationship (SOR) Network 1168
Hakaru Tamukoh, Keiichi Horio, Takeshi Yamakawa

Towards Hardware Acceleration of Neuroevolution for Multimedia Processing Applications on Mobile Devices 1178
Daniel Larkin, Andrew Kinane, Noel O'Connor

Neurocomputing for Minimizing Energy Consumption of Real-Time Operating System in the System-on-a-Chip 1189
Bing Guo, Dianhui Wang, Yan Shen, Zhishu Li

A Novel Multiplier for Achieving the Programmability of Cellular Neural Network 1199
Peng Wang, Xun Zhang, Dongming Jin

Neural Network-Based Scalable Fast Intra Prediction Algorithm in H.264 Encoder 1206
Jung-Hee Suk, Jin-Seon Youn, Jun Rim Choi

Author Index 1217


I. King et al. (Eds.): ICONIP 2006, Part III, LNCS 4234, pp. 1–10, 2006.
© Springer-Verlag Berlin Heidelberg 2006

DRFE: Dynamic Recursive Feature Elimination for Gene Identification Based on Random Forest

Ha-Nam Nguyen and Syng-Yup Ohn

Department of Computer Engineering, Hankuk Aviation University, Seoul, Korea
{nghanam, syohn}@hau.ac.kr

Abstract. Determining the relevant features is a combinatorial task in various fields of machine learning such as text mining, bioinformatics, pattern recognition, etc. Several scholars have developed various methods to extract relevant features, but no method is clearly superior. Breiman proposed Random Forest, which classifies a pattern based on an ensemble of CART trees, and his method turns out good results compared to other classifiers. Taking advantage of Random Forest, and using the wrapper approach first introduced by Kohavi et al., we propose an algorithm named Dynamic Recursive Feature Elimination (DRFE) to find the optimal subset of features, reducing the noise of the data and increasing the performance of classifiers. In our method, we use Random Forest as the induced classifier and develop our own feature elimination function by adding extra terms to the feature scoring. We conducted experiments with two public datasets: Colon cancer and Leukemia cancer. The experimental results on this real-world data showed that the proposed method has a higher prediction rate than the baseline algorithm. The obtained results are comparable to, and sometimes better than, those of widely used classification methods in the feature selection literature.

1 Introduction

Machine learning techniques have been widely used in various fields such as text mining, network security and especially bioinformatics. There is a wide range of learning algorithms that have been studied and developed, e.g. Decision Trees, K-Nearest-Neighbor, Support Vector Machines, etc. These existing learning algorithms do well in most cases. However, when the number of features in a dataset is large, their performance degrades. In that case, the whole set of features of a dataset usually over-describes the data relationships. Thus, an important issue is how to select a relevant subset of features based on suitable criteria. A good feature selection method should heighten the success probability of the learning methods [1, 2]. In other words, this mechanism helps to eliminate noisy or non-representative features which can impede the recognition process.

Recently, Random Forest (RF) was proposed based on an ensemble of CART tree classifiers [3]. This method turns out better results compared to other classifiers including Adaboost, Support Vector Machines and Neural Networks. Researchers have applied RF as a feature selection method [4, 5]: some tried RF directly [4] and others adapted it for relevance feedback [5]. The approach presented in [5] attempts to address this problem with correlation techniques. In this paper, we introduce a new method of feature selection based on Recursive Feature Elimination. The proposed method reduces the set of features via a feature ranking criterion. This criterion re-evaluates the importance of features according to the Gini index [6, 7] and the correlation of training and validation accuracy obtained from the RF algorithm. In this way, we take both the feature contribution and the correlation of the training error into account. We applied the proposed algorithm to classify several datasets such as Colon cancer and Leukemia cancer. DRFE showed better classification accuracy than RF, and sometimes better results than other studies.

The rest of this paper is organized as follows. In Section 2 we describe feature selection approaches. In Section 3 we briefly review RF and the characteristics that will be used in the proposed method. The framework of the proposed method is presented in Section 4. Details of the new feature elimination method are introduced in Section 5. Section 6 explains the experimental design of the proposed method and the analysis of the obtained results. Some concluding remarks are given in Section 7.

Sec-2 Feature Selection Problem

In this section, we briefly summarize the space dimension reduction and feature tion methodologies Feature selection approach has been shown as a very effective way in removing redundant and irrelevant features, so that it increases the efficiency

selec-of the learning task and improves learning performance such as learning time, vergence rate, accuracy, etc A lot of studies have focused on feature selection litera-ture [1, 2, 8-11] As mentioned in [1, 2], there are two ways to determine the starting point in a searching space The first strategy might start with nothing and succes-

con-sively adds relevance features called forward selection Another one, named ward elimination, starts with all features and successively removes irrelevance ones

back-There are two different approaches used for feature selection, i.e Filter approach and Wrapper approach [1, 2] The Filter approach considers the feature selection process as precursor stage of learning algorithms The most disadvantage of this approach is that there is no relationship between the feature selection process and the performance of learning algorithms The second approach focuses on a specific machine learning algo-rithm It evaluates the selected feature subset based on the goodness of learning algo-rithms such as the accuracy, recall and precision values The disadvantage of this ap-proach is high computation cost Some researchers tried to propose methods that can speed up the evaluating process to decrease this cost Some studies used both filter and wrapper approaches in their algorithms called hybrid approaches [9, 10, 12-14] In these methods, the feature criteria or randomly selection methods are used to choose the can-didate feature subsets The cross validation mechanism is employed to decide the final best subset among the whole candidate subsets [6]

3 Random Forest

Random Forest is a special kind of ensemble learning technique [3]. It builds an ensemble of CART tree classifiers using a bagging mechanism [6]. By using bagging, each node of a tree only selects a small subset of features for the split, which enables the algorithm to create classifiers for high-dimensional data very quickly. One has to specify the number of randomly selected features (mtry) at each split; the default value is sqrt(p) for classification, where p is the number of features. The Gini index [6, 7] is used as the splitting criterion. The largest possible tree is grown and not pruned. One should choose a large enough number of trees (ntree) to ensure that every input feature is predicted at least several times. The root node of each tree in the forest keeps a bootstrap sample from the original data as its training set. The out-of-bag (OOB) estimates are based on roughly one third of the original data set. By contrasting these OOB predictions with the training set outcomes, one can arrive at an estimate of the prediction error rate, which is referred to as the OOB estimate of the error rate.

To describe the out-of-bag (OOB) estimation method, assume a method for building a classifier from a training set. We can construct classifiers H(x, Tk) based on bootstrap training sets Tk drawn from a given training set T. The out-of-bag classifier of each sample (x, y) in the training set is defined as the aggregate of the votes only over those classifiers for which Tk does not contain that sample. Thus, the out-of-bag estimate of the generalization error is the error rate of the out-of-bag classifier on the training set.
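As a concrete illustration, here is a minimal sketch of the OOB estimate using scikit-learn's RandomForestClassifier, which exposes it directly (an assumption of this note; the paper's own experiments used the R randomForest package):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy two-class data standing in for a gene-expression matrix.
X, y = make_classification(n_samples=62, n_features=200, random_state=0)

# With oob_score=True, each sample is scored only by the trees whose
# bootstrap set did not contain it -- the OOB estimate described above.
rf = RandomForestClassifier(n_estimators=1000, oob_score=True, random_state=0)
rf.fit(X, y)
print("OOB accuracy estimate:", rf.oob_score_)
```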

The Gini index is defined in terms of the squared probabilities of membership of each target category at the node:

gini(N) = 1 − Σ_j p²(ω_j),   (1)

where p(ω_j) is the relative frequency of class ω_j at node N. This means that if all the samples are in the same category, the impurity is zero; otherwise it is a positive value. Algorithms such as CART [6], SLIQ [18], and RF [3, 7] use the Gini index as the splitting criterion, choosing each split to minimize the impurity of the resulting nodes. In Random Forest, the Gini decrease for each individual variable over all trees in the forest gives a fast variable importance measure that is often very consistent with the permutation importance measure [3, 7].
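For illustration, a minimal sketch of the Gini impurity of Eq. (1) (gini_impurity is a hypothetical helper, not code from the paper):

```python
import numpy as np

def gini_impurity(labels):
    # gini(N) = 1 - sum_j p^2(w_j), over the class frequencies at node N.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

print(gini_impurity([0, 0, 1, 1]))  # 0.5: maximally impure two-class node
print(gini_impurity([1, 1, 1, 1]))  # 0.0: pure node
```

In scikit-learn, the corresponding Gini-decrease importance is exposed as feature_importances_ on a fitted RandomForestClassifier.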

4 Proposed Approach

The proposed method uses the Random Forest module to estimate performance, consisting of the cross-validation accuracy and the importance of each feature in the training data set. Even though RF is robust against over-fitting by itself [3, 6], our approach cannot inherit this characteristic. To deal with the over-fitting problem, we use the n-fold cross-validation technique to minimize the generalization error [6].

The Feature Evaluation module computes the feature importance ranking values from the results obtained by the Random Forest module (see Equation 2). The irrelevant feature(s) are eliminated, and only important features survive, by means of the feature ranking value. The surviving features are again used as input data for the Random Forest module. This process is executed repeatedly until the desired criteria are satisfied.


Fig. 1. The main procedures of our approach

The set of features resulting from the learning phase is used as a filter on the test dataset in the classification phase. The details of the proposed algorithm are presented in the next section. The overall procedure of our approach is shown in Fig. 1.

5 Dynamic Recursive Feature Elimination Algorithm

When computing the ranking criteria, wrapper approaches usually concentrate on the accuracies of the features but not much on their correlation. A feature with a good ranking criterion may not produce a good result, and neither may the combination of several features each with good ranking criteria. To remedy this problem, we propose a procedure named Dynamic Recursive Feature Elimination (DRFE):

1. Train the data by Random Forest with cross validation.

2. Calculate the ranking criterion F_i^rank for all features, where i = 1, ..., n (n is the number of features).

3. Remove feature(s) using the DynamicFeatureElimination function (for computational reasons, it may be more efficient to remove several features at a time).

4. Go back to step 1 until the desired criteria are reached.

In step 1, we use Random Forest with n-fold cross validation to train the classifier. From the j-th fold we obtain a tuple (F_j, A_j^learn, A_j^validation): the feature importance, the learning accuracy and the validation accuracy, respectively. We use these values to compute the ranking criterion in step 2.

The core of our algorithm is step 2, in which we use the results from step 1 to build the ranking criterion used in step 3. The ranking criterion of the i-th feature is computed as

F_i^rank = Σ_{j=1..n} F_{i,j} × (A_j^learn + A_j^validation) / (|A_j^learn − A_j^validation| + ε),   (2)

where j = 1, ..., n indexes the cross-validation folds; F_{i,j}, A_j^learn and A_j^validation are, respectively, the feature importance in terms of node impurity (computed from the Gini impurity), the learning accuracy, and the validation accuracy obtained from the RandomForest module in fold j; and ε is a real number with a very small value.

The first factor, F_{i,j}, represents the Gini decrease for the feature over all trees in the forest when the data is trained by RF; obviously, the higher this decrease, the better the rank of the feature [3, 6]. The second factor deals with the overfitting issue [6] as well as the desire for high accuracy. Its numerator expresses the desire for high accuracy: the larger this value, the better the rank of the feature. At the same time, we want high learning accuracy without fitting the training data too closely, which is the overfitting problem; to address it, we apply the n-fold cross-validation technique [6]. The smaller the difference between the learning accuracy and the validation accuracy, the more stable the accuracy, so the denominator serves to reduce overfitting. In the case where the learning accuracy equals the validation accuracy, the difference is 0, and ε, with its very small value, keeps the fraction from going to ∞. We want to choose features with both high stability and high accuracy, so the procedure accepts a feature subset only if its validation accuracy is higher than that of the previously selected feature set. This heuristic ensures that the chosen feature set always has better accuracy. As a result of step 2, we have a list of features ordered by ranking criterion.

In step 3, we propose a feature elimination strategy based on the backward approach. The strategy depends on both the ranking criterion and the validation accuracy: the ranking criterion gives the order in which features are eliminated, and the validation accuracy is used to decide whether the chosen subset of features is permanently eliminated. In the normal case, our method eliminates the features having the smallest ranking criterion values. The new subset is validated by the RandomForest module, and the obtained validation accuracy plays the role of decision maker: it is used to evaluate whether the selected subset is accepted as the new candidate feature set. If the obtained validation accuracy is lower than that of the previously selected subset, the method tries to eliminate other features based on their rank values.

This iteration stops whenever the validation accuracy of the new subset is higher than that of the previously selected subset. If no feature can create a new subset with better validation accuracy, the current subset of features is considered the final result of our learning algorithm; otherwise the procedure goes back to step 1.

6 Experiments

We tested the proposed algorithm on several datasets, including two public datasets (Colon cancer and Leukemia cancer), to validate our approach. In this section, we describe the datasets used and our experimental configuration, and evaluate the experimental results.

6.1 Datasets

The Colon cancer dataset contains gene expression information extracted from DNA microarrays [1]. The dataset consists of 62 samples, of which 22 are normal samples and 40 are cancer tissue samples; each has 2000 features. We randomly chose 31 samples for the training set, and the remaining 31 samples were used as the testing set. (Available at: http://sdmc.lit.org.sg/GEDatasets/Data/ColonTumor.zip)

The Leukemia dataset consists of 72 samples divided into two classes, ALL and AML [15]. There are 47 ALL and 25 AML samples, and each contains 7129 features. This dataset was divided into a training set with 38 samples (27 ALL and 11 AML) and a testing set with 34 samples (20 ALL and 14 AML). (Available at: http://sdmc.lit.org.sg/GEDatasets/Data/ALL-AML_Leukemia.zip)

6.2 Experimental Environments

Our proposed algorithm was coded in the R language (http://www.r-project.org; R Development Core Team, 2004), using the randomForest package (by A. Liaw and M. Wiener) for the random forest module. All experiments were conducted on a Pentium IV 1.8 GHz personal computer. The learning and validation accuracies were determined by means of 4-fold cross validation. The data was randomly split into a training set and a testing set. In this paper, we used RF on the original dataset as the baseline method. The proposed algorithm and the baseline algorithm were executed with the same training and testing datasets to compare the efficiency of the two methods.

Fig. 2. Comparison of classification accuracy between DRFE (dashed line) and RF (dash-dot line) over 50 trials with parameter ntree = {500, 1000, 1500, 2000}, on the Colon dataset


6.3 Experimental Results and Analysis

Table 1. The average classification rate on Colon cancer over 50 trials (average % classification accuracy ± standard deviation), for the four ntree settings of Fig. 2

          ntree=500    ntree=1000   ntree=1500   ntree=2000
RF only   75.6±8.9     76.0±9.0     79.3±6.8     78.0±7.1
DRFE      83.5±5.6     85.5±4.5     84.0±5.1     83.0±6.0

Several studies have been done in terms of feature selection approaches. A comparison of those studies' results with our approach's results is given in Table 2. Our method sometimes showed better results than the previous ones. In addition, the standard deviation values of the proposed method are much lower than those of both RF (see Table 1) and the other methods (Table 2), showing that the proposed method produced more stable results.

Table 2. The best prediction rates of some studies on the Colon dataset

Type of classifier                    Prediction rate (%)

For the Leukemia dataset, the relevant genes are selected from among the 7129 given features. In this experiment, the ntree parameter was set to 1000, 2000, 3000, and 4000. By applying DRFE, the classification accuracies are significantly improved in all 50 trials (Fig. 3).


Fig. 3. Comparison of classification accuracy between DRFE (dashed line) and RF (dash-dot line) over 50 trials with parameter ntree = {1000, 2000, 3000, 4000}, on the Leukemia dataset

A summary of the classification results is given in Table 3. In these experiments, the tree-number parameter does not significantly affect the classification results. We set the number of features eliminated per iteration, called the Step parameter, to 50 (Step = 50). Our proposed method achieved an accuracy of 95.94% with about 55 gene predictors retained by the DRFE procedure. This number of obtained genes makes up only about 0.77% (55/7129) of the whole set of genes.

Table 3. Classification results on Leukemia cancer (average % classification accuracy ± standard deviation), for the four ntree settings of Fig. 3

          ntree=1000   ntree=2000   ntree=3000   ntree=4000
RF only   77.59±2.6    77.41±1.9    77.47±2.5    76.88±1.9
DRFE      95.71±3.1    95.53±3.3    95.94±2.7    95.76±2.8

Again, we compare the prediction results of our method with those of other studies on the Leukemia dataset (Table 4). The table shows that the classification accuracy of our method is much higher than theirs.


Table 4. The best prediction rates of some studies on the Leukemia data set

Type of classifier                    Prediction rate (%)
Combined kernel for SVM [16]          85.3±3.0
Multi-domain gating network [17]      75.0

7 Conclusions

In this paper, we introduced a novel method for feature selection. The RF algorithm itself is particularly suited to analyzing high-dimensional datasets: it can easily deal with a large number of features as well as a small number of training samples. Our method not only employs RF by means of conventional RFE but also adapts it to the feature elimination task through the DRFE procedure. Based on the defined ranking criterion and the dynamic feature elimination strategy, the proposed method obtains higher classification accuracy and more stable results than the original RF. The experiments achieved a high recognition accuracy of 85.5%±4.5 on the Colon cancer dataset with a subset of only 141 genes, and an accuracy of 95.94%±2.7 on Leukemia cancer using a subset of 67 genes. The experimental results also show a significant improvement in classification accuracy compared to the original RF algorithm, especially on the Leukemia cancer dataset.

Acknowledgement

This research was supported by RIC (Regional Innovation Center) at Hankuk Aviation University. RIC is a Kyounggi-Province Regional Research Center designated by the Korea Science and Engineering Foundation and the Ministry of Science & Technology.

References

3. Breiman, L.: Random Forests. Machine Learning, vol. 45 (2001), pages 5–32.

4. Torkkola, K., Venkatesan, S., Liu, H.: Sensor selection for maneuver classification. Proceedings of the 7th International IEEE Conference on Intelligent Transportation Systems (2004), pages 636–641.

5. Wu, Y., Zhang, A.: Feature selection for classifying high-dimensional numerical data. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2 (2004), pages 251–258.

6. Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification (2nd Edition). John Wiley & Sons (2001).

9. Fröhlich, H., Chapelle, O., Schölkopf, B.: Feature Selection for Support Vector Machines by Means of Genetic Algorithms. 15th IEEE International Conference on Tools with Artificial Intelligence (2003), page 142.

10. Chen, X.-w.: Gene Selection for Cancer Classification Using Bootstrapped Genetic Algorithms and Support Vector Machines. IEEE Computer Society Bioinformatics Conference (2003), page 504.

11. Zhang, H., Yu, C.-Y., Singer, B.: Cell and tumor classification using gene expression data: Construction of forests. Proceedings of the National Academy of Sciences of the United States of America, vol. 100 (2003), pages 4168–4172.

12. Das, S.: Filters, wrappers and a boosting-based hybrid for feature selection. Proceedings of the 18th ICML (2001).

13. Ng, A.Y.: On feature selection: learning with exponentially many irrelevant features as training examples. Proceedings of the Fifteenth International Conference on Machine Learning (1998).

14. Xing, E., Jordan, M., Karp, R.: Feature selection for high-dimensional genomic microarray data. Proceedings of the 18th ICML (2001).

15. Alon, U., Barkai, N., Notterman, D., Gish, K., Ybarra, S., Mack, D., Levine, A.: Broad Patterns of Gene Expression Revealed by Clustering Analysis of Tumor and Normal Colon Tissues Probed by Oligonucleotide Arrays. Proceedings of the National Academy of Sciences of the United States of America, vol. 96 (1999), pages 6745–6750.

16. Nguyen, H.-N., Ohn, S.-Y., Park, J., Park, K.-S.: Combined Kernel Function Approach in SVM for Diagnosis of Cancer. Proceedings of the First International Conference on Natural Computation (2005).

17. Su, T., Basu, M., Toure, A.: Multi-Domain Gating Network for Classification of Cancer Cells using Gene Expression Data. Proceedings of the International Joint Conference on Neural Networks (2002), pages 286–289.

18. Mehta, M., Agrawal, R., Rissanen, J.: SLIQ: A Fast Scalable Classifier for Data Mining. Proceedings of the International Conference on Extending Database Technology (1996), pages 18–32.


Gene Feature Extraction Using T-Test Statistics and Kernel Partial Least Squares

Shutao Li¹, Chen Liao¹, and James T. Kwok²

¹ College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
² Department of Computer Science, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong
shutao_li@yahoo.com.cn, lc337199@sina.com, jamesk@cs.ust.hk

I. King et al. (Eds.): ICONIP 2006, Part III, LNCS 4234, pp. 11–20, 2006.
© Springer-Verlag Berlin Heidelberg 2006

Abstract. In this paper, we propose a gene extraction method that uses two standard feature extraction methods, namely the T-test method and kernel partial least squares (KPLS), in tandem. First, a preprocessing step based on the T-test method is used to filter irrelevant and noisy genes. KPLS is then used to extract features with high information content. Finally, the extracted features are fed into a classifier. Experiments are performed on three benchmark datasets: breast cancer, ALL/AML leukemia and colon cancer. While using either the T-test method or KPLS alone does not yield satisfactory results, experimental results demonstrate that using the two together can significantly boost classification accuracy, and this simple combination can obtain state-of-the-art performance on all three datasets.

1 Introduction

Gene expression studies by DNA microarrays provide unprecedented opportunities, because researchers can measure the expression levels of tens of thousands of genes simultaneously. Using this microarray technology, a comprehensive understanding of exactly which genes are being expressed in a specific tissue under various conditions can now be obtained [3].

However, since a gene dataset usually includes only a few samples but thousands or even tens of thousands of genes, this limited availability of high-dimensional samples is particularly problematic for training most classifiers. As such, dimensionality reduction often has to be employed. Ideally, a good dimensionality reduction method should eliminate genes that are irrelevant, redundant, or noisy for classification, while at the same time retaining all the highly discriminative genes [11].

In general, there are three approaches to gene (feature) extraction, namely, the filter, wrapper and embedded approaches. In the filter approach, genes are selected according to their intrinsic characteristics; it works as a preprocessing step without the incorporation of any learning algorithm. Examples include the nearest shrunken centroid method, the TNoM-score based method and the T-statistics method [8]. In the wrapper approach, a learning algorithm is used to score feature subsets based on the resultant predictive power, and an optimal feature subset is searched for a specific classifier [4]. Examples include recursive feature elimination and genetic algorithm-based algorithms.

In this paper, we propose a new gene extraction method based on the filter approach. First, genes are preprocessed by the T-test method to filter irrelevant and noisy genes. Then, kernel partial least squares (KPLS) is used to extract features with high information content and discriminative power. The rest of this paper is organized as follows. In Section 2, we first review the T-test method and KPLS. The new gene extraction method is presented in Section 3. Section 4 then presents the experimental results, which are followed by some concluding remarks.

2 Review

In the following, we suppose that a microarray dataset containing n samples is given, with each sample x represented by the expression levels of m genes.

2.1 T-Test Method

The method is based on the t-statistics [7]. Denote the two classes as the positive (+) class and the negative (−) class. For each feature x_j, we compute the mean μ_j^+ (respectively μ_j^−) and standard deviation δ_j^+ (respectively δ_j^−) over the + class (respectively, − class) samples. Then a score T(x_j) can be obtained as

T(x_j) = |μ_j^+ − μ_j^−| / sqrt((δ_j^+)²/n^+ + (δ_j^−)²/n^−),

where n^+ and n^− are the numbers of samples in the positive and negative classes, respectively.
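A minimal sketch of this per-gene scoring (t_scores is a hypothetical helper, not code from the paper):

```python
import numpy as np

def t_scores(X, y):
    """Per-gene score |mu+ - mu-| / sqrt(d+^2/n+ + d-^2/n-), as above."""
    pos, neg = X[y == 1], X[y == 0]
    num = np.abs(pos.mean(axis=0) - neg.mean(axis=0))
    den = np.sqrt(pos.var(axis=0, ddof=1) / len(pos)
                  + neg.var(axis=0, ddof=1) / len(neg))
    return num / den

rng = np.random.default_rng(0)
X = rng.normal(size=(62, 2000))     # stand-in for an expression matrix
X[:22, :5] += 2.0                   # plant 5 discriminative genes
y = np.array([0] * 22 + [1] * 40)
print(np.argsort(t_scores(X, y))[-5:])  # the planted genes rank highest
```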

2.2 Kernel Partial Least Squares (KPLS)

Given a set of input samples {x_i}, i = 1, ..., n (where each x_i ∈ R^m), and the corresponding set of outputs {y_i} (where y_i ∈ R). Here, only a one-dimensional output is needed because only two-class classification is considered. With the use of a kernel, a nonlinear transformation of the input samples from the original input space into a feature space F is obtained, i.e. a mapping φ: x_i ∈ R^m → φ(x_i) ∈ F. The aim of KPLS is then to construct a linear PLS model in this kernel-induced feature space F. Effectively, a nonlinear kernel PLS in the original input space is obtained, and the mutual orthogonality of the score vectors is retained.

Let Φ be the n × m' matrix of input samples in the feature space F, whose i-th row is the vector φ(x_i)^T, where m' is the dimensionality of φ(x_i) and can be infinite. Denote by Φ' the n × m' deflated dataset and by Y' the n × 1 deflated class labels. Given a score vector t, the rule of deflation is

Φ' = Φ − t t^T Φ,   Y' = Y − t t^T Y.   (1)

The process is iterated F_ac times. Subsequently, the deflated dataset is obtained from the original dataset and the PLS component, while the deflated class labels are obtained from the original class labels and the PLS component.

Denote the sequence of t’s and u’s obtained n × 1 vectors t1, t2, , t F ac and u1, u2, , u F ac , respectively Moreover, let T = [t1, t2, , t F ac ] and U = [u1, u2, , u F ac] The “kernel trick” can then be utilized instead of explicitly

mapping the input data, and results in: K = ΦΦ T , where K stands for the n × n kernel matrix: K(i, j) = k(x i , x j ), where k is the kernel function K can now be directly used in the deflation instead of φ, as

K  = (I

Here, K  is the deflated kernel matrix and I n is n-dimensional identity matrix.

Now Eq.(2) takes the place of Eq.(1) So deflated kernel matrix is obtained by theoriginal kernel matrix and the PLS component In kernel PLS, the assumption

that the variables of X have zero mean in linear PLS should also hold The

procedure must be applied to centralize the mapped data in the feature space

dimensional space K t is the n t ×n kernel matrix defined on the test set such that

K t (i, j) = k(z i , x j ) T T KU is an upper triangular matrix and thus invertible The centralized test set kernel Gram matrix K tcan be calculated by [10,9]
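A compact sketch of the training-side computation, using the single-output shortcut (for a one-dimensional y the NIPALS inner loop collapses to one step) and the deflation of Eq. (2); kernel centralization is omitted for brevity, and kpls_scores is a hypothetical name:

```python
import numpy as np

def kpls_scores(K, y, n_components):
    """Extract F_ac mutually orthogonal score vectors t from kernel matrix K."""
    n = K.shape[0]
    Kc = K.astype(float).copy()
    yc = y.astype(float).reshape(-1, 1).copy()
    T = []
    for _ in range(n_components):
        t = Kc @ yc                  # u = y for a single output, so t = K u
        t /= np.linalg.norm(t)
        T.append(t)
        P = np.eye(n) - t @ t.T
        Kc = P @ Kc @ P              # deflate the kernel matrix, Eq. (2)
        yc = P @ yc                  # deflate the class labels likewise
    return np.hstack(T)              # n x F_ac matrix of extracted features
```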


3 Gene Extraction Using T-Test and KPLS

While one can simply use either the T-test method or KPLS described in Section 2 for gene extraction, neither of them yields satisfactory performance in practice. In this paper, we propose using the T-test and KPLS in tandem for gene extraction. The key steps are:

1. (Preprocessing using T-test): Since the samples are divided into two classes, one can compute the score of each gene using the T-statistics. Genes with scores greater than a predefined threshold T are considered discriminatory and are selected; genes whose scores are smaller than T are considered irrelevant/noisy and are thus eliminated.

2. (Feature extraction using KPLS): The features extracted in the first step are further filtered using KPLS.

3. (Training and Classification): Using the extracted features, a new training set is formed, which is then used to train a classifier. The trained classifier can then be used for predictions on the test set.

A schematic diagram of the whole process is shown in Figure 1.

Fig. 1. The combined process of gene extraction and classification
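A minimal end-to-end sketch of these three steps, assuming scikit-learn; here the exact KPLS deflation of Section 2.2 is approximated by linear PLS on the kernel matrix rows (the empirical kernel map), and ttest_kpls_svm is a hypothetical name, not the authors' code:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def ttest_kpls_svm(X_tr, y_tr, X_te, T=2.0, gam=1.0, F_ac=5):
    # Step 1: T-test filter (the statistic of Section 2.1).
    pos, neg = X_tr[y_tr == 1], X_tr[y_tr == 0]
    score = np.abs(pos.mean(0) - neg.mean(0)) / np.sqrt(
        pos.var(0, ddof=1) / len(pos) + neg.var(0, ddof=1) / len(neg))
    keep = score > T
    X_tr, X_te = X_tr[:, keep], X_te[:, keep]
    # Step 2: KPLS-style feature extraction; sklearn's gamma is 1/gamma of
    # the paper's kernel k(x, y) = exp(-||x - y||^2 / gamma).
    K_tr = rbf_kernel(X_tr, gamma=1.0 / gam)
    K_te = rbf_kernel(X_te, X_tr, gamma=1.0 / gam)
    pls = PLSRegression(n_components=F_ac).fit(K_tr, y_tr)
    # Step 3: linear SVM trained on the extracted components.
    clf = SVC(kernel="linear").fit(pls.transform(K_tr), y_tr)
    return clf.predict(pls.transform(K_te))
```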


Table 1. Parameters used in the classifiers

breast cancer    leukemia    colon cancer

Table 2. Testing accuracies (%) when either the T-test or KPLS is used

breast cancer    leukemia    colon cancer


Table 4. Testing accuracy (%) on the leukemia dataset

3. Colon cancer dataset: It contains 2,000 genes and 62 samples. 22 of these samples are of normal colon tissues and the remaining 40 are of tumor tissues [1].

Using the genes selected, the following classifiers are constructed and compared in the experiments:

1. K-nearest neighbor classifier (k-NN).

2. Feedforward neural network (NN) with a single layer of hidden units. Here, we use the logistic function for the hidden units and the linear function for the output units. Back-propagation with adaptive learning rate and momentum is used for training.

3. Support vector machine (SVM). In the experiments, the linear kernel is always used.


Table 5. Testing accuracy (%) on the colon cancer dataset

Each of these classifiers involves some parameters; the parameter settings used on the different datasets are shown in Table 1. Because of the small training set size, leave-one-out (LOO) cross validation is used to obtain the testing accuracy. Both gene selection and classification are put together in each LOO iteration, i.e., they are trained on the training subset, and then the performance of the classifier with the selected features is assessed on the left-out example.

4.2 Results

There are three adjustable parameters in the proposed method:

1. The threshold T associated with the T-test method;

2. The width parameter γ in the Gaussian kernel k(x, y) = exp(−||x − y||²/γ) used in KPLS;

3. The number (F_ac) of score vectors used in KPLS.
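A short sketch of tuning these three parameters under the leave-one-out protocol described above, reusing the hypothetical ttest_kpls_svm pipeline sketched in Section 3 (grid values are illustrative assumptions):

```python
import numpy as np
from itertools import product
from sklearn.model_selection import LeaveOneOut

def loo_accuracy(X, y, T, gam, F_ac):
    # Gene selection and classification are redone inside every LOO split.
    hits = 0
    for tr, te in LeaveOneOut().split(X):
        pred = ttest_kpls_svm(X[tr], y[tr], X[te], T=T, gam=gam, F_ac=F_ac)
        hits += int(pred[0] == y[te][0])
    return hits / len(y)

def best_setting(X, y):
    grid = product([1.5, 2.0, 2.5],   # threshold T
                   [0.5, 1.0, 2.0],   # kernel width gamma
                   [3, 5, 7])         # number of score vectors F_ac
    return max(grid, key=lambda p: loo_accuracy(X, y, *p))
```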


Table 6. Testing accuracies (%) obtained by the various methods as reported in the literature

As a baseline, we first study the individual performance of using either the T-test method or KPLS for gene extraction. Here, only the SVM is used as the classifier. As can be seen from Table 2, the accuracy is not high. Moreover, the performance is not stable when different parameter settings are used.

We now study the performance of the proposed method, which uses both the T-test method and KPLS in tandem. Testing accuracies at different parameter settings on the three datasets are shown in Tables 3, 4 and 5, respectively. As can be seen, the proposed method reaches the best classification performance of 100% on both the breast cancer and leukemia datasets. On the colon cancer dataset, it also reaches 91.9%.

Besides, on comparing the three classifiers used, we can conclude that the neural network can attain the same performance as the SVM; however, its training time is observed to be much longer than that of the SVM. On the other hand, the k-NN classifier does not perform as well in our experiments.

We now compare the performance of the proposed method with those of the other methods reported in the literature. Note that all these methods are evaluated using leave-one-out cross-validation, so their classification accuracies can be directly compared. As can be seen in Table 6, the proposed method, which attains the best classification accuracy (100%) on both the breast cancer and leukemia datasets, outperforms most of the methods. Note that the Joint Classifier and Feature Optimization (JCFO) method [6] (using the linear kernel) can also attain 100% on the leukemia dataset; however, JCFO relies on the Expectation-Maximization (EM) algorithm [6] and is much slower than the proposed method.


5 Conclusions

In this paper, we propose a new gene extraction scheme based on the T-test method and KPLS. Experiments are performed on the breast cancer, leukemia and colon cancer datasets. While the use of either the T-test method or KPLS alone for gene extraction does not yield satisfactory results, the proposed method, which uses both the T-test method and KPLS in tandem, shows superior classification performance on all three datasets. The proposed gene extraction method thus proves to be a reliable one.

References

2. A. Ben-Dor, L. Bruhn, N. Friedman, I. Nachman, M. Schummer, and Z. Yakhini. Tissue classification with gene expression profiles. In Proceedings of the Fourth Annual International Conference on Computational Molecular Biology, pages 54–64, 2000.

3. H. Chai and C. Domeniconi. An evaluation of gene selection methods for multi-class microarray data classification. In Proceedings of the Second European Workshop on Data Mining and Text Mining for Bioinformatics, pages 3–10, Pisa, Italy, September 2004.

4. K. Duan and J.C. Rajapakse. A variant of SVM-RFE for gene selection in cancer classification with expression data. In Proceedings of the IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology, pages 49–55, 2004.

5. T.R. Golub, D.K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J.P. Mesirov, H. Coller, M.L. Loh, J.R. Downing, M.A. Caligiuri, C.D. Bloomfield, and E.S. Lander. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science, 286(5439):531–537, 1999.

6. B. Krishnapuram, L. Carin, and A. Hartemink. Gene expression analysis: Joint feature selection and classifier design. In B. Schölkopf, K. Tsuda, and J.-P. Vert, editors, Kernel Methods in Computational Biology, pages 299–318. MIT Press, 2004.

7. H. Liu, J. Li, and L. Wong. A comparative study on feature selection and classification methods using gene expression profiles and proteomic patterns. Genome Informatics, 13:51–60, 2002.

8. B. Ni and J. Liu. A hybrid filter/wrapper gene selection method for microarray classification. In Proceedings of the International Conference on Machine Learning and Cybernetics, pages 2537–2542, 2004.

9. R. Rosipal. Kernel partial least squares for nonlinear regression and discrimination. Neural Network World, 13(3):291–300, 2003.
