
Advances in Intelligent Systems and Computing 287

Recent Advances

on Soft Computing and Data Mining

Tutut Herawan

Rozaida Ghazali

Mustafa Mat Deris

Editors

Proceedings of the First International Conference on Soft Computing

and Data Mining (SCDM-2014)

Universiti Tun Hussein Onn Malaysia

Johor, Malaysia June 16th–18th, 2014


Advances in Intelligent Systems and Computing


About this Series

The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, and life science are covered. The list of topics spans all the areas of modern intelligent systems and computing.

The publications within “Advances in Intelligent Systems and Computing” are primarily textbooks and proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results.


Tutut Herawan · Rozaida Ghazali

Mustafa Mat Deris

Editors

Recent Advances on Soft

Computing and Data Mining

Proceedings of the First International

Conference on Soft Computing and

Data Mining (SCDM-2014)

Universiti Tun Hussein Onn Malaysia, Johor, Malaysia, June 16th–18th, 2014



ISSN 2194-5357 ISSN 2194-5365 (electronic)

ISBN 978-3-319-07691-1 ISBN 978-3-319-07692-8 (eBook)

DOI 10.1007/978-3-319-07692-8

Springer Cham Heidelberg New York Dordrecht London

Library of Congress Control Number: 2014940281

© Springer International Publishing Switzerland 2014

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


We are honored to be part of this special event, the First International Conference on Soft Computing and Data Mining (SCDM-2014). SCDM-2014 will be held at Universiti Tun Hussein Onn Malaysia, Johor, Malaysia on June 16th–18th, 2014. It has attracted 145 papers from 16 countries from all over the world. Each paper was peer reviewed by at least two members of the Program Committee. Finally, only 65 (44%) papers with the highest quality were accepted for oral presentation and publication in these volume proceedings.

The papers in these proceedings are grouped into two sections and two in-conjunction workshops:

• Soft Computing

• Data Mining

• Workshop on Nature Inspired Computing and Its Applications

• Workshop on Machine Learning for Big Data Computing

On behalf of SCDM-2014, we would like to express our highest gratitude for the chance to cooperate with the Applied Mathematics and Computer Science Research Centre, Indonesia and the Software and Multimedia Centre, Universiti Tun Hussein Onn Malaysia for their support. Our special thanks go to the Vice Chancellor of Universiti Tun Hussein Onn Malaysia, the Steering Committee, General Chairs, Program Committee Chairs, Organizing Chairs, Workshop Chairs, and all Program and Reviewer Committee members for their valuable efforts in the review process that helped us to guarantee the highest quality of the selected papers for the conference.

We also would like to express our thanks to the four keynote speakers: Prof. Dr. Nikola Kasabov from KEDRI, Auckland University of Technology, New Zealand; Prof. Dr. Hamido Fujita from Iwate Prefectural University (IPU), Japan; Prof. Dr. Hojjat Adeli from The Ohio State University; and Prof. Dr. Mustafa Mat Deris from SCDM, Universiti Tun Hussein Onn Malaysia.

Our special thanks are due also to Prof. Dr. Janusz Kacprzyk and Dr. Thomas Ditzinger for publishing the proceedings in Springer's Advances in Intelligent Systems and Computing. We wish to thank the members of the Organizing and Student Committees for their very substantial work, especially those who played essential roles.

We cordially thank all the authors for their valuable contributions and other participants of this conference. The conference would not have been possible without them.

Editors
Tutut Herawan
Rozaida Ghazali
Mustafa Mat Deris


Conference Organization

Patron

Prof Dato’ Dr Mohd Noh Vice-Chancellor of Universiti Tun Hussein Onn Malaysia

Honorary Chair

A Fazel Famili National Research Council of Canada

Hamido Fujita Iwate Prefectural University, Japan

Steering Committee

Nazri Mohd Nawi Universiti Tun Hussein Onn Malaysia (UTHM)

Jemal H Abawajy Deakin University, Australia

Chair

Rozaida Ghazali Universiti Tun Hussein Onn Malaysia

Mustafa Mat Deris Universiti Tun Hussein Onn Malaysia

Secretary

Noraini Ibrahim Universiti Tun Hussein Onn Malaysia

Norhalina Senan Universiti Tun Hussein Onn Malaysia

Organizing Committee

Hairulnizam Mahdin Universiti Tun Hussein Onn Malaysia

Suriawati Suparjoh Universiti Tun Hussein Onn Malaysia


Rosziati Ibrahim Universiti Tun Hussein Onn Malaysia

Mohd Hatta b Mohd Ali @ Md Hani Universiti Tun Hussein Onn Malaysia

Noorhaniza Wahid Universiti Tun Hussein Onn Malaysia

Mohd Najib Mohd Salleh Universiti Tun Hussein Onn Malaysia

Program Committee Chair

Mohd Farhan Md Fudzee Universiti Tun Hussein Onn Malaysia

Shahreen Kassim Universiti Tun Hussein Onn Malaysia

Proceeding Chair

Rozaida Ghazali Universiti Tun Hussein Onn Malaysia

Mustafa Mat Deris Universiti Tun Hussein Onn Malaysia

Workshop Chair

Prima Vitasari Institut Teknologi Nasional, Indonesia

Program Committee

Soft Computing

Abir Jaafar Hussain Liverpool John Moores University, UK

Adel Al-Jumaily University of Technology, Sydney

Azizul Azhar Ramli Universiti Tun Hussein Onn Malaysia

Dhiya Al-Jumeily Liverpool John Moores University, UK

Iwan Tri Riyadi Yanto Universitas Ahmad Dahlan, Indonesia


Meghana R Ransing University of Swansea, UK

Muh Fadel Jamil Klaib Jadara University, Jordan

Mohd Najib Mohd Salleh Universiti Tun Hussein Onn Malaysia

Mustafa Mat Deris Universiti Tun Hussein Onn Malaysia

Natthakan Iam-On Mae Fah Luang University, Thailand

Nazri Mohd Nawi Universiti Tun Hussein Onn Malaysia

R.B Fajriya Hakim Universitas Islam Indonesia

Rajesh S Ransing University of Swansea, UK

Rosziati Ibrahim Universiti Tun Hussein Onn Malaysia

Rozaida Ghazali Universiti Tun Hussein Onn Malaysia

Salwani Abdullah Universiti Kebangsaan Malaysia

Siti Mariyam Shamsuddin Universiti Teknologi Malaysia

Siti Zaiton M Hashim Universiti Teknologi Malaysia

Theresa Beaubouef Southeastern Louisiana University

Data Mining

Beniamino Murgante University of Basilicata, Italy

Kamaruddin Malik Mohamad Universiti Tun Hussein Onn Malaysia

Md Anisur Rahman Charles Sturt University, Australia

Md Yazid Md Saman Universiti Malaysia Terengganu

Mohd Hasan Selamat Universiti Putra Malaysia

Norwati Mustapha Universiti Putra Malaysia

Patrice Boursier University of La Rochelle, France


Prabhat K Mahanti University of New Brunswick, Canada

Roslina Mohd Sidek Universiti Malaysia Pahang

Palaiahnakote Shivakumara Universiti Malaya

Patricia Anthony Lincoln University, New Zealand

Vera Yuk Ying Chung University of Sydney

Wan Maseri Wan Mohd Universiti Malaysia Pahang

You Wei Yuan ZhuZhou Institute of Technology, PR China

Zailani Abdullah Universiti Malaysia Terengganu

Workshop on Nature Inspired Computing and Its Applications

Somnuk Phon-Amnuaisuk (Chair) Institut Teknologi Brunei

Ak Hj Azhan Pg Hj Ahmad Institut Teknologi Brunei

Atikom Ruekbutra

Hj Idham M Hj Mashud Institut Teknologi Brunei

Hj Rudy Erwan bin Hj Ramlie Institut Teknologi Brunei

Somnuk Phon-Amnuaisuk Institut Teknologi Brunei

Werasak Kurutach Mahanakorn University of Technology

Workshop on Machine Learning for Big Data Computing

Norbahiah Ahmad (Chair) Universiti Teknologi Malaysia

Siti Mariyam Shamsuddin Universiti Teknologi Malaysia


Soft Computing Track

A Fuzzy Time Series Model in Road Accidents Forecast ..... 1
Lazim Abdullah, Chye Ling Gan

A Jordan Pi-Sigma Neural Network for Temperature Forecasting in Batu Pahat Region ..... 11
Noor Aida Husaini, Rozaida Ghazali, Lokman Hakim Ismail, Tutut Herawan

A Legendre Approximation for Solving a Fuzzy Fractional Drug Transduction Model into the Bloodstream ..... 25
Ali Ahmadian, Norazak Senu, Farhad Larki, Soheil Salahshour, Mohamed Suleiman, Md Shabiul Islam

A Multi-reference Ontology for Profiling Scholars’ Background Knowledge ..... 35
Bahram Amini, Roliana Ibrahim, Mohd Shahizan Othman, Mohd Nazir Ahmad

A New Binary Particle Swarm Optimization for Feature Subset Selection with Support Vector Machine ..... 47
Amir Rajabi Behjat, Aida Mustapha, Hossein Nezamabadi-Pour, Md Nasir Sulaiman, Norwati Mustapha

A New Hybrid Algorithm for Document Clustering Based on Cuckoo Search and K-means ..... 59
Ishak Boushaki Saida, Nadjet Kamel, Bendjeghaba Omar

A New Positive and Negative Linguistic Variable of Interval Triangular Type-2 Fuzzy Sets for MCDM ..... 69
Nurnadiah Zamri, Lazim Abdullah

A New Qualitative Evaluation for an Integrated Interval Type-2 Fuzzy TOPSIS and MCGP ..... 79
Nurnadiah Zamri, Lazim Abdullah

A Performance Comparison of Genetic Algorithm’s Mutation Operators in n-Cities Open Loop Travelling Salesman Problem ..... 89
Hock Hung Chieng, Noorhaniza Wahid

A Practical Weather Forecasting for Air Traffic Control System Using Fuzzy Hierarchical Technique ..... 99
Azizul Azhar Ramli, Mohammad Rabiul Islam, Mohd Farhan Md Fudzee, Mohamad Aizi Salamat, Shahreen Kasim

Adapted Bio-inspired Artificial Bee Colony and Differential Evolution for Feature Selection in Biomarker Discovery Analysis ..... 111
Syarifah Adilah Mohamed Yusoff, Rosni Abdullah, Ibrahim Venkat

An Artificial Intelligence Technique for Prevent Black Hole Attacks in MANET ..... 121
Khalil I Ghathwan, Abdul Razak B Yaakub

ANFIS Based Model for Bispectral Index Prediction ..... 133
Jing Jing Chang, S Syafiie, Raja Kamil Raja Ahmad, Thiam Aun Lim

Classify a Protein Domain Using SVM Sigmoid Kernel ..... 143
Ummi Kalsum Hassan, Nazri Mohd Nawi, Shahreen Kasim, Azizul Azhar Ramli, Mohd Farhan Md Fudzee, Mohamad Aizi Salamat

Color Histogram and First Order Statistics for Content Based Image Retrieval ..... 153
Muhammad Imran, Rathiah Hashim, Noor Eliza Abd Khalid

Comparing Performances of Cuckoo Search Based Neural Networks ..... 163
Nazri Mohd Nawi, Abdullah Khan, M.Z Rehman, Tutut Herawan, Mustafa Mat Deris

CSLMEN: A New Cuckoo Search Levenberg Marquardt Elman Network for Data Classification ..... 173
Nazri Mohd Nawi, Abdullah Khan, M.Z Rehman, Tutut Herawan, Mustafa Mat Deris

Enhanced MWO Training Algorithm to Improve Classification Accuracy of Artificial Neural Networks ..... 183
Ahmed A Abusnaina, Rosni Abdullah, Ali Kattan

Fuzzy Modified Great Deluge Algorithm for Attribute Reduction ..... 195
Majdi Mafarja, Salwani Abdullah

Fuzzy Random Regression to Improve Coefficient Determination in Fuzzy Random Environment ..... 205
Nureize Arbaiy, Hamijah Mohd Rahman

Honey Bees Inspired Learning Algorithm: Nature Intelligence Can Predict Natural Disaster ..... 215
Habib Shah, Rozaida Ghazali, Yana Mazwin Mohmad Hassim

Hybrid Radial Basis Function with Particle Swarm Optimisation Algorithm for Time Series Prediction Problems ..... 227
Ali Hassan, Salwani Abdullah

Implementation of Modified Cuckoo Search Algorithm on Functional Link Neural Network for Climate Change Prediction via Temperature and Ozone Data ..... 239
Siti Zulaikha Abu Bakar, Rozaida Ghazali, Lokman Hakim Ismail, Tutut Herawan, Ayodele Lasisi

Improving Weighted Fuzzy Decision Tree for Uncertain Data Classification ..... 249
Mohd Najib Mohd Salleh

Investigating Rendering Speed and Download Rate of Three-Dimension (3D) Mobile Map Intended for Navigation Aid Using Genetic Algorithm ..... 261
Adamu I Abubakar, Akram Zeki, Haruna Chiroma, Tutut Herawan

Kernel Functions for the Support Vector Machine: Comparing Performances on Crude Oil Price Data ..... 273
Haruna Chiroma, Sameem Abdulkareem, Adamu I Abubakar, Tutut Herawan

Modified Tournament Harmony Search for Unconstrained Optimisation Problems ..... 283
Moh’d Khaled Shambour, Ahamad Tajudin Khader, Ahmed A Abusnaina, Qusai Shambour

Multi-objective Particle Swarm Optimization for Optimal Planning of Biodiesel Supply Chain in Malaysia ..... 293
Maryam Valizadeh, S Syafiie, I.S Ahamad

Nonlinear Dynamics as a Part of Soft Computing Systems: Novel Approach to Design of Data Mining Systems ..... 303
Elena N Benderskaya

Soft Solution of Soft Set Theory for Recommendation in Decision Making ..... 313
R.B Fajriya Hakim, Eka Novita Sari, Tutut Herawan

Two-Echelon Logistic Model Based on Game Theory with Fuzzy Variable ..... 325
Pei Chun Lin, Arbaiy Nureize

Data Mining Track

A Hybrid Approach to Modelling the Climate Change Effects on Malaysia’s Oil Palm Yield at the Regional Scale ..... 335
Subana Shanmuganathan, Ajit Narayanan, Maryati Mohamed, Rosziati Ibrahim, Haron Khalid

A New Algorithm for Incremental Web Page Clustering Based on k-Means and Ant Colony Optimization ..... 347
Yasmina Boughachiche, Nadjet Kamel

A Qualitative Evaluation of Random Forest Feature Learning ..... 359
Adelina Tang, Joan Tack Foong

A Semantic Content-Based Forum Recommender System Architecture Based on Content-Based Filtering and Latent Semantic Analysis ..... 369
Naji Ahmad Albatayneh, Khairil Imran Ghauth, Fang-Fang Chua

A Simplified Malaysian Vehicle Plate Number Recognition ..... 379
Abd Kadir Mahamad, Sharifah Saon, Sarah Nurul Oyun Abdul Aziz

Agglomerative Hierarchical Co-clustering Based on Bregman Divergence ..... 389
Guowei Shen, Wu Yang, Wei Wang, Miao Yu, Guozhong Dong

Agreement between Crowdsourced Workers and Expert Assessors in Making Relevance Judgment for System Based IR Evaluation ..... 399
Parnia Samimi, Sri Devi Ravana

An Effective Location-Based Information Filtering System on Mobile Devices ..... 409
Marzanah A Jabar, Niloofar Yousefi, Ramin Ahmadi, Mohammad Yaser Shafazand, Fatimah Sidi

An Enhanced Parameter-Free Subsequence Time Series Clustering for High-Variability-Width Data ..... 419
Navin Madicar, Haemwaan Sivaraks, Sura Rodpongpun, Chotirat Ann Ratanamahatana

An Optimized Classification Approach Based on Genetic Algorithms Principle ..... 431
Ines Bouzouita

Comparative Performance Analysis of Negative Selection Algorithm with Immune and Classification Algorithms ..... 441
Ayodele Lasisi, Rozaida Ghazali, Tutut Herawan

Content Based Image Retrieval Using MPEG-7 and Histogram ..... 453
Muhammad Imran, Rathiah Hashim, Noor Elaiza Abd Khalid

Cost-Sensitive Bayesian Network Learning Using Sampling ..... 467
Eman Nashnush, Sunil Vadera

Data Treatment Effects on Classification Accuracies of Bipedal Running and Walking Motions ..... 477
Wei Ping Loh, Choo Wooi H’ng

Experimental Analysis of Firefly Algorithms for Divisive Clustering of Web Documents ..... 487
Athraa Jasim Mohammed, Yuhanis Yusof, Husniza Husni

Extended Naïve Bayes for Group Based Classification ..... 497
Noor Azah Samsudin, Andrew P Bradley

Improvement of Audio Feature Extraction Techniques in Traditional Indian Musical Instrument ..... 507
Kohshelan, Noorhaniza Wahid

Increasing Failure Recovery Probability of Tourism-Related Web Services ..... 517
Hadi Saboohi, Amineh Amini, Tutut Herawan

Mining Critical Least Association Rule from Oral Cancer Dataset ..... 529
Zailani Abdullah, Fatiha Mohd, Md Yazid Mohd Saman, Mustafa Mat Deris, Tutut Herawan, Abd Razak Hamdan

Music Emotion Classification (MEC): Exploiting Vocal and Instrumental Sound Features ..... 539
Mudiana Mokhsin Misron, Nurlaila Rosli, Norehan Abdul Manaf, Hamizan Abdul Halim

Resolving Uncertainty Information Using Case-Based Reasoning Approach in Weight-Loss Participatory Sensing Campaign ..... 551
Andita Suci Pratiwi, Syarulnaziah Anawar

Towards a Model-Based Framework for Integrating Usability Evaluation Techniques in Agile Software Model ..... 561
Saad Masood Butt, Azura Onn, Moaz Masood Butt, Nadra Tabassam

Workshop on Nature Inspired Computing and Its Applications

Emulating Pencil Sketches from 2D Images ..... 571
Azhan Ahmad, Somnuk Phon-Amnuaisuk, Peter D Shannon

Router Redundancy with Enhanced VRRP for Intelligent Message Routing ..... 581
Haja Mohd Saleem, Mohd Fadzil Hassan, Seyed M Buhari

Selecting Most Suitable Members for Neural Network Ensemble Rainfall Forecasting Model ..... 591
Harshani Nagahamulla, Uditha Ratnayake, Asanga Ratnaweera

Simulating Basic Cell Processes with an Artificial Chemistry System ..... 603
Chien-Le Goh, Hong Tat Ewe, Yong Kheng Goh

The Effectiveness of Sampling Methods for the Imbalanced Network Intrusion Detection Data Set ..... 613
Kok-Chin Khor, Choo-Yee Ting, Somnuk Phon-Amnuaisuk

Workshop on Machine Learning for Big Data Computing

A Clustering Based Technique for Large Scale Prioritization during Requirements Elicitation ..... 623
Philip Achimugu, Ali Selamat, Roliana Ibrahim

A Comparative Evaluation of State-of-the-Art Cloud Migration
Nur Syahela Hussien, Sarina Sulaiman, Siti Mariyam Shamsuddin

Enhanced Rules Application Order Approach to Stem Reduplication Words in Malay Texts ..... 657
M.N Kassim, Mohd Aizaini Maarof, Anazida Zainal

Islamic Web Content Filtering and Categorization on Deviant Teaching ..... 667
Nurfazrina Mohd Zamry, Mohd Aizaini Maarof, Anazida Zainal

Multiobjective Differential Evolutionary Neural Network for Multi Class Pattern Classification ..... 679
Ashraf Osman Ibrahim, Siti Mariyam Shamsuddin, Sultan Noman Qasem

Ontology Development to Handle Semantic Relationship between Moodle E-Learning and Question Bank System ..... 691
Arda Yunianta, Norazah Yusof, Herlina Jayadianti, Mohd Shahizan Othman, Shaffika Suhaimi

Author Index 703


T. Herawan et al. (eds.), Recent Advances on Soft Computing and Data Mining, SCDM 2014, Advances in Intelligent Systems and Computing 287, DOI: 10.1007/978-3-319-07692-8_1, © Springer International Publishing Switzerland 2014

A Fuzzy Time Series Model in Road Accidents Forecast

Lazim Abdullah and Chye Ling Gan

School of Informatics and Applied Mathematics, Universiti Malaysia Terengganu,

21030 Kuala Terengganu, Malaysia

{lazim abdullah,lazim_m}@umt.edu.my

Abstract. Many researchers have explored fuzzy time series forecasting models with the purpose of improving accuracy. Recently, Liu et al. have proposed a new method, which is an improved version of Hwang et al.'s method. The method introduces several properties to improve the accuracy of the forecast, such as levels of window base, length of interval, degrees of membership values, and existence of outliers. Despite these improvements, far too little attention has been paid to real data applications. Based on these advantages, this paper investigates the feasibility and performance of the Liu et al. model on Malaysian road accidents data. Twenty-eight years of road accidents data are employed as the experimental dataset. The computational results of the model show that the mean absolute forecasting error is less than 10 percent. Thus it is suggested that the Liu et al. model practically fits the Malaysian road accidents data.

Keywords: Fuzzy time series, time-variant forecast, length of interval, window base, road accidents

1 Introduction

In the new global economy, forecasting plays an important role in daily lives, as it is often used to forecast weather, agricultural produce, stock prices and students' enrolment. One of the most traditional approaches in forecasting is the Box-Jenkins model, proposed by Box and Jenkins [1]. Traditional forecasting methods can deal with many forecasting cases. However, one of the limitations in implementing traditional forecasting is its incapability of fulfilling sufficient historical data. To solve this problem, Song and Chissom [2] proposed the concept of fuzzy time series. This concept was proposed when they attempted to apply fuzzy set theory to forecasting tasks. Over time, this method has been well received by researchers due to its capability of dealing with vague and incomplete data. Fuzzy set theory was created for handling uncertain environments, and fuzzy numbers provide various opportunities to tackle difficult and complex problems. The fuzzy time series method appears to deal successfully with the uncertainty of a series, and in some empirical studies it provides higher accuracy. Some of the recent research on fuzzy time series performance can be retrieved from [3], [4], [5]. However, the issues of accuracy and performance of fuzzy time series are still very much debated.


Defining the window base variable w is one of the efforts to increase forecasting accuracy, especially in time-variant fuzzy time series. Song and Chissom [6] have shown the effect of the forecasting result with changes of w. Hwang et al. [7] used Song and Chissom's method as a basis to calculate the variations of the historical data and the recognition of the window base, but one level is extended to w levels. Hwang et al.'s [7] forecasted results were better than those presented by Song and Chissom's method, due to the fact that the proposed method simplifies the arithmetic operation process. However, the time-variant model of Hwang et al. [7] did not withstand time as it keeps moving on. Liu et al. [8] took the initiative to revise Hwang et al.'s model with the purpose of overcoming several drawbacks. Among the drawbacks of Hwang et al. are the length of intervals and the number of intervals. Huarng [9] and Huarng and Yu [10] argue that different lengths and numbers of intervals may affect the accuracy of the forecast. Furthermore, Hwang et al.'s method did not provide suggestions with regard to the determination of the window base. Also, they uniformly set 0.5 as the membership value in the fuzzy set, without giving variations in degree. With the intention of addressing these drawbacks, Liu et al. [8] proposed a new model with the goal of effectively determining the best interval length, level of window base and degrees of membership values. These moves are targeted at increasing the accuracy of the forecasted values. Although Liu et al.'s method is considered an excellent technique to increase accuracy, the method has never been applied in real applications. So far, however, there has been little discussion about testing this improved fuzzy time series on road accidents data.

In road accident forecasting, many researchers used traditional ARMA to correct the error terms. Gandhi and Hu [11], for example, used a differential equation model to represent the accident mechanism with time-varying parameters, and an ARMA process of white noise is attached to model the equation error. Another example is the combination of a regression model and an ARIMA model presented by Van den Bossche et al. [12]. In Malaysia, Law et al. [13] made a projection of the vehicle ownership rate to the year 2010 and used this projection to predict road accident deaths in 2010 using an ARIMA model. The projection takes into account the changes in population and the vehicle ownership rate. The relationships between the death rate, population and vehicle ownership rate were described using a transfer noise function in the ARIMA analysis. Other than ARIMA, Chang [14] analysed freeway accident frequencies using negative binomial regression versus an artificial neural network. In line with the wide acceptance of fuzzy knowledge in forecasting research, Jilani and Burney [15] recently presented a new multivariate stochastic fuzzy forecasting model. These new methods were applied to forecasting the total number of car road accident casualties in Belgium using four secondary factors. However, there have been no studies shedding light on the relationship between fuzzy time series models and road accidents data. The recent development of an improved fuzzy time series motivates the need to explore the applicability of the time-variant fuzzy time series to road accidents data. The present paper takes the initiative to implement the fuzzy time series in forecasting Malaysian road accidents. Specifically, this paper intends to test the time-variant fuzzy time series model of Liu et al. [8] on the Malaysian road accidents data.


This paper is organized as follows. Conceptual definitions of time-variant fuzzy time series, window bases and their affiliates are discussed in Section 2. The computational steps applied to the road accidents data are elucidated in Section 3. A short conclusion is finally presented in Section 4.

2 Preliminaries

The concepts of fuzzy logic and fuzzy set theory were introduced to cope with the ambiguity and uncertainty of most real-world problems. Song and Chissom [6] introduced the concept of fuzzy time series, and since then a number of variants have been published by many authors. The basic concepts of fuzzy set theory and fuzzy time series are given by Song and Chissom [2], and some of the essentials are reproduced to make the study self-contained. The basic concepts of fuzzy time series are explained by Definition 1 to Definition 4.

Definition 1. Let Y(t) (t = ..., 0, 1, 2, ...), a subset of R, be the universe of discourse on which the fuzzy sets μi(t) are defined. If F(t) consists of μi(t) (i = 1, 2, ...), then F(t) is called a fuzzy time series on Y(t).

Definition 2. If there exists a fuzzy relationship R(t−1, t) such that F(t) = F(t−1) ◦ R(t−1, t), where ◦ is an arithmetic operator, then F(t) is said to be caused by F(t−1). The relationship between F(t) and F(t−1) can be denoted by F(t−1) → F(t).

Definition 3. Suppose F(t) is calculated by F(t−1) only, and F(t) = F(t−1) ◦ R(t−1, t). For any t, if R(t−1, t) is independent of t, then F(t) is considered a time-invariant fuzzy time series. Otherwise, F(t) is time-variant.

Definition 4. Suppose F(t−1) = Ãi and F(t) = Ãj. A fuzzy logical relationship can be defined as Ãi → Ãj, where Ãi and Ãj are called the left-hand side and right-hand side of the fuzzy logical relationship, respectively.

These definitions form the basis for explaining the fuzzy time series method of Liu et al. Liu et al. proposed the forecasting method with the aim of improving Hwang et al.'s method. Liu et al.'s method has successfully overcome some drawbacks of Hwang et al.'s method by finding the best combination of the length of intervals and the window base. The detailed algorithms of Liu et al. [8] are not explained in this paper.

3 Implementation

In this experiment, Liu et al.'s method is tested on the Malaysian road accidents data. Official road accidents data released by the Royal Malaysian Police [16] are employed in the model. The calculation is executed in accordance with the proposed method. For the purpose of clarity and simplicity, the following computations are limited to the forecasted value for the year 2009. Also, due to space limitations, only the historical data from the year 2004 to 2008 are accounted for in these computational steps.

Step 1: Collect the historical data of road accidents in Malaysia, Dvt, for the years 2004 to 2008.

Step 2: Examine outliers. The studentized residual analysis method is applied to determine whether there are outliers in the historical data. Statistical software is used to calculate the residuals. Table 1 shows the outlier examination for the last five years before 2009.

Table 1 Studentized deleted residual of the historical data

Year Number of Road Accident, Dvt Studentized deleted residual

Step 3: Calculate the variations of the historical data. For example, the variation for the year 2005 is calculated as follows:

Variation = Rv2005 − Rv2004 = 328,264 − 326,815 = 1,449.

Similarly, the variations of all data are computed. It can be seen that the minimum of the variations in the data is −4,595 (Dmin) and the maximum is 28,162 (Dmax). To simplify computations, let D1 = 405 and D2 = 338. Then

U = [Dmin − D1, Dmax + D2] = [−4,595 − 405, 28,162 + 338] = [−5,000, 28,500].

Step 4: Calculate Ad by dividing the sum of the variations (Step 3) by the number of data minus one:

Ad = (sum of the variations) / (n − 1) ≈ 12,268.023.

For simplicity, Ad is taken as 12,200 and divided by 10, which yields the unit value 1,220. Thus, there are 10 possible interval lengths (1,220, 2,440, ..., 12,200). The membership value for l = 1,220 is 0.9; when l = 2,440, its membership value is 0.8. The rest of the membership values are obtained in a similar fashion. The corresponding relations are shown in Table 2.


Table 2 Interval length and the corresponding membership values for the road accident
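To make Steps 3 and 4 concrete, the sketch below (Python) computes the variations, the universe of discourse U and the ten candidate interval lengths with their membership values. Only the three yearly totals quoted in this paper are filled in; the remaining values, and all variable names, are illustrative assumptions rather than the authors' code.

```python
# Minimal sketch of Steps 3-4; Rv would normally hold all 28+ yearly accident totals.
Rv = {2004: 326_815, 2005: 328_264, 2008: 373_047}    # illustrative subset of the series
years = sorted(Rv)
variations = [Rv[b] - Rv[a] for a, b in zip(years, years[1:])]

D1, D2 = 405, 338                                      # padding values chosen in the paper
U = (min(variations) - D1, max(variations) + D2)       # universe of discourse U

Ad = sum(variations) / (len(Rv) - 1)                   # Step 4 as stated in the text
Ad = 12_200                                            # simplified value used in the paper
unit = Ad / 10                                         # 1,220
lengths = [unit * k for k in range(1, 11)]             # 1,220, 2,440, ..., 12,200
memberships = [round(0.9 - 0.1 * k, 1) for k in range(10)]   # 0.9, 0.8, ..., 0.0
```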

Step 5: Using l = 3,660 (membership value 0.7) as an example, the number of intervals (fuzzy sets) is calculated as follows:

(28,500 − (−5,000)) / 3,660 ≈ 9.15, which is rounded up to 10.

Therefore, there are 10 intervals (fuzzy sets), and the fuzzy sets A1 to A10 defined on them are shown below.

A1=1/u1+0.7/u2+0/u3+0/u4+0/u5+0/u6+0/u7+0/u8+0/u9+0/u10

A2=0.7/u1+1/u2+0.7/u3+0/u4+0/u5+0/u6+0/u7+0/u8+0/u9+0/u10

A3=0/u1+0.7/u2+1/u3+0.7/u4+0/u5+0/u6+0/u7+0/u8+0/u9+0/u10

A4=0/u1+0/u2+0.7/u3+1/u4+0.7/u5+0/u6+0/u7+0/u8+0/u9+0/u10

A5=0/u1+0/u2+0/u3+0.7/u4+1/u5+0.7/u6+0/u7+0/u8+0/u9+0/u10

A6=0/u1+0/u2+0/u3+0/u4+0.7/u5+1/u6+0.7/u7+0/u8+0/u9+0/u10


A7=0/u1+0/u2+0/u3+0/u4+0/u5+0.7/u6+1/u7+0.7/u8+0/u9+0/u10

A8=0/u1+0/u2+0/u3+0/u4+0/u5+0/u6+0.7/u7+1/u8+0.7/u9+0/u10

A9=0/u1+0/u2+0/u3+0/u4+0/u5+0/u6+0/u7+0.7/u8+1/u9+0.7/u10

A10=0/u1+0/u2+0/u3+0/u4+0/u5+0/u6+0/u7+0/u8+0.7/u9+1/u10

Step 6: Fuzzify the variations of the data. If the variation at time i is within the scope of uj, then it belongs to the fuzzy set Ãj. The fuzzy variation at time i is denoted as F(i). The variation between the years 1991 and 1992 is 22,041, which falls in the range of u8 = [20,620, 24,280], so it belongs to the fuzzy set Ã8. That is, F(1992) = Ã8. Similarly, the corresponding fuzzy sets of the remaining variations can be obtained.
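A minimal sketch of how the ten fuzzy sets listed above (membership 1 on the matching interval, 0.7 on its neighbours) and the fuzzification of Step 6 could be realised. The helper names are assumptions; NumPy is used only for convenience.

```python
import numpy as np

def build_fuzzy_sets(n_intervals: int, neighbour: float = 0.7) -> np.ndarray:
    """A_j has membership 1 on interval u_j and `neighbour` on the adjacent intervals."""
    A = np.zeros((n_intervals, n_intervals))
    for j in range(n_intervals):
        A[j, j] = 1.0
        if j > 0:
            A[j, j - 1] = neighbour
        if j < n_intervals - 1:
            A[j, j + 1] = neighbour
    return A

def fuzzify(variation: float, u_low: float, length: float, n_intervals: int) -> int:
    """Return the index j (0-based) of the interval u_{j+1} containing the variation."""
    j = int((variation - u_low) // length)
    return min(max(j, 0), n_intervals - 1)

A = build_fuzzy_sets(10)                   # the ten sets A1..A10 shown above
j = fuzzify(22_041, -5_000, 3_660, 10)     # -> 7, i.e. u8, so F(1992) = A8 as in the paper
```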

Step 7: Calculate the fuzzy time series F(t) at window base w. The window base has to be greater than or equal to 2 in order to perform a fuzzy composition operation; therefore, w is set to 2 initially. Let C(t) be the criterion matrix of F(t) and OW(t) be the operation matrix at window base w.

The fuzzy relation matrix R(t) is computed by performing the fuzzy composition operation of C(t) and OW(t). To get F(t), we calculate the maximum of every column in the matrix R(t).

Assume the window base is 4 and l = 3,660. For example, the criterion matrix C(2009) of F(2009) is F(2008):

[0 0 0 0.7 1 0.7 0 0 0 0]

The operation matrix O4(2009) is assembled from the fuzzy variations of the preceding years F(2005), F(2006) and F(2007), which correspond to the fuzzy sets Ã2, Ã5 and Ã8; each row of O4(2009) is the 10-element membership vector of the corresponding fuzzy set.
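The sketch below illustrates Step 7 under stated assumptions: the criterion row is F(2008) as given above, the three operation-matrix rows are hypothetical membership vectors (the exact rows are not reproduced here), and an element-wise minimum is used as the composition operator; Hwang-style implementations sometimes use an element-wise product instead.

```python
import numpy as np

def fuzzy_forecast(criterion: np.ndarray, operation: np.ndarray) -> np.ndarray:
    """Compose the criterion row C(t) with the operation matrix O^w(t) and take the
    column-wise maximum, which yields the fuzzy forecast F(t)."""
    # R[r, c] = min(O^w(t)[r, c], C(t)[c]); then F(t)[c] = max over rows r.
    R = np.minimum(operation, criterion[None, :])
    return R.max(axis=0)

C_2009 = np.array([0, 0, 0, 0.7, 1, 0.7, 0, 0, 0, 0])     # F(2008), from the paper
O_2009 = np.array([                                        # hypothetical stand-in rows
    [0.7, 1, 0.7, 0, 0, 0, 0, 0, 0, 0],                    # a variation in A2
    [0, 0, 0, 0.7, 1, 0.7, 0, 0, 0, 0],                    # a variation in A5
    [0, 0, 0, 0, 0, 0, 0.7, 1, 0.7, 0],                    # a variation in A8
])
F_2009 = fuzzy_forecast(C_2009, O_2009)
```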


Table 3. Part of the fuzzy time series at window base w = 4, with columns Year, F, Variation, Fuzzy Variation, and u1 to u10.

Step 8: Defuzzify the relation matrix R(2009) to obtain the forecasted variation Cv2009 = 7,570.

Next, calculate the forecasted value Fv2009:

Fv2009 = Cv2009 + Rv2008 = 7,570 + 373,047 = 380,617.
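The short sketch below shows one common centroid-style defuzzification followed by the forecast update Fv = Cv + Rv. It is a stand-in for Step 8 under stated assumptions, not necessarily the exact defuzzification rule of Liu et al., and the membership vector used as input is hypothetical.

```python
import numpy as np

def defuzzify(F_t: np.ndarray, u_low: float, length: float) -> float:
    """Weighted average of the interval midpoints, weighted by the forecast memberships."""
    midpoints = u_low + length * (np.arange(len(F_t)) + 0.5)
    return float((F_t * midpoints).sum() / F_t.sum())

F_2009 = np.array([0, 0, 0, 0.7, 1, 0.7, 0, 0.7, 1, 0.7])  # hypothetical forecast memberships
Cv_2009 = defuzzify(F_2009, u_low=-5_000, length=3_660)    # the paper obtains Cv2009 = 7,570
Fv_2009 = Cv_2009 + 373_047                                # forecast = variation + Rv2008
```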

Step 9: To obtain the best forecasted values, a search algorithm is used to identify the best window base and interval length. Steps 5 to 8 are repeated for different window bases and interval lengths, and the best forecasted value is computed for each combination. The mean absolute deviation (MAD) of each window base and interval length is calculated using the formula

MAD = (1/n) Σ |Fvt − Rvt|.

The MAD values for each window base and interval length are presented in Table 4.


Table 4. The value of MAD for different window bases and interval lengths

                     i=0.6     i=0.6     i=0.5     i=0.6     i=0.7     i=0.8
Sum of |Fvt−Rvt|   257,473   239,312   223,234   226,262   204,781   222,879
MAD                 10,619    10,878     9,706     9,837     8,904   9,690.4

Table 4 shows that the lowest MAD occurs at window base w = 4 and interval length i = 0.7. Window base w = 4 and interval length i = 0.7 are therefore used to calculate the forecasted value for the year 2009 using Steps 7 and 8. It turns out that the forecasted value of road accidents for the year 2009 is 380,617 cases.
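A sketch of the Step 9 search loop. The helper `forecast_fn` is hypothetical and stands in for Steps 5-8; only the MAD bookkeeping over the candidate window bases and interval lengths is shown.

```python
def mean_absolute_deviation(forecasts, actuals):
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

def search_best_parameters(data, window_bases, interval_memberships, forecast_fn):
    """Repeat Steps 5-8 for every (w, i) pair and keep the combination with the
    smallest MAD. `forecast_fn(data, w, i)` is assumed to return the in-sample
    forecasts and the matching actual values."""
    best = None
    for w in window_bases:
        for i in interval_memberships:
            forecasts, actuals = forecast_fn(data, w, i)
            mad = mean_absolute_deviation(forecasts, actuals)
            if best is None or mad < best[0]:
                best = (mad, w, i)
    return best   # e.g. (8904, 4, 0.7) for the road accident data in the paper
```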

The forecasted values for all the years are computed in a similar fashion. This involves a huge computational load, so the details of the calculations for every tested year are not shown in this paper. To give a better understanding of the forecasting performance, this paper provides the trends of the forecasted results and the actual number of road accidents. The behaviour of these two curves can be seen in Fig. 1.

Fig. 1. Graph of actual versus forecasted number of accidents

The performance of the model is also measured using the mean absolute percentage error (MAPE) and the mean square error (MSE). In the case of the road accidents data, Liu et al.'s [8] method gives the three error measures below.

MAPE (%) = 5.09, MSE = 106,670,326, MAD = 8,904


This shows that Liu et al.'s method gives less than 10% MAPE; thus the Liu et al. model can be considered a good model for forecasting road accidents in Malaysia.

In summary, the method starts by applying the studentized residual analysis method to check for any outliers in the historical data. Upon calculation of the variations of the historical data, the scope of the universe of discourse is defined, and the length of interval as well as its corresponding membership value is determined. A systematic search algorithm is then designed to determine the best combination of the length of intervals and window bases. The model performance is evaluated through error analysis against the actual data.

4 Conclusions

Fuzzy time series models have been widely used in forecasting with the anticipation of finding viable results. Some research states that a fuzzy time series with a refined model may give more accurate forecasting results. Recently, Liu et al. [8] proposed a refined model of Hwang et al. [7] which received much attention due to its capability of dealing with vague and incomplete data. This paper has initiated a move to test the capability of the time-variant fuzzy forecasting method of Liu et al. on Malaysian road accidents data. The method provides a systematic way to evaluate the length of intervals and the window base for the road accident data. The model also considers outliers, which normally influence the overall forecasting performance. The mean absolute percentage error of less than 10% validates the feasibility of the Liu et al. model in forecasting the Malaysian road accidents data.


10. Huarng, K.H., Yu, H.K.: A type 2 fuzzy time series model for stock index forecasting. Physica A: Statistical Mechanics and its Applications 353, 445–462 (2005)

11. Gandhi, U.N., Hu, S.J.: Data-based approach in modeling automobile crash. Int. J. Impact Eng. 16(1), 95–118 (1995)

12. Van den Bossche, F., Wets, G., Brijs, T.: A Regression Model with ARMA Errors to Investigate the Frequency and Severity of Road Traffic Accidents. In: Proceedings of the 83rd Annual Meeting of the Transportation Research Board, USA (2004)

13. Law, T.H., Radin Umar, R.S., Wong, S.V.: The Malaysian Government's Road Accident Death Reduction Target for Year 2010. Transportation Research 29(1), 42–50 (2004)

14. Chang, L.Y.: Analysis of freeway accident frequencies: Negative binomial regression versus artificial neural network. Safety Sci. 43, 541–557 (2005)

15. Jilani, T.A., Burney, S.M.A., Ardil, C.: Multivariate High Order Fuzzy Time Series Forecasting for Car Road Accidents. World Academy of Science, Engineering and Technology 25, 288–293 (2007)

16. Royal Malaysia Police: Statistics of road accident and death, http://www.rmp.gov.my/rmp (accessed April 8, 2010)


T. Herawan et al. (eds.), Recent Advances on Soft Computing and Data Mining, SCDM 2014, Advances in Intelligent Systems and Computing 287, DOI: 10.1007/978-3-319-07692-8_2, © Springer International Publishing Switzerland 2014

A Jordan Pi-Sigma Neural Network for Temperature Forecasting in Batu Pahat Region

Noor Aida Husaini1, Rozaida Ghazali1, Lokman Hakim Ismail1, and Tutut Herawan2,3

1 Universiti Tun Hussein Onn Malaysia

86400 Parit Raja, Batu Pahat, Johor, Malaysia

2 University of Malaya

50603 Pantai Valley, Kuala Lumpur, Malaysia

3 AMCS Research Center, Yogyakarta, Indonesia

gi090003@siswa.uthm.edu.my, {rozaida,lokman}@uthm.edu.my, tutut@um.edu.my

Abstract. This paper presents an idea to develop a new network model called a Jordan Pi-Sigma Neural Network (JPSN) to overcome the drawbacks of the ordinary Multilayer Perceptron (MLP) whilst taking advantage of the Pi-Sigma Neural Network (PSNN). JPSN, a network model with a single layer of tuneable weights and a recurrent term added to the network, is trained using the standard backpropagation algorithm. The network was used to learn a set of historical temperature data of the Batu Pahat region for five years (2005-2009), obtained from the Malaysian Meteorological Department (MMD). JPSN's ability to predict future temperature trends was tested and compared to that of the MLP and the standard PSNN. Simulation results show that JPSN's forecasts are comparatively superior to those of the MLP and PSNN models, with the combination of learning rate 0.1, momentum 0.2 and network architecture 4-2-1 giving the lowest prediction error. This reveals a great potential for JPSN as an alternative mechanism to both PSNN and MLP in predicting the temperature measurement one step ahead.

Keywords: Jordan pi-sigma, Neural network, Temperature forecasting

1 Introduction

Temperature has a significant impact on different sectors of activities which are exposed to temperature changes, such as agricultural interests and property [1]. One of the most sensitive issues in dealing with temperature forecasting is to consider that other variables might be affecting the temperature. Currently, temperature forecasting, which is a part of weather forecasting, is mainly issued in qualitative terms with the use of conventional methods, assisted by data-projected images taken by meteorological satellites to assess future trends [2, 3]. A great concern in developing methods for more accurate temperature predictions is the use of physical methods, statistical-empirical methods and numerical-statistical methods [4, 5]. Those methods for estimating temperature can work efficiently; however, they are inadequate to represent the efficiency of temperature forecasting due to the relatively primitive output post-processing of the current techniques, which is competitively superior to subjective prediction. Therefore, because temperature parameters themselves can be nonlinear and complex, a powerful method is needed to deal with them [6].

be nonlinear and complex, a powerful method is needed to deal with it [6]

With the advancement of computer technology and system theory, more meteorological models have been developed for temperature forecasting [2, 3], including soft computing approaches (e.g., neural networks (NN), fuzzy systems, swarm techniques, etc.). For instance, Pal et al. [7] hybridised the MLP with a Self-Organizing Feature Map (SOFM) to form a new model called SOFM-MLP to predict the maximum and minimum temperature by considering various atmospheric parameters. The SOFM-MLP was pre-processed using Feature Selection (FS). They found that the combination of FS and SOFM-MLP produces good predictions using only a few atmospheric parameters as inputs. Similarly, Paras et al. in their work [2] discussed the effectiveness of NN with back-propagation learning to predict maximum temperature, minimum temperature and relative humidity. They observed that the NN is advantageous in predicting those atmospheric variables with a high degree of accuracy, and thus can be an alternative to traditional meteorological approaches. Lee, Wang & Chen [8] proposed a new method for temperature prediction to improve the forecasting accuracy using high-order fuzzy logical relationships by adjusting the length of each interval in the universe of discourse. On the other hand, Smith et al. [9] noted that ward-style NN can be used for predicting the temperature based on near real-time data, with the prediction error reduced by increasing the number of distinct observations in the training set.

Meanwhile, Radhika & Shashi [10] used a Support Vector Machine (SVM) for one-step-ahead prediction. They found that SVM consistently gives better results than an MLP trained with the BP algorithm. Baboo & Shereef [11] forecast temperature using a real-time dataset and compared it with the practical working of the meteorological department. Results showed that the convergence analysis is improved by using a simplified Conjugate Gradient (CG) method. However, all of the studies mentioned above are considered black box models, in which they take in and give out information [2] without providing users with a function that describes the relationship between the input and output. Indeed, such approaches are prone to overfitting the data. Consequently, they also suffer long training times and often reach local minima in the error surface [12]. On the other hand, the development of Higher Order Neural Networks (HONN) has captured researchers' attention. The Pi-Sigma Neural Network (PSNN), which lies within this area, has the ability to converge faster and maintain the high learning capabilities of HONN [13]. The use of PSNN itself for temperature forecasting is preferably acceptable. Yet, this paper focuses on developing a new alternative network model, the Jordan Pi-Sigma Neural Network (JPSN), to overcome such drawbacks of the MLP while taking advantage of the PSNN, with a recurrent term added for temporal sequences of input-output mappings. Presently, JPSN is used to learn the historical temperature data of a suburban area in Batu Pahat, and to predict the temperature measurements for the next day. These results might be helpful in modelling the temperature for predictive purposes.


The rest of this paper is organized as follows. Section 2 describes the Jordan Pi-Sigma neural network. Section 3 describes the experiments and comparison results. Finally, the conclusion of this work is given in Section 4.

2 Jordan Pi-Sigma Neural Network

This section discusses the motivations behind the development of JPSN, describes the basic architecture of JPSN, and outlines the learning process of JPSN.

2.1 The Architecture of JPSN

The structure of JPSN is quite similar to that of the ordinary PSNN [14]. The main difference is that the architecture of JPSN is constructed by having a recurrent link from the output layer back to the input layer, just as in the Jordan Neural Network (JNN) [15]. Fig. 1 indicates the architecture of the proposed JPSN.

Fig 1 The Architecture of JPSN

In Fig. 1, x(t−1) is the ith component of x at time (t−1), wij are the tuneable weights, j indexes the summing units, N is the number of input nodes and f is a suitable transfer function. Unlike the ordinary PSNN, the JPSN applies a factor to reduce the current value in the context node y(t−1) before adding the new copy value to the value in the context node. This feature provides the JPSN with storage capabilities by retaining previous output values in an attempt to model a memory of past event values. These feedback connections result in the nonlinear nature of the neuron. Typically, JPSN starts with a small network order for a given problem, and the recurrent link is added during the learning process. The learning process stops when the mean squared error is less than a pre-specified minimum error (0.0001). The new copy value can be computed as:

New copy value = Output Activation Value + Existing Copy Value * Weight Factor
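A one-line sketch of that update follows; the weight (decay) factor `gamma` is an assumed illustrative value, since its setting is not given at this point in the text.

```python
def update_context(output_value: float, existing_copy: float, gamma: float = 0.5) -> float:
    """New copy value = output activation value + existing copy value * weight factor."""
    return output_value + existing_copy * gamma
```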


Let the number of summing units be j and W be the weight matrix. Z−1 denotes the time delay operation, and y(t) indicates the output of the kth node at the (t−1)th time, which is employed as a new input to the ith layer. The context unit node influences the input nodes, which is central to the JPSN model. Weights from the context unit node y(t−1) to the summing units j are set to 1 in order to reduce the complexity of the network.

2.2 The Learning Algorithm of JPSN

We used the back-propagation (BP) algorithm [16], with the recurrent link from the output layer back to the input layer nodes, for supervised learning in the JPSN. We initialised the weights to small random values before adaptively training them. Generally, JPSN operates in the following steps.

For each training example:

(1) Calculate the output

y(t) = f( Πj hj(t) ),

where hj(t) represents the activation of the jth summing unit at time t. The unit's transfer function f is the sigmoid activation function, which bounds the output to the range [0, 1].

(2) Compute the output error at time (t) using the standard Mean Squared Error (MSE) by minimising the following index:


(3) Compute the weight change Δwij, which is proportional to the learning rate η, the input xi and the outputs of the summing units, where hj is the output of the summing unit and η is the learning rate. The learning rate is used to control the learning step and has a very important effect on the convergence time.

(4) Update the weight:

wij = wij + Δwij. (5)

(5) To accelerate the convergence of the error in the learning process, a momentum term α is added to Equation (5). The values of the weights for the interconnections between neurons are then calculated as

wij = wij + α Δwij, (6)

where the value of α is a user-selected positive constant (0 ≤ α ≤ 1). The JPSN algorithm is terminated when all the stopping criteria (training error, maximum epoch and early stopping) are satisfied. If not, repeat step (1).

Algorithm 1 JPSN Algorithm

The utilization of product units in the output layer indirectly incorporates higher-order capabilities into JPSN while using a small number of weights and processing units. Therefore, the proposed JPSN combines the properties of both PSNN and JNN so that better performance can be achieved. When utilising the proposed JPSN as a one-step-ahead predictor, the previous input values are used to predict the next element in the data. The unique architecture of JPSN also avoids the combinatorial explosion of higher-order terms as the network order increases. The JPSN has the topology of a fully connected two-layered feedforward network. Considering that the fixed weights are not tuneable, it can be said that the summing layer is not “hidden” as in the case of the MLP. By this means, such a network topology with only one layer of tuneable weights may reduce the training time.
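To make the learning steps (1)-(6) concrete, the sketch below implements one training step of a Jordan pi-sigma network in Python: the previous output is appended as a context input, the summing units' outputs are multiplied together and passed through a sigmoid, and the single layer of tuneable weights is updated by gradient descent with a momentum term. It is a simplified illustration under stated assumptions (for example, it does not pin the context weights to 1 as the paper does), not the authors' exact algorithm.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def jpsn_step(x, context, target, W, lr=0.1, momentum=0.2, prev_dW=None):
    """One training step: forward pass through summing units, product unit and
    sigmoid, then a gradient-descent weight update with momentum."""
    z = np.append(x, context)                    # input vector plus recurrent context node
    h = W @ z                                    # summing units h_j, j = 1..order
    net = np.prod(h)                             # product of the summing-unit outputs
    y = sigmoid(net)                             # network output, as in Eq. (1)
    e = target - y
    # Gradient of 0.5*e^2 w.r.t. w_ij: -e * y*(1-y) * (product of the other h_k) * z_i.
    # (Assumes no h_j is exactly zero, which is practically always true with random weights.)
    grad = np.outer(-e * y * (1 - y) * (net / h), z)
    dW = -lr * grad
    if prev_dW is not None:
        dW += momentum * prev_dW                 # momentum term, as in Eq. (6)
    return W + dW, y, dW

# Tiny usage example with hypothetical sizes (4 inputs, 2nd-order network):
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 5))           # 2 summing units, 4 inputs + 1 context
W, y, dW = jpsn_step(np.array([0.3, 0.5, 0.4, 0.6]), context=0.0, target=0.55, W=W)
```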

3 Experiments Results and Discussion

In this section, we describe the implementation of JPSN using MATLAB 7.10.0 (R2010a) on a Pentium® Core™2 Quad CPU. All the networks were trained and tested with daily temperature data gathered from the National Forecast Office, Malaysian Meteorological Department (MMD). The network models were built considering five (5) different numbers of input nodes, ranging from 4 to 8 [17]. A single neuron was considered for the output layer. The number of hidden nodes (for MLP) and higher order terms (for PSNN and JPSN) initially started with 2 and was increased by one up to a maximum of 5 [13, 18]. The combination of 4 to 8 input nodes and 2 to 5 nodes for the hidden layer/higher order terms of the three (3) network models yields a total of 1215 Neural Network (NN) architectures for each in-sample training dataset. Since the forecasting horizon is one-step-ahead, the output variable represents the temperature measurement one day ahead. Each data series is segregated in time order and divided into 3 sets: the training, validation and out-of-sample data. To avoid computational problems, the data is normalised between the upper and lower bounds of the network transfer function, f(x) = 1/(1 + e^(-x)).
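A small sketch of that pre-processing step, assuming plain min-max scaling into the sigmoid's output range; the exact normalisation bounds used by the authors are an assumption here.

```python
import numpy as np

def normalise(series, lo=0.0, hi=1.0):
    """Min-max normalisation of the raw temperature series into [lo, hi],
    chosen to match the output range of the sigmoid transfer function."""
    series = np.asarray(series, dtype=float)
    s_min, s_max = series.min(), series.max()
    return lo + (series - s_min) * (hi - lo) / (s_max - s_min)
```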

The NMSE gives an overall measure of bias and scatter and is also used for measuring network performance; obviously, a smaller NMSE value indicates better performance. For the purpose

of comparison, we used the following notations (refer to Table 1):

Table 1. Performance metrics formulae: the mean absolute error (MAE), the normalised mean squared error (NMSE), the signal-to-noise ratio (SNR) and the mean squared error (MSE), each computed from the actual values Pi and the predicted values Pi* over the n test points.

Average results of 10 simulations/runs were collected. The stopping criteria during the learning process were the maximum epoch and the minimum error, which were set to 3000 and 0.0001, respectively [21]. To assess the performance of all network models used in this study, the aforementioned performance metrics in Section 3.1 are used. Convergence is achieved when the output of the network meets the stopping criterion mentioned earlier. Based on the experiments, the best values of the momentum term, α = 0.2, and the learning rate, η = 0.1, were chosen from the simulation results obtained through a trial-and-error procedure. Likewise, the number of inputs = 4 was fixed in order to exemplify the effect of all network parameters.

3.2.1 The Network Parameters

The network parameters, viz. the learning rate η and the momentum term α, are added in the training process. A higher learning rate can be used to speed up the learning process; however, if it is set too high, the algorithm might diverge, and vice-versa. Fig. 2(a) presents the number of epochs versus different values of the learning rate with momentum α = {0.2, 0.4, 0.6, 0.8}. The figure clearly indicates that a higher learning rate leads the algorithm to converge quickly. However, the epochs start to rise at η = 0.7 and η = 0.9 (refer to α = 0.8), at which the network begins to overfit, thus leading to longer training times.

Furthermore, the momentum term is also an important factor for the learning process. Fig. 2(b) indicates the effects of the momentum term on the model convergence with learning rates η = {0.1, 0.3, 0.5, 0.7, 0.9}. In the early stage of training with small momentum, the number of epochs was reduced to some point, and then increased again. The number of epochs increased when α ≥ 0.7 for most learning rates η. This is due to a smaller momentum α leading the network to diverge. Subsequently, it can be seen that a larger value of the momentum term affects the number of epochs reached. Therefore, a higher momentum term can be used to achieve a smaller number of epochs. Thus, it can be concluded that a higher momentum term can act as a positive catalyst for the network to converge. However, too large a momentum value could also lead the network to easily get trapped in local minima. Moreover, one should consider the combination of both learning rate and momentum term that allows a fast convergence rate. Therefore, instead of choosing the minimum epoch reached for each simulation, one should also consider the minimum error obtained from the simulation process.

In this case, the combination of momentum term α = 0.2 and learning rate η = 0.1 was selected.


(b) Number of Epochs versus Various Momentum Term with Learning Rate 0.1, 0.3, 0.5, 0.7 and 0.9

Fig 2. The Effects of Learning Factors on the Network Performance

(a) Error VS Epochs for momentum term α =0.8 and learning rate η=0.9


(b) Error VS Epochs for momentum term α =0.2 and learning rate η=0.1

Fig 3. Error VS Epochs for the combination of α=0.8,η=0.9 and α=0.2,η=0.1

The number of higher order terms affects the learning capability and varies the complexity of the network structure. There is no upper limit for the higher order terms; yet, it is rare for the network order to be more than two times greater than the number of input nodes. Therefore, we gradually increased it, starting from the 2nd order up to the 5th order [13]. For the rest of the experiments, we used α = 0.2 and η = 0.1.

Table 2. The Effects of the Number of Higher Order Terms for JPSN with α = 0.2, η = 0.1 and Input = 4

ORDER            2        3        4        5
MAE           0.0635   0.0643   0.0646   0.0675
NMSE          0.7710   0.7928   0.8130   0.8885
SNR          18.7557  18.6410  18.5389  18.1574
MSE Training  0.0062   0.0064   0.0064   0.0076
MSE Testing   0.0065   0.0066   0.0068   0.0074
Epoch         1460.9   1641.1   1209.9    336.8

As there are no rigorous rules in the literature on how to determine the optimal number of input neurons, we used a trial-and-error procedure between 4 and 8 to determine the number of input neurons. From Table 3, it can be observed that the network performance, on the whole, starts to decrease (while the error starts to increase) when a larger number of input neurons is added. However, a large number of neurons in the input layer is not always necessary; it can decrease the network performance and may lead to greater execution time, which can cause overfitting.


Table 3. The Effects of the Input Neurons for JPSN with α = 0.2 and η = 0.1

INPUT            4        5        6        7        8
MAE           0.0635   0.0632   0.0634   0.0632   0.0634
NMSE          0.7710   0.7837   0.7912   0.7888   0.8005
SNR          18.7557  18.6853  18.6504  18.6626  18.5946
MSE Training  0.0062   0.0062   0.0062   0.0062   0.0062
MSE Testing   0.0065   0.0066   0.0066   0.0066   0.0067
Epoch         1460.9    193.5    285.3    236.3    185.9

3.2.2 The Prediction of Temperature Measurement

The above discussions have shown that some network parameters may affect the network performance. In conjunction with that, it is necessary to illustrate the robustness of JPSN by comparing its performance with the ordinary PSNN and the MLP. Table 4 presents the best simulation results for JPSN, PSNN and MLP.

Table 4. Comparison of Results for JPSN, PSNN and MLP on All Measuring Criteria

Network Model     MAE       NMSE       SNR      MSE Training  MSE Testing   Epoch
JPSN           0.063458  0.771034  18.7557      0.006203      0.006462     1460.9
PSNN           0.063471  0.779118  18.71039     0.006205      0.006529     1211.8
MLP            0.063646  0.781514  18.69706     0.00623       0.006549     2849.9

Over all the training processes, JPSN obtained the lowest MAE, which is 0.063458, while the MAE for PSNN and MLP was 0.063471 and 0.063646, respectively (refer to Fig. 4). The MAE shows how close the forecasts made by JPSN are to the actual output in analysing the temperature. JPSN outperformed PSNN and the MLP in MAE by relative margins of roughly 2 × 10^-4 and 2.9 × 10^-3, respectively.

Fig 4. MAE for JPSN, PSNN and MLP

Moreover, it can be seen that JPSN reached a higher value of SNR (refer to Fig. 5). Therefore, it can be said that the network tracks the signal better than PSNN and MLP. Apart from the MAE and SNR, it is verified that JPSN exhibited lower errors in both training and testing (refer to Fig. 6).


Fig 5. SNR for JPSN, PSNN and MLP

Fig 6. MSE Training and Testing for JPSN, PSNN and MLP

The models' performances were also evaluated by comparing their NMSE. Fig. 7 illustrates the NMSE on the testing data set for the three network models. It shows that JPSN steadily gives a lower NMSE when compared to both PSNN and MLP. This means that the predicted and actual values obtained by the JPSN are better than those of both comparable network models in terms of bias and scatter. Consequently, it can be inferred that the JPSN yields more accurate results, provided that the network parameters are determined properly. The parsimonious representation of higher order terms in JPSN assists the network to model successfully.

Fig 7. NMSE for JPSN, PSNN and MLP


For the purpose of demonstration, the earliest 10 data points (Day 1 to Day 10) are plotted in Fig. 8, which indicates the predicted values (forecast) and the actual values (target) of the temperature measurement for the Batu Pahat region. Based on Fig. 8 (Day 1), the prediction error for JPSN is 0.4024, while for PSNN and MLP it is 1.2404 and 1.2324, respectively. JPSN outperformed both network models, by a ratio of 0.2449 for PSNN and 0.2461 for MLP. For Day 2, JPSN again leads PSNN and MLP, with ratios of 0.4446 and 0.4449, respectively. The rest of the comparisons in terms of the performance ratio are given in Table 5. Table 5 illustrates that JPSN exhibits the minimum error for most of the ten days compared to the two benchmark models, PSNN and MLP.

Fig 8. Temperature Forecast made by JPSN, PSNN and MLP on 10 Data Points

Table 5. 10 Data Points of JPSN, PSNN and MLP Temperature Forecast

Forecast Value Target Value (JPSN) Target Value (PSNN) Target Value (MLP) 26.2

27.4404 26.9032 26.3165 26.8034 26.8926 26.8309 26.7183 26.3041 26.6013 25.9385

27.4324 26.9014 26.2411 26.7992 26.9105 26.8391 26.7063 26.2483 26.57 25.8932

Fig. 9 represents the error minimisation by the three network models, JPSN, PSNN and MLP, on 10 data points. Compared to the benchmark models, the ordinary PSNN and the MLP, JPSN again came out ahead with the lowest average error, 0.7006, compared to 0.8301 for the ordinary PSNN and 0.8364 for the MLP.
