Based on the relation of complexity weight to use case transactions in the traditional UCP method, fuzzy rules are set for the purpose of analyzing the complexity weight, as shown in Table
Proceedings of the 4th International Conference on Computer Engineering and Networks (CENet2014)
Springer Cham Heidelberg New York Dordrecht London
Library of Congress Control Number: 2014957309
© Springer International Publishing Switzerland 2015
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher's location, in its current version, and permission for use must always be obtained from Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Volume I
Yunbo Chen, Hongchu Yu, and Lei Chen
Based on Fuzzy Inference 11
Yue Xie, Jilian Guo, and Anwei Shen
Interconnection Circuits 19
Yuling Shang and Pei Zhang
Control of a Multistepping Motor 27
Wenhao Shi, Lu Shi, Fang An, Zhouchang Wu,
Zhongxiu Weng, and Lianqing Zhao
Hongling Wang, Gang Zheng, and Yueshun He
Lei Zhang, Jianyu Li, Shuangwen Chen, and Xin Jin
Environmental Performance Evaluation Model 55
Xiaofei Liao, Huayue Li, Weisha Yan, and Lin Liu
on SPSO Algorithm 65
Maoheng Sun and Azhi Tan
9 A Fast and Accurate Algorithm of Subspace Spectrum
Peak Search Based on Bisection Method 73
Yu Wang, Hong Jiang, and Donghai Li
10 A CRF-Based Method for DDoS Attack Detection 81
Yu Wang, Hong Jiang, Zonghai Liu, and Shiwen Chen
and Its Application 89
Haitao Zhang, Binjun Wang, and Guangxuan Chen
Algorithm for Wireless Sensor Network 97
Yujun Liu and Meng Cai
Based on the Iterative Learning Control Algorithm 107
Yanfen Luo
and Its Application 115
Guang Han
in Magnetotelluric Occam Inversion Algorithm 123
Yi Xiao and Yu Liu
Applications 133
Jiri Krenek, Kamil Kuca, Aneta Bartuskova, Ondrej Krejcar,
Petra Maresova, and Vladimir Sobeslav
Based on GA-BP Neural Network 141
Kai Guan, Zhiqiang Wei, and Bo Yin
(CSI) Feedback in MIMO Broadcast Channels 155
Yuan Liu and Kuixi Chen
of the Fully Enclosed Region Upper Confidence Bound
Applied to Trees Algorithm 163
Lin Wu, Ying Li, Chao Deng, Lei Chen,
Meiyu Yuan, and Hong Jiang
20 A New Linear Feature Item Weighting Algorithm 171
Shiyuan Tian, Hui Zhao, Guochun Wang, and Kuan Dai
21 Trust Value of the Role Access Control Model
Based on Trust 179
Xiaohui Cheng and Tong Wang
Approximate Identity Neural Networks 187
Saeed Panahian Fard and Zarita Zainuddin
Based on Artificial Fish Swarm Algorithm 195
Meiling Shen, Li Li, and Dan Liu
Histogram Features 201
Zhiqin Zhang, Fei Huang, and Linli Tan
for Collaborative Recommendation Algorithm 209
Li Zhang, Yuqing Xue, and Shuyan Cao
Opinion Analysis 219
Yun Lin, Jie Wang, and Rong Zou
Modeling 227
Peipei Su
Scheduling in Cloud Computing 235
Shuang Zhao, Xianli Lu, and Xuejun Li
29 Improving Database Retrieval Efficiency 245
Shaomin Yue, Wanlong Li, Dong Han,
Hui Zhao, and Jinhui Cheng
by Packet-Loss Detection 253
Yuan He, Minli Yao, and Xiong Xiong
31 Key Management Scheme in Cluster for WSNs 263
Xiaoming Liu and Qisheng Zhao
Storage System 271
Lei Yang and Shi Liu
33 LF: A Caching Strategy for Named Data Mobile
Ad Hoc Networks 279
Li Zhang, Jiayan Zhao, and Zhenlian Shi
34 Topological Characteristics of Class Collaborations 291
Dong Yan and Keyong Wang
Sensor Networks 299
Xiaoming Liu and Qisheng Zhao
Based on CPN Model 305
Rengaowa Sa, Baolier Xilin, Yulan Zhao, and Neimule Menke
Unit Vibration Based on Radial Basis Function (RBF)
Neural Network 315
Xueli An
for Massive Data 323
Xin Xu, Guilin Zhang, and Wei Wu
System Based on Universal Design 331
Yun Liu, Guoan Zhao, Dayong Gao, and Zengxia Ren
in Jilin Province 341
Fang Xia, Bingbing Zhao, and Xiaochun Du
Using a Hybrid Method 349
Zirong Li, Xiaohe Zhang, Yan Li, and Chun Liu
42 Membrane System for Decision-Making Problems 357
Lisha Han, Laisheng Xiang, and Xiyu Liu
Based on RUP 365
Rui Guo
Rough Set and Evidence Theory Under Incomplete
Decision System 371
Zhihai Yang, Weihong Yu, Yan Chen, and Taoying Li
Based on Attribute Reduction 381
Lingyun Wei, Xiaoli Zhao, and Xiaoguang Zhou
46 Synthetic Safety Analysis: A Systematic Approach
in Combination of Fault Tree Analysis and Fuzzy
Failure Modes and Effect Analysis 389
Guannan Su, Linpeng Huang, and Xiaoyu Fu
Based on Gravity Model 399
Lihua Heng, Gang Chen, and Zongmin Wang
of Orthoptera in Daqinggou Nature Reserve Using SPSS 407
Chunming Liu, Tao Meng, and Bingzhong Ren
49 Intelligent Diagnostics Applied Technology of Specialized
Vehicle Based on Knowledge Reasoning 415
Licai Bi, Yujie Cheng, Dong Hu, and Weimin Lv
in China 425
Fangzhi Liu
Drug–Target Interactions Prediction 433
Ru Zhang
52 A Complementary Predictor for Collaborative Filtering 443
Min Chen, Wenxin Hu, and Jun Zheng
with Higher-Order Logic 451
Guangyuan Li
Processing of Water Bag 459
Hong Wang, Tao Xu, and Yahong Zhou
Based on Man-in-the-Middle Attack 467
Yunze Wang and Yinying Li
Traffic Prediction 477
Congcong Wang, Gaozu Wang, Xiaoxiao Zhang,
and Shuai Zhang
Trajectory for Antitank Missile 487
Mengchun Zhong, Cheng Li, and Hua Li
58 A Fast and Accurate Pupil Localization Method Using
Gray Gradient Differential and Curve Fitting 495
Yuhui Lin, Zhiyi Qu, Yu Zhang, and Huiyi Han
System’s Syntactic Analysis 505
Zangtai Cai
Based on EMD 513
Wanjun Zhang, Minjie Niu, and Xiaoying Wu
61 Kinect-Based 3D Color Reconstruction 521
Li Yao, Guosheng Dong, and Guilan Hu
Local Binary Pattern 531
Zhen Sun, Xichang Wang, and Jiang Liu
Based on Video Analysis 539
Yu Fan, Kangxiong Yu, Xiaoqing Tang, Heping Zheng,
Li Yu, and Ge Zhang
64 Face Detection Based on Landmark Localization 547
Peng Liu, Songbin Li, Qiongxing Dai, and Haojiang Deng
Differential 555
Guo Huang, Li Xu, Qingli Chen, and Tao Men
Enhancement 565
Qingli Chen and Guo Huang
in Range Images 573
Qingli Yin
Moving Objects Using Graph Cuts 583
Mingjie Zhang and Baosheng Kang
69 Automatic Detection of Pharyngeal Fricatives in Cleft
Palate Speech 591
Yan Xiao and Mangui Liang
Model Based on Conditional Random Fields 599
Fanjin Mai, Shitong Wu, and Taoshi Cui
71 Mobile Real-Time Monitoring System
Based on Human Action Recognition 607
Lin Chai, Zhiqiang Wei, and Zhen Li
72 A Kind of Image Classification Method Study 615
Guoqing Wu, Bingheng Yang, and Liang Lv
Based on the Vision of the CO2 Welding 625
Xiaogang Liu and Xiaowei Ji
of Froth Images 635
Yanpeng Wu, Xiaoqi Peng, Yanpo Song, and Qian Jiang
Machine with Choquet Integral 643
Aixia Chen, Zhiyong Liang, and Zhen Guo
Color-Depth Cameras 649
Kezhen Xie, Zhen Li, and Zhiqiang Wei
Objects Detection in Traffic Video Analysis
on a Heterogeneous Platform 659
Teng Li, Yong Dou, Jingfei Jiang, and Peng Qiao
for Robust Tracing 669
Weiwu Ren, Xiao Chen, Xiaoming Wang, and Mingyang Liu
Based on Radar-Vision Fusion 677
Xiao Chen, Weiwu Ren, Mingyang Liu, Lisheng Jin, and Yue Bai
Based on the Second-Generation Curvelet Transform 687
Haihong Xue, Binjin Chen, Kun Sun, and Jianguo Yu
Neural Network Based on Root Mean Square
of Gray Scale 695
Hongliang Shi, Jian Rong, and Xinmin Zhou
for PolSAR Land Cover Classification 705
Lu Liu, Dong Sun, and Junfei Shi
83 An Algorithm for Human Face Detection in Color Images
Based on Skin Color Segmentation 713
Chunqiang Zhu
System in Home 719
Jiajin Zhang, Lichang Chen, Quan Gao, Zhaobo Huang,
Lin Guo, and Yanxin Yang
Volume II
Ziqian Xiao and Jingyou Chen
Recommendation 739
Xuejie Zhang, Zhijian Wang, Weijian Zhang, and Fang Yang
Tool in a Cloud-Based Environment 749
Shin-Jer Yang, Chung-Chih Tu, and Jyhjong Lin
on Maximum Interval Value 761
Che Liu, Yunfei Zhang, Fang Yang, Wenhuan Zhou,
and Xin Lv
Countries 769
Petra Maresová and Kamil Kuča
Supply Chain Under the Duopoly Market 777
Lingyun Wei, Xiaohan Yang, and Xiaoguang Zhou
Recommendation 787
Guoqiang Li, Lejian Liao, Dandan Song, Zhenling Zhang,
and Jingang Wang
Segmentation and WordNet 797
Tingna Liu and Ling Jiang
93 OpenSource Automation in Cloud Computing 805
Vladimir Sobeslav and Ales Komarek
94 Utilization of Cloud Computing in Education with Focus
on Open-Source Technologies 813
Vladimir Sobeslav, Josef Horalek, and Jakub Pavlik
in Cloud Computing 821
Hongjiao Li, Shan Wang, Xiuxia Tian, Weimin Wei,
and Chaochao Sun
with Asymmetric Information About the Market 833
Lingyun Wei, Jiafei Ling, and Xiaoguang Zhou
of K-Means with MapReduce 845
Bingliang Lu and Shuchao Wei
on Web Services 853
Gangguo Li, Wu Qin, Tingyi Zhou, Yang Wang,
and Xiaofeng Zhu
Considering Service Security 861
Shengji Yu, Hongyu Chen, and Yanping Xiang
100 Protection Circuit Design of Lithium-Ion Battery
Pack Based on STM32 Processor 871
Hongtao Zhang, Fen Wu, Hang Zhou, Xiaoli Peng,
Chunhua Xiao, and Hui Xu
Module Design Based on ARM Platform 879
Chunlei Song
System Based on Serial Port Communication 889
Peigang Jia and Sirui He
Monitoring System 897
Hui Chao and Wang Zhou
Management Information System 907
Jiangsheng Zhao and Xi Huang
105 A Short Loop Queue Design for Reduction of Power
Consumption of Instruction-Fetching Based on the Dynamic
Branch Folding Technique 917
Wei Li and Jianqing Xiao
Sigma-Delta Modulator 925
Jhin-Fang Huang, Jiun-Yu Wen, and Wei-Chih Chen
the Power Consumption of Embedded Processor 935
Wei Li and Jianqing Xiao
Laser Machine Tool 943
Zhiqin Qian, Jiawen Wang, Lizong Lin, and Qun Cao
Schnorr–Euchner Sphere Decoder Algorithm Accelerator
on Field-Programmable Gate Array 953
Shijie Li, Lei Guo, Yong Dou, and Jingfei Jiang
Chip Design for Positron Emission Tomography Front-End
Application 963
Wen-Cheng Lai, Jhin-Fang Huang, Kun-Jie Huang,
and Pi-Gi Yang
System Based on Universal Serial Bus Host 971
Xin Li, Lingping Chen, Rentai Chen, and Shengwen Jiang
on Multiagent Modeling Approach 979
Xuehong Bai, Lihu Pan, Huimin Yan, and Heqing Huang
113 Parallel Parity Scheme for Reliable Solid-State Disks 987
Jianbin Liu, Hui Xu, Hongshan Nie, Hongqi Yu, and Zhiwei Li
in Unicode 995
Xiaoying Chen, Jinyong Ai, and Xiaodan Guo
115 Graphical User Interface Reliability Prediction Based
on Architecture and Event Handler Interaction 1003
Zhifang Yang, Sanxing Yang, Zhongxing Yu,
Beibei Yin, and Chenggang Bai
116 An Energy-Efficient Dual-Level Cache Architecture
for Chip Multiprocessors 1011
Mian Lou, Longsheng Wu, Senmao Shi, and Pengwei Lu
System 1019
Daisen Wei, Xueqing Li, and Longye Tang
on Compute Unified Device Architecture 1027
Yunpeng Cao and Haifeng Wang
an Expanding Element of Smart Home Concepts 1035
Jan Dvorak, Ondrej Berger, and Ondrej Krejcar
Model 1043
Peixin Que, Xiao Guo, and Zhen Wang
121 Modeling of Virtual Electrical Experiment 1051
Yinling Zhang, Deti Ji, and Renyou Zhang
on Unexpected Threats 1059
Peng Ren, Xiaoguang Gao, and Jun Chen
to Battlefield Ad Hoc Network 1071
Sijia Lou, Jun He, and Wei Song
124 Research and Outlook on Wireless Channel Models 1079
Yuqing Wang, Cuijie Du, Xiujuan Han, Yuxin Qin,
and Hongqi Wang
File-Sharing Networks 1087
Weimin Luo, Jingbo Liu, Jiang Xiong, and Ling Wang
126 A New Type of Metropolitan Area Network 1095
Chuansheng Wu, Yunqiu Shi, and Shicheng Zhao
Theory in Wireless Sensor Networks 1103
Wan Qiang Han, Hai Bin Wu, and Zhi Jia Lu
128 An Item-Based Collaborative Filtering Framework
Based on Preferences of Global Users 1113
Chengchao Li, Pengpeng Zhao, Jian Wu, Jiumei Mao,
and Zhiming Cui
129 Disassortativity of Class Collaboration Networks 1121
Dong Yan, Keyong Wang, and Maolin Yang
Theory and Simulation Analysis 1129
Xin Jin, Jianyu Li, and Lei Zhang
131 Analysis of Network Accessibility 1139
Shuijian Zhang and Ying Zhang
Na Li, Yanhui Du, and Guangxuan Chen
Chain Value-Added Service Platform 1155
Hua Pan and Linfu Sun
Disruption-Tolerant Networks 1165
Yankun Feng and Xiangyu Bai
Sensor Networks 1177
Jian Di and Zhijun Ma
of Multi-hop Wireless Sensor Networks 1185
Shaoqing Wang, Kai Cui, and Ning Zhou
Traffic Identification 1195
Yanping Li and Yabin Xu
on GM-Markov Method 1207
Xiaohui Cheng, Jinzhou He, and Qiliang Liang
Programming Research 1215
Mingchun Zheng, Xinxin Ren, Xiao Li, Panpan Zhang,
and Xuan Liu
140 Kalman Filter-Based Bandwidth and Round Trip Time
Estimation for Concurrent Multipath Transfer
Performance Optimization in SCTP 1225
Wen Li, Wenbo Wang, Xiaojun Jing, and Wen Liu
Takashi Matsuhisa
Based on Tri-LDA Model 1245
Wei Ou, Zanfu Xie, Xiping Jia, and Binbin Xie
Network Design 1255
Vladimir Sobeslav and Josef Horalek
for Mobile Multihop Wireless Network 1263
Feng Liu, Jianli Li, Gong Qin, and Fanhua Kong
Force Equilibrium for Wireless Sensor Networks 1273
Yujian Wang and Kaiguo Qian
Based on Vulnerability Analysis 1281
Yahui Li, Hongwa Yang, and Kai Xie
Products Quality and Safety Based on the Social
Network 1291
Yingcheng Xu, Xiaohong Gao, Ming Lei, Huali Cai,
and Yong Su
for Wi-Fi Positioning System 1299
Shiping Zhu, Kewen Sun, and Yuanfeng Du
Algorithm Design
A Spatiotemporal Cluster Method
for Trajectory Data
Yunbo Chen, Hongchu Yu, and Lei Chen
widespread, the production of large amounts of different types of trajectory data and the extraction of useful information from mass trajectory data have emerged as hot issues in data mining. This paper presents a trajectory data processing method featuring simple operation, high precision, and strong practicability. For low-precision trajectory data that are discrete but contain time information, a clustering algorithm is proposed to extract information from such data. The algorithm can detect a point of interest (POI) in trajectory data by setting space and time thresholds. Trajectory data collected from a taxi using a global positioning system in Kunming, China, are used as experimental data. We conduct an experiment to detect a POI in the collected trajectory data and carry out a visual analysis of these special positions. The experimental results show the effectiveness of the algorithm, which can in addition compress trajectory data.
Keywords Trajectory data • Data mining • Spatiotemporal cluster
Orientation technology and information communication technology have reached the stage in their development where they allow for the tracking, in real time, of dynamic objects, resulting in a huge amount of trajectory data. For example, taxi companies, bus companies, and government agencies have installed global positioning systems (GPS) on vehicles to monitor and manage them. To track vehicles continuously, the systems send location information by GPS for continuous acquisition to a vehicle management control center. When a person’s mobile phone communicates with the mobile communication base station, the mobile communication services will collect data on the user’s location and reconstruct her trajectory
W.E. Wong (ed.), Proceedings of the 4th International Conference on Computer Engineering and Networks, DOI 10.1007/978-3-319-11104-9_1
based on the data. In this article, time-continuous position data are called trajectory data and may be used to record the trajectories of people or objects. Vehicle management centers and mobile communication service providers generally delete data regularly or irregularly, and large amounts of trajectory data contain a wealth of information.
At present, with the increasing application of positioning technology, the accumulation of large amounts of trajectory data provides opportunities for mining useful information from the data, but the mining of such information presents a challenge. On the basis of an in-depth analysis of the characteristics of trajectory data, this paper proposes and implements a processing algorithm for GPS trajectory data that only requires setting time and space thresholds to detect special positions in trajectories; in addition, the algorithm can compress raw trajectory data.
Some academics have called trajectory data, for example, location history data [1], time-stamped position data, and GPS traces [2, 3]. Studies on such data approach the topic mainly from the following three viewpoints. (1) Data expression: Hägerstrand first introduced the concepts of time trajectory and space–time prism to analyze a human migration model [4]. With time as the variable axis and people’s space as the dependent variable axis, using migration records from government statistics or artificial records, he was able to visualize human migration on a three-dimensional coordinate axis. Usually, the trajectory is simply expressed as a series of coordinate pairs: [(x1, y1, t1), ..., (xn, yn, tn)], where (xi, yi) ∈ space plane and t1 < t2 < ... < ti < ... < tn, i ∈ [1, n]. (2) Data compression: because of the continuously changing spatial position of a moving object, simply recording and transmitting all the positioning data will reduce the performance of the system, so researchers have studied trajectory data compression algorithms with the aim of effectively updating the location of moving objects. (3) Data application: these are primarily applied in mobile communication network optimization and location-based services (LBSs) to provide services in connection with mobile communication for moving users, when a user’s location needs to be tracked and even predicted to optimize service and reduce the network load. The premise of the LBS is to know the locations of users, so it needs to track them. In 2010, at the Asian Microsoft Research Institute, Dr. Xie Xin and coworkers launched a research project called GeoLife [5, 6]. This project initially uses GPS data collected daily to conduct research from the following three points of view: to understand a user’s life trajectory by deducing travel modes; to obtain an in-depth understanding of users by estimating their travel experiences; and to understand the location environment by predicting the user’s points of interest (POIs). The results can be applied to the following areas: life experience sharing based on GPS trajectories; common regional travel recommendations, such as, for example, interesting places and travel experts; and personalized locations or recommendations of friends.
Based on the foregoing discussion, on the one hand, most current studies focus on specific applications to solve specific problems; on the other hand, less research is being conducted on trajectory data processing methods, and existing processing methods, such as current multilevel segmentation algorithms, have certain limitations with respect to accuracy and practicability. The application analysis of
trajectory data is inseparable from the extraction of POIs. Because the coordinates of POIs produced by user access to the same location change frequently, direct analysis of such POIs is infeasible; thus, an algorithm needs to cluster the POIs extracted from trajectory data, assigning adjacent POIs to the same cluster, and the cluster then replaces its POIs in the analysis. In view of this, this paper attempts to provide a spatiotemporal clustering processing method for trajectory data. The method makes it possible to extract meaningful positions or temporal events related to the object of study and to compress vast amounts of data.
The chapter is organized as follows. Section 2 presents related concepts and algorithms, Sect. 3 uses the proposed algorithm in an experiment and in the visualization analysis of GPS vehicle trajectory data, and the last section presents conclusions and future prospects.
The movement of objects in space is usually considered a function (generally of a two-dimensional location) that reflects the changes in the objects’ spatial position over time. In a particular plane coordinate system, the spatial location of an object is represented as a coordinate pair (x, y), and the trajectory of the object can be expressed as a function f(t) = (x, y), where f is a continuous function and t is time.
In real life, we usually take samples from the coordinates of moving objects in spatiotemporal coordinates to express the approximate trajectory [7]. The general sampling methods are as follows:
Time-based sampling: by setting a sampling time interval (e.g., 30 s) for the motion process, the spatial location of the object is recorded once every 30 s. This is the predominant sampling mode in vehicle monitoring, typically using a GPS for the position.
Change-based sampling: during motion, the position is recorded when changes take place, such as the position of an object when its direction of movement changes.
Location-based sampling: when an object comes close to sensors placed beforehand, the sensors record the object’s position, as, for example, when scanning objects at a specific location during the delivery and transfer of goods in logistics.
Event-based sampling: the position is established when triggered by some event, such as making or answering a phone call, when a mobile communication company locates a customer in order to establish a communication link.
The experimental data in this paper are GPS vehicle trajectory data. A taxi sends data to a monitoring center every 15 s to monitor and manage the vehicle. This type of data acquisition is based on the first sampling method, which uses short sampling intervals, features complete data, is not subject to accidental factors, and
makes it easy to analyze people’s patterns of travel in traffic. Regardless of the sampling method used, it is necessary to obtain a set of discrete points. For the sake of convenience, discrete points are represented as three-dimensional coordinates (t, x, y): the spatiotemporal location of the object, which we call the anchor point, where (x, y) is the location and t the time. Thus, the trajectory of object A can be represented as a collection c = {(ti, xi, yi) | i ∈ N+}, where i represents discrete points sorted by time sequence number and (ti, xi, yi) is the location of the ith point of the trajectory. It is worth noting that the location of the anchor point is generally subject to error, because the error size is related to the method used to determine the position.
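The anchor-point representation above maps directly onto code. A minimal sketch, in which the type and field names are illustrative rather than taken from the paper:

```python
from typing import NamedTuple

class AnchorPoint(NamedTuple):
    """One discrete sample (t, x, y) of a moving object's trajectory."""
    t: float  # timestamp, e.g. seconds since the start of recording
    x: float  # planar easting
    y: float  # planar northing

def make_trajectory(samples):
    """Build the collection c = {(t_i, x_i, y_i)} sorted by time, so the
    index i reflects the temporal sequence of the samples."""
    return sorted((AnchorPoint(*s) for s in samples), key=lambda p: p.t)

# Samples may arrive out of order; sorting restores the time sequence.
traj = make_trajectory([(30.0, 12.0, 5.0), (0.0, 10.0, 4.0), (15.0, 11.0, 4.5)])
```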
This paper proposes a method of space–time clustering to extract vehicle POIs, a POI being defined as a location at which a moving person or object stays for more than a limited time, here 30 min. Figure 1.1 shows the sampling of a trajectory generated from a person or an object moving in space. The arrows indicate the direction of movement, and the black points denote sampled trajectory points, unevenly distributed in space, produced by the people or objects in the process of moving. We believe that a POI has a special meaning for the object; the red spots in the map are POIs. Further considering the geographical region around the POIs within a certain range, the dotted circles are drawn as regions of interest.
Fig. 1.1 Trajectory sampling, points of interest, and interest-region sample
Because POIs have a special meaning for the object, we designed an algorithm to extract POIs from trajectory data. The objective is to cluster the spatiotemporal data. Here, i is the anchor point number, ti the time attribute of the anchor point labeled i, td the total time difference, s the starting point of clustering, j the clustering endpoint, and MaxD the maximum distance between the starting point and the endpoint. Ts is the time threshold, DS the space threshold, N the final anchor point, and POI the collection of POIs. The algorithm procedure is shown in Fig. 1.2.
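Fig. 1.2 itself is not reproduced in this excerpt, so the following is a minimal sketch of one plausible reading of the procedure: grow a cluster from starting point s while new points stay within the space threshold DS of s, and emit the cluster's centroid as a POI when its time span td reaches the time threshold Ts. The exact growth and distance criteria (e.g., distance to the starting point versus the endpoint distance MaxD) are assumptions, not the paper's definitive flow chart.

```python
import math

def extract_pois(points, ds, ts):
    """Cluster time-ordered (t, x, y) anchor points into points of interest.

    A cluster grows from a starting point s as long as each following point
    stays within the space threshold ds of s; if the cluster's time span
    reaches the time threshold ts, its centroid is recorded as a POI.
    """
    pois = []
    s, n = 0, len(points)
    while s < n:
        j = s
        # Grow the cluster while the next point is within ds of the start.
        while j + 1 < n and math.dist(points[j + 1][1:], points[s][1:]) <= ds:
            j += 1
        td = points[j][0] - points[s][0]  # time span of the cluster
        if td >= ts:
            xs = [p[1] for p in points[s:j + 1]]
            ys = [p[2] for p in points[s:j + 1]]
            pois.append((sum(xs) / len(xs), sum(ys) / len(ys)))
            s = j + 1  # the cluster replaces its points; continue after it
        else:
            s += 1  # too short a stay: advance the starting point
    return pois
```

Replacing each qualifying cluster by a single centroid is also what compresses the trajectory, as the paper notes.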
In this paper, the proposed algorithm was implemented in Visual Studio 2010, with the results visually displayed and analyzed. First, the experimental data are presented, followed by data preprocessing; then the proposed clustering algorithm is used to analyze and process the data; finally, the results obtained by the proposed clustering algorithm are subjected to visual analysis in the ArcGIS platform.
Fig. 1.2 Flow chart of POI extraction
1.3.1 Introduction of Data
The data in our study were provided by a taxi company in the city of Kunming, China. The company installed a GPS on a vehicle, with the vehicle position data transmitted to the monitoring center every 15 s. The data included the time, longitude, latitude, vehicle running speed and angle, and passenger capacity. Table 1.1 displays fragments of the GPS taxi daily travel trajectory data. Of these, stime is the acquisition time of the anchor point, here Beijing time, accurate to the second, in year–month–day–hour–minute–second format; latitude and longitude are the latitude and longitude, expanded one million times in the original data; speed is the instantaneous velocity, in kilometers per hour, of the taxi in operation; orientation is the direction of the taxi at runtime, in degrees; and state is the carrying capacity. The GPS positioning precision is 3–10 m, but the vehicle running speed and angle have only a reference value because most of the time they are not accurate. Experiments were carried out on a total of 9,579 data points during the week of the study.
Prior to analysis, preprocessing made the original data satisfy the following processing requirements. (1) Abnormal data elimination: typical noise may shift positioning data, that is, the positioning data change a great deal in space in a short period of time. This kind of noise, caused by unstable GPS signals, is removed as abnormal data prior to analysis. (2) Coordinate conversion: the location in the original data is expressed in the geographical coordinates of latitude and longitude, which is not conducive to calculating distances or to integration with the geographic base map, so the analysis should be carried out after a unified coordinate transformation of the data. (3) Data visualization: a GPS track record is not appropriate for intuitive analysis and needs to be visualized; for this reason, it is necessary to connect the track points in time sequence so as to reconstruct the vehicle trajectory, as shown in Fig. 1.3.
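The first two preprocessing steps can be sketched as follows. The speed threshold for outlier rejection and the equirectangular projection for coordinate conversion are illustrative choices on my part; the paper does not state which filter or projection it used.

```python
import math

def remove_jumps(points, max_speed=50.0):
    """Drop anchor points implying a physically impossible jump (unstable
    GPS fix): the speed from the previously kept point exceeds max_speed
    in metres per second."""
    kept = [points[0]]
    for t, x, y in points[1:]:
        pt, px, py = kept[-1]
        dt = t - pt
        if dt > 0 and math.hypot(x - px, y - py) / dt <= max_speed:
            kept.append((t, x, y))
    return kept

def to_local_meters(points, lat0, lon0):
    """Equirectangular approximation: project (t, lat, lon) in degrees onto
    a local planar metric frame centred at (lat0, lon0), adequate for
    city-scale distance computations."""
    r = 6371000.0  # mean Earth radius, metres
    out = []
    for t, lat, lon in points:
        x = math.radians(lon - lon0) * r * math.cos(math.radians(lat0))
        y = math.radians(lat - lat0) * r
        out.append((t, x, y))
    return out
```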
Table 1.1 Taxi tracking data example
1.3.3 Analysis of POI Extraction and Visualization
After data preprocessing, the proposed algorithm can be used to cluster the data and visualize the results. As shown in Fig. 1.4 (top), the time threshold and space threshold should be set before clustering for POI extraction. Table 1.2 shows that if the time threshold is set higher and the space threshold lower, there will be fewer POIs, and vice versa. The red points in Fig. 1.4 show the taxi’s POIs on the map, including the train station, airport, gas station, and other common locations. Further exploration is required to determine whether there is a reference value for the source of taxis transporting passengers.
Fig 1.3 Visualization of GPS track record and GPS trajectory reconstruction
Fig 1.4 Extraction interface and visualization of POIs
Table 1.2 Statistics for a Taxi, 21–27 February 2011, POIs in 1 Week
Conclusion and Prospects
This paper explored a method of clustering trajectory data using an algorithm. Experiments were carried out using real trajectory data, with the results showing that this algorithm could be used effectively to cluster trajectory data and detect the position of research objects in a meaningful manner while effectively compressing the trajectory data. Along with the use of China’s Beidou satellite, real-time dynamic object tracking will make it easier to produce more trajectory data. Thus, the trajectory data processing method, the data mining algorithm, and the application research are necessary for LBSs. Nevertheless, further research is still required to mine more data, create more processing algorithms, and develop application models; at the same time, efforts will be made to acquire different object trajectory data for analysis and mining.
4. Hägerstrand T. What about people in regional science? Pap Reg Sci. 1970;24(9):7–24.
5. Zheng Y, et al. GeoLife2.0: a location-based social networking service. Proceedings of MDM ’09; Beijing, China; 2009. pp. 357–58.
6. Xie X, et al. Understanding user behavior geospatially: contextual and social media understanding and usage. J ACM. 2009;214(3):233–41.
7. Zheng Y, et al. Learning transportation mode from raw GPS data for geographic applications on the web. Submitted to WWW 2008; Journal of the Association for Computing Machinery (JACM); 2008;185(2):247–56.
8. Wolfson O, et al. Updating and querying databases that track mobile units. Distributed Parallel Database. 1999;7(3):257–387.
Use Case Points Method of Software Size
Measurement Based on Fuzzy Inference
Yue Xie, Jilian Guo, and Anwei Shen
Abstract Size measurement is the key element in estimating software development costs and schedule, and the success of a software project directly relates to measurement accuracy. This paper addresses the problem of the discontinuity of use case complexity weight hierarchies in the traditional use case points (UCP) method and proposes an improved complexity weight calculation method that utilizes fuzzy theory to analyze the complexity of use cases. First, with use case transactions as input and complexity weight as output, a fuzzy inference system is built. Then fuzzy rules are established based on the relationship between complexity weights and transactions in use cases. These fuzzy rules can be used to compute the complexity weight. Studies have shown that the proposed method can eliminate the discontinuity of use case complexity grades and enhance the accuracy of UCP estimation as well.
weight
With the rapid development of computer technology, software has become an important symbol of modern information processes. The increasing complexity and scale of modern software have led to a dramatic rise in software costs and caused software development to fall far behind schedule [1]. The most important task in the development of software programs is to estimate the labor, cost, and release schedule of the software, which are key factors in estimating and ascertaining the scope of software projects. As the key factor in software engineering, the size estimate directly relates to the success of the entire software development project [2].
The use case points (UCP) method is a method for estimating a software project’s scope and effort based on use cases in the object-oriented development
Y. Xie (*) • J. Guo • A. Shen
Aeronautics and Astronautics Engineering College, Air Force Engineering University,
710038 Xi’an, China
e-mail: xypy2012@foxmail.com
W.E. Wong (ed.), Proceedings of the 4th International Conference on Computer Engineering and Networks, DOI 10.1007/978-3-319-11104-9_2
Trang 29method, which was proposed by Gustav Karner in 1993 It uses cases and actors thathave been identified and classified according to their complexity to calculate UCPsand then uses the relation of UCPs to effort to obtain the developmental effort ofsoftware projects required in man-hours as the main feature of the method; never-theless, because a use case is based on usage and places the user at the center of thesoftware rather than the system or design, it shows greater robustness and stabilitythan the Function Point method or the Lines of Code method A large number ofresearch papers suggest that UCP is a very effective method [3].
UCP has attracted wide research attention since its invention, and several related issues have been addressed in previous efforts. Anda focuses on adjustment factors, while others highlight discrepancies in designing use case models [4, 5]. Robiolo and Ochodek propose different size metrics such as transactions, TT points, and paths [6, 7]. In addition, researchers have analyzed the discontinuity problem of use case complexity grades [8, 9]. In this paper we propose an improved UCP method based on fuzzy inference that uses fuzzy theory to analyze the complexity weight of use cases and thereby eliminate the discontinuity between complexity grades. In this sense, it makes estimating the scope of software projects with UCP much more realistic.
In the course of calculating the unadjusted use case weight using the traditional UCP method, the complexity of a use case is determined by its transactions. However, the complexity hierarchy in the traditional UCP method is discontinuous, which can sometimes result in inaccurate complexity weight measurements. Table 2.1 shows three use cases.
Several typical cases are given in the table, as follows:
1. According to the traditional rules for determining use case complexity weights, Use Case_2 and Use Case_3, both with the same weight of 10, are classified as medium, but Use Case_3 has two more transactions than Use Case_2. Thus, clearly, Use Case_3 is more complex.
2. Use Case_1, which contains three transactions, is classified as low with a weight of 5. But if another transaction is added to Use Case_1, it becomes like Use Case_2 and is classified as medium according to the complexity rules.
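The classification rule behind these cases can be sketched as follows. The transaction thresholds (at most 3 for simple, 4 to 7 for average, 8 or more for complex) are the commonly cited UCP values and are an assumption here, since the paper's Table 2.1 data did not survive extraction:

```python
def traditional_weight(transactions):
    """Traditional UCP use case complexity rule (assumed common thresholds:
    <=3 transactions -> simple/5, 4-7 -> average/10, >=8 -> complex/15)."""
    if transactions <= 3:
        return 5    # simple (low)
    if transactions <= 7:
        return 10   # average (medium)
    return 15       # complex (high)
```

Under this rule Use Case_1 (3 transactions) gets weight 5, while adding a single transaction jumps the weight to 10: exactly the discontinuity the paper targets.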
Table 2.1 Complexity weights determined by the traditional UCP method (columns: Use Case_1, Use Case_2, Use Case_3; rows: transactions and complexity weight)

Analysis shows that complexity hierarchies in the traditional UCP method are discontinuous and somewhat arbitrary. If these situations occur in the same
software project, the measurements will not conform to the real-world situation, which will lead to serious errors, especially if the number of use cases is very high.
To resolve this problem, we use a fuzzy inference method to analyze the complexity weight of use cases so as to eliminate the effects of discontinuous division.
Fuzzy inference, also known as fuzzy logic inference, is a calculation process that derives new fuzzy propositions as conclusions from given fuzzy propositions by the fuzzy logic method. It simulates the human ability to make reasonable decisions in an uncertain and imprecise environment and can map a given input space to a specific output space. Fuzzy inference has a unique advantage in solving fuzzy and uncertain problems because its inference process is similar to human thinking: it performs a nonlinear mapping from input to output and has a strong inference explanation capability.
Fuzzy inference can be divided into fuzzification, fuzzy logic inference, and defuzzification [10]. The fuzzy variable is the basis for establishing a fuzzy system, which here analyzes the complexity weight of use cases. Taking the transactions of a use case as the input variable and the complexity weight of the use case as the output variable, a fuzzy inference system was constructed, as shown in Fig. 2.1.
This fuzzy inference system uses a Mamdani controller; it defines the input variable as small, medium, large, or extra large and the output variable as low, average, high, or extra high. For example, if a use case contains only one transaction, its transaction count is considered small. The Gaussian function serves as the membership function for the input and output variables; its mathematical form is shown in Eq. (2.1):

f(x) = exp(-(x - c)^2 / (2*sigma^2))   (2.1)

where c is the centre and sigma the width of the fuzzy set.
Fig 2.1 Fuzzy inference system of use case complexity weight
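A minimal sketch of the Gaussian membership function of Eq. (2.1); the centre c and width sigma are per-set parameters chosen by the fuzzy system designer:

```python
import math

def gaussian_mf(x, c, sigma):
    """Gaussian membership function (Eq. 2.1): degree of membership of x
    in a fuzzy set centred at c with width sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
```

The membership degree is 1 exactly at the centre and decays smoothly on either side, which is what removes the hard jumps between complexity grades.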
As the core of a fuzzy system, fuzzy rules reflect the causality between input and output, similar to the if...then conditional expressions of ordinary human language. Fuzzy systems with a single input and a single output have fuzzy rules of the form IF x is A THEN y is B, where x is the input, y is the output, and A and B are fuzzy sets over the input and output. In general, the more fuzzy subsets there are, the higher the output precision will be, but the corresponding calculation costs will also increase. In practice, we select an appropriate number of fuzzy rules based on the requirements for precision.

Fig. 2.2 Membership function of transaction

Fig. 2.3 Membership function of complexity weight

Based on the relation of complexity weight to use case transactions in the traditional UCP method, fuzzy rules are set for the purpose of analyzing the complexity weight, as shown in Table 2.2.
Using the fuzzy rules in Table 2.2, we can obtain the complexity weight by fuzzy inference, as shown in Fig. 2.4, which displays a typical case (the number of input transactions is 3, and the output complexity weight is 5.21). The surface graph of the fuzzy system is shown in Fig. 2.5.
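An end-to-end sketch of such a Mamdani system in Python. The set centres and widths below, and the one-to-one rule pairing, are illustrative assumptions (the paper does not list its parameters), so this sketch will not reproduce the 5.21 output exactly:

```python
import math

def gauss(x, c, s):
    """Gaussian membership value of x for a set centred at c with width s."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

# Assumed (centre, sigma) parameters for the input and output fuzzy sets.
IN_SETS = {"small": (2, 1.5), "medium": (5, 1.5),
           "large": (9, 1.5), "extra large": (13, 1.5)}
OUT_SETS = {"low": (5, 1.5), "average": (10, 1.5),
            "high": (15, 1.5), "extra high": (20, 1.5)}
RULES = [("small", "low"), ("medium", "average"),
         ("large", "high"), ("extra large", "extra high")]

def complexity_weight(transactions):
    """Mamdani inference: min implication, max aggregation, centroid defuzzification."""
    ys = [i / 10 for i in range(0, 251)]   # discretized output universe 0..25
    agg = [0.0] * len(ys)
    for in_name, out_name in RULES:
        strength = gauss(transactions, *IN_SETS[in_name])   # rule firing strength
        for i, y in enumerate(ys):
            agg[i] = max(agg[i], min(strength, gauss(y, *OUT_SETS[out_name])))
    return sum(m * y for m, y in zip(agg, ys)) / sum(agg)   # centroid
```

Because every rule fires to some degree, the output weight varies smoothly with the transaction count instead of jumping between 5, 10, and 15.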
With the fuzzy system constructed here, we analyze the special circumstances in Table 2.1 and obtain the adjusted complexity weights (Table 2.3).
Table 2.2 Fuzzy rules (columns: Rule, Transaction (input), Complexity weight (output))
Table 2.3 Comparison of complexity weight determined using the two methods
As we can see in Table 2.3, using the improved UCP method the adjusted weight of Use Case_1 is 5.21, the adjusted weight of Use Case_2 is 5.96, and the adjusted weight of Use Case_3 is 9.12. Compared with the traditional UCP method, the improved method is more reasonable for determining the complexity weight. As a result, the problem of discontinuous division is effectively solved, and the estimated software size is much closer to the actual situation. Because the fuzzy inference structure can compute the unadjusted weight of a use case, the adjusted use case points of software projects can then be computed.
To demonstrate the effectiveness of the proposed method, specific software programs are given as an example. We apply the improved UCP method, established on the basis of fuzzy inference, to estimate the scale of actual software. The algorithm flow is shown in Fig. 2.6 (see [3] for detailed calculation steps); the analysis results are shown in Table 2.4.
We can see from the data in the table that, in general, the UCP method is an effective method of estimation. It has an accuracy similar to that of the Expert method and can be used in combination with that method. If the estimation error of these two methods is large, recalculation is possible.

Fig. 2.6 Algorithm flow of UCP (calculate the unadjusted use case weight and unadjusted actor weight, apply the technical and environmental factors, calculate the use case points, then calculate the effort)
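The UCP flow of Fig. 2.6 can be sketched as follows. The adjustment formula and the 20 man-hours-per-UCP productivity ratio are the commonly cited defaults of Karner's method and are assumptions here, since the paper defers the detailed calculation steps to [3]:

```python
def ucp_effort(uucw, uaw, tcf, ef, hours_per_ucp=20.0):
    """UCP flow sketch: sum the unadjusted use case weight (UUCW) and the
    unadjusted actor weight (UAW), adjust by the technical complexity factor
    (TCF) and environmental factor (EF), then convert to effort in man-hours."""
    uucp = uucw + uaw            # unadjusted use case points
    ucp = uucp * tcf * ef        # adjusted use case points
    return ucp * hours_per_ucp   # estimated effort
```

With the fuzzy approach of this paper, the UUCW term would be the sum of the defuzzified complexity weights rather than the stepped 5/10/15 values.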
The estimations from the improved UCP and the traditional UCP methods are very similar because the improved method stems from the traditional one. Three out of four projects have smaller errors using the improved UCP method than the traditional UCP method. As is clear from the four projects, the improved UCP method has higher accuracy than the traditional UCP method; therefore, in situations where UCP can be used, we can apply the improved UCP method, based on fuzzy inference, to determine the complexity weight of use cases and obtain greater accuracy.
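The error columns in Table 2.4 compare estimated against actual effort; the paper does not state its exact error formula, so one plausible form, assumed here, is the relative error:

```python
def relative_error(estimated, actual):
    """Relative estimation error as a fraction of the actual effort."""
    return abs(estimated - actual) / actual
```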
Conclusion
UCP is a novel method for measuring size in software cost and release schedule estimation, and it offers some great advantages, especially with respect to object-oriented design in software engineering. Nowadays, with more and more software using object-oriented design, the use and study of UCP will show a rising trend. In this paper, to solve the problem of discontinuous complexity division in the traditional UCP method, an improved method was proposed for calculating the complexity weight using fuzzy theory. First, a fuzzy inference system was constructed with use case transactions as the input and complexity weight as the output. Then fuzzy inference rules were established, based on the relation of complexity to the number of transactions, and applied to analyze the complexity weight. The research results show that the improved method can effectively overcome the deficiency of the traditional UCP method with respect to analyzing the complexity weight and avoid faulty classification; in addition, its estimation results are much closer to real-world situations.
Table 2.4 Comparison of estimations for software projects (for each project, e.g., Office: Traditional UCP (man-hours)/error vs. Improved UCP (man-hours)/error)
5. Arnold M, Pedross P. Software size measurement and productivity rating in a large-scale software development department. Proceedings of the 20th International Conference on Software Engineering; IEEE Computer Society, Los Alamitos; 1998. pp. 490–93.
6. Robiolo G, Orosco R. Employing use cases to early estimate effort with simpler metrics. Innov Syst Softw Eng. 2008;4(1):31–43.
7. Robiolo G, Badano C, Orosco R. Transactions and paths: two use case based metrics which improve the early effort estimation. Proceedings of the International Symposium on Empirical Software Engineering and Measurement; IEEE, NJ, USA; 2009. pp. 422–25.
8. Ochodek M, Nawrocki J. Automatic transactions identification in use cases. Balancing Agility and Formalism in Software Engineering. 2008;5082(1):55–68.
9. Moataz A, Moshood O, Jarallah A. Adaptive fuzzy logic-based framework for software development effort prediction. Inf Softw Technol. 2005;47(1):31–48.
10. Zhou Y. Method of progress metrics in software development based on use case. Shanghai: Shanghai Normal University; 2006 (in Chinese).
ATPG Algorithm for Crosstalk Delay Faults
of High-Speed Interconnection Circuits
Yuling Shang and Pei Zhang
Abstract With the use of ultra-deep submicron technologies, crosstalk has become one of the major causes of signal integrity (SI) failures in high-speed circuits. Logic faults and time delays in high-speed circuits occur when crosstalk becomes severe, which leads to serious problems during the design verification and test phases. In this paper, a test vector generation algorithm for crosstalk delay faults, based on the maximal aggressor model and waveform sensitization, is proposed for analyzing the four types of crosstalk delay fault in high-speed interconnection circuits. By improving the traditional FAN algorithm, the proposed algorithm designates a victim line and maximally activates the corresponding aggressive lines so as to generate the maximum delay induced in a high-speed interconnection circuit in a worst-case scenario. The algorithm takes both the gate delay and the line delay into consideration, and two strategies, static priority and dynamic priority, are examined to achieve a more efficient delay test. The tests were verified on a standard C17 circuit, and the results show that test vectors for crosstalk delay faults in high-speed circuits can be generated by the proposed algorithm.
Keywords Signal Integrity (SI) • Crosstalk • Delay faults • FAN algorithm
With the fast development of integrated circuit (IC) design technology and fabrication processes, circuits have become more and more integrated at higher working frequencies. Crosstalk between adjacent lines resulting from the coupling effect is much more likely to occur with the use of ultra-deep submicron technology. Logic faults and time delays occur when the crosstalk becomes severe, which has already become a main cause of functional faults in high-speed circuits that use ultra-deep submicron technologies.
Y Shang ( * ) • P Zhang
CAT7504 Laboratory, School of Electronic Engineering and Automation, Guilin University of Electronic Technology, 541004 Guangxi, China
e-mail: shang_yuling@qq.com ; 312161945@qq.com
© Springer International Publishing Switzerland 2015
W.E Wong (ed.), Proceedings of the 4th International Conference on Computer
Engineering and Networks, DOI 10.1007/978-3-319-11104-9_3
The negative effects of crosstalk can be divided into two categories: glitch faults and delay faults. The crosstalk delay fault is summarized by Chen et al., who, considering the signal arrival time and the rise/fall time, build a delay model with 11 variable values to obtain a crosstalk delay ATPG algorithm [1]. A method of delay test generation taking the crosstalk effect into consideration has been studied to seek the maximum delay of circuits on the basis of a genetic algorithm and delay simulation [2]. Nonrobust sensitization was applied in the aforementioned study, only to discover that the delay faults caused by crosstalk are actually more related to the time delay distribution of the circuit under test (CUT). Min Yinghua et al. proposed the concept of waveform sensitization and indicated through theoretical analysis that nonrobustly testable circuits could also be waveform-sensitized [3]. The adoption of waveform sensitization and the introduction of delay information under circuit sensitization may not only be much closer to the actual behavior of the circuit, but may also reflect the delay faults caused by crosstalk. Zhang Yue et al. combine 9-value logic with time parameters to test the delay faults in critical circuits on the basis of a test generation algorithm for waveform-sensitized crosstalk delay faults [4]; that algorithm considers only line delay, not gate delay. In this paper, waveform sensitization technology is adopted considering both line and gate delays and is not limited to delay fault testing in critical circuits. With the improved automatic test pattern generation algorithm (FAN) and its advantages, such as unique sensitization, immediate implication, and multiple backtrace, the maximum aggression time can be obtained on the basis of the Maximal Aggressor Fault (MAF) model that designates the aggressive lines [5]. The four types of delay fault generated by interconnection line crosstalk in a circuit are analyzed. Replacing the D/D̄ variables with RI, RD, FI, FD, G1, and G0, the paper arrives at a mixture of 11-value logic and a time parameter. Then two strategies, static priority and dynamic priority, are analyzed to propose a new, efficient test vector generation algorithm.
The negative effects of crosstalk can be divided into two categories: glitch faults and delay faults; the latter is the focus of this paper. A delay fault occurs when neighboring lines jump simultaneously. If the directions are opposite, the jump time will obviously increase, which is called crosstalk deceleration; if the two lines jump in the same direction, the jump time will obviously decrease, which is called crosstalk acceleration. Delay faults are classified into four types [6]:
1. In an ascending delay acceleration fault (RI), both the victim line and the aggressive line jump in the forward direction at some point, so that the jump time of the victim line in the forward direction decreases considerably.
2. In an ascending delay deceleration fault (RD), the victim line jumps in the forward direction, the aggressive line jumps in the backward direction, and the jump time of the victim line in the forward direction increases considerably.
3. A descending delay acceleration fault (FI) occurs when both the victim line and the aggressive line jump in the backward direction and the jump time of the victim line in the backward direction decreases considerably.
4. A descending delay deceleration fault (FD) occurs when the victim line jumps in the backward direction and the aggressive line in the forward direction, and the jump time of the victim line in the backward direction increases considerably. A delay fault caused by crosstalk can lead to severe delay errors in the circuit.
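The four fault types above can be captured in a small classifier that follows the list directly (same-direction jumps accelerate the victim transition, opposite-direction jumps decelerate it):

```python
from enum import Enum

class CrosstalkFault(Enum):
    RI = "ascending delay acceleration"   # victim rises, aggressor rises
    RD = "ascending delay deceleration"   # victim rises, aggressor falls
    FI = "descending delay acceleration"  # victim falls, aggressor falls
    FD = "descending delay deceleration"  # victim falls, aggressor rises

def classify(victim_rising, aggressor_rising):
    """Map victim/aggressor transition directions to the crosstalk delay
    fault type: same direction -> RI/FI, opposite direction -> RD/FD."""
    if victim_rising:
        return CrosstalkFault.RI if aggressor_rising else CrosstalkFault.RD
    return CrosstalkFault.FI if not aggressor_rising else CrosstalkFault.FD
```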
In dealing with high-speed interconnection circuits, the algorithm proposed in this paper, which improves on the traditional FAN algorithm [7], analyzes the four classes of crosstalk delay fault using the maximal aggressor model and waveform sensitization technology. Taking gate delay and line delay information into consideration, two strategies, static priority and dynamic priority, are studied to obtain an ATPG algorithm that generates test vectors with a time parameter for the target fault.
After a fault target is selected, the aggressive lines are activated so as to produce the largest delay.
In generating a test vector, this paper discusses two test vector generation strategies and makes certain improvements to the FAN algorithm according to the characteristics of crosstalk faults. Figure 3.1 shows a flow chart of the test vector generation algorithm. The full process mainly includes three phases: sensitization of the victim line, determination of circuit timing information, and generation of the test vector with a time parameter.
The steps of the flow chart for the test vector generation algorithm are as follows
3.3.2.1 Sensitization of Victim Line
First, in the course of analyzing a test circuit, a line is chosen from the circuit as the fault (victim) line, and its corresponding maximal aggressor set is found as well; then the fault boundary of the fault line is determined from the test circuit to form a fault boundary set; finally, the specified fault for the fault line is chosen from the four fault types: rising delay acceleration fault RI, rising delay deceleration fault RD, falling delay acceleration fault FI, and falling delay deceleration fault FD.
Then, when the delay information of the test circuit has been obtained, it is added to the circuit under test. First, a static timing analysis of the gate-level netlist generated from the test circuit is conducted, which yields the circuit's line delay and gate delay information. Then, both the fault line time and the aggressive line signal time are assumed to be T.
After the victim line is sensitized and the circuit timing information obtained, the process of generating a test vector with time is conducted; the steps are as follows:
Fig 3.1 Flow chart of test vector generation algorithm
1. Depending on the fault type of the fault line, first obtain the jump information of the aggressive lines to determine their assignment; then determine the assignment of all other input lines, in addition to the fault line, on the fault boundary closest to the primary output; finally, insert the assigned lines directly into the goal set using the dynamic priority strategy.
2. Check whether the current target set is empty; if so, go to step (3); if not, remove a line from the set and check whether the selected line is a fan-out line. If so, add it to the fan-out target set; if not, check whether the line is a primary input line. If it is, put it in the backtrace target set; if it is neither, perform a pushback: immediately push back through a gate and subtract the gate delay of that logic gate and the delay of the line traversed from the current time value T. Repeat step (2) for the pushed-back line. If it still does not satisfy the requirements of a fan-out line or an input line, keep passing through the next gate and pushing back to the next line until it does.
3. Check whether the fan-out target set is empty; if so, go to step (4); if not, take the fan-out line closest to the primary output as a target and delete it from the fan-out target set. First, check whether the chosen fan-out line has two or more assignment requirements. If it has only one assignment requirement, the line is pushed back following step (2); if there are two or more assignment requirements, make a judgment for each assignment of the fan-out line: take a value and validate it through the forward and backward implication process. If the fault boundary does not change, disappear, or enter a contradictory state, then the value is feasible and is taken as the value of the line; then continue pushing back with step (2). If the fault boundary does change, disappear, or enter a contradictory state, the value is not feasible.
4. Finally, check whether the fault line is the primary output line. If so, the fault has spread to the primary output, and the fault signal can be sampled after the sampling time t is determined. With the identified assignments of each line, obtain the remaining unassigned lines through backward and forward implication in the test circuit using the static priority strategy, yielding a test vector with a time coefficient. If the fault line is not the primary output line, then propagate the fault forward through the current logic gate to the next one, and add the gate delay of that logic gate and the delay of the traversed line to the current time value T. The fault boundary will change at that time, while there is no need to re-determine the jump information of the aggressive lines. Repeat the entire test vector generation process starting with step (1), propagating the fault forward to the primary output line step by step, and then determine the remaining unassigned lines in the test circuit; finally, a test vector with time is obtained.
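The timing bookkeeping of steps (2) and (4) can be sketched as follows; the gate and line delay values are placeholders for whatever the static timing analysis of the netlist provides:

```python
def push_back_time(current_t, gate_delay, line_delay):
    """Step (2): backtracing through one gate subtracts that gate's delay
    and the traversed line's delay from the current time value T."""
    return current_t - gate_delay - line_delay

def propagate_time(current_t, gate_delay, line_delay):
    """Step (4): propagating the fault forward through one gate adds the
    gate's delay and the traversed line's delay to T."""
    return current_t + gate_delay + line_delay
```

Pushing back and then propagating through the same gate restores the original time value, which is what keeps the backward and forward phases of the algorithm consistent.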