
Network Modeling

and methods for describing and measuring networks and proving properties of networks are well-developed. There are a variety of network models in GISystems, which are primarily differentiated by the topological relationships they maintain. Network models can act as the basis for location through the process of linear referencing. Network analyses such as routing and flow modeling have to some extent been implemented, although there are substantial opportunities for additional theoretical advances and diversified application.


Keywords

Capacity: The largest amount of flow permitted on an edge or through a vertex.

Graph Theory: The mathematical discipline related to the properties of networks.

Linear Referencing: The process of associating events with a network datum.

Network: A connected set of edges and vertices.

Network Design Problems: A set of combinatorially complex network analysis problems where routes across (or flows through) the network must be determined.

Network Indices: Comparisons of network measures designed to describe the level of connectivity, level of efficiency, level of development, or shape of a network.

Topology: The study of those properties of networks that are not altered by elastic deformations. These properties include adjacency, incidence, connectivity, and containment.

Artificial Neural Networks

An artificial neural network (commonly just neural network) is an interconnected assemblage of artificial neurons that uses a mathematical or computational model of theorized mind and brain activity, attempting to parallel and simulate its powerful capabilities for knowledge acquisition, recall, synthesis, and problem solving. It originated from the concept of the artificial neuron introduced by McCulloch and Pitts in 1943. Over the past six decades, artificial neural networks have evolved from the preliminary development of the artificial neuron, through the rediscovery and popularization of the back-propagation training algorithm, to the implementation of artificial neural networks using dedicated hardware. Theoretically, artificial neural networks are highly robust with respect to data distribution, and can handle incomplete, noisy, and ambiguous data. They are well suited for modeling complex, nonlinear phenomena ranging from financial management and hydrological modeling to natural hazard prediction. The purpose of this article is to introduce the basic structure of artificial neural networks, review their major applications in geoinformatics, and discuss future and emerging trends.

Background

The basic structure of an artificial neural network involves a network of many interconnected neurons. These neurons are very simple processing elements that individually handle pieces of a big problem. A neuron computes an output using an activation function that considers the weighted sum of all its inputs. These activation functions can have many different types, but the logistic sigmoid function is quite common:

f(x) = 1 / (1 + e^(-x))    (1)

where f(x) is the output of a neuron and x represents the weighted sum of inputs to the neuron.

As suggested by Equation 1, the principles of computation at the neuron level are quite simple, and the power of neural computation relies upon the use of distributed, adaptive, and nonlinear computing. The distributed computing environment is realized through the massively interconnected neurons that share the load of the overall processing task. The adaptive property is embedded in the network by adjusting the weights that interconnect the neurons during the training phase. The use of an activation function in each neuron introduces nonlinear behavior to the network.
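To make Equation 1 concrete, the following sketch computes the output of a single artificial neuron in Python (an illustrative example, not from the original chapter): it forms the weighted sum of the inputs plus a bias and passes the result through the logistic sigmoid.

import math

def sigmoid(x):
    # Logistic sigmoid activation (Equation 1): f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias=0.0):
    # x is the weighted sum of all inputs to the neuron
    x = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(x)

# Example: a neuron with three inputs
print(neuron_output([0.5, -1.2, 3.0], [0.8, 0.1, -0.4]))  # activation in (0, 1)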

There are many different types of neural networks, but most fall into one of the five major paradigms listed in Table 1. Each paradigm has advantages and disadvantages depending upon the specific application. A detailed discussion of these paradigms can be found elsewhere (e.g., Bishop, 1995; Rojas, 1996; Haykin, 1999; and Principe et al., 2000). This article will concentrate upon multilayer perceptron networks due to their technological robustness and popularity (Bishop, 1995).

Figure 1 illustrates a simple multilayer perceptron neural network with a 4×5×4×1 structure. This is a typical feed-forward network that allows the connections between neurons to flow in only one direction. Information flow starts from the neurons in the input layer and then moves along weighted links to neurons in the hidden layers for processing. The weights are normally determined through training. Each neuron contains a nonlinear activation function that combines information from all neurons in the preceding layers. The output layer is a complex function of the inputs and internal network transformations.
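As a small illustration (not from the chapter), the sketch below performs one feed-forward pass through a 4×5×4×1 multilayer perceptron in Python; the weights here are random placeholders, since in practice they would be determined through training.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, biases):
    # Propagate an input vector through each layer in turn (feed-forward only).
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

rng = np.random.default_rng(0)
layer_sizes = [4, 5, 4, 1]                      # the 4x5x4x1 structure of Figure 1
weights = [rng.normal(size=(n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

print(forward(np.array([0.2, 0.5, 0.1, 0.9]), weights, biases))  # a single output in (0, 1)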

The topology of a neural network is critical for neural computing to solve problems with reasonable training time and performance. For any neural computing, training time is always the biggest bottleneck, and thus every effort is needed to make training effective and affordable. Training time is a function of the complexity of the network topology, which is ultimately determined by the combination of hidden layers and neurons. A trade-off is needed to balance the processing purpose of the hidden layers and the training time needed. A network without a hidden layer is only able to solve a linear problem. To tackle a nonlinear problem, a reasonable number of hidden layers is needed. A network with one hidden layer has the power to approximate any function provided that the number of neurons and the training time are not constrained (Hornik, 1993). But in practice, many functions are difficult to approximate with one hidden layer, and thus Flood and Kartam (1994) suggested using two hidden layers as a starting point.


Table 1. Classification of artificial neural networks (Source: Haykin, 1999)

1. Feed-forward neural network
   Multi-layer perceptron: consists of multiple layers of processing units that are usually interconnected in a feed-forward way.
   Radial basis functions: as powerful interpolation techniques, they are used to replace the sigmoidal hidden layer transfer function in multi-layer perceptrons.
   Kohonen self-organizing networks: use a form of unsupervised learning to map points in an input space to coordinates in an output space.

2. Recurrent network
   Simple recurrent networks, Hopfield network: contrary to feed-forward networks, recurrent neural networks use bi-directional data flow and propagate data from later processing stages to earlier stages.

3. Stochastic neural networks
   Boltzmann machine: introduces random variations, often viewed as a form of statistical sampling, into the network.

4. Modular neural networks
   Committee of machines: uses several small networks that cooperate or compete to solve problems.

5. Other types
   Dynamic neural networks: not only deal with nonlinear multivariate behavior, but also include learning of time-dependent behavior.
   Cascading neural networks: begin their training without any hidden neurons; when the output error reaches a predefined error threshold, the network adds a new hidden neuron.
   Neuro-fuzzy networks: embed a fuzzy inference system, which introduces processes such as fuzzification, inference, aggregation, and defuzzification into a neural network.

Figure 1. A simple multilayer perceptron (MLP) neural network with a 4×5×4×1 structure


The number of neurons for the input and output layers can be defined according to the research problem identified in an actual application. The critical aspect is the choice of the number of neurons in the hidden layers and hence the number of connection weights. If there are too few neurons in the hidden layers, the network may be unable to approximate very complex functions because of insufficient degrees of freedom. On the other hand, if there are too many neurons, the network tends to have a large number of degrees of freedom, which may lead to overtraining and hence poor performance in generalization (Rojas, 1996). Thus, it is crucial to find the 'optimum' number of neurons in the hidden layers that adequately captures the relationship in the training data. This optimization can be achieved by using trial and error or several systematic approaches such as pruning and constructive algorithms (Reed, 1993).

Training is a learning process by which the connection weights are adjusted until the network is optimal. This involves the use of training samples, an error measure, and a learning algorithm. Training samples are presented to the network with input and output data over many iterations. They should not only be large in size but also be representative of the entire data set to ensure sufficient generalization ability. There are several different error measures, such as the mean squared error (MSE), the mean squared relative error (MSRE), the coefficient of efficiency (CE), and the coefficient of determination (r²) (Dawson and Wilby, 2001). The MSE has been most commonly used. The overall goal of training is to minimize errors through either a local or a global learning algorithm. Local methods adjust the weights of the network by using its localized input signals and localized first- or second-order derivatives of the error function. They are computationally effective for changing the weights in a feed-forward network but are susceptible to local minima in the error surface. Global methods are able to escape local minima in the error surface and thus can find optimal weight configurations (Maier and Dandy, 2000).

By far the most popular algorithm for optimizing feed-forward neural networks is error back-propagation (Rumelhart et al., 1986). This is a first-order local method. It is based on the method of steepest descent, in which the descent direction is equal to the negative of the gradient of the error. The drawback of this method is that its search for the optimal weights can become caught in local minima, thus resulting in suboptimal solutions. This vulnerability can increase when the step size taken in weight space becomes too small. Increasing the step size can help escape local error minima, but when the step size becomes too large, training can fall into oscillatory traps (Rojas, 1996). If that happens, the algorithm will diverge and the error will increase rather than decrease.

Apparently, it is difficult to find a step size that can balance high learning speed and minimization of the risk of divergence. Recently, several algorithms have been introduced to help adapt step sizes during training (e.g., Maier and Dandy, 2000). In practice, however, a trial-and-error approach has often been used to optimize the step size. Another sensitive issue in back-propagation training is the choice of initial weights. In the absence of any a priori knowledge, random values should be used for the initial weights.
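To make the role of the step size concrete, here is a minimal, illustrative sketch (not from the chapter) of a single steepest-descent weight update; full back-propagation would additionally compute the gradients layer by layer via the chain rule.

def update_weights(weights, gradients, step_size):
    # Steepest descent: move each weight opposite to the local error gradient.
    # A step size that is too small can leave the search caught near local minima,
    # while one that is too large can cause oscillation and divergence (see text).
    return [w - step_size * g for w, g in zip(weights, gradients)]

weights = [0.8, -0.3, 0.5]
gradients = [0.12, -0.40, 0.05]   # dE/dw for each weight, as supplied by back-propagation
print(update_weights(weights, gradients, step_size=0.1))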

The stop criteria for learning are very important. Training can be stopped when the specified total number of iterations or a targeted error value is reached, or when the training is at the point of diminishing returns. It should be noted that using a low error level is not always a safe way to stop the training because of possible overtraining or overfitting. When this happens, the network memorizes the training patterns, thus losing the ability to generalize. A highly recommended method for stopping the training is cross validation (e.g., Amari et al., 1997). In doing so, an independent data set is required for test purposes, and close monitoring of the error in the training set and the test set is needed. Once the error in the test set increases, the training should be stopped, since the point of best generalization has been reached.
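A sketch of this early-stopping rule (illustrative only; train_one_epoch, test_error, and the network's copy() method are hypothetical stand-ins for whatever training and evaluation routines are used):

def train_with_early_stopping(network, train_one_epoch, test_error, max_epochs=1000):
    # Stop as soon as the error on the independent test set stops decreasing,
    # i.e., at the estimated point of best generalization.
    best_error, best_state = float("inf"), None
    for _ in range(max_epochs):
        train_one_epoch(network)
        err = test_error(network)
        if err >= best_error:
            return best_state          # test-set error no longer improving: stop
        best_error, best_state = err, network.copy()
    return best_state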

Applications

Artificial neural networks are applicable when a relationship between the independent variables and dependent variables exists. They have been applied to such generic tasks as regression analysis, time series prediction and modeling, pattern recognition and image classification, and data processing. The applications of artificial neural networks in geoinformatics have concentrated on a few major areas, such as pattern recognition and image classification (Bruzzone et al., 1999), hydrological modeling (Maier and Dandy, 2000), and urban growth prediction (Yang, 2009). The following paragraphs provide a brief review of these areas.

Pattern recognition and image classification are among the most common applications of artificial neural networks in remote sensing, and the documented cases overwhelmingly relied upon the use of multi-layer perceptron networks. The major advantages of artificial neural networks over conventional parametric statistical approaches to image classification, such as the Euclidean, maximum likelihood (ML), and Mahalanobis distance classifiers, are that they are distribution-free, with less severe statistical assumptions needed, and that they are suitable for integrating data from various sources (Foody, 1995). Artificial neural networks have been found to be accurate in the classification of remotely sensed data, although improvements in accuracy have generally been small or modest (Campbell, 2002).

Artificial neural networks are being used increasingly to predict and forecast water resource variables such as algae concentration, nitrogen concentration, runoff, total volume, discharge, or flow (Maier and Dandy, 2000; Dawson and Wilby, 2001). Most of the documented cases used a multi-layer perceptron that was trained using the back-propagation algorithm. Based on the results obtained so far, there is little doubt that artificial neural networks have the potential to be a useful tool for the prediction and forecasting of water resource variables.

The application of artificial neural networks to urban predictive modeling is a new but rapidly expanding area of research (Yang, 2009). Neural networks have been used to compute development probability by integrating a set of predictive variables as the core of a land transformation model (e.g., Pijanowski et al., 2002) or a cellular automata-based model (e.g., Yeh and Li, 2003). All the applications documented so far involved the use of the multilayer perceptron network, a grid-based modeling framework, and a Geographic Information System (GIS) that was loosely or tightly integrated with the network for input data preparation, modeling validation, and analysis.

Conclusion and Future Trends

Based on many applications documented in recent years, the prospects of artificial neural networks in geoinformatics seem quite promising. On the other hand, the capability of neural networks tends to be oversold as an all-inclusive 'black box' capable of formulating an optimal solution to any problem regardless of network architecture, system conceptualization, or data quality. Thus, this field has been characterized by inconsistent research design and poor modeling practice. Several researchers have recently emphasized the need to adopt a systematic approach to effective neural network model development that considers problem conceptualization, data preprocessing, network architecture design, training methods, and model validation in a sequential mode (e.g., Maier and Dandy, 2000; Dawson and Wilby, 2001; Yang, 2009).

There are a few areas where further research is needed. Firstly, there are many arbitrary decisions involved in the construction of a neural network model, and therefore there is a need to develop guidance that helps identify the circumstances under which particular approaches should be adopted and how to optimize the parameters that control them. For this purpose, more empirical, inter-model comparisons and rigorous assessment of neural network performance with different inputs, architectures, and internal parameters are needed. Secondly, data preprocessing is an area where little guidance can be found. There are many theoretical assumptions that have not been confirmed by empirical trials. It is not clear how different preprocessing methods could affect the model outcome. Future investigation is needed to explore the impact of data quality and of different methods of data division, data standardization, or data reduction. Thirdly, continuing research is needed to develop effective strategies and probing tools for mining the knowledge contained in the connection weights of trained neural network models for prediction purposes. This can help uncover the 'black-box' construction of the neural network, thus facilitating the understanding of the physical meanings of spatial factors and their contribution to geoinformatics. This should help improve the success of neural network applications for problem solving in geoinformatics.

References

Amari, S., Murata, N., Muller, K. R., Finke, M., & Yang, H. H. (1997). Asymptotic statistical theory of overtraining and cross-validation. IEEE Transactions on Neural Networks, 8(5), 985-996.

Bishop, C. (1995). Neural Networks for Pattern Recognition (p. 504). Oxford: Oxford University Press.

Bruzzone, L., Prieto, D. F., & Serpico, S. B. (1999). A neural-statistical approach to multitemporal and multisource remote-sensing image classification. IEEE Transactions on Geoscience and Remote Sensing, 37(3), 1350-1359.

Campbell, J. B. (2002). Introduction to Remote Sensing (3rd ed.) (p. 620). New York: The Guilford Press.

Dawson, C. W., & Wilby, R. L. (2001). Hydrological modelling using artificial neural networks. Progress in Physical Geography, 25(1), 80-108.

Flood, I., & Kartam, N. (1994). Neural networks in civil engineering. II: Systems and application. Journal of Computing in Civil Engineering, 8(2), 149-162.

Foody, G. M. (1995). Land cover classification using an artificial neural network with ancillary information. International Journal of Geographical Information Systems, 9, 527-542.

Haykin, S. (1999). Neural Networks: A Comprehensive Foundation (p. 842). Prentice Hall.

Hornik, K. (1993). Some new results on neural-network approximation. Neural Networks, 6(8), 1069-1072.

Kwok, T. Y., & Yeung, D. Y. (1997). Constructive algorithms for structure learning in feed-forward neural networks for regression problems. IEEE Transactions on Neural Networks, 8(3), 630-645.

Maier, H. R., & Dandy, G. C. (2000). Neural networks for the prediction and forecasting of water resources variables: A review of modeling issues and applications. Environmental Modelling & Software, 15, 101-124.

Pijanowski, B. C., Brown, D., Shellito, B., & Manik, G. (2002). Using neural networks and GIS to forecast land use changes: A land transformation model. Computers, Environment and Urban Systems, 26, 553-575.

Principe, J. C., Euliano, N. R., & Lefebvre, W. C. (2000). Neural and Adaptive Systems: Fundamentals Through Simulations (p. 565). New York: John Wiley & Sons.

Reed, R. (1993). Pruning algorithms - a survey. IEEE Transactions on Neural Networks, 4(5), 740-747.

Rojas, R. (1996). Neural Networks: A Systematic Introduction (p. 502). Berlin: Springer-Verlag.

Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel Distributed Processing. Cambridge: MIT Press.

Yang, X. (2009). Artificial neural networks for urban modeling. In M. Madden (Ed.), Manual of Geographic Information Systems. American Society for Photogrammetry and Remote Sensing (in press).

Yeh, A. G. O., & Li, X. (2003). Simulation of development alternatives using neural networks, cellular automata, and GIS for urban planning. Photogrammetric Engineering and Remote Sensing, 69(9), 1043-1052.

Key Terms

Architecture: The structure of a neural network, including the number and connectivity of neurons. A network generally consists of an input layer, one or more hidden layers, and an output layer.

Back-Propagation: The training algorithm for feed-forward, multi-layer perceptron networks, which works by propagating errors back through a network and adjusting weights in the direction opposite to the largest local gradient.

Error Space: The n-dimensional surface in which the weights in a network are adjusted by the back-propagation algorithm to minimize model error.

Feed-Forward: A network in which all the connections between neurons flow in one direction, from an input layer, through hidden layers, to an output layer.

Multilayer Perceptron: The most popular network type, which consists of multiple layers of interconnected processing units arranged in a feed-forward way.

Neuron: The basic building block of a neural network. A neuron sums the weighted inputs, processes them using an activation function, and produces an output response.

Pruning Algorithm: A training algorithm that optimizes the number of hidden layer neurons by removing or disabling unnecessary weights or neurons from a large network that is initially constructed to capture the input-output relationship.

Training/Learning: The process by which the connection weights are adjusted until the network is optimal.




Chapter XVII

Spatial Interpolation

Xiaojun Yang

Florida State University, USA


Abstract

Spatial interpolation is a core component of data processing and analysis in geoinformatics. The purpose of this chapter is to discuss the concept and techniques of spatial interpolation. It begins with an overview of the concept and a brief history of spatial interpolation. Then, the chapter reviews some commonly used interpolators that are specifically designed for working with point data, including inverse distance weighting, kriging, triangulation, Thiessen polygons, radial basis functions, minimum curvature, and trend surface. This is followed by a discussion of some criteria that are proposed to help select an appropriate interpolator; these criteria include global accuracy, local accuracy, visual pleasantness and faithfulness, sensitivity, and computational intensity. Finally, future research needs and new, emerging applications are presented.

Spatial interpolation is a core component of data processing and analysis in geographic information systems. It is also an important subject in spatial statistics and geostatistics. By definition, spatial interpolation is the procedure of predicting the values of properties at un-sampled, missing, or obscured locations from known sites. The rationale behind interpolation is the very common observation that values at points close together in space are more likely to be similar than values at points farther apart. This observation has been formulated as the First Law of Geography (Tobler, 1970). Data sources for spatial interpolation are normally scattered sample points, such as soil profiles, water wells, meteorological stations, or counts of species, people, or market outlets, that are summarized by basic spatial units such as grids or administrative areas. These discrete data are interpolated into continuous surfaces that can be quite useful for data exploration, spatial analysis, and environmental modeling (Yang and Hodler, 2000). On the other hand, we often think about some kinds of data as continuous rather than discrete even though we can only measure them discretely. Thus, spatial interpolation allows us to view and predict data over space in an intuitive way, thereby making the real-world decision-making process easier.

The history of spatial interpolation is quite long, and a group of optimal interpolation methods using geostatistics can be traced to the early 1950s, when Danie G. Krige, a South African mining engineer, published his seminal work on the theory of Kriging (Krige, 1951). Krige's empirical work to evaluate mineral resources was formalized in the 1960s by the French engineer Georges Matheron (1961). By now, there are several dozen interpolators that have been designed to work with point, line, or polygon data (Lancaster and Salkauskas, 1986; Isaaks and Srivastava, 1989; Bailey and Gatrell, 1995). While this chapter focuses on the methods designed for working with point data, readers who are interested in the group of interpolators for line or polygon data should refer to Hutchinson (1989), Tobler (1979), or Goodchild and Lam (1980).

The purpose of this chapter is to introduce the concept of spatial interpolation, review some commonly used interpolators that are specifically designed for point data, provide several criteria for selecting an appropriate interpolator, and discuss further research needs.

Spatial Interpolation Methods

There is a rich pool of spatial interpolation approaches available, such as distance weighting, fitting functions, triangulation, rectangle-based interpolation, and neighborhood-based interpolation. These methods vary in their assumptions, local or global perspective, deterministic or stochastic nature, and exact or approximate fitting. Thus, they may require different types of input data and varying computation time and, most importantly, generate surfaces with varying accuracy and appearance. This article will focus on several methods that have been widely used in geographic information systems.

Inverse Distance Weighting (IDW)

Inverse distance weighting (IDW) is one of the most popular interpolators and has been used in many different fields. It is a local, exact interpolator. The weight of a sampled point value is inversely proportional to its geometric distance from the estimated value, raised to a specific power or exponent. This has been considered a direct implementation of Tobler's First Law of Geography (Tobler, 1970). Normally, a search space or kernel is used to help find a local neighborhood. The size of the kernel or the minimum number of sample points specified in the search can affect IDW's performance significantly (Yang and Hodler, 2000). Every effort should be made to ensure that the estimated values depend upon sample points from all directions and are free from the cluster effect. Because the range of interpolated values cannot exceed the range of observed values, it is important to position sample points to include the extremes of the field. The choice of the exponent can affect the results significantly, as it controls how the weighting factors decline as distance increases. As the exponent approaches zero, the resultant surface approaches a horizontal planar surface; as it increases, the output surface approaches the nearest neighbor interpolator with polygonal surfaces. Overall, inverse distance weighting is a fast interpolator, but its output surfaces often display a sort of 'bull's-eye' or 'sinkhole-like' pattern.
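A minimal sketch of inverse distance weighting in Python (illustrative only; production GIS implementations add the search kernels, minimum point counts, and other controls discussed above):

def idw(x, y, samples, power=2.0):
    # Estimate the value at (x, y) from (xi, yi, value) samples using
    # weights inversely proportional to distance raised to the given exponent.
    num, den = 0.0, 0.0
    for xi, yi, vi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:                   # exact interpolator: honor coincident sample points
            return vi
        w = 1.0 / (d2 ** (power / 2))
        num += w * vi
        den += w
    return num / den

# Example: three elevation samples, estimated at (1.0, 1.0)
pts = [(0.0, 0.0, 120.0), (2.0, 0.0, 130.0), (0.0, 3.0, 110.0)]
print(idw(1.0, 1.0, pts))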


Kriging

Kriging was developed by Georges Matheron in 1961 and named in honor of Danie G. Krige because of his pioneering work in the 1950s. As a technique that is firmly grounded in geostatistical theory, Kriging has been highly recommended as an optimal method of spatial interpolation for geographic information systems (Oliver and Webster, 1990; Burrough and McDonnell, 1998).

Any estimation made by Kriging has three major components: the drift or general trend of the surface, random but spatially correlated small deviations from the trend, and random noise, which are estimated independently. Drift is estimated using a mathematical equation that most closely resembles the overall trend in the surface. The distance weights for interpolation are determined using the variogram model that is chosen from a set of mathematical functions describing the spatial autocorrelation. The appropriate model is chosen by matching the shape of the curve of the experimental variogram to the shape of the curve of the mathematical function, either spherical, exponential, linear, Gaussian, hole-effect, quadratic, or rational quadratic. The random noise is estimated using the nugget variance, a combination of the error variance and the micro variance, or the variance of the small-scale structure.
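As an illustration (not from the chapter), an experimental semivariogram can be computed by pairing the sample points, taking one-half of the squared value difference for each pair, and averaging within distance bins; a model curve (spherical, exponential, etc.) is then fitted to these binned points. A sketch in Python:

import numpy as np

def experimental_semivariogram(coords, values, n_bins=10):
    # coords: (n, 2) array of x, y coordinates; values: (n,) array of the sampled property.
    coords, values = np.asarray(coords, float), np.asarray(values, float)
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    gamma = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)                 # each pair of points once
    dist, gamma = dist[iu], gamma[iu]
    edges = np.linspace(0.0, dist.max(), n_bins + 1)
    bins = np.minimum(np.digitize(dist, edges) - 1, n_bins - 1)
    lags = [dist[bins == b].mean() for b in range(n_bins) if (bins == b).any()]
    semis = [gamma[bins == b].mean() for b in range(n_bins) if (bins == b).any()]
    return np.array(lags), np.array(semis)   # average lag and semivariance per bin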

Kriging can be further classified as ordinary, universal, simple, disjunctive, indicator, or probability Kriging. Ordinary Kriging assumes that the general trend is a simple, unknown constant. Trends that vary, with unknown parameters and covariates, form the models for Universal Kriging. Whenever the trend is completely known, whether constant or not, it forms the model for Simple Kriging. Disjunctive Kriging predicts the value at a specific location by using functions of variables. Indicator Kriging predicts the probability that the estimated value is above a predefined threshold value. Probability Kriging is quite similar to Indicator Kriging, but it uses co-kriging in the prediction. Co-kriging refers to models based on more than one variable.

The use of Kriging as a popular geostatistical interpolator is generally robust. When it is difficult to use the points in a neighborhood to estimate the form of the variogram, the variogram model used may not be entirely appropriate, and Kriging may be inferior to other methods. In addition, Kriging is quite computationally intensive.

Triangulation

(Okabe et al., 1992). Almost all systems use the second method, namely, Delaunay triangulation, which allows three points to form the corners of a triangle only when the circle that passes through them contains no other points. Delaunay triangulation minimizes the longest side of any triangle, thus producing triangles that are as close to being equilateral as possible. Each triangle is treated as a planar surface. The equation for each triangular facet is determined exactly from the surface property of interest at the three vertices. Once the surface is defined in this way, the values for the interpolated data points can be calculated. This method works best when sample points are evenly distributed. Data sets that contain sparse areas result in distinct triangular facets on the output surface. Triangulation is a fast interpolator, particularly suitable for very large data sets.

Thiessen Polygons (or Voronoi Polygons)

Thiessen polygons were independently discovered in several fields, including climatology and geography. They are named after a climatologist who used them to perform a transformation from point climate stations to watersheds. Thiessen polygons are constructed around a set of points in such a way that the polygon boundaries are equidistant from the neighboring points, and they estimate the values at surrounding points from a single point observation. In other words, each location within a polygon is closer to the contained point than to any other points. Thiessen polygons are not difficult to construct and are particularly suitable for discrete data, such as rain gauge data. The accuracy of Thiessen polygons is a function of sample density. One major limitation of Thiessen polygons is that they produce polygons with shapes unrelated to the phenomena under investigation. In addition, Thiessen polygons are not effective for representing continuous variables.
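Because every location inside a Thiessen polygon is closer to its generating sample point than to any other, Thiessen (Voronoi) interpolation amounts to nearest-neighbour assignment; a minimal, illustrative sketch in Python:

def thiessen_estimate(x, y, samples):
    # Assign the value of the nearest sample point (xi, yi, value) to the location (x, y).
    return min(samples, key=lambda s: (x - s[0]) ** 2 + (y - s[1]) ** 2)[2]

gauges = [(0.0, 0.0, 12.5), (5.0, 1.0, 8.0), (2.0, 6.0, 15.2)]   # e.g., rain gauge readings
print(thiessen_estimate(1.0, 2.0, gauges))   # value of the closest gauge (12.5 here)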

Radial Basis Functions

The radial basis functions include a diverse group of interpolation methods. All are exact interpolators, attempting to honor each data point. They solve the interpolation by constructing a set of basis functions that define the optimal set of weights to apply to the data points (Carlson and Foley, 1991). The most commonly used basis functions include the inverse multiquadric equation, multilog function, multiquadric equation, natural cubic spline, and thin plate spline (Golden Software, Inc., 2002). Franke (1982) rated the multiquadric equation as the most impressive basis function in terms of fitting ability and visual smoothness. The multiquadric equation method was originally proposed for topographical mapping by Hardy (1971).

Minimum Curvature

Minimum curvature has been widely used in the geosciences. It is a global interpolator in which all available points formally participate in the calculation of values for each estimated point. This method applies a two-dimensional cubic spline function to fit a surface to the set of input values. The computation requires a number of iterations to adjust the surface so that the final result has a minimum amount of curvature. The interpolated surface produced by the minimum curvature method is analogous to a thin, linearly elastic plate passing through each of the data values so that the displacement at these points is equal to the observation to be satisfied (Briggs, 1974). The minimum curvature method is not an exact interpolator. It is quite fast and tends to produce smooth surfaces (Yang and Hodler, 2000).

Trend Surface

Trend surface solves the interpolation by using one or more polynomial functions, depending upon a global or local perspective. Global polynomial interpolation fits a single function to the entire set of sample points and creates a slowly varying surface, which may help capture some coarse-scale physical processes such as air pollution or wind direction. It is a quick, deterministic interpolator. Local polynomial interpolation uses many polynomials, each within specified overlapping neighborhoods. The shape, the number of points used, and the search kernel configuration can affect the interpolation performance. While global polynomial interpolation is useful for identifying long-range trends, local polynomial interpolation can capture the short-range variation in the dataset.
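As an illustration (not from the chapter), a global first-order trend surface can be fitted by ordinary least squares; the sketch below fits the plane z = a + b*x + c*y to the sample points with NumPy.

import numpy as np

def trend_surface(coords, values):
    # Fit a first-order (planar) trend surface z = a + b*x + c*y by least squares
    # and return a function that evaluates the fitted surface at new locations.
    coords, values = np.asarray(coords, float), np.asarray(values, float)
    A = np.column_stack([np.ones(len(coords)), coords[:, 0], coords[:, 1]])
    (a, b, c), *_ = np.linalg.lstsq(A, values, rcond=None)
    return lambda x, y: a + b * x + c * y

surface = trend_surface([(0, 0), (10, 0), (0, 10), (10, 10)], [5.0, 7.0, 9.0, 11.0])
print(surface(5.0, 5.0))   # value of the fitted plane at the grid center (8.0 here)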

Criteria for Selecting an Interpolator

Although the pool of spatial interpolation methods is rich and some general guidelines are available, it is often difficult to select an appropriate one for a specific application. For many applications, users will have to do at least some minimum experiments before a final selection can be made. The following five criteria are recommended to guide this selection: (1) global accuracy, (2) local accuracy, (3) visual pleasantness and faithfulness, (4) sensitivity, and (5) computational intensity.


Global Accuracy

There are two well-established methods for measuring global accuracy: validation and cross-validation. Validation can use all data points or just a subset. Assuming that the sampling process is without error, all data points can be used to measure the degree to which an interpolator honors the control data. But this does not necessarily guarantee the accuracy for unsampled points. Instead of using all data points, the entire set of samples can be split into two subsets, one for interpolation (the training subset) and the other for validation (the test subset). This can provide insight into the accuracy for unsampled sites. However, splitting samples may not be realistic for small datasets, because the interpolation may suffer from insufficient training points. The actual tools used for validation include statistical measures, such as the residual and the root mean square error, and/or graphical summaries, such as the scatterplot and the Quantile-Quantile (QQ) plot. The residual is the difference between the known and estimated point values. The root mean square error (RMSE) is determined by calculating the deviations of the estimated points from their known true values, summing the squared deviations, and then taking the square root of the mean. A scatter plot gives a graphical summary of predicted values versus true values, and a QQ plot shows the quantiles of the difference between the predicted and measured values against the corresponding quantiles from a standard normal distribution. Cross-validation removes each observation point, one at a time, estimates the value for this point using the remaining data, and then computes the residual; this procedure is repeated for a second point, and so on. Cross-validation outputs various summary statistics of the errors that measure the global accuracy.
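A sketch (illustrative, not from the chapter) of leave-one-out cross-validation for any point interpolator, reporting the root mean square error of the residuals; the idw function from the earlier sketch could be passed in as the interpolator.

import math

def cross_validate(samples, interpolate):
    # Drop each (x, y, value) sample in turn, estimate it from the remaining points,
    # and return the RMSE of the residuals (estimated minus known values).
    residuals = []
    for i, (x, y, known) in enumerate(samples):
        rest = samples[:i] + samples[i + 1:]
        residuals.append(interpolate(x, y, rest) - known)
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Usage with the IDW sketch shown earlier:
# rmse = cross_validate(pts, idw)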

Local Accuracy

While global accuracy provides a bulk measure, it provides no information on how accuracy varies across the surface. Yang and Hodler (2000) argued that in many cases the relative variation of errors can be more useful than absolute error measures. When the global errors are identical, an interpolated model with evenly distributed errors is much more reliable than one with highly concentrated errors. Local accuracy can be characterized with a method proposed by Yang and Hodler (2000), which involves a sequence of steps: computation of residuals for all data points, interpolation of the residual data using an exact method (such as Kriging or radial basis functions), drawing a 2D or 3D map, and analyzing the visual pattern.

Visual Pleasantness and Faithfulness

Visual faithfulness is defined as the closeness of an interpolated surface to reality (Yang and Hodler, 2000). In some applications, particularly in the domain of scientific visualization, an analyst may value the visual appearance of the output surface much more than its statistical accuracy. With the increasing role of scientific visualization in geographic information systems, the measure of visual faithfulness has gained practical significance. To evaluate the level of visual faithfulness, the surface reality is needed to serve as a reference. While locating a reference for the continuous surface can be difficult or impossible for some variables (such as temperature or noise), analysts can focus on some specific aspects, such as surface discontinuities or extremes, that can be inferred from direct or indirect sources.

Sensitivity

Interpolation methods are based on different theoretical assumptions. Once certain parameters or conditions are altered, these interpolators can demonstrate different statistical behavior and visual appearance. The sensitivity of an interpolator with respect to these alterations is critical in assessing the suitability of a method as, preferably, the interpolator should be rather stable with respect to changes in the parameters (Franke, 1982). It is impossible to incorporate every combination of parameters in an evaluation. Based on a comprehensive literature review, the sensitivity of a method with respect to varying sample size and/or search conditions for some local interpolators should be targeted (Yang and Hodler, 2000).

Computational Intensity

Computational intensity is measured as the amount of processing time needed in gridding or triangulation. Different interpolation methods have different levels of complexity due to their algorithm designs, which can lead to quite a variation in their computational intensity. This difference can be magnified when working with large sample sets that normally take much longer to process. Triangulation is always a fast method. Global interpolators are generally faster than local methods. Smooth methods are normally faster than exact methods. Deterministic interpolators are generally faster than stochastic methods (e.g., different types of Kriging).

Figure 1. Three-dimensional perspectives of the models generated from the same data set with different algorithms. The original surface model (i.e., USGS 7.5' Digital Elevation Model) is shown as the basis for further comparison (Source: Yang and Hodler, 2000)

Conclusion and Future Research

Over the past several decades, a rich pool of spatial interpolation methods has been developed in several different fields, and some have been implemented in major geographic information system (GIS), spatial statistics, or geostatistics software packages. While software developers tend to implement more interpolators and offer a full range of options for each method included, practitioners often struggle to find an appropriate method for their specific applications due to the lack of practical guidelines. The criteria discussed in this article should be useful to guide this selection. Nevertheless, there are some challenges in this field, and perhaps the biggest one is that there are some arbitrary decisions involved, particularly for Kriging and other local methods, which may ultimately affect the performance. Therefore, further research is needed to develop guidance that helps identify the circumstances under which particular methods should be adopted and how to optimize the parameters that control them. For this purpose, more empirical, inter-model comparisons and rigorous assessment of interpolation performance with different variables, sample sizes, and internal parameters are needed.

References

Bailey, T. C., & Gatrell, A. C. (1995). Interactive Spatial Data Analysis. Longman.

Briggs, I. C. (1974). Machine contouring using minimum curvature. Geophysics, 39(1), 39-48.

Burrough, P. A., & McDonnell, R. A. (1998). Principles of Geographical Information Systems (p. 333). Oxford University Press.

Carlson, R. E., & Foley, T. A. (1991). Radial Basis Interpolation Methods on Track Data. Lawrence Livermore National Laboratory, UCRL-JC-1074238.

Franke, R. (1982). Scattered data interpolation: tests of some methods. Mathematics of Computation, 38(157), 181-200.

Golden Software, Inc. (2002). Surfer 8: User's Guide (p. 640). Golden, Colorado.

Goodchild, M. F., & Lam, N. (1980). Areal interpolation: A variant of the traditional spatial problem. Geo-Processing, 1, 297-312.

Hardy, R. L. (1971). Multivariate equations of topography and other irregular surfaces. Journal of Geophysical Research, 76(8), 1905-1915.

Isaaks, E. H., & Srivastava, R. M. (1989). An Introduction to Applied Geostatistics (p. 592). Oxford University Press.

Krige, D. G. (1951). A statistical approach to some basic mine valuation problems on the Witwatersrand. Journal of Chemistry, Metallurgy, and Mining Society of South Africa, 52(6), 119-139.

Lancaster, P., & Salkauskas, K. (1986). Curve and Surface Fitting: An Introduction (p. 280). Academic Press.

Matheron, G. (1962). Traité de Géostatistique appliquée, tome 1 (1962), tome 2 (1963). Paris: Editions Technip.

Okabe, A., Boots, B., & Sugihara, K. (1992). Spatial Tessellations (p. 532). New York: John Wiley & Sons.

Oliver, M. A., & Webster, R. (1990). Kriging: A method of interpolation for geographic information systems. International Journal of Geographical Information Systems, 4(3), 313-332.

Tobler, W. R. (1970). A computer movie simulating urban growth in the Detroit region. Economic Geography, 46, 234-240.

Tobler, W. R. (1979). Smooth pycnophylactic interpolation for geographical regions. Journal of the American Statistical Association, 74, 519-530.

Yang, X., & Hodler, T. (2000). Visual and statistical comparisons of surface modeling techniques for point-based environmental data. Cartography and Geographic Information Science, 17(2), 165-175.

Key Terms

Cross Validation: A validation method in which observations are dropped one at a time, the value for the dropped point is estimated using the remaining data, and then the residual is computed; this procedure is repeated for a second point, and so on. Cross-validation outputs various summary statistics of the errors that measure the global accuracy.

Geostatistics: A branch of statistical estimation concentrating on the application of the theory of random functions to estimating natural phenomena.

Sampling: The technique of acquiring sufficient observations that can be used to obtain a satisfactory representation of the phenomenon being studied.

Search: A procedure to find the sample points that will actually be used in a value estimation for a local interpolator.

Semivariogram (or Variogram): A traditional semivariogram plots one-half of the square of the differences between samples versus their distance from one another; it measures the degree of spatial autocorrelation that is used to assign weights in Kriging interpolation. A semivariogram model is one of a series of mathematical functions that are permitted for fitting the points on an experimental variogram.

Spatial Autocorrelation: The degree of correlation between a variable value and the values of its neighbors; it can be measured with a few different methods, including the use of the semivariogram.

Spatio-Temporal Object Modeling

Purdue University, USA


so an ordinary attribute can be extended with spatial and/or temporal dimensions flexibly. An associated object query language has also been provided to support the manipulation of spatio-temporal information. The design of the model as well as the query language has given rise to a uniform representation of spatial and temporal dimensions, thereby offering a new option for the development of a spatio-temporal GIS to facilitate urban/environmental change tracking and analysis.

Overview

Spatio-temporal databases are a subject of increasing interest, and the research community is dedicating considerable effort in this direction. Natural as well as man-made entities can be referenced with respect to both space and time. The integration of the spatial and temporal components to create a seamless spatio-temporal data model is a key issue that can improve spatio-temporal data management and analysis immensely (Langran, 1992).

Numerous spatio-temporal models have been developed. Notable among these are the snapshot model (time-stamping layers, Armstrong 1988), the space-time composite (time-stamping attributes, Langran and Chrisman 1988), the spatio-temporal object model (ST-objects, Worboys 1994), the event-based spatio-temporal data model (ESTDM, Peuquet and Duan 1995), and the three-domain model (Yuan 1999). The snapshot model incorporates temporal information with spatial data by timestamping layers that are considered as tables or relations. The space-time composite incorporates temporal information by timestamping attributes, and usually only one aspatial attribute is chosen in this process. In contrast to the snapshot and the space-time composite models, Worboys's (1994) spatio-temporal object model includes persistent object identifiers by stacking changed spatio-temporal objects on top of existing ones. Yuan (1999) argues that temporal Geographic Information Systems (GIS) lack a means to handle spatio-temporal identity through semantic links between spatial and temporal information. Consequently, three views of the spatio-temporal information, namely spatial, temporal, and semantic, are provided and linked to each other. Peuquet and Duan's ESTDM (1995) employs the event as the basic notion of change in raster geometric maps. Changes are recorded using an event list in the form of sets of changed raster grid cells. In fact, event-oriented perspectives that capture the dynamic aspects of spatial domains are shown to be as relevant for data modeling as object-oriented perspectives (Worboys and Hornsby 2004; Worboys, 2005).

Despite these significant efforts in spatio-temporal data modeling, challenges still exist:

a. Object attributes (e.g., lane closure on a road) can change spatially, temporally, or both spatially and temporally, and so facilities should be provided to model all of these cases, i.e., spatial changes, temporal changes, and spatio-temporal changes;

b. Object attributes may change asynchronously at the same, or different, locations.

As an additional modeling challenge, object attributes can be of different types (e.g., speed limit is of the integer type and pavement material is of the string type). To overcome these challenges, a spatio-temporal object model that exploits a special mechanism, parametric polymorphism, seems to provide an ideal solution. In general, using parametric polymorphism, it is possible to create classes that operate on data without having to specify the data's type. In other words, a generic type can be formulated by lifting any existing type. A simple parametric class (type) can be expressed as follows:

class CP<parameter> {
    parameter a;
    …
};

where parameter is a type variable. The type variable can be of any built-in type, which may be used in the CP-declaration.

The notion of parametric polymorphism is not totally new, as it has been introduced in object-oriented databases (OODBs) (Bertino et al., 1998). This form of polymorphism allows a function to work uniformly on a range of types that exhibit some common structure (Cardelli and Wegner, 1985). Consequently, in addition to the basic spatial and temporal types shown in Figure 1, three parametric classes, Spatial<T>, Temporal<T>, and ST<T>, are defined in Huang and Yao (2003). These represent, respectively, a spatial type that contains the distribution of all sections of T (e.g., all locations of sections of pavement material of type String), a temporal type that contains the history of all episodes of T, and a spatio-temporal type that contains both the distribution and history of an object (e.g., pavement material along a road during the past six months). The parameter type T can be any built-in type such as Integer, String, or Struct. Corresponding operations inside the parameterized types are also provided to traverse the distribution of attributes. Thus, polymorphism allows users to define the type of any attribute as spatially varying, temporally varying, or spatio-temporally varying, thereby supporting spatio-temporal modeling. The three parameterized types form a valuable extension to an object model that meets modeling requirements and queries with respect to spatio-temporal data. Some details about these three types are provided below.

The Spatial<T> Type

As the value of an attribute (e.g., speed, pavement quality, and number of lanes) can change over space (e.g., along a linear route), a parameterized type called Spatial<T> is defined to represent the distribution of this attribute (Huang, 2003).

The distribution of an attribute of type T is expressed as a list of value-location pairs:

{(val1, loc1), (val2, loc2), …, (valn, locn)}

where val1, …, valn are legal values of type T, and loc1, …, locn are locations associated with the attribute values. In this way, a Spatial<T> type adds a spatial dimension to T.

The Temporal<T> Type

Just as Spatial<T> can maintain the change of attributes over space, Temporal<T> represents attribute changes over time (Huang and Claramunt, 2005).

A Temporal object is a temporally ordered collection of value-time interval pairs:

{(val1, tm1), (val2, tm2), …, (valn, tmn)}

where val1, …, valn are legal values of type T, and tm1, …, tmn are time intervals such that tmi ∩ tmj = ∅ for i ≠ j and 1 ≤ i, j ≤ n. The parameter type T can be any built-in type, and hence this type is raised to a temporal type.

Figure 1. Extended spatio-temporal object types: the parametric types Spatial<T>, Temporal<T>, and ST<T>, together with the Time-interval type and the geometric types (Geometry, GeometryCollection, Point, LineString, Polygon, Points, LineStrings, Polygons)

The ST<T> Type

ST<T> represents the change of attributes over both space and time. An ST object is modeled as a collection of value-location-time triplets:

{(val1, loc1, tm1), (val2, loc2, tm2), …, (valn, locn, tmn)}

where val1, …, valn are legal values of type T, and loc1, …, locn are line intervals such that loci ∩ locj = ∅ for i ≠ j and 1 ≤ i, j ≤ n, and tm1, …, tmn are time intervals such that tmi ∩ tmj = ∅ for i ≠ j and 1 ≤ i, j ≤ n. Each triplet, i.e., (vali, loci, tmi), represents a state which associates a location and time with an object value.
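For readers more familiar with mainstream languages, the same parametric lifting can be sketched with Python generics; this is an illustrative analogy only (the class layout, field names, and the location/time encodings below are hypothetical, not the model's actual implementation).

from dataclasses import dataclass
from typing import Generic, List, Tuple, TypeVar

T = TypeVar("T")                  # the lifted attribute type (e.g., int for max_speed)
Location = Tuple[float, float]    # e.g., a linear-referencing interval (from-mile, to-mile)
TimeInterval = Tuple[str, str]    # e.g., (start, end) timestamps

@dataclass
class Spatial(Generic[T]):
    # Distribution of an attribute over space: value-location pairs
    states: List[Tuple[T, Location]]

@dataclass
class Temporal(Generic[T]):
    # History of an attribute over time: value-time interval pairs
    states: List[Tuple[T, TimeInterval]]

@dataclass
class ST(Generic[T]):
    # Change over both space and time: value-location-time triplets
    states: List[Tuple[T, Location, TimeInterval]]

# Example: a space-varying speed limit and a space-time-varying lane closure
max_speed = Spatial[int]([(60, (0.0, 2.0)), (45, (2.0, 4.0))])
lane_closure = ST[int]([(1, (2.0, 3.5), ("2008-05-01", "2008-05-10"))])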

An Example Utilizing Parameterized Types

Using the Spatial<T>, Temporal<T>, and ST<T> types, a spatio-temporal class, e.g., route, is defined as follows:

class route
    (extent routes key ID)
{
    attribute String ID;
    attribute String category;
    attribute Spatial<String> pvmt_quality;     // user-defined space-varying attr
    attribute Spatial<Integer> max_speed;       // user-defined space-varying attr
    attribute Temporal<Integer> traffic_light;  // user-defined time-varying attr
    attribute ST<Integer> lane_closure;         // user-defined space-time-varying attr
    attribute ST<Integer> accident;             // user-defined space-time-varying attr
    attribute Linestring shape;
};

The types of the attributes pvmt_quality and max_speed are represented as Spatial<String> and Spatial<Integer>. These attributes are capable of representing the distribution of sections along a route by associating locations with their value changes. The attribute traffic_light is raised to Temporal<Integer>, which captures how its value changes over time. The attribute lane_closure is raised to ST<Integer>, which associates both location and time with a lane closure event; the same is true for the attribute accident. The other three attributes (i.e., ID, category, and shape) remain intact. The Spatial<T>, Temporal<T>, and ST<T> types therefore allow users to choose which attributes to raise.

SPATIO-TEMPORAL QUERY LANGUAGE

A query language provides an advanced interface for users to interact with the data stored in a database. A formal interface similar to ODMG's Object Query Language (OQL) (Cattell, 2000), namely Spatio-temporal OQL (STOQL), is designed to support the retrieval of spatio-temporal information.

STOQL extends OQL facilities to retrieve spatial, temporal, and spatial-temporal information. The states in a distribution or a history are extracted through iteration in the OQL from-clause. Constraints in the where-clause can then be applied to the value, timestamp, or location of a state through corresponding operations. Finally, the result is obtained by means of the projection operation in the select-clause.

Given the above conventions, STOQL provides some syntactical extensions to OQL to manipulate space-varying, time-varying, and space-time-varying information represented by Spatial<T>, Temporal<T>, and ST<T>, respectively.


In Table 1, time1 and time2 are expressions of type Timestamp, and location1 and location2 are expressions of type Point. e is an expression of type Spatial<T>, Temporal<T>, or ST<T>, and es is an expression denoting a chronological state, a state within a distribution, or a state within a distribution history.

The following examples illustrate how spatio-temporal queries related to the route class are expressed using STOQL.

Example 1 (spatial selection). Find the pavement quality from mile 2 to mile 4.
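A STOQL formulation of this query might read roughly as follows; the select target and the [mile 2, mile 4] interval literal are assumptions made for illustration, while the variable r_qlty and the overlaps test follow the explanation below:

select r_qlty.val
from routes as route, route.pvmt_quality! as r_qlty
where r_qlty.loc.overlaps([mile 2, mile 4])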

In this query, the variable r_qlty, ranging over the distribution of route.pvmt_quality, represents a pavement quality segment. r_qlty.loc returns the location of a pavement quality segment. The overlaps operator in the where-clause specifies the condition on a pavement quality segment's location.

Example 2 (spatial projection). Display the location of accidents on route 1 where the maximum speed is 60 mph and the pavement quality is poor.

select r_acdt.loc
from routes as route, route.accident! as r_acdt,
     route.pvmt_quality! as r_qlty, route.max_speed! as r_speed
where route.ID="1" and r_speed.val = 60 and r_qlty.val = "poor" and
      (r_speed.loc.intersection(r_qlty.loc)).contains(r_acdt.loc)

The intersection operation in the where-clause obtains the common part of r_speed.loc and r_qlty.loc.

Example 3 (temporal projection). Show the time of all accidents on route 1.

select r_acdt.tm
from routes as route, route.accident! as r_acdt
where route.ID="1"

In this query, the variable r_acdt, ranging over the states of route.accident, represents an accident. r_acdt.tm returns the time of a selected accident.

Table 1. Syntactical constructs in STOQL

STOQL             Spatial<T>        Temporal<T>                      ST<T>                            Result Type
[time1, time2]    -                 struct(start:time1, end:time2)   struct(start:time1, end:time2)   TimeInterval
e!                e.distribution    e.history                        e.distribution_history           List


Example 4 (spatial and temporal join). Find the accidents that occurred between 8:00 and 8:30 am on route 1 where the pavement quality is poor.

select r_acdt.loc
from routes as route, route.pvmt_quality! as r_qlty,
     route.accident! as r_acdt
where route.ID="1" and r_acdt.tm.overlaps([8am, 8:30am]) and
      r_qlty.val = "poor" and r_qlty.loc.contains(r_acdt.loc)

This query joins two events through the contains operator.

CONCLUSION AND FUTURE WORK

A generic spatio-temporal object model has been developed using parametric polymorphism. This mechanism allows any built-in type to be enhanced into a collection type that assigns a distribution, a history, or both to any typed attribute. In addition, an associated spatio-temporal object query language has been provided to manipulate space-time varying information.

While spatio-temporal data modeling is fundamental to spatio-temporal data organization, spatio-temporal data analysis is critical to time-based spatial decision making. Integrating spatio-temporal analysis with spatio-temporal data modeling is an effective means to model spatio-temporal phenomena and capture their dynamics. On the one hand, spatio-temporal changes can be tracked; on the other hand, spatio-temporal data analysis gains strong database support that facilitates data management, including the retrieval of the data needed by spatio-temporal mathematical analysis models.

REFERENCES

Armstrong, M. P. (1988). Temporality in spatial databases. In Proceedings of GIS/LIS'88 (San Antonio, TX).

edited by E. Jul (Brussels), pp. 41-66.

Cardelli, L., & Wegner, P. (1985). On understanding types, data abstraction and polymorphism. ACM Computing Surveys, 17(4), 471-523.

Cattell, R. G. (Ed.) (2000). The Object Data Standard: ODMG Object Model 3.0. San Diego, CA: Morgan Kaufmann Academic Press.

Huang, B. (2003). An object model with parametric polymorphism for dynamic segmentation. International Journal of Geographical Information Science, 17(4), 343-360.

Huang, B., & Claramunt, C. (2005). Spatiotemporal data model and query language for tracking land use change. Transportation Research Record: Journal of the Transportation Research Board.

Langran, G. (1992). Time in Geographic Information Systems. London: Taylor & Francis.

Langran, G., & Chrisman, N. (1988). A framework for temporal geographic information. Cartographica, 25, 1-14.

Peuquet, D., & Duan, N. (1995). An event-based spatio-temporal data model (ESTDM) for temporal analysis of geographical data. International Journal of Geographical Information Systems, 9, 7-24.


Worboys, M. (1994). A unified model of spatial and temporal information. Computer Journal, 37, 26-34.

Worboys, M. (2005). Event-oriented approaches to geographic phenomena. International Journal of Geographical Information Science, 19(1), 1-28.

Worboys, M., & Hornsby, K. (2004). From objects to events: GEM, the geospatial event model. In M. Egenhofer, C. Freksa, & H. Miller (Eds.), Proceedings of GIScience 2004, Lecture Notes in Computer Science, 3234 (pp. 327-343). Berlin: Springer.

Yuan, M. (1999). Use of a three-domain representation to enhance GIS support for complex spatiotemporal queries. Transactions in GIS, 3, 137-159.

KEY TERMS

Distribution: A list of value-location pairs representing spatial change.

History: A list of value-time pairs representing temporal change.

Parametric Polymorphism: The ability to write classes that operate on data without specifying the data's type.

Parametric Type: A type that has type parameters.

Polymorphism: The ability to take several forms. In object-oriented programming, it refers to the ability of an entity to refer at run-time to instances of various classes, and to the ability to call a variety of functions using exactly the same interface.

Spatial Change: The value of an attribute of a certain type (e.g., integer or string) changes at different locations.

Spatio-Temporal Change: The value of an attribute of a certain type (e.g., double or string) changes at different locations and/or different times.

Temporal Change: The value of an attribute of a certain type (e.g., integer or string) changes at different times.


in spatiotemporal representation, reasoning, database management, and modeling. However, there is not yet a full-scale, comprehensive temporal GIS available. Most temporal GIS technologies developed so far are either still in the research phase (e.g., TEMPEST, developed by Peuquet and colleagues at Pennsylvania State University in the United States) or have an emphasis on mapping (e.g., STEMgis, developed by Discovery Software in the United Kingdom).


Challenges and Critical Issues for Temporal GIS Research and Technologies

Dynamics are central to the understanding of physical and human geographies. In large part, temporal GIS development is motivated by the need to address the dynamic nature of the world. Most, if not all, temporal GIS technologies focus on visualization and animation techniques to communicate spatiotemporal information. Like maps for spatial data, visualization and animation provide an excellent means to inspect and identify changes in space and time. Nevertheless, recognizing spatiotemporal distribution and change is only one step towards an understanding of dynamics in geographic domains. The expectation of a temporal GIS goes beyond visual inspection. It demands a comprehensive suite of data management, analysis, and modeling functions to enable the transformation of spatiotemporal data into information that summarizes environmental dynamics, social dynamics, and their interactions.

Extensive literature exists on the conceptual and technological challenges in developing a temporal GIS, including books (Christakos et al. 2002; Peuquet, 2002; Wachowicz, 1999), collections (Egenhofer & Golledge, 1998; Frank et al. 2000), and articles (Dragicevic et al. 2001; López 2005; O'Sullivan, 2005; Peuquet, 2001; Shaw, Ladner & Abdelguerfi, 2002; Worboys & Hornsby, 2004; Yuan et al. 2004). Readers are advised to consult these and many other spatiotemporal publications for details. This chapter highlights the critical issues and major research challenges for conceptual and technological developments in temporal GIS.

CRITICAL ISSUES

What constitutes a temporal GIS needs to be addressed from three perspectives: (1) database representation and management; (2) analysis and modeling; and (3) geovisualization and communication. There were at least four commercial "temporal GIS" products available in 2005: DataLink, STEMgis, TerraSeer, and Temporal Analyst for ArcGIS. In addition, there are many open-source software packages for spatiotemporal visualization and analysis, such as STAR, UrbanSim, SLEUTH, and ArcHydro. However, most of these systems were designed for particular application domains and only address the three temporal GIS aspects based on their identified applications. Building upon all of the recent conceptual and technological advances in temporal GIS, researchers are now well positioned to examine the big picture of temporal GIS development, address critical issues from all three perspectives, and envision the next generation of spatiotemporal information technologies.

Database Representation and Management

Issues in spatiotemporal data modeling are discussed in depth in the literature; see, for example, Langran (1992), Peuquet (2001), Peuquet and Duan (1995), and Raper (2000). There is an apparent parallel between the GIS and database research communities in the strategies of incorporating time into their respective databases (Yuan, 1999). Both communities commonly adopt time-stamp approaches to attach temporal data to individual tables (Gadia & Vaishnav, 1985) or layers (Beller et al. 1991); to individual tuples (Snodgrass & Ahn, 1986) or spatial objects (Langran & Chrisman, 1988); or to individual values (Gadia & Yeung, 1988) or spatiotemporal atoms (Worboys, 1994). Figures 1 and 2 summarize the time-stamp approaches in both communities. Beyond the time-stamp approaches, researchers advocate activity-, event-, or process-based approaches to integrate spatial and temporal data (Kwan, 2004; Peuquet & Duan, 1995; Raper & Livingstone, 1995; Shaw & Wang, 2000; Worboys, 2005; Yuan, 2001b). These are just a few samples of the spatiotemporal data models proposed in the wealth of GIS literature.
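As a rough illustration of these granularities (the attribute names, dates, and values below are hypothetical, not drawn from the cited works), the same land-use fact can be time-stamped at three levels:

Layer/table level: one snapshot per time slice, e.g., LandUse_1990 and LandUse_2000 tables
Tuple level: parcel(id, land_use, valid_from, valid_to), with one row per change
Value level: land_use = {("farmland", [1990, 1999]), ("residential", [2000, now])}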
