High Performance Computing in Remote Sensing - Chapter 9


Chapter 9

An Introduction to Grids for Remote Sensing Applications

Craig A. Lee,

The Aerospace Corporation

Contents

9.1 Introduction
9.2 Previous Approaches
9.3 The Service-Oriented Architecture Concept
9.4 Current Approaches
9.4.1 Web Services
9.4.2 Grid Architectures
9.4.3 End-User Tools
9.5 Current Projects
9.5.1 Infrastructure Projects
9.5.2 Scientific Grid Projects
9.6 Future Directions
References

This chapter introduces fundamental concepts for grid computing, Web services, and service-oriented architectures. We briefly discuss the drawbacks of previous approaches to distributed computing and how they are addressed by service architectures. After presenting the basic service architecture components, we discuss current Web service implementations and how grid services are built on top of them to enable the design, deployment, and management of large, distributed systems. After discussing best practices and emerging standards for grid infrastructure, we also discuss some end-user tools. We conclude with a short review of some scientific grid projects whose science goals are directly relevant to remote sensing.


9.1 Introduction

The concept of distributed computing has been around since the development of networks allowed many computers to interact. The current notion of grid computing, however, as a field of distributed computing, has been enabled by the pervasive availability of these devices and the resources they represent. In much the same way that the World Wide Web has made it easy to distribute Web content, even to PDAs and cell phones, and engage in user-oriented interactions, grid computing endeavors to make distributed computing resources easy to utilize for the spectrum of application domains [26].

Managing distributed resources for any computational purpose, however, is much more difficult than simply serving Web content. In general, grids and grid users require information and monitoring services to know what machines are available, what current loads are, and where faults have occurred. Grids also require scheduling capabilities, job submission tools, support for data movement between sites, and notification of job status and results. When managing sets of resources, the user may need workflow management tools. When managing such tasks across administrative domains, a strong security model is critical to authenticate user identities and enforce authorization policies.

With such fundamental capabilities in place, it is possible to support many different styles of distributed computing. Data grids will be used to manage access to massive data stores. Compute grids will connect supercomputing installations to allow coupled scientific models to be run. Task farming systems, such as SETI@Home [29] and Entropia [9], will be able to transparently distribute independent tasks across thousands of hosts. Supercomputers and databases will be integrated with cell phones to allow seamless interaction.

This level of managed resource sharing will enable resource virtualization. That is to say, computing tasks and computing resources will not have to be hard-wired in a fixed configuration to fixed machines to support a particular computing goal. Such flexibility will also support the dynamic construction of groups of resources and institutions into virtual organizations.

This flexibility and wide applicability to the scientific computing domain mean that grids have clear relevance to remote sensing applications. From a computational viewpoint, remote sensing could have a broad interpretation, covering both on-orbit sensors that are remote from the natural phenomena being measured and in-situ sensors that could be part of a sensor web. In both cases, the sensors are remote from the main computational infrastructure that is used to acquire, disseminate, process, and understand the data.

This chapter seeks to introduce grid computing technology in preparation for the chapters to follow. We will briefly review previous approaches to distributed computing before introducing the concept of service architectures. We then introduce current Web and grid service standards, along with some end-user tools for building grid applications. This is followed by a short survey of current grid infrastructure and science projects relevant to remote sensing. We conclude with a discussion of future directions.

9.2 Previous Approaches

The origins of the current grid computing approach can be traced to the late 1980s and early 1990s and the tremendous amount of research being done on parallel programming and distributed systems. Parallel computers in a variety of architectures had become commercially available, and networking hardware and software were becoming more widely deployed. To effectively program these new parallel machines, a long list of parallel programming languages and tools was being developed and evaluated to support both shared-memory and distributed-memory machines [30].

With the commercial availability of networking hardware, it soon became obvious that networked groups of machines could be used together by one parallel code as a distributed-memory machine. NOWs (networks of workstations) became widely used for parallel computation. Such efforts gave rise to the notion of cluster computing, where commodity processors are connected with commodity networks. Dedicated networks with private IP addresses are used to support parallel communication, access to files, and booting the operating system on all cluster nodes from a single OS "image" file. A special front-end machine typically provides the public interface.

Of course, networks were originally designed and built to connect heterogeneous sets of machines. Indeed, the field of distributed computing deals with essentially unbounded sets of machines where the number and location of machines may not be explicitly known. This is, in fact, a fundamental difference between clusters and grids as distributed computing infrastructures. In a cluster, the number and location of nodes are known and relatively fixed, whereas in a grid, this information may be relatively dynamic and may have to be discovered at run-time. Indeed, early distributed computing focused on basic capabilities such as algorithms for consensus, synchronization, and distributed termination detection, using whatever programming models were available.

At that time, systems such as the Distributed Computing Environment (DCE) [44] were built to facilitate the use of groups of machines, albeit in relatively static, well-defined, closed configurations. DCE used the notion of cells of machines in which users could run codes. Different mechanisms were used for inter-cell and intra-cell communication.

The Common Object Request Broker Architecture (CORBA) managed distributed systems by providing an object-oriented, client-side API that could access other objects through an Object Request Broker (ORB) [43]. CORBA is genuinely object-oriented and supports the key object-oriented properties such as encapsulation of state, inheritance, and methods that separate interfaces from implementations. To manage interfaces, CORBA used the notion of an Interface Definition Language (IDL), which could be used to produce stubs and skeletons. To use a remote object, a client would have to compile-in the required stub. If an object interface changed, the client would have to be recompiled. Interoperability was not initially considered by the CORBA standards, and many vendor ORBs were not compatible. Hence, deploying a distributed system of any size required deploying the same ORB everywhere. Interoperability was eventually addressed by the Internet Inter-ORB Protocol (IIOP) [42].

While CORBA provided novel capabilities at the time, some people argued that the CORBA paradigm was not sufficiently loosely coupled to manage open-ended distributed systems. Interfaces were brittle, vendor ORBs were non-interoperable, and no real distinction was made between objects and object instances.

At roughly the same time, the term metacomputing was being used to describe the use of aggregate computing resources to address application requirements. Research projects such as Globus [38], Legion [41], Condor [33], and UNICORE [46] were underway and beginning to provide initial capabilities using 'home-grown' implementations. The Globus user, for instance, could use the Globus Monitoring and Discovery Service (MDS) to find appropriate hosts. The Globus Resource Access Manager (GRAM) client would then contact the Gatekeeper to do authentication and request the local job manager to allocate and create the desired process. The Globus Access to Secondary Storage (GASS) service (now deprecated) could be used to read remote files. Eventually the term metacomputing was replaced by grid computing by way of the analogy with the electrical power grid, where power is available everywhere on demand.

While these early grid systems also provided novel capabilities, they nonetheless had design issues, too. Running a grid task using Globus was still oriented toward running a pre-staged binary identified by a known path name. Legion imposed an object model on all aspects of the system and applications regardless of whether it was necessary or not. The experience gained with these systems, however, was useful since they generated widespread interest and motivated further development.

9.3 The Service-Oriented Architecture Concept

With the rapid growth of interest in grid computing, it quickly became clear that the most widely supported models and best practices needed to be standardized. The lessons learned from these earlier approaches to distributed computing motivated the adoption and development of even more loosely coupled models that required even less a priori information about the computing environment. This approach is generally called service-oriented architecture. The fundamental notion is that hosts interact via services that can be dynamically discovered at run-time.

Of course, this requires some conventions and at least one well-known service to enable the discovery process. This is illustrated in Figure 9.1. First, an available service must register itself with a well-known registry service. A client can then query the registry to find appropriate services that match the desired criteria. One or more service handles are returned to the client, which may select among them. The service is then invoked with any necessary input data, and the results are eventually returned.
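This register/find/invoke cycle can be sketched as a minimal in-process registry. The class, the "ndvi-processor" service kind, and the scene name below are all invented for illustration; they correspond to no particular grid toolkit.

```python
class Registry:
    """A toy well-known registry: services publish handles, clients query."""

    def __init__(self):
        self._services = {}  # service kind -> list of service handles

    def register(self, kind, handle):
        # Step 1: an available service registers (publishes) itself.
        self._services.setdefault(kind, []).append(handle)

    def find(self, kind):
        # Steps 2-3: the registry finds handles matching the desired criteria.
        return list(self._services.get(kind, []))


registry = Registry()
registry.register("ndvi-processor", lambda scene: f"NDVI({scene})")

# The client queries the registry and selects among the handles returned.
handles = registry.find("ndvi-processor")
service = handles[0]

# Steps 4-5: the service is invoked with input data; the result is consumed.
result = service("landsat-scene-042")
print(result)
```

A real registry would of course be a remote, shared service; caching the handle after the first lookup (as discussed below) avoids repeating the query on every call.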


[Figure 9.1: The basic service-oriented interaction among a service, a registry, and a client: 1. Register (Publish); 2. Request; 3. Find; 4. Invoke; 5. Consume.]

This fundamental interaction identifies several key components that must all be clearly defined to make this all work:

- A common, base-level data representation. In many cases, the eXtensible Markup Language (XML) is used.
- A transport protocol for transferring service-related messages between the source and destination.
- An interaction protocol, whereby the requester makes the request, the service acknowledges the request, and eventually returns the results, assuming no failures occur.
- An on-line service description that captures all relevant properties of the service, e.g., the service name, what input data are required, what type of output data is produced, the running time, the service cost, etc.
- A discovery mechanism. It is untenable that every client must know a priori of every service it needs to use. Both client and service hosts may change for a variety of reasons, and being able to automatically discover and reconfigure the interactions is a key capability.

The clear benefit of this architectural concept is that service selection, location, and execution do not have to be hard-wired. The proper representations, descriptions, and protocols enable services to be potentially hosted in multiple locations, discovered, and utilized as necessary. Service discovery enables a system to improve flexibility and fault tolerance by dynamically finding service providers when necessary. Of course, a service does not have to be looked up every time it is used. Once discovered, a service handle could be cached and used many times. It is also possible to compose multiple services into a single, larger, composite service.

Another extremely important concept is that service architectures enable the management of shared resources. We use the term resource, in its most general sense, to mean all manners of machines, processes, networks, bandwidth, routing, files, data, databases, instruments, sensors, signals, events, subsystems comprised of sets of resources, etc. Such resources can be made accessible as services in a distributed environment, which also means they can be shared among multiple clients. With multiple clients potentially competing for a shared resource, there must be well-defined security models and policies in place to establish client identity and determine who, for example, gets to use certain machines, what services or processes they can run there, who gets to read/write databases containing sensitive information, who gets notified about available information, etc.

Being able to manage shared resources means that we can virtualize those resources. Systems that are 'stovepiped' essentially have system functions that are all hard-wired to specific machines, and the interactions among those machines are also hard-wired. Hence, it is very difficult to modify or extend those systems to interact, or interoperate, with systems that they were not designed for. If specific system functions are not hard-wired to certain machines, and can be scheduled on different hosts while still functioning within the larger system, we have virtualized that system function. Hence, if system functions are addressed by logical name, they can be used regardless of what physical machine they are running on.

Note that data can also be virtualized. Data storage systems can become 'global' if the file name space is mapped across multiple physical locations. Again, by using a logical name, a file can be read or written without the client having to know where the file is physically stored. In fact, files could be striped or replicated across multiple locations to enable improved performance and reliability.
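Data virtualization can be illustrated with a toy replica catalog that maps a logical file name to several physical locations. The logical/physical names, the `gsiftp://` URLs, and the selection policy below are all hypothetical.

```python
# Toy replica catalog: logical file name -> physical replica locations.
catalog = {
    "lfn://missions/terra/scene-001.hdf": [
        "gsiftp://storage-a.example.org/data/scene-001.hdf",
        "gsiftp://storage-b.example.org/mirror/scene-001.hdf",
    ],
}

def resolve(logical_name, prefer=None):
    """Pick a physical replica for a logical file name.

    The client never needs to know where the file actually lives;
    replication across sites gives performance and reliability.
    """
    replicas = catalog[logical_name]
    if prefer:
        for url in replicas:
            if prefer in url:  # toy policy: substring match on a preferred site
                return url
    return replicas[0]  # default: first registered replica

print(resolve("lfn://missions/terra/scene-001.hdf", prefer="storage-b"))
```

A production replica catalog would also track consistency between replicas and pick sites by load or proximity rather than by a caller-supplied hint.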

9.4 Current Approaches

This tremendous flexibility of virtualization is enabled by the notion of a service architecture, which is clearly embodied by design in Web services. As we shall see, each of the key components is represented. While the basic Web services stack provides the fundamental mechanism for discovery and client-server interaction, there are many larger issues associated with managing sets of distributed resources that are not addressed. Such distributed resource management, however, was the original focus of grid computing in support of large-scale scientific computations. After introducing the Web services stack, we shall look at how grid services build on top of it, and at some of the current and emerging end-user grid tools.

9.4.1 Web Services

Web services are not to be confused with Web pages, Web forms, or using a Web browser. Web services essentially define how a client and service can interact with a minimum of a priori information. The following Web service standards provide the necessary key capabilities [47]:

- The eXtensible Markup Language (XML): a way of structuring data using content-oriented markup symbols that has gained widespread use in many applications. Besides providing basic structuring for attribute values, XML namespaces and schemas can be defined for specific applications.
- The HyperText Transfer Protocol (HTTP): the transport protocol developed for the World Wide Web to move structured data from point A to point B. While its use for Web services is not mandatory, it is nonetheless widely used.
- SOAP: an interaction protocol with an XML-based message format. At the top level, SOAP messages consist of an envelope with delivery information, a header, and a message body with processing instructions. (We note that while SOAP was originally an acronym, it no longer is, since technically it is not used for objects.)
- The Web Services Description Language (WSDL): an XML-based service interface description format for interfaces, attributes, and other properties.
- Universal Description, Discovery, and Integration (UDDI): a platform-independent framework for publishing and discovering services. A WSDL service description may be published as part of a service's registration that is provided to the client upon look-up.
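As a rough illustration of the SOAP message layout just described (envelope, header, body), the snippet below assembles and parses a minimal envelope with Python's standard XML tools. The `GetSceneMetadata` operation and its payload are invented; real SOAP messages also carry operation namespaces, encoding rules, and fault elements omitted here.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

# Assemble a minimal SOAP envelope: envelope, header, and body.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
request = ET.SubElement(body, "GetSceneMetadata")  # illustrative operation
request.text = "landsat-scene-042"

message = ET.tostring(envelope, encoding="unicode")

# On the receiving side, parse out the body's processing instructions.
parsed = ET.fromstring(message)
payload = parsed.find(f"{{{SOAP_NS}}}Body/GetSceneMetadata")
print(payload.text)
```

In practice a SOAP library generates and validates these messages from a WSDL description rather than building them by hand.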

9.4.2 Grid Architectures

While Web services were originally motivated by the need for basic client-server discovery and interaction, grids were originally motivated by the need to manage groups of machines for scientific computation. Hence, from the beginning, the grid community was concerned with issues such as the scheduling of resources, advance reservations (scheduling resources ahead of time), co-scheduling (scheduling sets of resources), workflow management, virtual organizations, and a distributed security model to manage access across multiple administrative domains. Since scientific computing was the primary motivation, there was also an emphasis on high performance and managing massive data.

With the rapid emergence of Web services to address simple interactions in support of e-commerce, however, it quickly became evident that they would provide a widely accepted, standardized infrastructure on which to build. What Web services did not initially address adequately, however, was state management, lifetime management, and notification.

State management determines how data, i.e., state, are handled across successive service invocations. Clearly a client may need to have a sequence of related interactions with a remote service. The results of any one particular service invocation may depend on results from the previous invocations. This would suggest that, in general, services may need to be stateful. However, a service may be serving multiple clients with separate invocation streams. Also, successive service invocations may, in fact, involve the interaction of two different services in two different locations. Hence, given these considerations, managing the context or session state separately from the service, such that the service itself is stateless rather than stateful, has distinct advantages, such as supporting workflow management and fault tolerance.

There are several design choices for how to go about this. A simple avenue is to carry all session state in each service invocation message. This would allow services to be stateless, enabling better fault tolerance, because multiple servers could be used since they don't encapsulate any state. However, this approach is only reasonable for applications with small data sets, such as simple e-commerce interactions. For scientific applications where data sets may involve megabytes, gigabytes, or more, this is simply not scalable.

Another approach is to use a service factory to create a transient service instance that manages all state relevant to a particular client and invocation context. After the initial call, a new service handle is returned to the client that is used for all subsequent interactions with the transient service. While this approach may be somewhat more scalable, it means that all data are hidden in the service instance.

WS-Context [48] is yet another approach, where explicit context structures can be defined that capture all relevant context and session information for a set of related services and calls. While context structures can be passed by value (as part of a message), they can also be referenced by a URI (Uniform Resource Identifier) and accessed through a separate Context Service that manages a store of context structures. Contexts are created with a begin command and destroyed with a complete command. A timeout value can also be set for a context.

A fourth, similar approach is the Web Services Resource Framework (WSRF) [12], where all relevant data, local or remote, can be managed as resources. Resources are accessed through a WS-Resource-qualified endpoint reference that is essentially a 'network-wide' pointer to a WS-Resource. Such endpoints may be returned as a reply to a service request, returned from a query to a service registry, or returned from a request to a resource factory to create a new WS-Resource. In fact, WSRF does not define specific mechanisms for creating resources. Nonetheless, the lifetime of a resource can be explicitly managed. A resource may be immediately destroyed with an explicit destroy request message, or through a scheduled destruction at a future time.
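A factory that creates stateful, lifetime-limited resources, in the spirit of the transient-service and WSRF approaches above, might be sketched as follows. The API and the `epr:` handle format are invented for illustration and correspond to no particular toolkit.

```python
import itertools
import time

class ResourceFactory:
    """Creates stateful resources addressed by a handle, each with a lifetime."""

    _ids = itertools.count(1)

    def __init__(self):
        self._resources = {}  # handle -> (state, expiry time)

    def create(self, initial_state, lifetime_s=60.0):
        handle = f"epr:{next(self._ids)}"  # toy stand-in for an endpoint reference
        self._resources[handle] = (initial_state, time.monotonic() + lifetime_s)
        return handle

    def get(self, handle):
        state, expiry = self._resources[handle]
        if time.monotonic() > expiry:  # scheduled destruction at a future time
            del self._resources[handle]
            raise KeyError(f"{handle} expired")
        return state

    def destroy(self, handle):  # explicit destroy request
        self._resources.pop(handle, None)


factory = ResourceFactory()
h = factory.create({"session": "client-7", "rows_done": 1024}, lifetime_s=5.0)
print(factory.get(h)["rows_done"])
factory.destroy(h)
```

The same lease idea underlies the service containers discussed next: a resource (or service) that does not renew its lifetime is reclaimed without any distributed garbage collection.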

Equally important as the management of state is the management of services themselves. Services can be manually installed on particular hosts and registered with a registry, but this can become untenable for even moderately sized systems. If a process or host should crash, identifying the problem and manually rebooting the system can be tedious. Hence, there is clearly a need to be able to dynamically install, boot, and terminate services. For this reason, the concept of service containers was developed. Services are typically deployed within a container and have a specific time-to-live. If a service's time-to-live is not occasionally extended, it will eventually be terminated by the container, thereby reclaiming the resources (host memory and cycles) for other purposes without the need for a distributed garbage collection mechanism.

Event notification is an essential part of any distributed system [19]. Events can be considered to be simply small messages that have the semantics of being delivered and acted upon as quickly as possible. Hence, events are commonly used to asynchronously signal any system occurrences that have a time-sensitive nature. Simple, atomic events can be used to represent occurrences such as process completion, failure, or heartbeats during execution. Events could also be represented by attribute-value pairs with associated metadata, such as a change in a sensor value that exceeds some threshold. Events could also have highly structured, compound attributes, such as the 'interest regions' in a distributed simulation. (Interest regions can be used by simulated entities to 'advertise' what types of simulated events are of interest.) Regardless of the specific representation, events have producers and consumers.

In simple systems, event producers and consumers may be explicitly known to one another and be connected point-to-point. Many event systems offer topic-oriented publish/subscribe, where producers (publishers) and consumers (subscribers) use a named channel to distribute events related to a well-known topic. In contrast, content-oriented publish/subscribe delivers events by matching an event's content (attributes and values) to content-based subscriptions posted by consumers.
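The two publish/subscribe styles can be contrasted in a few lines. The broker below is a deliberately simplified, in-process sketch, not a model of any particular event system; the topic name and threshold are invented.

```python
class Broker:
    def __init__(self):
        self.topic_subs = {}    # topic name -> list of subscriber callbacks
        self.content_subs = []  # (predicate over event attributes, callback)

    def subscribe_topic(self, topic, callback):
        self.topic_subs.setdefault(topic, []).append(callback)

    def subscribe_content(self, predicate, callback):
        self.content_subs.append((predicate, callback))

    def publish(self, topic, event):
        # Topic-oriented: deliver to everyone on the named channel.
        for cb in self.topic_subs.get(topic, []):
            cb(event)
        # Content-oriented: match the event's attributes and values.
        for predicate, cb in self.content_subs:
            if predicate(event):
                cb(event)


broker = Broker()
seen = []
broker.subscribe_topic("sensor/temperature", seen.append)
broker.subscribe_content(lambda e: e["value"] > 40.0,
                         lambda e: seen.append(("threshold-exceeded", e["value"])))
broker.publish("sensor/temperature", {"sensor": "t-17", "value": 41.5})
print(seen)
```

A real event system distributes the broker itself, but the two matching rules, channel name versus attribute predicate, are the essential difference.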

In the context of grid service architectures, WSRF uses WS-Notification [28], which supports event publishers, consumers, topics, subscriptions, etc., for XML-based messages. Besides defining NotificationProducers and NotificationConsumers that can exchange events, WS-Notification also supports the notion of subscriptions to WSRF Resource Properties. That is to say, a client can subscribe to a remote (data) resource and be notified when the resource value changes. In conjunction with WS-Notification, WS-Topics presents XML descriptions of topics and associated metadata, while WS-BrokeredNotification defines an intermediary service to manage subscriptions.

Alongside all of these capabilities is an integral security model. This is critical in a distributed computing infrastructure that may span multiple administrative domains. Security requires that authentication, authorization, privacy, data integrity, and non-repudiation be enforced. Authentication establishes a user's identity, while authorization establishes what they can do. Privacy ensures that data cannot be seen and understood by unauthorized parties, while data integrity ensures that data cannot be maliciously altered even though it may be seen (regardless of whether it is understood). Non-repudiation between two partners to a transaction means that neither partner can later deny that the transaction took place.

These capabilities are commonly provided by the Grid Security Infrastructure (GSI) [40]. GSI uses public/private key cryptography rather than simply passwords. User A's public key can be widely distributed. Any User B can use this public key to encrypt a message to User A. Only User A can decrypt the message using the private key. In essence, public/private keys make it possible to digitally 'sign' information. GSI uses this concept to build a certificate that establishes a user's identity. A GSI certificate has a subject name (user or object), the public key associated with that subject, the name of the Certificate Authority (CA) that signed the certificate certifying that the public key and identity belong to the subject, and the digital signature of the named CA.
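The signing idea behind such a certificate can be illustrated with textbook RSA on toy numbers (never usable as real key sizes). The certificate fields mirror those listed above, but every value, name, and key here is invented; real GSI uses X.509 certificates and full-strength cryptography.

```python
import hashlib

# Textbook RSA with toy primes -- for illustration only.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (the CA's secret)

def digest(data: bytes) -> int:
    """Hash the data down to a number the toy modulus can handle."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes, priv: int) -> int:
    return pow(digest(data), priv, n)          # CA signs with its private key

def verify(data: bytes, sig: int, pub: int) -> bool:
    return pow(sig, pub, n) == digest(data)    # anyone checks with the public key

# A toy certificate binding a subject name to its public key, signed by a CA.
cert = {"subject": "/O=Example/CN=alice",
        "subject_pubkey": 65537,
        "issuer": "/O=Example CA"}
blob = repr(sorted(cert.items())).encode()
signature = sign(blob, d)   # d plays the role of the named CA's private key

print(verify(blob, signature, e))
```

Anyone holding the CA's public key can thus check that the subject/key binding was vouched for by the CA, which is exactly the trust a GSI certificate conveys.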

GSI also provides the capability to delegate trust to a proxy using a proxy certificate, thus allowing multiple entities to act on the user's behalf. This, in turn, enables the capability of single sign-on, where a user only has to 'login once' to be authenticated to all resources that are in use. Using these capabilities, we note that it is possible to securely build virtual organizations (VOs) across physical organizations by establishing one's grid identity and role within the VO.


[Figure 9.2: A high-level view of OGSA: Applications & User-Level Tools sit atop the OGSA services (Execution Mgmt, Data Mgmt, Resource Mgmt, Security, Self-Mgmt, Info Svcs), which are built on Web Services running over Operating Systems and Servers.]

The GridShib project is extending GSI with work from Internet2's Shibboleth project [24]. Shibboleth is based on the Security Assertion Markup Language (SAML) and is used to exchange attributes between trusted organizations. To use a remote resource, a user authenticates to their home institution, which, in turn, authenticates them to the remote institution based on the user's attributes. GridShib introduces both push and pull modes for managing the exchange of attributes and certificates.

Up to this point, we have presented and discussed basic Web services and the additional fundamental capabilities that essentially extend them into grid services. We now discuss how these capabilities can be combined into a coherent service architecture. The key example here is the emerging standard of the Open Grid Services Architecture (OGSA) [10, 39]. Figure 9.2 gives a very high-level view of how OGSA interprets basic Web services to present a uniform user interface and service architecture for the management of servers, storage, and networks. OGSA provides the following broad categories of services:

- Execution Management Services
  - Execution Planning
  - Resource Selection and Scheduling
  - Workload Management
- Data Services
  - Storage Management
  - Transport
  - Replica Management
- Resource Management Services
  - Resource Virtualization
  - Reservations
  - Monitoring and Control
