Advanced Networking Research Testbeds
A multiplicity of advanced research testbeds is being used to investigate and to develop new architecture, methods, and technologies that will enable networks to incorporate Grid attributes to a degree not possible previously. Several research testbeds that may be of particular interest to the Grid community are described in this appendix.
In addition, this appendix describes a number of early prototype facilities that have implemented next-generation Grid network infrastructure and services. These facilities have been used for experiments and demonstrations showcasing the capabilities of advanced Grid network services at national and international forums. They are also being used to support persistent advanced communication services for the Grid community.
The last part of this appendix describes several advanced national networks that are creating large distributed fabrics that implement leading-edge networking concepts and technologies as production facilities. This section also describes a consortium that is designing and implementing an international facility that is capable of supporting next-generation Grid communication services.
In the last few years, these testbeds have produced innovative methods and technologies across a wide range of areas that will be fundamental to next-generation networks. These testbeds are developing new techniques that allow for high levels of abstraction in network service design and implementation, for example through new methods of virtualization that enable functional capabilities to be designed and deployed independently of specific physical infrastructure. These approaches provide services that enable the high degree of resource sharing required by Grid environments, and they also contribute to the programmability and customization of those environments.
Using these techniques, network resources become basic building blocks that can be dynamically assembled and reassembled as required to ensure the provisioning of high-performance, deterministic services. Many of these testbeds have developed innovative architecture and methods that have demonstrated the importance of distributed management and control of core network resources. Some of these innovations are currently migrating to standards bodies, to prototype deployment, and, in a few cases, to commercial development.
A.2.1 OMNINET
The Optical Metro Network Initiative (OMNI) was created as a cooperative research partnership to investigate and develop new architecture, methods, and technologies required for high-performance, dynamic metro optical networks. As part of this initiative, the OMNInet metro-area testbed was established in Chicago in 2001 to support experimental research. The testbed has also been extended nationally and internationally to conduct research experiments utilizing international lightpaths and to support demonstrations of lightpath-based services.
OMNInet has been used for multiple investigative projects related to lightpath services. One area of focus has been supporting reliable Gigabit Ethernet (GE) and 10-GE services on lightpaths without relying on SONET for transport. Therefore, to date no SONET has been used for the testbed. However, a new project currently being formulated will experiment with integrating these new techniques with next-generation SONET technologies and new digital framing technology. No routers have been used on this testbed; all transport is exclusively supported by layer 1 and layer 2 services. Through interconnections with other testbeds, experiments have been conducted related to new techniques for layer 3, layer 2, and layer 1 integration.
OMNInet has also been used for research focused on studying the behavior of advanced scientific Grid applications that are closely integrated with high-performance optical networks, based on dynamic lightpath provisioning and supported by reconfigurable photonics components. The testbed initially was provided with 24 10-Gbps lightpaths within a mesh configuration, interconnected by dedicated wavelength-qualified fiber. Each of four photonic node sites distributed throughout the city supported 12 10-Gbps lightpaths. The testbed was designed to provide Grids with unique capabilities for dynamic lightpath provisioning, which could be directly signaled and controlled by individual applications. Using these techniques, applications can directly configure network topologies. To investigate real data behavior on this network, as opposed to artificially generated traffic, the testbed was extended directly into individual laboratories at research sites, using dedicated fiber. The testbed was closely integrated with Grid clusters supporting science applications.
Each core site included a Photonic Switch Node (PSN), comprising an experimental Dense Wavelength Division Multiplexing (DWDM) photonic switch, based on two low-loss photonic switches (supported by 2D 8×8 MEMS), an Optical Fiber Amplifier (OFA), and a high-performance layer 2 switch. The photonic switches used were not commercial devices but unique assemblages of technologies and components, including variable-gain optical amplifiers.
OMNInet was implemented with several User–Network Interfaces (UNIs) that collectively constitute APIs allowing higher level processes to communicate with low-level optical networking resources through service intermediaries. These service layers constitute the OMNInet control plane architecture, which was implemented within the framework of emerging standards for optical networking control planes, including the ITU-T Automatically Switched Optical Network (ASON) standard. Because ASON is a reference architecture only, without defined signaling and routing protocols, various experimental software modules were created to perform these functions.
For example, as a service interface, Optical Dynamic Intelligent Network (ODIN) was created to allow top-level processes, including applications, to dynamically provision lightpaths (i.e., connections based on switched optical channels) over the optical core network. Messaging functions used the Simple Path Control (SPC) protocol. ODIN functions include discovery, for example determining the accessibility and availability of network resources that could be used to configure a particular topology. This service layer was integrated with an interface based on the OIF optical UNI (O-UNI) standard between edge devices and optical switches. ODIN also uses, as a provisioning tool, IETF GMPLS protocols for the control plane, with an Optical Network–Network Interface (O-NNI) supported by an out-of-band signaling network provisioned on separate fiber from that of the data plane. A specialized process manages various optical network control plane and resource provisioning processes, including dynamic provisioning, deletion, and attribute setting of lightpaths. OMNInet was implemented with various mechanisms for protection, reliability, and restoration, including highly granulated network monitoring at all levels, per-wavelength optical protection through specialized software, protocol implementation, and automatic detection of and response to physical-layer impairments.
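The provisioning flow just described, in which an application asks a service intermediary to discover available resources and then to provision and later release a lightpath, can be illustrated with a short sketch. This is a minimal, hypothetical illustration: the class and method names (OdinStyleClient, discover, provision, release) are invented, and the sketch does not reproduce ODIN's actual API or the SPC message formats.

```python
# Hypothetical sketch of application-driven lightpath provisioning,
# loosely modeled on the ODIN service-interface concept described above.
# All names are invented for illustration; this is not ODIN's real API.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Lightpath:
    """A switched optical channel between two endpoints."""
    src: str
    dst: str
    wavelength_nm: float
    path_id: Optional[str] = None


class OdinStyleClient:
    """Toy service intermediary between applications and optical resources."""

    def __init__(self) -> None:
        # Discovery state: wavelengths known to be free per fiber segment.
        self._free = {("node-a", "node-b"): [1550.12, 1550.92]}
        self._next_id = 0

    def discover(self, src: str, dst: str) -> List[float]:
        """Discovery step: report accessible, available wavelengths."""
        return list(self._free.get((src, dst), []))

    def provision(self, src: str, dst: str) -> Lightpath:
        """Select a free wavelength and mark it in use."""
        candidates = self.discover(src, dst)
        if not candidates:
            raise RuntimeError(f"no free wavelength between {src} and {dst}")
        wl = candidates[0]
        self._free[(src, dst)].remove(wl)
        self._next_id += 1
        return Lightpath(src, dst, wl, path_id=f"lp-{self._next_id}")

    def release(self, lp: Lightpath) -> None:
        """Tear down the lightpath and return its wavelength to the pool."""
        self._free[(lp.src, lp.dst)].append(lp.wavelength_nm)


if __name__ == "__main__":
    client = OdinStyleClient()
    lp = client.provision("node-a", "node-b")  # application-driven setup
    print(f"provisioned {lp.path_id} on {lp.wavelength_nm} nm")
    client.release(lp)                         # application-driven teardown
```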
OMNInet is developed and managed by Northwestern University, Nortel Research Labs, SBC, the University of Illinois at Chicago (UIC), Argonne National Laboratory, and CANARIE (www.icair.org/omninet).
A.2.2 DISTRIBUTED OPTICAL TESTBED (DOT)
The Distributed Optical Testbed (DOT) is an experimental state-wide Grid testbed in Illinois that was designed, developed, and implemented in 2002 to support high-performance, resource-intensive Grid applications with distributed infrastructure based on dynamic lightpath provisioning. The testbed has been developing innovative architecture and techniques for distributed heterogeneous environments supported by optical networks that are directly integrated with Grid environments. This approach recognizes that many high-performance applications require the direct ad hoc assembly and reconfiguration of information technology resources, including network configurations. The DOT testbed was designed to fully integrate all network components directly into a single contiguous environment capable of providing deterministic services. No routers were used on the testbed; all services were provided by individually addressable layer 2 paths at the network edge and layer 1 paths at the network core. The DOT testbed was implemented with capabilities for application-driven dynamic lightpath provisioning. DOT provided for an integrated combination of (a) advanced optical technologies based on leading-edge photonic components, (b) extremely high-performance capabilities, i.e., multiple 10-Gbps optical channels, and (c) capabilities for direct application signaling to core optical components to allow for dynamic provisioning. The DOT environment is unique in having these types of capabilities.
DOT testbed experiments have included examining process requirements and behaviors from the application level through mid-level processes, through computational infrastructure, through control and management planes, to reconfigurable core photonic components. Specific research topics have included inter-process communications, new techniques for high-performance data transport services, adaptive lightpath provisioning, optical channel-based services for high-intensity, long-term data flows, the integration of high-performance layer 4 protocols and dynamic layer 1 provisioning, and physical impairment detection, compensation, and adjustment, as well as control and management planes, among others.
The DOT testbed was established as a cooperative research project by Northwestern University, the University of Illinois at Chicago, Argonne National Laboratory, the National Center for Supercomputing Applications, the Illinois Institute of Technology, and the University of Chicago (www.dotresearch.org). It was funded by the National Science Foundation, award #0130869.
A.2.3 I-WIRE

I-WIRE layer 1 services can be directly connected to individual Grid clusters without having to transit through intermediary devices such as routers. I-WIRE provides point-to-point layer 1 data transport services based on DWDM, enabling each organization to have at least one 2.5-Gbps optical channel. However, the majority of channels are 10 Gbps, including multiple 10-Gbps channels among the Illinois TeraGrid sites. The TeraGrid project, for example, uses I-WIRE to provide 30 Gbps (3 × 10-Gbps optical channels) among StarLight, Argonne, and NCSA. I-WIRE also supports the DOT testbed.
These facilities have been developed by, and are directly managed by, the research community. I-WIRE is governed by a multiorganizational cooperative partnership, including Argonne National Laboratory (ANL), the University of Illinois (Chicago and Urbana campuses, including the National Center for Supercomputing Applications (NCSA)), Northwestern University, the Illinois Institute of Technology, the University of Chicago, and others (www.i-wire.org). The I-WIRE project is supported by the state of Illinois.
A.2.4 OPTIPUTER
Established in 2003, the OptIPuter is a large-scale (national and international) research project that is designing a fundamentally new type of distributed cyberinfrastructure that tightly couples computational resources over parallel optical networks using IP. The name is derived from its use of optical networking, Internet Protocol, computer storage, processing, and visualization technologies. The OptIPuter design is being developed to exploit a new approach to distributed computing, one in which the central architectural element is optical networking, not computers. This transition is based on the use of parallelism, as it was for a similar shift in supercomputing a decade ago. However, this time the parallelism is in multiple wavelengths of light, or lambdas, on single optical fibers, allowing the creation of "super-networks." This paradigm shift is motivating the researchers involved to understand and develop innovative solutions for a "LambdaGrid" world. The goal of this new architecture is to enable scientists who are generating terabytes and petabytes of data to interactively visualize, analyze, and correlate their data in real time from multiple storage sites connected to optical networks.
The OptIPuter project is reoptimizing the complete Grid stack of software abstractions, demonstrating how to "waste" bandwidth and storage in order to conserve relatively scarce computing in this new world of inverted values. Essentially, the OptIPuter is a virtual parallel computer in which the individual processors consist of widely distributed clusters; its memory consists of large distributed data repositories; its peripherals are very large scientific instruments, visualization displays, and/or sensor arrays; and its backplane consists of standard IP delivered over multiple, dynamically reconfigurable dedicated lambdas.
The OptIPuter project is enabling collaborating scientists to interactively explore massive amounts of previously uncorrelated data with a radical new architecture, which can be integrated with many e-science shared information technology facilities. OptIPuter researchers are conducting large-scale, application-driven system experiments with two data-intensive e-science efforts to ensure a useful and usable OptIPuter design: EarthScope, funded by the National Science Foundation (NSF), and the Biomedical Informatics Research Network (BIRN), funded by the National Institutes of Health (NIH). The OptIPuter is a five-year information technology research project funded by the National Science Foundation. The University of California, San Diego (UCSD), and the University of Illinois at Chicago (UIC) lead the research team, with academic partners at Northwestern University; San Diego State University; University of Southern California/Information Sciences Institute; University of California, Irvine; Texas A&M University; University of Illinois at Urbana-Champaign/National Center for Supercomputing Applications; affiliate partners at the US Geological Survey EROS, NASA, the University of Amsterdam and SARA Computing and Networking Services in The Netherlands, CANARIE in Canada, the Korea Institute of Science and Technology Information (KISTI) in Korea, and the National Institute of Advanced Industrial Science and Technology (AIST) in Japan; and industrial partners (www.optiputer.net).
A.2.5 CHEETAH
CHEETAH (Circuit-switched High-speed End-to-End Transport ArcHitecture) is a research testbed project established in 2004 that is investigating new methods for allowing high-throughput and delay-controlled data exchanges for large-scale e-science applications between distant end-hosts, over end-to-end circuits that consist of a hybrid of Ethernet and Ethernet-over-SONET segments.
The capabilities being developed by CHEETAH would supplement standard routed Internet connectivity by providing additional capacity using circuits. Part of this research is investigating routing decision algorithms that could provide for completely bypassing routers. The decision algorithms within end-hosts would determine whether to set up a CHEETAH circuit or to rely on standard routing.
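A minimal sketch of this kind of end-host decision rule follows. It assumes a simple cost model that compares transfer completion time over a dedicated circuit, including its setup delay, against the routed path; the function name, parameters, and thresholds are invented for illustration and are not the project's actual algorithm.

```python
# Hypothetical end-host decision rule in the spirit of the CHEETAH work:
# choose a dedicated circuit for large transfers where the circuit's higher
# rate outweighs its setup delay, and fall back to routing otherwise.

def use_cheetah_circuit(transfer_bytes: int,
                        circuit_rate_bps: float,
                        routed_throughput_bps: float,
                        setup_delay_s: float = 0.5) -> bool:
    """Return True if a circuit finishes the transfer sooner than routing.

    Total circuit time includes the signaling/setup delay; the routed path
    has no setup cost but typically a lower achievable throughput.
    """
    circuit_time = setup_delay_s + 8 * transfer_bytes / circuit_rate_bps
    routed_time = 8 * transfer_bytes / routed_throughput_bps
    return circuit_time < routed_time


if __name__ == "__main__":
    # A 10 GB transfer: 1 Gbps circuit with 0.5 s setup vs. 100 Mbps routed path.
    print(use_cheetah_circuit(10 * 10**9, 1e9, 1e8))  # True: circuit wins
    # A 1 MB transfer: the setup delay dominates, so routing wins.
    print(use_cheetah_circuit(10**6, 1e9, 1e8))       # False
```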
Although the architecture is being used for electronic circuit switches, it can also be applied to all-optical circuit-switched networks. End-to-end "circuits" can be implemented with CHEETAH using Ethernet (GE or 10GE) signals from end-hosts to Multiservice Provisioning Platforms (MSPPs) within enterprises, which then could be mapped to wide-area SONET circuits interconnecting distant MSPPs. Ethernet-over-SONET (EoS) encapsulation techniques have already been implemented within MSPPs. A key goal of the CHEETAH project is to provide a capability for dynamically establishing and releasing these end-to-end circuits for data applications as required. The CHEETAH project is managed by the University of Virginia and is funded by the National Science Foundation (http://cheetah.cs.virginia.edu).

A.2.6 DRAGON
The DRAGON project (Dynamic Resource Allocation via GMPLS Optical Networks) is another recently established testbed that is also examining methods for dynamic, deterministic, and manageable end-to-end network transport services. The project has implemented a GMPLS-capable optical core network in the Washington, DC, metropolitan area. DRAGON has a specific focus on meeting the needs of high-end e-science applications.
The initial focus of the project is on means to dynamically control and provision lightpaths. However, this project is also attempting to develop common service definitions to allow for inter-domain service provisioning. The services being developed by DRAGON are intended as supplements to standard routing. These capabilities would provide for deterministic end-to-end multiprotocol services that can be provisioned across multiple administrative domains as well as across a variety of conventional network technologies.
DRAGON is developing software necessary for addressing IP control plane and end-system requirements related to providing rapid provisioning of inter-domain services through associated policy access, scheduling, and end-system resource implementation functions.
The project is being managed by the Mid-Atlantic Crossroads GigaPOP (MAX), the University of Southern California Information Sciences Institute, George Mason University, and the University of Maryland. It is funded by the National Science Foundation (www.dragon.maxgigapop.net).
A.2.7 JAPAN GIGABIT NETWORK II (JGN II)
Japan Gigabit Network II (JGN II) is a major research and development program which has established the largest advanced communications testbed to date. The JGN2 testbed was established in 2004 to explore advanced research concepts related to next-generation applications, including advanced digital media using high-definition formats, network services at layers 1 through 3, and new communications architecture, protocols, and technologies. Among the research projects supported by the JGN2 testbed are many that are exploring advanced techniques for large-scale science and Grid computing.
The foundation of the testbed consists of dedicated optical fibers, which support layer 1 services controlled with GMPLS, including multiple parallel 10-Gbps lightpaths supported by DWDM. Layer 2 is supported primarily with Ethernet. Layer 3 consists of IPv6 implementations, with support for IPv6 multicast. The testbed was implemented nationwide, and it has multiple access points at major centers of research.
However, JGN2 has also implemented multiple international circuits, some through the T-LEX 10G optical exchange in Tokyo, to allow for interconnection to research testbeds worldwide, including to many Asia-Pacific POPs and to the USA, for example a 10-Gbps channel from Tokyo to the Pacific Northwest GigaPoP (PNWGP) and to the StarLight facility in Chicago. The 10G path to StarLight is being used for basic research and experiments in layer 1 and layer 2 technologies, including those related to advanced Grid technologies. The testbed is also used to demonstrate advanced applications, services, and technologies at major national and international forums.
The JGN2 project has multiple government, research laboratory, university, and industrial partners. JGN2 is sponsored and operated by the National Institute of Information and Communications Technology (NICT) (http://www.jgn.nict.go.jp).
A.2.8 VERTICALLY INTEGRATED OPTICAL TESTBED FOR LARGE SCALE APPLICATIONS (VIOLA)

Research activities include (a) testing new optical network components and network architectures, (b) interoperability studies using network technology from multiple producers, and (c) developing and testing software used for reserving and dynamically provisioning transmission capacity at gigabit-per-second rates, and interacting with related networking projects in other parts of Europe and the world.
To create the testbed, dedicated wavelength-qualified fiber (dark fiber) was implemented on a wide-area regional testbed. Equipment was implemented that could support lightpaths with multiple 10 Gbps of capacity simultaneously, using DWDM. The testbed was also implemented with high-performance computational clusters, with GE and/or 10GE access paths. Testbed switching technologies include SDH, GE, and 10GE, along with new control and management plane technology. Also, new types of testing and measurement methods are being explored.
VIOLA was established by a consortium of industrial partners, major research institutions, universities, and science researchers, including the IMK, Research Center Jülich, RWTH Aachen and Fachhochschule Bonn-Rhein-Sieg, the DFN-Verein (Germany's National Research and Education Network), Berlin, the University of Bonn, Alcatel SEL AG, Stuttgart, Research Center Jülich, Central Institute for Applied Mathematics, Jülich, Rheinische Friedrich-Wilhelms-University, Institute for Computer Science IV, Bonn, Siemens AG, München, and T-Systems International GmbH (TSI), Nürnberg. VIOLA is sponsored by the German Federal Ministry of Education and Research (BMBF) (www.viola-testbed.de).
A.2.9 STARPLANE
Established in 2005, the StarPlane research initiative in The Netherlands is using a national optical infrastructure to investigate and develop new techniques for optical provisioning. One focal research area is deterministic networking, providing applications with predictable, consistent, reliable, and repeatable services. Applications are provided with mechanisms that can directly control these services. Another research area is the development of architecture that allows applications to assemble specific types of protocol stacks to meet their requirements and to directly control core network elements, in part through low-level resource partitioning. Related to both of these functions are methods that provide for policy-based access control.
To provide a comprehensive framework for this architecture, the project is designing and implementing a Grid infrastructure that will be closely integrated with these optical provisioning methods. The Grid infrastructure will support specialized applications that are capable of utilizing the mechanisms provided to control network resources, through management plane software. The infrastructure will be implemented with this StarPlane management middleware as a standard library of components, including protocols and middleware that will be advertised to applications. The testbed will be extended internationally through NetherLight for interoperable experiments with other testbeds.
StarPlane is managed by the University of Amsterdam and SURFnet. It is funded by the government of The Netherlands.
A.2.10 ENLIGHTENED
The goal of the EnLIGHTened research project is to establish dynamic, adaptive, coordinated, and optimized use of networks connecting geographically distributed high-end computing and scientific instrumentation resources for faster real-time problem resolution. The EnLIGHTened project, established in 2005, is a collaborative interdisciplinary research initiative that seeks to research the integration of optical control planes with Grid middleware under highly dynamic requests for heterogeneous resources. Requests for the coordinated resources are application, workflow engine, and aggregated traffic driven. The critical feedback loop consists of resource monitoring for discovery, performance, and SLA compliance; this information is fed back to co-schedulers for coordinated adaptive resource allocation and co-scheduling. In this context, several network research challenges will be theoretically analyzed, followed by proposed novel extensions to existing control plane protocols, such as RSVP-TE, OSPF-TE, and LMP. Taking full advantage of the EnLIGHTened testbed, prototypes will be implemented and experimental testing will be conducted on extensions that make possible national and global testbeds. Several network research topics within the above context arise with a central theme of rethinking the behavioral control of networks:
(1) centralized control – via management plane or middleware, and
(2) distributed control – via control plane.
Highlights of the network research include:
(1) network-aware distributed advanced scheduling without preemption;
(2) lightpath group reservation: (a) sequential/parallel and (b) time;
(3) dynamic restoration and congestion avoidance using intelligent deflection
routing;
(4) analyzing transport protocols for dedicated optical connections;
(5) resource optimization.
This research is developing an advanced software architecture that will vertically integrate the applications, advanced middleware, and the underlying optical network control plane technologies. The ultimate goal is to provide high-end dynamic applications with the capabilities to make highly dynamic, coordinated, adaptive, and optimized use of globally distributed compute, storage, instrument, and network resources. Monitoring tools will be used for resource discovery and near-real-time performance and availability of the disparate resources. A better understanding of how best to implement this tight feedback loop between monitored resource information and resource coordination and allocation will be one area of focus.
The proposed vertical integration software architecture consists of three layers:
(1) the application layer, focused on the capabilities required for application abstraction and service transaction protocols;
(2) the resource management layer, which coordinates application requirements with resource discovery and performance monitoring, service provisioning and reconfiguration, and fault tolerance;
(3) the provisioning layer, which provides APIs for on-demand and in-advance resource reservation to interface with the underlying resource control plane.
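As a rough illustration of the provisioning layer's in-advance reservation idea, the following sketch co-schedules a compute cluster and a lightpath atomically: either every resource is booked for the requested window or none is. The class names, the interval bookkeeping, and the all-or-nothing rule are assumptions made for illustration, not EnLIGHTened's actual middleware.

```python
# Hypothetical sketch of in-advance co-scheduling of compute and network
# resources, in the spirit of the provisioning-layer APIs described above.
# All names and the interval model are invented for illustration.

from dataclasses import dataclass, field
from typing import List, Tuple

Interval = Tuple[float, float]  # (start, end) in seconds since epoch


@dataclass
class Resource:
    """A schedulable resource (cluster, lightpath, instrument)."""
    name: str
    booked: List[Interval] = field(default_factory=list)

    def free(self, start: float, end: float) -> bool:
        """True if [start, end) overlaps no existing booking."""
        return all(end <= s or start >= e for s, e in self.booked)

    def book(self, start: float, end: float) -> None:
        self.booked.append((start, end))


def co_schedule(resources: List[Resource], start: float, end: float) -> bool:
    """Atomically reserve every resource for [start, end), or none of them."""
    if all(r.free(start, end) for r in resources):
        for r in resources:
            r.book(start, end)
        return True
    return False  # a co-scheduler would retry with a later window


if __name__ == "__main__":
    cluster = Resource("cluster-lsu")
    lightpath = Resource("lambda-mcnc-lsu")
    ok = co_schedule([cluster, lightpath], start=0.0, end=3600.0)
    print("window granted" if ok else "window refused")
```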
The system-level mechanisms and algorithms behind the software architecture will be studied and developed to satisfy high performance, resource sharing, resource coordination, and high availability.
This project is exploring a number of research problems regarding data management, load balancing, scheduling, performance monitoring, service reconfiguration, and fault tolerance that are still open in this integrated environment. In addition, the following key components are being developed:

• advanced monitoring capabilities, which provide real-time information on the performance and availability of compute, instrument, and network resources;
• the possibility of closer interaction with the optical network control plane;
• software that provides unified services to jointly and optimally allocate and control compute and networking resources and to enable the applications to autonomously adapt to the available resources;
• methods to ensure security and service level agreement regulation within the distributed resources.
Internationally, the EnLIGHTened project is a partner project to the EU's LUCIFER control plane research testbed. Both projects are collaborating on control plane research for Grid computing as well as extending each other's testbed. The EnLIGHTened project is also collaborating with the Japanese G-Lambda research project. The EnLIGHTened project is a large collaborative interdisciplinary research and testbed effort with partners from MCNC, RENCI, NCSU, LSU, Cisco, Calient, SURA, AT&T Research, NRL, and NLR, and is supported by the National Science Foundation.
A.2.11 LAMBDA USER CONTROLLED INFRASTRUCTURE FOR EUROPEAN RESEARCH (LUCIFER)

The goal of the LUCIFER project is to achieve dynamic, adaptive, and optimized use of heterogeneous network infrastructures connecting various high-end resources.
This infrastructure will enhance and demonstrate solutions that facilitate vertical and horizontal communication among application middleware, existing network resource provisioning systems, and the proposed Grid-GMPLS control plane. The project's main goal is broken down into the following objectives. It will:
(1) Demonstrate on-demand service delivery across an access-independent multidomain/multivendor research network testbed on a European and worldwide scale. The testbed will include (a) EU NRENs: SURFnet, CESNET, and PIONIER, as well as national testbeds (VIOLA, OptiCAT, UKLight); (b) GN2, GLIF, and cross-border dark fiber connectivity infrastructure; (c) GMPLS, UCLP, DRAC, and ARGON control and management planes; and (d) multivendor equipment environments.
(2) Develop integration between application middleware and transport networks, based on three planes:
(i) A service plane will provide (a) a clear set of APIs aimed at facilitating the development of new applications requiring a combination of Grid and network services, (b) a set of fundamental service components (developed as extensions to UNICORE) that implement the necessary mechanisms allowing network and Grid resources to be exposed in an integrated fashion, and to make reservations of those resources, and (c) policy mechanisms for networks participating in a global hybrid network infrastructure, allowing both network resource owners and applications such as Grids to have a stake in the decision to allocate specific network resources.
(ii) A network resource provisioning plane will provide (a) adaptation of existing Network Resource Provisioning Systems (NRPS) to support the framework of the project, (b) full integration and testing of representative NRPS in the testbed, and (c) implementation of interfaces between different NRPS to allow multidomain interoperability with the testbed's resource reservation system.
(iii) A control plane will provide (a) enhancements to the GMPLS control plane (defined here as Grid-GMPLS, or G2MPLS) to provide optical network resources as first-class Grid resources, (b) interworking of Grid-GMPLS-controlled network domains with NRPS-based domains, i.e., interoperability between Grid-GMPLS and UCLP, DRAC, and ARGON (a minimal sketch of such a multidomain reservation follows this list), and (c) development of network and application interfaces (e.g., Grid-OUNI, Grid-APIs) for on-demand and advance lambda reservations, enhanced with Grid-layer-specific information/heuristics, such as CPU and storage scheduling.
(3) Disseminate the project experience and outcomes to the targeted actors: NRENs and research users.
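The multidomain interoperability mentioned in objective (2)(iii) can be pictured as a two-phase, hold-then-commit exchange across per-domain provisioning systems. The sketch below is purely illustrative: the two-phase protocol, the class names, and the domain names are assumptions for this example, not the project's actual G2MPLS or NRPS interfaces.

```python
# Hypothetical sketch of an advance lambda reservation crossing several
# independently managed domains, each fronted by its own provisioning
# system. All names and the reserve/commit idea are illustrative only.

from typing import Dict, List


class DomainNRPS:
    """Toy per-domain resource provisioning system."""

    def __init__(self, name: str, free_lambdas: int) -> None:
        self.name = name
        self.free = free_lambdas
        self.held: Dict[str, int] = {}

    def reserve(self, request_id: str) -> bool:
        """Phase 1: hold one lambda if available."""
        if self.free > 0:
            self.free -= 1
            self.held[request_id] = 1
            return True
        return False

    def commit(self, request_id: str) -> None:
        """Phase 2: the held lambda becomes a live reservation."""
        self.held.pop(request_id, None)

    def cancel(self, request_id: str) -> None:
        """Roll back a hold if another domain refused."""
        if self.held.pop(request_id, None):
            self.free += 1


def reserve_end_to_end(domains: List[DomainNRPS], request_id: str) -> bool:
    """Reserve a lambda in every domain along the path, or in none."""
    held: List[DomainNRPS] = []
    for d in domains:
        if d.reserve(request_id):
            held.append(d)
        else:
            for h in held:  # one domain refused: undo all earlier holds
                h.cancel(request_id)
            return False
    for d in held:
        d.commit(request_id)
    return True


if __name__ == "__main__":
    path = [DomainNRPS("viola", 2), DomainNRPS("geant2", 1), DomainNRPS("uklight", 1)]
    print(reserve_end_to_end(path, "req-001"))  # True: all domains hold, then commit
```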
This research project will rely on experimental activities on a distributed testbed interconnecting European and worldwide optical infrastructures. Specifically, the testbed involves European NRENs and national testbeds, as well as GÉANT2, cross-border dark fiber, and GLIF infrastructures. A set of highly demanding applications will be adapted to prove the concept. Internationally, the project is a partner project to the US EnLIGHTened research project. Both projects will collaborate on developing a control plane for Grid computing research as well as extending functionality from each testbed to the other.
The LUCIFER project is a large collaborative interdisciplinary research and testbed effort involving 20 partners from nine countries, including (a) NRENs, including CESNET (Czech Republic), PIONIER (Poland), and SURFnet (Netherlands); (b) national testbeds, including VIOLA, OptiCAT, and UKLight; (c) commercial companies, including ADVA, Hitachi, and Nortel; (d) SMEs, including NextWorks – Consorzio Pisa Ricerche (CPR); and (e) research centers and universities, including Athens Information Technology Institute (AIT, Greece), Fraunhofer SCAI (Germany), Fraunhofer IMK (Germany), Fundació i2CAT (Spain), IBBT (Belgium), RACTI (Greece), Research Centre Jülich (Germany), University of Amsterdam (Netherlands), University of Bonn (Germany), University of Essex (UK), University of Wales Swansea (UK), and SARA (Netherlands), as well as non-EU participants, including MCNC (USA) and CCT@LSU (USA). The project is funded by the European Commission.
A.2.12 GLOBAL ENVIRONMENT FOR NETWORK INNOVATIONS (GENI)
At this time, the US National Science Foundation and the network research community are conceptualizing the Global Environment for Network Innovations (GENI), a facility that would increase the quality and quantity of experimental research results in networking and distributed systems and would accelerate the transition of those results into products and services. A key goal of this initiative is to support research that would lead to a major transition from the current Internet to one that has significantly more capabilities and also provides for much greater reliability and security.
The research community is developing a proposed design for GENI, which would be a large-scale (national and international) persistent research testbed. Its design would allow for multiple research projects to be conducted simultaneously. The testbed design would be modularized so that various experiments could be conducted at various network layers without interference.
For physical infrastructure, GENI will use multiple resource components (termed the "GENI substrate"), including not only those generally associated with networks, such as circuits, transport, transit equipment, and wireless devices, but also other types of resources, such as computational clusters and storage devices. A software management framework will overlay these substrate resources. This overlay will allow for the partitioning and use of the underlying resources in various combinations. This approach is similar to the one being developed by the Grid community. Furthermore, this infrastructure will contain attributes that are common to Grid environments: it will be programmable, virtualizable, accessible by multiple communities, and highly modular.
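A toy sketch of this partitioning idea follows: a shared substrate link's capacity is carved into isolated slices, and a request exceeding the remaining budget is refused, preserving the isolation of experiments already running. The capacity model and every name are invented for illustration; GENI's actual management framework is not specified here.

```python
# Hypothetical sketch of partitioning a shared substrate into isolated
# experiment slices, in the spirit of the GENI overlay described above.

from dataclasses import dataclass, field
from typing import Dict


@dataclass
class SubstrateLink:
    """A physical resource with a fixed capacity budget (e.g., Gbps)."""
    name: str
    capacity: float
    allocations: Dict[str, float] = field(default_factory=dict)

    @property
    def available(self) -> float:
        return self.capacity - sum(self.allocations.values())

    def carve_slice(self, slice_name: str, demand: float) -> bool:
        """Dedicate part of the link to a slice if capacity remains."""
        if demand <= self.available:
            self.allocations[slice_name] = demand
            return True
        return False


if __name__ == "__main__":
    link = SubstrateLink("chicago-seattle", capacity=10.0)
    link.carve_slice("routing-experiment", 4.0)
    link.carve_slice("transport-experiment", 4.0)
    # A third request exceeding the remaining 2.0 Gbps is refused, so the
    # first two experiments keep running without interference.
    print(link.carve_slice("overlay-experiment", 3.0))  # False
    print(link.available)                               # 2.0
```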
This initiative is being supported by the NSF Directorate for Computer and Information Science and Engineering (CISE) (www.geni.net).

A.2.13 DEPARTMENT OF ENERGY ULTRASCIENCE NET
The US Department of Energy’s (DOE) UltraScience Net is an experimental,
national-scale network research testbed The UltraScience Net, which has been developed by
the DOE Office of Science, was designed and implemented to prototype the
archi-tecture, services, and technologies that will be required by multiple next-generation
science projects These science project requirements have shaped the design and
capabilities of the testbed Although many of these science projects are data
inten-sive and, therefore, require extremely high-performance, high-volume services, the
design includes considerations of multiple other requirements, including signaling
for capacity on demand The UltraScience Net was implemented to prototype a
future DOE advanced network and to assist that organization transition toward that
network It is expected to support highly distributed terascale and petascale
appli-cations
The UltraScience Net recognizes the importance of multilayer networking and integration among layers. Among the capabilities of the testbed are those that enable signaling for capacity on demand for dedicated channels, including end-to-end full lightpaths and subsegments. Experimenting with such nonrouted services is a special focus of the testbed research. Dynamic and scheduled provisioning as well as management functions are implemented through an out-of-band channel. UltraScience Net research activities include those focused on protocols, middleware, dynamic resource assembly, integration with instrumentation, and considerations of integration with edge resources, including mass storage devices and computational clusters. Experiments are conducted on the testbed using specific applications (www.csm.ornl.gov/ultranet).
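Scheduled provisioning of a dedicated channel can be pictured as a calendar search: given existing reservations, find the earliest free window of the requested length. The following sketch is a hypothetical illustration of that idea under a simple hour-based model; it is not the UltraScience Net control software, and all names are invented.

```python
# Hypothetical sketch of scheduled channel provisioning: search a channel's
# reservation calendar for the earliest gap of a requested duration.

from typing import List, Tuple

Window = Tuple[float, float]  # (start_hour, end_hour)


def earliest_window(booked: List[Window], duration: float,
                    horizon: float = 24.0) -> Window:
    """Return the first gap of at least `duration` hours before `horizon`."""
    cursor = 0.0
    for start, end in sorted(booked):
        if start - cursor >= duration:
            return (cursor, cursor + duration)
        cursor = max(cursor, end)
    if horizon - cursor >= duration:
        return (cursor, cursor + duration)
    raise RuntimeError("no window available within the horizon")


if __name__ == "__main__":
    # Channel already reserved 02:00-05:00 and 06:00-12:00.
    existing = [(2.0, 5.0), (6.0, 12.0)]
    print(earliest_window(existing, duration=1.0))  # (0.0, 1.0)
    print(earliest_window(existing, duration=3.0))  # (12.0, 15.0)
```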
A.3 PROTOTYPE FACILITIES

The advanced networking research community has formed partnerships with advanced data-intensive application developers, especially those involved with global science projects, to design and implement specialized facilities that provide services, including Grid network services, that are required by such applications. Several of these initiatives are described in the following sections.
A.3.1 STARLIGHT
Operational since 2001, the Chicago-based international StarLight facility (Science Technology and Research Light-Illuminated Gigabit High-performance Transport) was designed and developed by the research community for the research community. The facility is both a research testbed and an early production prototype of a next-generation communication services exchange. StarLight has been termed "the optical STAR TAP," because it evolved from the earlier Science Technology and Research Transit Access Point.
StarLight supports many of the world's largest and most resource-intensive science and engineering research and development initiatives. StarLight serves as a proving ground for innovative architectures and services, enabling researchers to advance