
Deploying Cisco Wide Area Application Services


DOCUMENT INFORMATION

Basic information

Title: Deploying Cisco Wide Area Application Services
Authors: Joel Christner, Zach Seils, Nancy Jin
Institution: Cisco Systems, Inc.
Field: Networking / Network Technologies
Type: Guide
Year published: 2010
City: Indianapolis
Pages: 649
Size: 10.68 MB


Contents



Deploying Cisco Wide Area Application Services, Second Edition

Joel Christner, Zach Seils, Nancy Jin

Copyright © 2010 Cisco Systems, Inc.

Printed in the United States of America

First Printing January 2010

Library of Congress Cataloging-in-Publication data is on file.

ISBN-13: 978-1-58705-912-4

ISBN-10: 1-58705-912-6

Warning and Disclaimer

This book is designed to provide information about deploying Cisco Wide Area Application Services (WAAS). Every effort has been made to make this book as complete and as accurate as possible, but no warranty or fitness is implied.

The information is provided on an “as is” basis. The authors, Cisco Press, and Cisco Systems, Inc. shall have neither liability nor responsibility to any person or entity with respect to any loss or damages arising from the information contained in this book or from the use of the discs or programs that may accompany it. The opinions expressed in this book belong to the authors and are not necessarily those of Cisco Systems, Inc.


Trademark Acknowledgments

All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Cisco Press or Cisco Systems, Inc., cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.

Corporate and Government Sales

The publisher offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales, which may include electronic versions and/or custom covers and content particular to your business, training goals, marketing focus, and branding interests. For more information, please contact: U.S. Corporate and Government Sales, 1-800-382-3419, corpsales@pearsontechgroup.com.

For sales outside the United States, please contact: International Sales, international@pearsoned.com.

Feedback Information

At Cisco Press, our goal is to create in-depth technical books of the highest quality and value. Each book is crafted with care and precision, undergoing rigorous development that involves the unique expertise of members from the professional technical community.

Readers’ feedback is a natural continuation of this process. If you have any comments regarding how we could improve the quality of this book, or otherwise alter it to better suit your needs, you can contact us through email at feedback@ciscopress.com. Please make sure to include the book title and ISBN in your message.

We greatly appreciate your assistance.

Publisher: Paul Boger
Associate Publisher: Dave Dusthimer
Executive Editor: Mary Beth Ray
Managing Editor: Patrick Kanouse
Senior Development Editor: Christopher Cleveland
Project Editor: Ginny Bess Munroe
Editorial Assistant: Vanessa Evans
Cover Designer: Sandra Schroeder
Book Designer: Louisa Adair
Composition: Mark Shirar
Cisco Representative: Erik Ullanderson
Cisco Press Program Manager: Anand Sundaram
Copy Editor/Proofreader: Deadline Driven Publishing
Technical Editors: Jim French, Jeevan Sharma
Indexer: Angie Bess

Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco Website at www.cisco.com/go/offices.

CCDE, CCENT, Cisco Eos, Cisco HealthPresence, the Cisco logo, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.

All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0812R)

Americas Headquarters Cisco Systems, Inc.


About the Authors

Joel Christner, CCIE No. 15311, is a distinguished engineer at StorSimple, Inc. Before StorSimple, Joel was a technical leader in the Application Delivery Business Unit (ADBU) at Cisco Systems, Inc., driving the long-term product strategy, system architecture, and solution architecture for the Cisco Wide Area Application Services (WAAS) product and the broader Cisco application delivery solution. Previously, Joel was director of product management for Reconnex Corporation (acquired by McAfee), the industry leader in data loss prevention (DLP) solutions. Prior to joining Reconnex, Joel was the senior manager of technical marketing for ADBU at Cisco Systems, Inc., and a key contributor to the WAAS product line, helping shape the system architecture, craft the product requirements, and enable a global sales team to sell and support the product in a hyper-competitive market.

Joel is co-author of the first edition of this book and also co-author of Application Acceleration and WAN Optimization Fundamentals (Cisco Press) with Ted Grevers, Jr., which outlines the architecture and relevance of WAN optimization and application acceleration technologies in today’s dynamic IT organizations.

Zach Seils, CCIE No. 7861, is a technical leader in the Application Delivery Business Unit (ADBU) at Cisco Systems, Inc. Zach is currently focused on developing the architecture and network integration aspects of next-generation WAN optimization and application acceleration platforms. In addition, Zach is frequently engaged with partners and internal Cisco engineers worldwide to advise on the design, implementation, and troubleshooting of Cisco WAAS. Previously, Zach was a technical leader in the Cisco Advanced Services Data Center Networking Practice, where he served as a subject matter expert in Application Networking Services for the largest Enterprise and Service Provider customers at Cisco. Zach is co-author of the first edition of this book and was also a technical reviewer of Application Acceleration and WAN Optimization Fundamentals (Cisco Press) by Joel Christner and Ted Grevers, Jr.

Nancy Jin is a senior technical marketing engineer in the Application Delivery Business Unit (ADBU) at Cisco Systems, Inc., where she helps develop requirements for product features, drive sales enablement, and manage technical training development for the Cisco WAAS product family. Before Cisco, Nancy held senior systems engineering positions with well-known network and managed service providers, including InterNAP Network Services, Telstra USA, Sigma Networks, and MCI Worldcom.


About the Technical Reviewers

Jim French resides in New Jersey. He has more than 15 years of experience in information technologies. A 12-year veteran of Cisco, Jim has been in the position of distinguished system engineer since early 2003 and holds CCIE and CISSP certifications. Since joining Cisco, he has focused on routing, switching, voice, video, security, storage, content networking, application delivery, and desktop virtualization. Primarily, Jim has helped customers decrease their upfront capital investments in application infrastructure, reduce application operational costs, speed application time to market, increase application touch points (interactions), increase application availability, and improve application performance. Working internally with Cisco marketing and engineering, Jim is instrumental in driving new features, acquisitions, and architectures into Cisco solutions to make customers successful. Prior to joining Cisco, Jim received a BSEE degree from Rutgers University College of Engineering in 1987 and later went on to obtain an MBA from Rutgers Graduate School of Management in 1994. In his spare time, Jim enjoys spending time with family, friends, running, racquetball, basketball, soccer, traveling, coaching youth recreation sports, and fathering his amazing son Brian.

Jeevan Sharma, CCIE No. 11529, is a technical marketing engineer at Cisco. He works with the Application Delivery Business Unit (ADBU). Jeevan has more than 9 years of experience at Cisco and 13 years of overall Information Technology experience. Since joining Cisco, he has held various technical roles in which he has worked extensively with Cisco customers, partners, and system engineers worldwide on their network designs, and the implementation and troubleshooting of Cisco products. Working with engineering and product management at Cisco, he has been focused on systems and solutions testing, new feature development, and product enhancements to improve the quality of Cisco products and solutions for customers. Prior to Cisco, Jeevan worked at CMC Limited and HCL Technologies, where he spent time with customers on their network design and systems integration. In his spare time, Jeevan enjoys family and friends, tennis, hiking, and traveling.


This book is dedicated to my beautiful wife Christina, our family, and to our Lord and Savior Jesus Christ; through Him all things are possible.

—Joel Christner

This book is dedicated to my love. You have opened my eyes and heart and soul to things I never knew were possible. I am honored that you have let me in your life. I can never thank you enough for these things. Your unfaltering love, caring heart, and beautiful smile are what inspires me to keep going day after day. I love you.

—Zach Seils

This book is dedicated to my most supportive family. To my husband Steve, my parents, and parents-in-law, thank you for always being there for me. To my lovely sons Max and Leo, I love you!

—Nancy Jin


From Joel Christner: To Christina, my beautiful, loving, and patient wife—thank you. I promise I won’t write another book for a little while. This time, I mean it. I know you’ve heard THAT before.

I’d like to express my deepest appreciation to you, the reader, for taking the time to read this book. Zach, Nancy, and I are honored to have been given the opportunity to earn a spot in your personal library, and we look forward to your feedback.

To Zach and Nancy, for being such great co-authors and good friends. Your expertise and ability to clearly articulate complex technical concepts are unmatched, and I’m thankful to have been given the opportunity to collaborate with you. Many thanks to Jim French and Jeevan Sharma, our technical reviewers. Your attention to detail and focus helped keep our material accurate and concise. It was a pleasure working with you on this book—and at Cisco.

A tremendous thank you to the production team at Cisco Press—your guidance has been great, and Zach, Nancy, and I appreciate you keeping us on track and focused.

From Zach Seils: To my love, I could not have finished this project without your constant encouragement. Thank you. To Rowan, Evan, and Jeeper, I love you guys more than you will ever know.

To the technical reviewers Jim French and Jeevan Sharma: thanks for all your hard work to make this edition of the book a top-notch technical reference. I know that the quality of this project increased significantly due to your contributions.

I’d like to give special thanks to my co-authors Joel and Nancy; thanks for making this project happen and for your patience throughout the writing process.

Thanks to the Cisco Press team for your patience and support throughout this project.

From Nancy Jin: My most sincere appreciation goes to Joel Christner, who introduced me to this wonderful opportunity. It is a great honor to work with such a talented team. Thank you, Jim French and Jeevan Sharma, for doing such a great job as the technical reviewers. Thank you, Cisco Press, for working on this project with us.


Contents at a Glance

Foreword xix
Introduction xx
Chapter 1 Introduction to Cisco Wide Area Application Services (WAAS) 1
Chapter 2 Cisco WAAS Architecture, Hardware, and Sizing 49

Chapter 3 Planning, Discovery, and Analysis 77

Chapter 4 Network Integration and Interception 107

Chapter 5 Branch Office Network Integration 153

Chapter 6 Data Center Network Integration 203

Chapter 7 System and Device Management 249

Chapter 8 Configuring WAN Optimization 319

Chapter 9 Configuring Application Acceleration 401

Chapter 10 Branch Office Virtualization 473

Chapter 11 Case Studies 511

Appendix A WAAS Quickstart Guide 547

Appendix B Troubleshooting Guide 569

Appendix C 4.0/4.1 CLI Mapping 595

Index 599


Foreword xix
Introduction xx

Chapter 1 Introduction to Cisco Wide Area Application Services (WAAS) 1

Understanding Application Performance Barriers 3

Layer 4 Through Layer 7 4

Latency 7
Bandwidth Inefficiencies 10
Throughput Limitations 11
Network Infrastructure 12
Bandwidth Constraints 12
Network Latency 15
Loss and Congestion 19
Introduction to Cisco WAAS 21
WAN Optimization 23
Data Redundancy Elimination 25
Persistent LZ Compression 30
Transport Flow Optimization 30
Secure Sockets Layer (SSL) Optimization 31
Application Acceleration 33
Object and Metadata Caching 36
Prepositioning 38
Read-Ahead 39
Write-Behind 40
Multiplexing 41
Other Features 42
Branch Virtualization 45

The WAAS Effect 46

Summary 48

Chapter 2 Cisco WAAS Architecture, Hardware, and Sizing 49

Cisco WAAS Product Architecture 49

Disk Encryption 50
Central Management Subsystem 51
Interface Manager 51
Monitoring Facilities and Alarms 52
Network Interception and Bypass Manager 52
Application Traffic Policy Engine 53
Virtual Blades 55
Hardware Family 55
Router-Integrated Network Modules 56
NME-WAE Model 302 57
NME-WAE Model 502 57
NME-WAE Model 522 58
Appliances 58
WAVE Model 274 59
WAVE Model 474 59
WAE Model 512 60
WAVE Model 574 60
WAE Model 612 60
WAE Model 674 61
WAE Model 7341 61
WAE Model 7371 61
Licensing 61
Performance and Scalability Metrics 62
Device Memory 63
Disk Capacity 64
Number of Optimized TCP Connections 65
WAN Bandwidth and LAN Throughput 70
Number of Peers and Fan-Out 71
Number of Devices Managed 73
Replication Acceleration 74
Virtual Blades 75

Summary 76

Chapter 3 Planning, Discovery, and Analysis 77

Planning Overview 77
Planning Overview Checklist 78
Requirements Collection and Analysis 78
Site Information 80
Site Types 80
User Population 81
Physical Environment 81
Site Information Checklist 82


Network Infrastructure 82

WAN Topology 82

Remote Office Topology 85

Data Center Topology 86

Traffic Flows 87

Network Infrastructure Checklist 89

Application Characteristics 90

Application Requirements Checklist 91

Application Optimizer Requirements 91

CIFS Accelerator 91

Advanced Features 92

File Services Utilization 93

File Services Requirements Checklist 93


Security Requirements 103
Security Requirements Checklist 105
Virtualization Requirements 105
Virtualization Requirements Checklist 106
Summary 106

Chapter 4 Network Integration and Interception 107

Interface Connectivity 107
Link Aggregation Using PortChannel 111
PortChannel Configuration 112
Using the Standby Interface Feature 115
Standby Interface Configuration 116
Interception Techniques and Protocols 119
Web Cache Communication Protocol 119
WCCP Overview 120
Service Groups 120
Forwarding and Return Methods 123
Load Distribution 125
Failure Detection 126
Flow Protection 128
Graceful Shutdown 128
Scalability 129
Redirect Lists 129
Service Group Placement 130
WCCP Configuration 131
Hardware-Based Platforms 136
Policy-Based Routing 137
Inline Interception 139
Content Switching 143
Application Control Engine 144
Egress Methods 145
Directed Mode 149
Network Integration Best Practices 150
Summary 152

Chapter 5 Branch Office Network Integration 153

In-Path Deployment 153
Nonredundant Branch Office 154
Redundant Branch Office 158
Serial Inline Clustering 162
Off-Path Deployment 163
Small to Medium-Sized Nonredundant Branch Office 163
Enhanced Network Module (NME-WAE) 170
Two-Arm Deployment 171
Large Nonredundant Branch Office 174
Off-Path Redundant Topology 181
Small to Medium-Sized Redundant Branch Office 181
Large Redundant Branch Office 190
Policy-Based Routing Interception 196
Cisco IOS Firewall Integration 199
Summary 201

Chapter 6 Data Center Network Integration 203

Data Center Placement 203

Summary 247

Chapter 7 System and Device Management 249

System and Device Management Overview 250

Initial Setup Wizard 250
CLI 260
CM Overview 261
Centralized Management System Service 266
Device Registration and Groups 269
Device Activation 270
Device Groups 271
Provisioned Management 273
Role-Based Access Control 274
Integration with Centralized Authentication 278
Windows Authentication 280
TACACS+ Authentication 286
RADIUS Authentication 288
Device Configuration, Monitoring, and Management 289
Alarms, Monitoring, and Reporting 290
Managing Alarms 290
Monitoring Charts 291
Managing Reports 295
SNMP, Syslog, and System Logs 296
Upgrading and Downgrading Software 302
Backup and Restore of CM Database 305
Programmatic Interfaces and the XML-API 308
Vendors Supporting the XML-API 309
Data Accessible via the XML-API 310
Simple Method of Accessing XML-API Data 313

Summary 317

Chapter 8 Configuring WAN Optimization 319

Cisco WAAS WAN Optimization Capabilities 319
Transport Flow Optimization 320
Data Redundancy Elimination 322
Persistent LZ Compression 324
Automatic Discovery 324
Directed Mode 327
Configuring WAN Optimization 329
Configuring Licenses 329
Enabling and Disabling Features 331
TFO Blacklist Operation 333
Directed Mode 338
Adaptive and Static TCP Buffering 339
Replication Acceleration 345
Application Traffic Policy 347
Application Groups 348
Traffic Classifiers 352
Policy Maps 358
Negotiating Policies 365
EndPoint Mapper Classification 366
Monitoring and Reporting 370
Automatic Discovery Statistics 370
Connection Statistics and Details 373
WAN Optimization Statistics 380
Network Profiling 380
Understanding WAAS Performance Improvement 386
Understanding Device and System Performance and Scalability Metrics 388
Executive Reports 393
Integration with Third-Party Visibility Systems 393
WAN Optimization Monitoring with XML-API 394
Application Response Time Monitoring 394

Summary 399

Chapter 9 Configuring Application Acceleration 401

Application Acceleration Overview 401

CIFS Acceleration 403
Windows Print Acceleration 407
NFS Acceleration 408
MAPI Acceleration 409
HTTP Acceleration 411
SSL Acceleration 412
Video Acceleration 414
Enabling Acceleration Features 415
Additional Video Settings 423
Configuring SSL Acceleration 425
Configuring Disk Encryption 426
Managing the Secure Store 430
Configuring SSL Accelerated Services 432
Using the CM GUI to Configure SSL 433
Using the CLI to Configure SSL 438
Configuring Preposition 447
Acceleration Monitoring and Reporting 453
Acceleration Monitoring Using Device CLI 453
Acceleration Monitoring Using CM GUI 460
Acceleration Monitoring with XML-API 463
CIFSStats 463
SSLStats 466
VideoStats 467
HttpStats 467
MapiStats 468
NfsStats 470

Summary 471

Chapter 10 Branch Office Virtualization 473

Branch Office Virtualization Overview 473
Overview of Virtual Blades 475
Management of Virtual Blades 476
Virtual Blade Hardware Emulation 476
Virtualization Capable WAAS Platforms 477
Creating Virtual Blades 478
Guest OS Boot Image 482
Configuring Virtual Blade Resources 484
Virtual Blade Interface Bridging Considerations 489
Starting Virtual Blades 493
Virtual Blade Console Access 495
Stopping Virtual Blades 496
Changing Virtual Blade Boot Sequence 497
Managing Virtual Blades 500
Backup and Restore of Virtual Blades 501
Monitoring and Troubleshooting Virtual Blades 503
Monitoring Virtual Blades 503
Alarms and Error Messages 505
Troubleshooting Common Issues with Virtual Blades 506
Failure to Boot 506
Blue Screen of Death 507
Hang Conditions 508

Summary 509

Chapter 11 Case Studies 511

Common Requirements 511
Existing WAN Topology 511
Remote Site Profile A 512
Profile A Site Requirements 513
Site Network Topology 513
WAE Placement and Interception 513
WAE Configuration Details 513
WAN Router Configuration Details 516
LAN Switch Configuration Details 517
Remote Site Profile B 519
Profile B Site Requirements 519
Site Network Topology 520
WAE Placement and Interception 520
WAE Configuration Details 520
WAN Router Configuration Details 522
Remote Site Profile C 524
Profile C Site Requirements 524
Site Network Topology 525
WAE Placement and Interception 525
WAE Configuration Details 526
WAN Router 1 Configuration Details 528
WAN Router 2 Configuration Details 530
Data Center Profile 532
Data Center Site Requirements 533
Site Network Topology 533
WAE Placement and Interception 533
WAE Configuration Details 534
Data Center Switch 1 Configuration Details 537
Data Center Switch 2 Configuration Details 540
Application Traffic Policy 544

Summary 545

Appendix A WAAS Quickstart Guide 547

Appendix B Troubleshooting Guide 569

Appendix C 4.0/4.1 CLI Mapping 595

Index 599


Icons Used in This Book

Command Syntax Conventions

The conventions used to present command syntax in this book are the same conventions used in the IOS Command Reference. The Command Reference describes these conventions as follows:

■ Boldface indicates commands and keywords that are entered literally as shown. In actual configuration examples and output (not general command syntax), boldface indicates commands that are manually input by the user (such as a show command).

■ Italic indicates arguments for which you supply actual values.

■ Vertical bars (|) separate alternative, mutually exclusive elements.

■ Square brackets ([ ]) indicate an optional element.

■ Braces ({ }) indicate a required choice.

■ Braces within brackets ([{ }]) indicate a required choice within an optional element.
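As an illustration of how these conventions combine, consider the following hypothetical syntax line (invented for this explanation; it is not an actual WAAS or IOS command):

```
show statistics connection [ client-ip address | server-ip address ] { optimized | pass-through }
```

Reading it with the conventions above: show statistics connection is entered literally; the square brackets make the address filter optional; the vertical bar separates the mutually exclusive client-ip and server-ip forms; the braces require choosing either optimized or pass-through; and address stands for an argument for which you supply an actual value (shown in italic in the book).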

[Figure: icon legend, including Connection, File Server, IP Phone, Relational Module, Wide-Area Application Engine, and Application Control Engine]


I am pleased to write the foreword to the second edition of Deploying Cisco Wide Area Application Services (WAAS). Over the past few years, WAN optimization technology has become a standard component of enterprise networks. The benefits accruing from the use of the technology for server consolidation, simplified IT management, and improvement of the efficiency of information sharing and network utilization have earned it a place at the top of customers’ buying priorities.

At Cisco, we have made several innovations to our award-winning WAAS solution that continues to expand the benefits it offers our customers. These include the use of virtualization technology—that is, Virtual Blades (VB)—to rapidly deploy a network service “anytime, anywhere,” and a variety of application-specific acceleration techniques that we developed in collaboration with the leading application vendors.

At Cisco, we believe that WAN optimization technology needs to be closely integrated with the routing/VPN architecture of the enterprise network so that customers can benefit from a single, optimized, shared network fabric that delivers all applications: voice, video, and data.

The authors combine experience from their work with thousands of customers who have deployed large installations of WAAS with a deep knowledge of enterprise and service provider network design, IOS, application-aware networking technologies, and WAAS to provide a comprehensive set of best practices for customer success. I strongly recommend that customers who are interested in WAN optimization, and particularly Cisco WAAS, read this volume. It will help you accelerate your understanding of the solution and the benefits you can accrue.

George Kurian

Vice President and General Manager, Application Networking and Switching

Cisco Systems, Inc.


IT organizations are realizing the benefits of infrastructure consolidation and virtualization—cost savings, operational savings, better posture toward disaster recovery—and the challenges associated. Consolidating infrastructure increases the distance between the remote office worker and the tools they need to ensure productivity—applications, servers, content, and more. Application acceleration and WAN optimization solutions such as Cisco Wide Area Application Services (WAAS) bridge the divide between consolidation and performance to enable a high-performance consolidated infrastructure.

This book is the second edition of Deploying Cisco Wide Area Application Services, and updates the content to reflect the innovations that have been introduced in version 4.1.3 of the Cisco Wide Area Application Services (WAAS) solution, whereas the first edition was written to version 4.0.13. Along with coverage of the key components of the Cisco WAAS solution, this edition expands on the concepts introduced in the first edition to provide a more complete understanding of the solution’s capabilities, how to use them effectively, and how to manage them. This edition expands upon the first edition to include coverage of new solution components, including application-specific acceleration techniques, hardware form factors, virtualization, application performance management (APM), monitoring and reporting enhancements, and workflow enhancements. Additional technical reference material is provided in the appendices to help familiarize users of version 4.0 with changes that have occurred in the command-line interface (CLI) with the introduction of the 4.1 release. A quickstart guide is provided to help users quickly deploy in a lab or production pilot environment in order to quantify the benefits of the solution. A troubleshooting guide can also be found at the end, which helps associate difficulties encountered with potential steps for problem resolution.

Goals and Methods

The goal of this book is to familiarize you with the concepts and fundamentals of sizing and deploying Cisco WAAS in your environment. The book provides a technical introduction to the product, followed by deployment sizing guidelines, through integration techniques, and configuration of major components and subsystems. The intent of the book is to provide you with the knowledge that you need to ensure a successful deployment of Cisco WAAS in your environment, including configuration tips, pointers, and notes that will guide you through the process.

Who Should Read This Book?

This book is written for anyone who is responsible for the design and deployment of Cisco WAAS in their network environment. The text assumes the reader has a basic knowledge of data networking, specifically TCP/IP and basic routing and switching technologies.

As the WAAS technology continues to evolve, the content in this book will provide a solid framework to build on. Mastering the topics in this book will ensure that you can approach any WAAS design project with confidence.


How This Book Is Organized

Although this book could be read cover to cover, it is designed to be flexible and allow you to easily move between chapters and sections of chapters to cover just the material that you need to work with. Although each of the chapters builds upon the foundation laid by previous chapters, enough background information is provided in each chapter to allow it to be a standalone reference work in and of itself. Chapter 1 provides a technical examination of the Cisco WAAS product and its core capabilities, along with use cases and the “why you care” about each of the solution components. Chapters 2 through 10 are the core chapters and, although they can be covered in any order, it is recommended that they be covered sequentially for continuity. Chapter 11 provides a series of use cases for the Cisco WAAS product family, which can also provide insight into how other customers use this technology to meet their business infrastructure requirements. Appendices are provided to help augment and also summarize what is discussed in the core chapters. Following is a description of each chapter:

■ Chapter 1, “Introduction to Cisco Wide Area Application Services (WAAS):” This chapter provides a technical examination and overview of Cisco WAAS and its core components.

■ Chapter 2, “Cisco WAAS Architecture, Hardware, and Sizing:” This chapter discusses the Cisco WAAS appliance and router-integrated network module hardware family, positioning of each of the platforms, and system specifications that impact the design of a solution relative to the performance and scalability of each component.

■ Chapter 3, “Planning, Discovery, and Analysis:” Planning is a critical part of any successful WAAS deployment. Spending ample time at the beginning of the project to understand the requirements, including those imposed by the existing network environment, is critical for a successful deployment. Chapter 3 gives you a head start by outlining the key topic areas that should be taken into consideration as you are planning your WAAS deployment.

■ Chapter 4, “Network Integration and Interception:” This chapter provides an in-depth review of the network integration and interception capabilities of Cisco WAAS. The topics discussed in Chapter 4 form the foundation for the design discussions in subsequent chapters.

■ Chapter 5, “Branch Office Network Integration:” This chapter provides a detailed discussion of the different design options for deploying Cisco WAAS in the branch office environment. Several design options are discussed, including detailed configuration examples.

■ Chapter 6, “Data Center Network Integration:” This chapter examines the key design considerations for deploying WAAS in the data center. Sample design models and configuration examples are provided throughout the chapter. Best practices recommendations for scaling to support hundreds or thousands of remote sites are also included.


■ Chapter 7, “System and Device Management:” This chapter walks you through the initial deployment of the Central Manager and each of the accelerator WAAS devices, including the setup script, registration, federated management, and use of management techniques such as device groups. This chapter also provides a detailed understanding of integration with centralized authentication and authorization, alarm management, an introduction to the monitoring and reporting facilities of the CM, CM database maintenance (including backup and recovery), and the XML-API.

Chapter 8, “Configuring WAN Optimization:” This chapter guides you through the

WAN optimization framework provided by Cisco WAAS, including each of the mization techniques and the Application Traffic Policy manager This chapter alsoexamines the configuration of optimization policies, verification that policies areapplied correctly, and an examination of statistics and reports

■ Chapter 9, “Configuring Application Acceleration:” This chapter focuses on the application acceleration components of Cisco WAAS, including configuration, verification, and how the components interact. This chapter also looks closely at how these components leverage the underlying WAN optimization framework, how they are managed, and an examination of statistics and reports.

■ Chapter 10, “Branch Office Virtualization:” This chapter examines the virtualization capabilities provided by certain Cisco WAAS appliance devices, including configuration, management, and monitoring.

■ Chapter 11, “Case Studies:” This chapter brings together various topics discussed in the previous chapters through several case studies. The case studies presented focus on real-world deployment examples, a discussion of the key design considerations, options, and final device-level configurations.

Appendix A, “WAAS Quickstart Guide:” Appendix A provides a quickstart guide to help you quickly deploy WAAS in a proof-of-concept lab or production pilot.

Appendix B, “Troubleshooting Guide:” Appendix B provides a troubleshooting guide, which helps you isolate and correct commonly encountered issues.

Appendix C, “4.0/4.1 CLI Mapping:” Appendix C provides a CLI mapping quick reference to help identify CLI commands that have changed between the 4.0 and 4.1 versions.


Introduction to Cisco Wide Area Application Services (WAAS)

IT organizations struggle with two opposing challenges: to provide high levels of application performance for an increasingly distributed workforce and to consolidate costly infrastructure to streamline management, improve data protection, and contain costs. Separating the growing remote workforce from the location where IT desires to deploy infrastructure is the wide-area network (WAN), which introduces significant delay, packet loss, congestion, and bandwidth limitations, impeding users' ability to interact with applications and the data they need in a high-performance manner conducive to productivity. These opposing challenges place IT organizations in a difficult position, as they must make tradeoffs between performance and cost, as shown in Figure 1-1.

[Figure: resources distributed across remote offices, regional offices, home offices, and primary/secondary data centers. Distributing resources yields higher cost but better performance for remote office users; data center consolidation yields lower cost but worse performance for remote office users.]

Figure 1-1 Tradeoffs Between Performance and Cost

Cisco Wide Area Application Services (WAAS) is a solution designed to bridge the divide between application performance and infrastructure consolidation in WAN environments. Leveraging appliances, router modules, or software deployed at both ends of a WAN

connection and employing robust optimizations at multiple layers, Cisco WAAS is able

to ensure high-performance access for remote workers who access distant application


[Figure: Cisco WAAS devices deployed in the data center and remote office, with mobile users served by a Cisco WAAS Mobile server.]

Figure 1-2 Cisco WAAS Solution Architecture

infrastructure and information, including file services, e-mail, the Web, intranet and portal applications, and data protection. By mitigating the performance-limiting factors of the WAN, Cisco WAAS not only improves performance, but also positions IT organizations to better consolidate distributed infrastructure to better control costs and ensure a stronger position toward data protection and compliance. Coupled with providing performance-improving techniques to enable consolidation of branch office infrastructure into the data center, Cisco WAAS provides an extensive platform for branch office virtualization, enabling IT organizations to deploy or retain applications and services in the branch office in a more cost-effective manner.

Figure 1-2 shows the deployment architecture for the Cisco WAAS solution.

The purpose of this book is to discuss the Cisco WAAS solution in depth, including a thorough examination of how to design and deploy Cisco WAAS in today's challenging enterprise networks. This chapter provides an introduction to the performance barriers that are created by the WAN and a technical introduction to Cisco WAAS and its capabilities. This chapter also examines the software architecture of Cisco WAAS and outlines how each of the fundamental optimization components overcomes those application performance barriers. Additionally, this chapter examines the virtualization capabilities provided by Cisco WAAS to enable branch infrastructure consolidation while allowing applications that must be deployed in the branch office to remain deployed in the branch office.

The chapter ends with a discussion of how Cisco WAAS fits into a network-based architecture of optimization technologies and how these technologies can be deployed in conjunction with Cisco WAAS to provide a holistic solution for improving application performance over the WAN. This book was written according to version 4.1.3 of the Cisco WAAS solution, whereas the first edition was written according to version 4.0.13.

Although this book provides thorough coverage of Cisco WAAS, it does not provide


thorough coverage of Cisco WAAS Mobile, which is the software client deployed on laptops and desktops that provides similar functionality. However, many of the principles described in this book, as they relate to performance challenges and overcoming them, apply similarly to Cisco WAAS Mobile.

Understanding Application Performance Barriers

Before examining how Cisco WAAS overcomes performance challenges created by network conditions found in the WAN, it is important to have an understanding of what conditions are found in the WAN and how they impact application performance. Applications today are becoming increasingly robust and complex compared to applications of ten years ago, making them more sensitive to network conditions, and it is certain that this trend will continue. The first performance-limiting factors to examine are those that are present in the application stack on the endpoints (sender and receiver). The second set of performance-limiting factors, which are examined later in this section, are those the network causes. Figure 1-3 shows a high-level overview of these challenges, each of which is discussed in this section.

[Figure: the WAN between clients and servers introduces round-trip times (RTT) of many milliseconds, limited bandwidth, congestion, packet loss, and network oversubscription.]


Layer 4 Through Layer 7

Server application instances primarily interact with user application instances at the application layer of the Open Systems Interconnection (OSI) model. At this layer, application layer control and data messages are exchanged to perform functions based on the business process or transaction being performed. For instance, a user might ‘GET’ an object stored on a web server using HTTP, or perform write operations against a file stored on a file server in the data center. Interaction at this layer is complex because the number of operations that can be performed over a proprietary protocol, or even a standards-based protocol, can literally be in the hundreds or thousands. This is generally a direct result of the complexity of the application itself and is commonly caused by the need for end-to-end state management between the client and the server to ensure that operations complete successfully, or can be undone if the transaction or any of its steps happens to fail. This leads to a high degree of overhead in the form of chatter, which, as you see later, can significantly impact performance in environments with high latency. As the chatter increases, the efficiency of the protocol decreases, because the amount of data and time spent on the network devoted to nonproductive exchanges increases. Consider the following examples:

■ A user accessing his portal homepage on the company intranet might require the download of an applet to the local machine, which, after it is downloaded, uses some form of middleware or web services to exchange control messages to populate the dashboard with individual objects, which are each generally fetched sequentially with metadata about each object exchanged and examined beforehand.

■ A user processing transactions on an online order processing system might cause several requests against the server application to allow the browser to appropriately render all of the elements, including images and text, contained in the construction of the page.

■ A user interactively working with a file on a file server causes numerous control and data requests to be exchanged with the server to manage the authenticity and authorization of the user, file metadata, and the file itself. Further, after the server has determined the user is able to access the file in a certain capacity, the interactive operations against the file are typically performed using small block sizes and have a tendency to jump around the file erratically.

Between the application layers on a given pair of nodes exists a hierarchical structure of layers between the server application instance and user application instance, which also adds complexity and performance constraints above and beyond the overhead produced by the application layer chatter described previously. For instance, data that is to be transmitted between application instances might pass through a shared (and prenegotiated) presentation layer. This layer might be present depending on the application, because many applications have built-in semantics around data representation that enable the application to not require a distinct presentation layer. In such cases, the presentation layer is handled in the application layer directly. When a discrete presentation layer exists for an application, it becomes responsible for ensuring that the data conforms to a specific


structure, such as ASCII, Extended Binary Coded Decimal Interchange Code (EBCDIC), or Extensible Markup Language (XML). If such a layer exists, data might need to be rendered prior to being handed off to the transport layer for delivery over an established session or over the network directly, or prior to being delivered to the application layer for processing. The presentation layer would also take responsibility for ensuring that application messages conform to the appropriate format. If the application messages do not conform to the appropriate format, the presentation layer would be responsible for notifying the peer that the message structure was incorrect.

From the presentation layer, the data might be delivered to a session layer, which is responsible for establishing an overlay session between two endpoints. Session layer protocols are commonly found in applications that are considered stateful; that is, transactions are performed in a nonatomic manner and in a particular sequence or order. This means that a sequence of exchanges is necessary to complete an operation, and a failure of any sequence causes the entire transaction to fail. In such scenarios, all exchanges up to the failed exchange for the same operation must be performed again. Session layer protocols commonly provide operation-layer error correction on behalf of the application; that is, should a part of an operation fail, the session layer can manage the next attempt on behalf of the application layer to offload it transparently so that the user is not impacted. This is in stark contrast with stateless applications, where each transaction or piece of a transaction is atomic and recovered directly by the application. In other words, all the details necessary to complete an operation or a portion of an operation are fully contained in a single exchange. If an exchange fails in a stateless application, it can simply be attempted again by the application without the burden of having to attempt an entire sequence of operations.

Session layer protocols provide applications with the capability to manage checkpoints and recovery of upper-layer protocol (ULP) message exchanges, which occur at a transactional or procedural layer as compared to the transport of raw segments (which are chunks of data transmitted by a transport protocol such as the Transmission Control Protocol [TCP], which is discussed later). Similar to the presentation layer, many applications might have built-in semantics around session management and might not use a discrete session layer. However, some applications, commonly those that use some form of remote procedure calls (RPC), do require a discrete session layer. When present, the session layer manages the exchange of data through the underlying transport protocol based on the state of the checkpoints and of the current session between the two communicating nodes. When the session layer is not present, applications have direct access to the underlying connection that exists between sender and receiver and thus must own the burden of session and state management.
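The transparent retry behavior described above can be sketched in a few lines. The following is my own illustration (not code from the book or from Cisco WAAS); the `ConnectionError` stands in for any transient, recoverable failure:

```python
# Illustrative sketch, not from the book: a session-layer-style wrapper that
# transparently retries an operation after a transient failure, so the
# application layer (the caller) never sees the intermediate errors.

def with_recovery(operation, attempts=3):
    """Invoke operation(); on a transient failure, retry up to 'attempts' times."""
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except ConnectionError as err:  # treat as transient and recoverable
            last_error = err
    raise last_error  # all attempts exhausted; surface the failure
```

Contrast this with a stateless design, where the application itself would simply reissue the single atomic exchange rather than delegating recovery to a lower layer.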

Whether or not the data to be exchanged between a user application instance and server application instance requires the use of a presentation layer or session layer, data to be transmitted across an internetwork between two endpoints is generally handled by a transport protocol.

The transport protocol is primarily responsible for data delivery and data multiplexing. It provides facilities that transmit data from a local socket (that is, an endpoint on the transmitter, generally referenced by an IP address, port number, and protocol) to a socket on a


remote node over an internetwork. This is commonly called end-to-end delivery, as data is taken from a socket (generally handled as a file descriptor in the application) on the transmitting node and marshaled across the network to a socket on the receiving node. Commonly used transport layer protocols include TCP, User Datagram Protocol (UDP), and Stream Control Transmission Protocol (SCTP). Along with data delivery and multiplexing, the transport protocol is commonly responsible for providing guaranteed delivery and adaptation to changing network conditions, such as bandwidth changes or congestion. Some transport protocols, such as UDP, do not provide such capabilities. Applications that leverage UDP either implement their own means of guaranteed delivery or congestion control, or these capabilities simply are not required for the application. For transport protocols that do provide guaranteed delivery, additional capabilities are almost always implemented to provide for loss recovery (retransmission when data is lost due to congestion or other reasons), sharing of the available network capacity with other communicating nodes (fairness through congestion avoidance), and opportunistically searching for additional bandwidth to improve throughput (also part of congestion avoidance).

The components mentioned previously, including transport, session, presentation, and application layers, represent a grouping of services that dictate how application data is exchanged between disparate nodes end-to-end. These components are commonly called Layer 4 through Layer 7 services, or L4–7 services, or application networking services (ANS). L4–7 services rely on the foundational packet services provided by lower layers for routing and delivery to the endpoint, which includes the network layer, data link layer, and physical layer. With the exception of network latency caused by distance and the speed of light, L4–7 services generally add the largest amount of operational latency that impacts the performance of an application. This is due to the tremendous amount of processing and overhead that must take place to accomplish the following:

■ Move data into and out of local and remote socket buffers (transport layer)

■ Maintain long-lived sessions through tedious exchange of state messages and transaction management between nodes (session layer)

■ Ensure that message data conforms to representation requirements as data moves into and out of the application itself (presentation layer)

■ Exchange application control and data messages based on the task being performed (application layer)

Figure 1-4 shows the logical flow of application data through the various layers of the OSI model as information is exchanged between two communicating nodes.

The performance challenges caused by L4–7 can generally be classified into the following categories:

■ Latency

■ Bandwidth inefficiencies

■ Throughput limitations


[Figure: two nodes exchanging data layer by layer: CIFS Redirector <> CIFS Server (application), ASCII (presentation), SMB Session UID 19932 (session), Socket 10.12.10.6:3825 <> 10.12.4.2:445 (transport), across an IP network.]

Figure 1-4 Layer 4–7 Performance Challenges

Many applications do not exhibit performance problems due to these conditions because they were designed for and operated in LAN environments; however, when applications are operated in a WAN environment, virtually any application is negatively impacted from a performance perspective, as most were not designed with the WAN in mind. These performance-limiting factors (latency, bandwidth inefficiencies, and throughput limitations) are examined in the following three sections.

Latency

L4–7 latency is a culmination of the processing delays introduced by each of the four upper layers involved in managing the exchange of application data from node to node: application, presentation, session, and transport. It should be noted that, although significant, the latency added in a single message exchange by L4–7 processing in the node itself is typically minimal compared to latency found in the WAN itself. However, the chatter found in the applications and protocols might demand that information be exchanged multiple times over that network. This means that the latency impact is multiplied and leads to a downward spiral in application performance and responsiveness.

Application layer (Layer 7) latency is defined as the processing delay of an application protocol that is generally exhibited when applications have a send-and-wait type of behavior, that is, a high degree of chatter, where messages must execute in sequence and are not parallelized.

Presentation layer (Layer 6) latency is defined as the amount of latency incurred by ensuring data conforms to the appropriate representation and managing data that is not correctly conformed or cannot be correctly conformed.


Session layer (Layer 5) latency is defined as the delay caused by the exchange or management of state-related messages between communicating endpoints. For applications and protocols where a session layer protocol is used, such messages may be required before any usable application data is transmitted, even in between exchanges of usable application data.

Transport layer (Layer 4) latency is defined as the delay in moving data from socket buffers (the memory allocated to a socket, for either data to transmit or received data) in one node to the other. This can be caused by delays in receiving message acknowledgements, lost segments and the retransmissions that follow, and inadequately sized buffers that lead to the inability of a sender to send or a receiver to receive.

One of many examples that highlight pieces of the aforementioned latency elements can be observed when a user accesses a file on a file server using the Common Internet File System (CIFS) protocol, which is predominant in environments with Microsoft Windows clients and Microsoft Windows servers, or network-attached storage (NAS) devices that are being accessed by Microsoft Windows clients. In such a case, the client and server must exchange a series of small administrative messages prior to any file data being sent to a user, and these messages continue periodically as the user works with the file being accessed to manage state. When productive messages (those containing actual data) are sent, small message block sizes are used, thereby limiting throughput. Every message that is exchanged utilizes a discrete session that has been established between the client and server, which in turn uses TCP as a transport protocol. In essence, every upper-layer message exchange is bounded in throughput by session management, small message sizes, inefficient protocol design, packet loss and retransmissions, and delays in receiving acknowledgements.

For instance, the client must first establish a TCP connection to the server, which involves a three-way handshake between the client and server. After the TCP connection has been established, the client must then establish an end-to-end session with the server, which involves the session layer (which also dictates the dialect of the protocol used between the two). The session layer establishes a virtual channel between the workstation and server, performing validation of user authenticity against an authority, such as a domain controller. With the session established, the client then fetches a list of available shared resources and attempts to connect to that resource, which requires that the client's authorization to access that resource be examined against security policies, such as access control entries based on the user's identity or group membership. After the user is authenticated and authorized, a series of messages are exchanged to examine and traverse the directory structure of this resource while gathering the necessary metadata of each item in the directory to display the contents to the user. After the user identifies a file of interest and chooses to open that file, a series of lock requests must be sent against various portions of the file (based on file type) in an attempt to gain access to the file. After access to the file has been granted, file input/output (I/O) requests (such as read, write, or seek) are exchanged between the user and the server to allow the user to interactively work with the file.
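The sequence above can be modeled as a series of blocking round trips. The sketch below is my own illustration; the step names and round-trip counts are rough assumptions for demonstration, not measurements from the book.

```python
# Rough model (assumed round-trip counts, for illustration only): WAN time
# spent before a CIFS client reads its first byte of file data, when each
# step must complete a round trip before the next can begin.

STEPS = [
    ("TCP three-way handshake", 1),
    ("Session setup and authentication", 2),
    ("Tree connect to the shared resource", 1),
    ("Directory traversal and metadata", 4),
    ("File open and lock requests", 2),
]

def time_to_first_byte_s(rtt_s, steps=STEPS):
    """Total delay: every round trip crosses the WAN serially."""
    return sum(count for _, count in steps) * rtt_s

# On a 200-ms RTT WAN, these 10 assumed round trips alone cost 2 seconds.
print(f"{time_to_first_byte_s(0.200):.1f} s before any file data flows")
```

On a LAN with sub-millisecond RTT the same exchanges are imperceptible, which is why this chatter goes unnoticed until the protocol is run across a WAN.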

Each of the messages described here requires that a small amount of data be exchanged over the network and that each be acknowledged by the recipient, causing operational


latency that might go unnoticed in a local-area network (LAN) environment. However, the operational latency described previously can cause a significant performance barrier in environments where the application operates over a WAN where a high amount of latency is present, as each exchange occurs over the high-latency WAN, thereby creating a multiplicative latency effect.

Figure 1-5 shows an example of how application layer latency alone in a WAN environment can significantly impede the response time and overall performance perceived by a user. In this example, the one-way latency is 100 ms, leading to a situation where only 3 KB of data is exchanged in 600 ms of time, or 5 KB of data in 1 s of time (representing a maximum throughput of 40 kbps). This example assumes that the user has already established a TCP connection, established a session, authenticated, authorized, and successfully opened the file. It also assumes there is no packet loss or other form of congestion encountered, and there are no other performance-limiting situations present.
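The arithmetic behind the 40-kbps figure can be expressed directly. This sketch is my own illustration of the calculation, assuming one 1-KB request in flight per round trip:

```python
# Illustrative calculation (mine, not the book's): throughput ceiling of a
# send-and-wait exchange, where each block must be acknowledged before the
# next is requested.

def send_and_wait_throughput_bps(one_way_latency_s, block_size_bytes):
    """One block per round trip; ignores serialization, loss, and processing."""
    rtt = 2 * one_way_latency_s           # request travels out, data travels back
    return (block_size_bytes * 8) / rtt   # bits per second

# Figure 1-5 values: 100-ms one-way latency, 1-KB blocks (taking 1 KB = 1000 bytes).
print(f"{send_and_wait_throughput_bps(0.100, 1000) / 1000:.0f} kbps")  # 40 kbps
```

Note that the link's raw bandwidth never appears in the formula: for a chatty, serialized protocol, the round-trip time alone sets the ceiling.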

Note that although the presentation, session, and transport layers do indeed add latency, it is commonly negligible in comparison to latency caused by the application layer requiring that multiple message exchanges occur before any productive data is transmitted. It should also be noted that the transport layer performance is subject to the amount of perceived latency in the network due to the following factors, all of which can impact the capability of a node to transmit or receive at high rates of throughput:

■ Delays in receiving acknowledgements

■ Retransmission delays that are a result of packet loss

■ Undersized buffers

■ Server oversubscription or overloading
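The "undersized buffers" point can be quantified with the familiar window-size-over-RTT bound. This is my own illustrative sketch, not from the book:

```python
# Illustrative sketch (mine): with an undersized buffer, TCP throughput is
# capped at roughly window / RTT, no matter how fast the underlying link is.

def window_limited_throughput_bps(window_bytes, rtt_s):
    """At most one full window of data can be in flight per round trip."""
    return (window_bytes * 8) / rtt_s

# A classic 64-KB receive window on a 100-ms RTT path:
bps = window_limited_throughput_bps(65535, 0.100)
print(f"{bps / 1e6:.2f} Mbps")  # ~5.24 Mbps, even across a gigabit link
```

This is why acknowledgement delays and buffer sizing appear alongside packet loss in the list above: each one shrinks the effective amount of data in flight per round trip.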

Performance limitations encountered at a lower layer impact the performance of the

upper layers; for instance, a performance limitation that impacts TCP directly impacts the

[Figure 1-5: the client issues sequential GET requests for 1 KB at offsets 132 KB, 133 KB, and 134 KB, each answered by a DATA response, with a full WAN round trip between each exchange.]


performance of any application operating at Layers 5–7 that uses TCP. The section "Network Infrastructure" in this chapter examines the impact of network latency on application performance, including the transport layer.

Bandwidth Inefficiencies

The lack of available network bandwidth (discussed in the "Network Infrastructure" section in this chapter) coupled with application layer inefficiencies creates an application-performance barrier. Although network bandwidth is generally not a limiting factor in a LAN environment, this is unfortunately not the case in the WAN. Bandwidth inefficiencies create performance barriers when an application is inefficient in the way information is exchanged between two communicating nodes and bandwidth is constrained. For instance, assume that ten users are in a remote office that is connected to the corporate campus network by way of a T1 line (1.544 Mbps). If these users use an e-mail server (such as Microsoft Exchange Server) in the corporate campus or data center network, and an e-mail message with a 1-MB attachment is sent to each of these users, the e-mail message is transferred over the WAN once for each user when they synchronize their Inbox, or ten times total. In this example, a simple 1-MB attachment causes 10 MB or more of WAN traffic. Such situations can massively congest enterprise WANs, and similar situations can be found frequently, including the following examples (to cite just a few):

■ Redundant e-mail attachments being downloaded over the WAN from email servers multiple times by multiple users over a period of time

■ An email with an attachment being sent by one user in a branch office to one or more users in the same branch office when the email server is in a distant part of the network

■ Multiple copies of the same file stored on distant file servers being accessed over the WAN by multiple users over a period of time from the same branch office

■ A user in a branch office accessing a file on a file server, and then emailing a copy of that same file to people throughout the organization

■ Multiple copies of the same web object stored on distant intranet portals or application servers being accessed over the WAN by multiple users over a period of time from the same branch office
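Using the T1 example above, the cost of redundant transfers is easy to quantify. This calculation is my own illustration of the text's arithmetic:

```python
# Illustrative calculation (mine, not the book's): WAN bytes and best-case
# transfer time when every user in a branch pulls the same 1-MB attachment.

T1_BPS = 1_544_000  # T1 line rate in bits per second

def redundant_transfer(users, object_bytes, link_bps=T1_BPS):
    """Return (total bytes crossing the WAN, seconds at full line rate)."""
    total_bytes = users * object_bytes        # one full copy per user
    seconds = (total_bytes * 8) / link_bps    # assumes the link is fully dedicated
    return total_bytes, seconds

total, secs = redundant_transfer(users=10, object_bytes=1_000_000)
print(f"{total / 1e6:.0f} MB over the WAN; at least {secs:.0f} s at T1 rate")
```

In practice the link is shared with all other branch traffic, so the real transfer time, and the congestion inflicted on other applications, is considerably worse than this best case.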

Additionally, the data contained in objects being accessed across the gamut of applications used by remote office users likely contains a significant amount of redundancy when compared to other objects accessed using other applications. For instance, one user might send an e-mail attachment to another user over the corporate WAN, whereas another user accesses that same file (or a different version of that file) using a file server protocol over the WAN, such as CIFS. Aside from the obvious security (firewall or intrusion detection systems and intrusion prevention systems [IDS/IPS]) and resource provisioning technologies (such as quality of service [QoS] and performance routing), the packet network itself


has historically operated in a manner independent of the applications that rely on the network. This means that characteristics of the data being transferred were generally not considered, examined, or leveraged by the network while routing information throughout the corporate network from node to node.

Some applications and protocols have since added semantics that help to minimize the bandwidth inefficiencies of applications operating in WAN environments. For instance, the web browsers of today have built-in client-side object caching capabilities. Objects from Internet sites and intranet applications that are transferred over the WAN commonly have metadata associated with them (found in message headers) that provides information to the client web browser, enabling it to make a determination on whether the object in question can be safely cached for later reuse should that object be requested again.
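As a simplified illustration of the header-driven decision described above (my own sketch, loosely following standard HTTP caching semantics rather than any WAAS or browser code):

```python
# Simplified sketch of a browser-style cacheability check.  Real HTTP caches
# (see RFC 9111) handle many more directives; this only captures the idea
# that response metadata tells the client whether an object may be reused.

def is_cacheable(headers):
    """Return True if response headers permit storing and reusing the object."""
    cc = headers.get("Cache-Control", "").lower()
    if "no-store" in cc:
        return False            # the origin forbids caching outright
    # Explicit freshness information allows reuse without another WAN fetch.
    return "max-age=" in cc or "Expires" in headers

print(is_cacheable({"Cache-Control": "max-age=3600"}))  # True
print(is_cacheable({"Cache-Control": "no-store"}))      # False
```

Each cacheable hit avoids one full object transfer over the WAN, which is exactly the redundancy the previous examples describe.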

By employing a client-side cache in such applications, the repeated transmission of objects can be mitigated when the same user requests the same object using the same application and the object has not changed, which helps minimize the amount of bandwidth consumed by the application and the latency perceived by the user. Instead, this object can be fetched from the local cache in the user's browser and used for the operation in question, thereby eliminating the need to transfer that object over the network. Although this improves performance for that particular user, this information goes completely unused when a different user attempts to access that same object from the same server and same web page, as the application cache is wholly contained on each individual client and not shared across multiple users. That is, a cached object on one user's browser is not able to be used by a browser on another user's computer.

Application-level caching is isolated not only to the user that cached the object, but also to the application in that user's workstation that handled the object. This means that although the user's browser has a particular object cached, a different application has no means of leveraging that cached object, and a user on another workstation accessing the same object on the same server has no means of leveraging that cached object. Similarly, one web browser on a workstation has no way to take advantage of cached objects that were fetched using a different web browser, even if done on the same machine. The lack of information awareness in the network, coupled with inefficient and otherwise unintelligent transfer of data, can lead to performance limitations for virtually the entire WAN.

Throughput Limitations

Like bandwidth inefficiencies, throughput limitations can significantly hinder performance. A throughput limitation refers to the inability of an application to take advantage of the network that is available to it and is commonly a direct result of latency and bandwidth inefficiencies. That is, as application latency in a send-and-wait application increases, the amount of time that is spent waiting for an acknowledgement or a response from the peer directly translates into time where the application is unable to do any further useful work. Although many applications allow certain operations to be handled in a parallel or asynchronous manner, that is, not blocked by a send-and-wait message exchange, many operations that are critical to data integrity, security, and coherency must be handled in a serial manner. In such cases, these operations are not parallelized,


and before subsequent messages can be handled, these critical operations must be completed in a satisfactory manner.

Similarly, bandwidth inefficiency can be directly correlated to throughput limitations associated with a given application. As the amount of data exchanged increases, the probability of encountering congestion also increases, not only in the network, but also in the presentation, session, and transport layer buffers. With congestion comes packet loss caused by buffer exhaustion due to lack of memory to store the data, which leads to retransmission of data between nodes (if encountered in the network) or repeated delivery of application data to lower layers (if encountered at or above the transport layer).

Although the previous three sections focused primarily on latency, bandwidth inefficiencies, and throughput limitations as application layer performance challenges, the items discussed in the next section, "Network Infrastructure," can also have a substantial impact on application layer performance. The next section focuses primarily on the network infrastructure aspects that impact end-to-end performance and discusses how these negatively impact performance.

Network Infrastructure

The network that exists between two communicating nodes can also create a tremendous number of application-performance barriers. In many cases, the challenges found in L4–7 are exacerbated by the challenges that manifest in the network infrastructure. For instance, the impact of application layer latency is multiplied when network infrastructure latency increases. The impact of application layer bandwidth inefficiencies is compounded when the amount of available bandwidth in the network is not sufficient. Packet loss in the network has an adverse effect on application performance as transport protocols or the applications themselves react to loss events to normalize connection throughput around the available network capacity and retransmit data that was supposed to be delivered to the node on the other end of the network. Such events cause backpressure all the way up the application stack on both sender and recipient and have the capability in some cases to bring performance nearly to a halt. Serialization and queuing delays in intermediary networking devices, while typically negligible in comparison to other factors, can also introduce latency between communicating nodes.

This section focuses specifically on the issues that are present in the network infrastructure that negatively impact application performance and examines how these issues can worsen the performance limitations caused by the L4–7 challenges discussed previously. These issues include bandwidth constraints, network latency, and packet loss (commonly caused by network congestion).


In most cases, the bandwidth capacity on the LAN is not a limitation from an application-performance perspective, but in certain cases, application performance can be directly impacted by LAN bandwidth. WAN bandwidth, on the other hand, is not increasing as rapidly as LAN bandwidth, and the price of bandwidth in the WAN is significantly higher than the price of bandwidth in the LAN. This is largely because WAN bandwidth is commonly provided as a service from a carrier or service provider, and the connections must traverse a "cloud" of network locations to connect two geographically distant networks. As these connections commonly connect networks over long distances, the cost to deploy the infrastructure is much higher, and that cost is transferred directly to the company taking advantage of the service. Furthermore, virtually every carrier has deployed its network infrastructure in such a way that it can provide service for multiple customers concurrently to minimize costs. Most carriers have done a substantial amount of research into what levels of oversubscription in the core of their network are tolerable to their customers, with the primary exception being dedicated circuits provided by the provider to the subscriber where the bandwidth is guaranteed.

Alternatively, organizations can deploy their own infrastructure—at a significant cost. Needless to say, the cost to deploy connectivity in a geographically distributed manner is much more than the cost to deploy connectivity in a relatively well-contained geography, and the price relative to bandwidth is much higher as the distance being covered increases.

The most common WAN circuits found today are an order of magnitude smaller in bandwidth capacity than what can be deployed in today's enterprise LAN environments. The most common WAN link found in today's remote office and branch office environment is the T1 (1.544 Mbps), which is roughly 1/64 the capacity of a Fast Ethernet connection and roughly 1/664 the capacity of a Gigabit Ethernet connection, which is commonplace in today's network environments. Digital Subscriber Line (DSL), Asymmetric Digital Subscriber Line (ADSL), and Ethernet to the branch are also quickly gaining popularity, offering much higher levels of bandwidth than the traditional T1 and, in many cases, at a lower price point.

When examining application performance in WAN environments, it is important to note the bandwidth disparity that exists between LAN and WAN environments, as the WAN is what connects the many geographically distributed locations. Such a bandwidth disparity makes environments where nodes are on disparate LANs and separated by a WAN susceptible to a tremendous amount of oversubscription. In these cases, the amount of bandwidth that can be used for service is tremendously smaller than the amount of bandwidth capacity found on either of the LAN segments connecting the devices that are attempting to communicate. This problem is exacerbated by the fact that there are commonly tens, hundreds, or even in some cases thousands of nodes that are trying to compete for this precious and expensive WAN bandwidth. When the amount of traffic on the LAN awaiting service over the WAN increases beyond the capacity of the WAN itself, the link is said to be oversubscribed, and the probability of packet loss increases rapidly.

Figure 1-6 provides an example of the oversubscription found in a simple WAN environment with two locations, each with multiple nodes attached to the LAN via Fast Ethernet (100 Mbps), contending for available bandwidth on a T1 (1.544 Mbps). In this example,

Trang 37

Figure 1-6 Bandwidth Oversubscription in a WAN Environment (the figure depicts two sites connected across an IP network, with oversubscription ratios of 2:1, 4:1, 67:1, and up to 536:1 at various points)

where the location with the server is also connected to the WAN via a T1, the potential for exceeding 500:1 oversubscription is realized, and the probability of encountering a substantial amount of packet loss is high.
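The ratios shown in Figure 1-6 follow from a simple calculation: the aggregate LAN capacity contending for a link divided by that link's capacity. A minimal sketch of the arithmetic (the host counts below are illustrative, not taken from the figure):

```python
def oversubscription_ratio(lan_mbps_per_host: float, hosts: int, wan_mbps: float) -> float:
    """Aggregate LAN capacity contending for the WAN, divided by WAN capacity."""
    return (lan_mbps_per_host * hosts) / wan_mbps

# A single Fast Ethernet (100 Mbps) host behind a T1 (1.544 Mbps):
print(round(oversubscription_ratio(100, 1, 1.544)))  # 65
# Eight such hosts contending for the same T1:
print(round(oversubscription_ratio(100, 8, 1.544)))  # 518
```

Even a handful of LAN-attached hosts pushes the ratio into the hundreds, which is why the worst case in the figure exceeds 500:1.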

When oversubscription is encountered, traffic that is competing for available WAN bandwidth must be queued to the extent allowed by the intermediary network devices, including routers. The queuing and scheduling disciplines applied on those intermediary network devices can be directly influenced by a configured policy for control and bandwidth allocation (such as QoS). In any case, if queues become exhausted (full) on these intermediary network devices (that is, they cannot queue additional packets), packets must be dropped, because there is no memory available in the device to temporarily store the data while it is waiting to be serviced. Loss of packets likely impacts the application's ability to achieve higher levels of throughput and, in the case of a connection-oriented transport protocol, causes the communicating nodes to adjust their rates of transmission to a level that allows them to use only their fair share of the available bandwidth or to be within the capacity limits of the network.

As an example, consider a user transmitting a file to a distant server by way of the File Transfer Protocol (FTP). The user is attached to a Fast Ethernet LAN, as is the server, but a T1 WAN separates the two locations. The maximum achievable throughput between the two for this particular file transfer is limited by the T1, because it is the slowest link in the path of communication. Thus, the application throughput (assuming 100 percent efficiency and no packet loss) would be limited to roughly 1.544 Mbps (megabits per second), or 193 kBps (kilobytes per second). Given that packet loss is imminent, and no transport protocol is 100 percent efficient, it is likely that the user would see approximately 90 percent of line-rate in terms of application throughput, or roughly 1.39 Mbps (174 kBps).
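The numbers in this example follow from simple unit conversion: divide bits per second by 8 for bytes per second, then apply an efficiency factor for protocol overhead and loss. A quick check of the arithmetic:

```python
T1_BPS = 1_544_000                       # T1 line rate, bits per second

ideal_kBps = T1_BPS / 8 / 1000           # 100 percent efficiency
effective_bps = T1_BPS * 0.90            # ~90 percent of line rate, ~1.39 Mbps
effective_kBps = effective_bps / 8 / 1000

print(round(ideal_kBps))                 # 193
print(round(effective_kBps))             # 174
```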


In this example, the user's FTP application continues to send data to TCP, which attempts to transmit on behalf of the application. As the WAN connection becomes full, packets need to be queued on the WAN router until WAN capacity becomes free. As the arrival rate of packets is likely an order of magnitude higher than the service rate (based on the throughput of the WAN), the router queue becomes exhausted quickly and packets are dropped by the router. As TCP on the sender's computer detects that packets have been lost (not acknowledged by the recipient because they were dropped in the network), TCP continually adjusts its transmission rate such that it normalizes around the available network capacity while also managing the retransmission of lost data. This process of adjusting transmission rate according to available bandwidth capacity is called normalization; it is an ongoing process for most transport protocols, including TCP.

Taking the example one step further, if two users performed the same test (FTP transfer over a T1), the router queues (assuming no QoS policy favoring one user over the other) become exhausted even more quickly as both connections attempt to take advantage of available bandwidth. As the router queues become exhausted and packets begin to drop, TCP on either user machine reacts to the detection of the lost packets and adjusts its throughput accordingly. The net result is that both nodes—assuming the same TCP implementation was used and other factors were consistent (same round-trip distance between the senders and recipients, CPU, and memory, to name a few)—detect packet loss at an equivalent rate and adjust throughput in a similar manner. As TCP is considered a transport protocol that provides fairness to other TCP nodes attempting to consume some amount of bandwidth, both nodes would rapidly normalize—or converge—to a point where they were sharing the bandwidth fairly, and connection throughput would oscillate around this point of convergence (roughly 50 percent of 1.39 Mbps, or 695 kbps, which equals 86.8 kBps). This example is simplistic in that it assumes there is no packet loss or latency found in the WAN itself, that both endpoints are identical in terms of their characteristics, and that all packet loss is due to exhaustion of router queues. The impact of transport protocols is examined as part of the discussions on network latency, loss, and congestion in the following sections.

loss, and congestion in the following sections

Network Latency

The example at the end of the previous section did not take into account network latency. It considered only bandwidth constraints due to network oversubscription. Network latency, another performance "problem child," is the amount of time taken for data to traverse a network in between two communicating devices. Network latency is considered the "silent killer" of application performance, as most network administrators have simply tried (and failed) to circumvent application-performance problems by adding bandwidth to the network. Due to the latency found in the network, additional bandwidth might never be used, and performance improvements might not be realized. Put simply, network latency can have a significant effect on the maximum amount of network capacity that can be consumed by two communicating nodes—even if there is a substantial amount of unused bandwidth available.

In a campus LAN, latency is generally under 1 millisecond (ms), meaning the amount of time for data transmitted by a node to be received by the recipient is less than 1 ms. Of


course, this number might increase based on how geographically dispersed the campus LAN is and on what levels of utilization and oversubscription are encountered. As utilization and oversubscription increase, the probability of packets being queued for an extended period of time increases, thereby likely causing an increase in latency.

In a WAN, latency is generally measured in tens or hundreds of milliseconds, much higher than what is found in the LAN. Latency is caused by the fact that it takes some amount of time for light or electrons to transmit from one point and arrive at another, commonly called a propagation delay. This propagation delay can be measured by dividing the distance being traveled by the speed at which light or electrons are able to travel through the medium. For instance, light (transmitted over fiber optic networks) travels at approximately 2 × 10^8 meters per second, or roughly 66 percent of the speed of light traveling through space. The speed at which electrons traverse a conductive medium is much slower. Although this seems extremely fast on the surface, when stretched over a great distance, the latency can be quite noticeable. For instance, in a best-case yet unrealistic scenario involving a fiber optic network spanning 3000 miles (4.8 million meters) with a single fiber run, the distance between New York and San Francisco, it takes roughly 24.1 ms in one direction for light to traverse the network from one end to the other. In a perfect world, you could equate this to approximately the amount of time it takes a packet to traverse the network from one end to the other. This of course assumes that there are no serialization delays, loss, or congestion in the network, all of which can quickly increase the perceived latency. Assuming that the time to transmit a segment of data over that same link in the reverse direction is the same, it takes at least 48.2 ms for a transmitting node to receive an acknowledgment for a segment that was sent, assuming the processing time spent on the recipient to receive the data, process it, and return an acknowledgment was inconsequential. When you factor in delays associated with segmentation, packetization, serialization delay, and framing on the sender side, along with processing and response times on the recipient side, the amount of perceived latency can quickly increase.
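The 24.1-ms figure falls directly out of dividing distance by propagation speed. A quick check of the math, assuming light in fiber propagates at roughly 2 × 10^8 meters per second:

```python
FIBER_SPEED_MPS = 2e8        # ~66 percent of the speed of light in a vacuum
METERS_PER_MILE = 1609.344

distance_m = 3000 * METERS_PER_MILE              # New York to San Francisco
one_way_ms = distance_m / FIBER_SPEED_MPS * 1000

print(round(one_way_ms, 1))                      # 24.1
print(round(2 * one_way_ms))                     # 48 -- best-case round trip
```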

Figure 1-7 shows how latency in its simplest form can impact the performance of a telephone conversation, which is analogous to two nodes communicating over an internetwork. In this example, there is one second of one-way latency, or two seconds of round-trip latency.

The reason network latency has an impact on application performance is twofold. First, network latency introduces delays that impact mechanisms that control the rate of transmission. For instance, connection-oriented, guaranteed-delivery transport protocols, such as TCP, use a sliding-window mechanism to track what transmitted data has been successfully received by a peer and how much additional data can be sent.

As data is received, acknowledgments are generated by the recipient and sent to the sender, which not only notifies the sender that the data is received, but also relieves window capacity on the sender so that the sender can transmit more data if there is data waiting to be transmitted. Transport protocol control messages, such as acknowledgments, are exchanged between nodes on the network, so any latency found in the network also impacts the rate at which these control messages can be exchanged.


Figure 1-7 Challenges of Network Latency

As the length of time increases that a sender has to wait for a recipient's acknowledgment to a segment that was sent, the amount of time taken to relieve the sender's sliding window equally increases.

As the latency increases, the ability of the sender to fully utilize the available bandwidth might decrease, simply because of how long it takes to receive acknowledgments from the recipient. Overall, network latency impacts the rate at which data can be drained from a sender's transmission buffer into the network toward the recipient. This has a cascading effect in that buffers allocated to the transport protocol can become full, which causes backpressure on the upper layers (including the application itself), directly affecting the rate of delivery of application layer data into the transport protocol, which is discussed later in this section.
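This window-induced ceiling can be quantified with a well-known back-of-the-envelope bound: a sender can have at most one window of unacknowledged data in flight per round trip, so throughput cannot exceed window size divided by round-trip time. The window and RTT values below are illustrative:

```python
def max_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Ceiling imposed by a fixed window: one window of data per round trip."""
    return window_bytes * 8 / rtt_seconds / 1e6

# Classic 64-KB TCP window over the ~48-ms coast-to-coast round trip
# described earlier, versus the same window over a 1-ms LAN round trip:
print(round(max_throughput_mbps(65535, 0.048), 1))  # 10.9
print(round(max_throughput_mbps(65535, 0.001), 1))  # 524.3
```

Regardless of how much WAN bandwidth is provisioned, the first sender cannot exceed roughly 11 Mbps without a larger window, which is exactly the unused-bandwidth effect this section describes.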

Latency not only delays the receipt of data and the subsequent receipt of the acknowledgment for that data, but also can be so large that it actually renders a node unable to leverage all the available bandwidth. As described earlier, some amount of time is required to transport light or electrons from one point to another. It can be said that during that period of time (in a perfect world), light or electrons propagate at a consistent rate through the medium from one point to another. Light pulses or electrons are transmitted according to a synchronized or unsynchronized clock between the two endpoints in such a way that many pulses of light or many electrons can be traversing the medium at any point in time, all in the sequence in which they were initially transmitted. As a medium can contain multiple light pulses or multiple electrons (at any one point in time), network links can be said to have some amount of capacity—that is, the quantity of light pulses or electrons propagating through the medium at one point in time is going to be greater than one and can actually be measured to have a tangible amount. When considering that these pulses of light or electrons are merely signals that, when interpreted in groups
