Cisco Networkers 2009 – Session BRKCOM-2986: Unified Data Center Architecture – Integrating Unified Computing System Technology



Page 1

Unified Data Center Architecture:

Integrating Unified Compute System

Technology

BRKCOM-2986

marregoc@cisco.com

Page 3

UCS Manager

Embedded – manages the entire system

UCS 6100 Series Fabric Interconnect

20 Port 10Gb FCoE – UCS-6120

40 Port 10Gb FCoE – UCS-6140

UCS Fabric Extender – UCS 2100 Series

Remote line card

UCS 5100 Series Blade Server Chassis

Flexible bay configurations

UCS B-Series Blade Server

Industry-standard architecture

UCS Virtual Adapters

Choice of multiple adapters

UCS Building Blocks

Page 4

Legend

Catalyst 6500 Multilayer Switch

Nexus Multilayer Switch

Generic Cisco Multilayer Switch

Catalyst 6500 L2 Switch

Nexus L2 Switch

Generic Virtual Switch

Virtual Switching System

ASA ACE Service Module

Nexus Virtual DC Switch (Multilayer)

Nexus Virtual DC Switch (L2)

Virtual Blade Switch

Nexus 1000V (VEM)

Nexus 1000V (VSM)

Nexus 2000 (Fabric Extender)


MDS 9500 Director Switch

MDS Fabric Switch

Embedded VEM with VMs

UCS Blade Chassis

Nexus 5K with VSM

UCS Fabric Interconnect

IP Storage

Page 5

Blade Chassis

Rack Capacity & Server Density

System Capacity & Density

Chassis External Connectivity

Chassis Internal Connectivity

Page 6


Blade Chassis – UCS 5108 Details

Front to back Airflow

Page 7


Blade Chassis – UCS 5108 Details

Additional Chassis Details:

 Size: 10.5” (6U) x 18.5” x 32”

 Total Power Consumption

▪ Nominal Estimates**

Half-width Servers: 1.5 – 3.5 kW

Full-width Servers: 1.5 – 3.5 kW

 Airflow: front to back

UCS 5108 Chassis Characteristics

 8 Blade Slots

 8 half-width servers, or


 4 full-width servers

 Up to two Fabric Extenders

 Both concurrently active

 Redundant and Hot Swappable

Page 8



Blades, Slots and Mezz Cards

 Half-width Blade – B200-M1

▪ One mezz card, two ports per server

▪ Each Port to different FEX

 Full-width Blade – B250-M1

▪ Two mezz cards, four ports per server

▪ Two ports to each FEX: one from each mezz card

Slots and Mezz Cards

 Each Slot Takes a Single Mezz Card

 Each Mezz Card Has 2 Ports

▪ Each port connects to one FEX

 Total of 8 Mezz Cards Per Chassis

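The slot and mezz-card arithmetic above is easy to sanity-check in code. Below is a minimal Python sketch (the names and structure are illustrative assumptions, not Cisco tooling) that verifies a blade mix fits the 8-slot chassis and counts the mezz ports facing each Fabric Extender.

# Minimal sketch: UCS 5108 slots, blades and mezz-card ports as described above.
HALF_WIDTH = {"name": "B200-M1", "slots": 1, "mezz_cards": 1}   # 1 mezz card, 2 ports
FULL_WIDTH = {"name": "B250-M1", "slots": 2, "mezz_cards": 2}   # 2 mezz cards, 4 ports

CHASSIS_SLOTS = 8
PORTS_PER_MEZZ = 2          # one port to FEX A, one to FEX B

def chassis_summary(blade_type, count):
    """Check a blade mix fits the 8-slot chassis and report mezz ports per FEX."""
    slots_used = blade_type["slots"] * count
    assert slots_used <= CHASSIS_SLOTS, "blade mix does not fit the chassis"
    mezz_cards = blade_type["mezz_cards"] * count
    return {
        "blades": count,
        "slots_used": slots_used,
        "mezz_cards": mezz_cards,
        "ports_to_fex_a": mezz_cards,   # each mezz card pins one port to FEX A...
        "ports_to_fex_b": mezz_cards,   # ...and one port to FEX B
    }

print(chassis_summary(HALF_WIDTH, 8))   # 8 half-width blades: 8 mezz cards, 8 ports per FEX
print(chassis_summary(FULL_WIDTH, 4))   # 4 full-width blades: 8 mezz cards, 8 ports per FEX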

Page 9


UCS 2100 Series Fabric Extender Details

UCS 2104 XP Fabric Extender Port Usage

 Two Fabric Extenders per Chassis

 4 x 10GE Uplinks per Fabric Extender

 Fabric Extender Connectivity

 1, 2 or 4 uplinks per Fabric Extender

 All uplinks connect to one Fabric Interconnect

 Port Combination Across Fabric Extenders

▪ FEX Port count must match per enclosure

▪ Any port on the FEX can be used

Fabric Extender Capabilities

 Managed as part of the Unified Computing System

 802.1q Trunking and FCoE Capabilities

 Uplink Traffic Distribution

▪ Selected when blades are inserted

▪ Slot assignment occurs at power up

▪ All slot traffic is assigned to a single uplink

 Other Logic

▪ Monitoring and control of environmentals

▪ Blade insertion/removal events

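To make the uplink-pinning behaviour concrete, here is a small Python sketch. The round-robin order is an illustrative assumption (the real assignment is made by the system when a slot powers up), but it captures the key point: every slot's traffic rides exactly one FEX uplink.

def pin_slots_to_uplinks(active_uplinks, slots=8):
    """Statically pin chassis slots to FEX uplinks at power-up (illustrative order)."""
    if active_uplinks not in (1, 2, 4):
        raise ValueError("the UCS 2104XP supports 1, 2 or 4 uplinks per FEX")
    # Round-robin assignment for illustration; the exact order is fixed by the
    # system at slot power-up, not chosen by this code.
    return {slot: (slot - 1) % active_uplinks + 1 for slot in range(1, slots + 1)}

print(pin_slots_to_uplinks(2))
# {1: 1, 2: 2, 3: 1, 4: 2, 5: 1, 6: 2, 7: 1, 8: 2}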

Page 10

UCS 6100 Series Fabric Interconnect Details

Fabric Interconnect Overview

 Management of Compute System

 Network Connectivity

 to/from Compute Nodes

 to/from LAN/SAN Environments

 Types of Fabric Interconnect

▪ UCS-6120XP – 1U

20 Fixed 10GE/FCoE ports & 1 expansion slot

▪ UCS-6140XP – 2U

40 Fixed 10GE/FCoE ports & 2 expansion slots

Fabric Interconnect Details

 Fixed Ports: FEX or uplink connectivity

 Expansion Slot Ports: uplink connectivity only


Page 11

Rack Capacity and Cabling Density

Power – what is realistic?

 6 Foot Rack - 42U usable: Up to 7 chassis

 7 Foot Rack - 44U usable: Up to 7 chassis
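The realistic chassis count per rack is mostly arithmetic: rack units cap it at 7, but the per-rack power budget usually caps it lower. A minimal Python sketch, using the nominal 1.5–3.5 kW per-chassis estimate quoted earlier and a hypothetical rack power budget as input:

import math

CHASSIS_RU = 6   # UCS 5108 is 6U

def chassis_per_rack(usable_ru, rack_power_kw, chassis_kw=3.5):
    """Chassis per rack bounded by both rack units and the rack power budget."""
    by_space = usable_ru // CHASSIS_RU
    by_power = math.floor(rack_power_kw / chassis_kw)
    return min(by_space, by_power)

print(chassis_per_rack(42, rack_power_kw=24.0))  # space allows 7, 24 kW allows 6 -> 6
print(chassis_per_rack(44, rack_power_kw=8.0))   # power-limited: 2 chassis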

Page 12

UCS Power - Nominal Estimates

UCS Power Specifics

(Figure: enclosure front and rear views showing Fabric Extenders, power supplies, full-width and half-width blades, and mezz cards)

Page 13

Server Density & Uplinks/Bandwidth per Rack

Blades per Rack

Overall Rack Density Depends on Power Per Rack

Power Configuration per Enclosure

Page 14

UCS – Compute Node Density

Unified Compute System Density

 Unified Compute System Density Factors:

▪ Blade Server Type

▪ Chassis Server Density

▪ Fabric Interconnect density

▪ Uplinks from Fabric Extenders

▪ Bandwidth per compute node

▪ Network Oversubscription


Bandwidth vs Oversubscription

 Bandwidth:

 Traffic load a server needs to support

 Specified by server and/or application engineer

 Oversubscription:

 A measure of network capacity

 Designed by network engineer

 Multi-homed servers

▪ More IO ports may not mean more bandwidth

▪ Depends on active-active vs active-standby
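As a concrete illustration of the distinction (a sketch with assumed inputs, not a sizing rule): per-blade bandwidth is the chassis uplink capacity shared across its blades, while oversubscription compares the total server-facing bandwidth against the northbound uplink capacity.

def per_blade_bandwidth_gbps(uplinks_per_fex, blades=8, fex_per_chassis=2, link_gbps=10):
    """Chassis uplink capacity shared across the blades in that chassis."""
    chassis_capacity = uplinks_per_fex * fex_per_chassis * link_gbps
    return chassis_capacity / blades

def oversubscription(server_facing_gbps, northbound_gbps):
    """Ratio of offered server bandwidth to available uplink bandwidth."""
    return server_facing_gbps / northbound_gbps

print(per_blade_bandwidth_gbps(1))   # 2.5 Gbps per half-width blade
print(per_blade_bandwidth_gbps(4))   # 10.0 Gbps per half-width blade
print(oversubscription(800, 240))    # ~3.33 : 1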

Page 15


UCS – Compute Node Density

Total number of blades per Unified Compute System

Chassis Uplink Capacity

 Influenced by blade bandwidth requirements

 Influenced by bandwidth per blade type

 Half-width: 10GE per blade, 4 uplinks per FEX

 Full-width: 20GE per blade, 4 uplinks per FEX

 Influenced by Oversubscription

 East–West Traffic Subscription is 1:1

 North – South Traffic Subscription:

Needs to be Engineered – Interconnect Uplinks


Page 16

UCS Compute Density and Bandwidth Capacity

 # of Servers determined by FI Density and # of uplinks per chassis

 Server Density Ranges: 20 – 320

▪ Half-width: 40 – 320 Full-width: 20 – 160

 Bandwidth Ranges: 2.5 – 20 Gbps

▪ Half-width: 2.5 – 10Gbps Full-width: 5 – 20Gbps

 Max Chassis per UCS: 5 – 40

Uplinks per FEX: 1 x 10GE, 2 x 10GE, or 4 x 10GE

UCS-6120 (20 fixed ports)

 1 x 10GE: 20 chassis, 20G per chassis – 160 half-width blades at 2.5 Gbps per blade

 2 x 10GE: 10 chassis, 40G per chassis – 80 half-width blades at 5 Gbps per blade

 4 x 10GE: 5 chassis, 80G per chassis – 40 half-width blades at 10 Gbps per blade

UCS-6140 (40 fixed ports)

 1 x 10GE: 40 chassis, 20G per chassis – 320 half-width blades at 2.5 Gbps per blade

 2 x 10GE: 20 chassis, 40G per chassis – 160 half-width blades at 5 Gbps per blade

 4 x 10GE: 10 chassis, 80G per chassis – 80 half-width blades at 10 Gbps per blade
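Because the numbers above follow directly from the fixed-port counts, they can be regenerated with a few lines of Python. This is a sketch under the assumptions already stated in the deck: 8 half-width blades per chassis, two fabrics, 10GE links.

FIXED_PORTS = {"UCS-6120": 20, "UCS-6140": 40}

def density(model, uplinks_per_fex, blades_per_chassis=8, link_gbps=10):
    """Chassis count, bandwidth per chassis, blade count and bandwidth per blade."""
    chassis = FIXED_PORTS[model] // uplinks_per_fex        # one port per chassis per uplink, per FI
    bw_per_chassis = uplinks_per_fex * 2 * link_gbps       # two fabrics per chassis
    blades = chassis * blades_per_chassis
    return chassis, bw_per_chassis, blades, bw_per_chassis / blades_per_chassis

for model in FIXED_PORTS:
    for uplinks in (1, 2, 4):
        print(model, uplinks, density(model, uplinks))
# UCS-6120: (20, 20G, 160, 2.5) ... (5, 80G, 40, 10.0)
# UCS-6140: (40, 20G, 320, 2.5) ... (10, 80G, 80, 10.0)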

Page 17

CX1 Cabling – 5 meters

 5 meters ~ 16 feet

 Using 2 Uplinks per Fabric Extender

 3 chassis per rack

4 Fabric Interconnects (UCS-6120)

 2 x 20 10GE Ports for chassis connectivity

 4x10GE & 4x4G-FC North facing

Server Count: 20 chassis x 8 = 160 servers

UCS Fabric Interconnect Location

Horizontal Cabling

USR & Fiber –100 meters

 100 meters ~ 330 feet

 Using 2 Uplinks per fabric extender

 3 chassis per rack

4 Fabric Interconnects (UCS-6120)

 2 x 20 10GE Ports for chassis connectivity

 2 4x10GE & 4x4G-FC North Facing

Server Count: 20 x 8 = 160 half-width servers

Alternatives to Interconnect Placement and Cabling

Location depends on Cabling and Density (Interconnect and Enclosure)

 CX1: 1 – 5 meters: Interconnect may be placed in centralized location – mid row*

 USR: 100 meters: Interconnect may be placed at the end of the row* near cross connect

 SR: 300 meters: Interconnect may be placed near multi-row* interconnect

* Row length calculations are based on 24” wide racks/cabinets

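The placement guidance above comes down to cable reach versus row length. A minimal sketch (the 24-inch rack width is from the note above; the in-rack slack allowance is an assumption) shows why CX1 keeps the interconnect mid-row while USR and SR let it move to the end of the row or across rows.

RACK_WIDTH_M = 0.61   # 24-inch racks/cabinets

def racks_within_reach(cable_reach_m, in_rack_slack_m=2.0):
    """How many rack positions away a Fabric Interconnect can sit."""
    usable = max(cable_reach_m - in_rack_slack_m, 0)
    return int(usable // RACK_WIDTH_M)

print(racks_within_reach(5))     # CX1: a handful of racks -> mid-row placement
print(racks_within_reach(100))   # USR: effectively end-of-row, near the cross connect
print(racks_within_reach(300))   # SR: multi-row placement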

Page 18

Unified Compute System External Connectivity


Interconnect Connectivity Point

 L3/L2 Boundary in all cases

 Nexus 7000 & Catalyst 6500

Two Physically Identical Fabrics

Fabric Interconnect: 4G FC attached

 NPV Mode

Interconnect Connectivity Point

 Based on Core-Edge vs Edge-Core-Edge models

Page 19

Internal Connectivity

Mezz Card Ports and Blades Connectivity

Mezz Card Port Usage

 Each slot connects to each fabric extender

 Each slot supports a dual-port mezz card

 Slots and blades:

 One slot, one mezz card, one half-width blade

 Two Slots, two mezz cards, full-width blade

 Each slot mezz port is assigned to a Fabric Extender

 Based on available uplinks in sequence

 Port redundancy based on least physical connections

Fabric Extender Connectivity

 Fabric Extender Ports

 Connect to single Fabric Interconnect

 Each port is independently used

 Do not form a port channel

 Each port is a trunk

 Traffic distribution is based on slot at bring-up time

 Slot mezz ports map to the connected Fabric Extender

 FCS: in a sequential fashion – port pinning

 Post-FCS: a port could have a dedicated uplink
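Putting the two halves together, the sketch below (illustrative only, not UCS Manager output) builds the full pin path for each slot: port A of the slot's mezz card lands on FEX A, port B on FEX B, and each FEX pins the slot sequentially to one of its uplinks at bring-up. The uplinks stay independent 802.1Q trunks; no port channel is formed.

def pin_table(uplinks_per_fex, slots=8):
    """Per-slot pin path across both fabrics (sequential pinning, illustrative)."""
    rows = []
    for slot in range(1, slots + 1):
        uplink = (slot - 1) % uplinks_per_fex + 1   # sequential "port pinning"
        rows.append({
            "slot": slot,
            "fabric_a": f"FEX-A uplink {uplink} -> FI-A",
            "fabric_b": f"FEX-B uplink {uplink} -> FI-B",
        })
    return rows

for row in pin_table(uplinks_per_fex=2):
    print(row)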


Page 20


Internal Connectivity

Mezz Cards and Virtual Interfaces Connectivity


General Connectivity

 Port channels

 Are not formed between mezz ports

 Are not formed across mezz cards

 Backup Interfaces

 Mezz port backup within mezz card

 Redundancy: Depends on Mezz card

 Interface Redundancy

 vnic redundancy done across mezz card ports

Blade Connectivity

 Full width – 2 mezz cards, 4 ports

 vnics mapped to any port

 vhbas are round-robin mapped to fabrics*

 Half width – 1 mezz card, 2 ports

 vnics mapped to any port

 vhbas mapped to 1 port – not redundant*

 vhba mapping is round-robin

*Host multipathing software required for redundancy
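A small Python sketch of the mapping rules above (assumed behaviour for illustration, not the adapter firmware): vNICs can map to any available mezz port, while vHBAs are placed round-robin across the ports, one port per fabric. Each vHBA lands on a single port, so storage-path redundancy relies on host multipathing.

from itertools import cycle

def map_vhbas(vhba_names, mezz_ports):
    """Place vHBAs round-robin over the blade's mezz ports (one port per fabric)."""
    ports = cycle(mezz_ports)
    return {vhba: next(ports) for vhba in vhba_names}

half_width = ["mezz1-portA", "mezz1-portB"]                                  # 1 card, 2 ports
full_width = ["mezz1-portA", "mezz1-portB", "mezz2-portA", "mezz2-portB"]    # 2 cards, 4 ports

print(map_vhbas(["vhba0", "vhba1"], half_width))
print(map_vhbas(["vhba0", "vhba1"], full_width))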

Page 22

The Unified DC Architecture


Aggregation: the typical L3/L2 boundary in the DC and the aggregation point for uplinks and DC services, offering key features: VPC, VDC, 10GE density, and the first point of migration to 40GE and 100GE.

Access: the classic network layer providing non-blocking paths to servers and IP storage devices through VPC. It leverages the Distributed Access Fabric (DAF) model to centralize configuration and management and to ease horizontal cabling demands related to 1G and 10GE server environments.

Virtual Access: a virtual layer of network intelligence offering access-layer-like controls to extend traditional visibility, flexibility and management into virtual server environments. Virtual network switches bring access-layer switching capabilities to virtual servers without the burden of topology control-plane protocols. Virtual adapters provide granular control over virtual and physical server IO resources.


Core: the L3 boundary to the DC network; the functional point for route summarization, the injection of default routes, and the termination of segmented virtual transport networks.


(Topology diagram: Nexus 7000 pairs with VPC, Nexus 5000 with Nexus 2000 fabric extenders across racks, Catalyst 6500 service modules and service appliances, and a Unified Compute System running the Nexus 1000V with VMs)

Page 23

A Unified Compute Pod

A Modular, Predictable Virtualized Compute Environment

POD: a modular, repeatable compute environment with predictable scalability and deterministic functions.

The POD concept applies to distinct application environments through a modular approach to building the physical, network and compute infrastructure in a predictable and repeatable manner. It allows organizations to plan the rollout of distinct compute environments as needed in a shared physical data center using a pay-as-you-go model.

General Purpose POD

 Classic client server applications

 Multi-tier Applications: web, app, DB

 Low to High Density Compute Environments

 Include stateful services

Unified Compute Pod


Page 24

Physical Infrastructure and Network Topology

Mapping the Physical to the Logical

(Diagram: mapping physical racks – server rack, network rack, DAF rack, UCS blade rack – to logical constructs – zone, DC, ToR POD, DAF POD)

Page 25


Access Layer Network Model

End of Row, Top of Rack, and Blade Switches

(Diagram: GE access models – End of Row, Top of Rack, Blade Switches, EoR & Blades, ToR & Blades, DAF & 1U Servers, DAF & Blades – "what it used to be", versus UCS Fabric Extender to Fabric Interconnect)

Page 26

Distributed Access Fabric in Blade Environment

Distributed Access Fabric - DAF

(Diagram: network rack, DAF blade racks and cross-connect rack, showing fabric instances, the Fabric Interconnect, horizontal cabling and vertical cabling)

Why Distributed Access Fabric?

All chassis are managed by the Fabric Interconnect: single configuration point, single monitoring point

Fabric instances per chassis are present at rack level – reduced management points

Fabric instances are extensions of the Fabric Interconnect: they are Fabric Extenders

Simplifies the cabling infrastructure – the horizontal cabling choice is "what is available": fiber or copper

CX1 cabling for brownfield installations – Fabric Interconnect Centrally located

USR for greenfield installations – Fabric Interconnect at End of Row near the cross connect

Vertical cabling: just an in-rack patch cable

Page 27

Network Equipment Distribution

EoR, ToR, Blade Switches, and Distributed Access Fabric

Network Fabric & Location

 End of Row: modular switch at the end of a row of server racks

 Top of Rack: low-RU, lower-port-density switch per server rack

 Blade Switches: switches integrated into blade enclosures per server rack

 Distributed Access Fabric: access fabric on top of rack plus access switch at end of row

Cabling

 End of Row – Fiber: access to aggregation

 Top of Rack – Copper: server to ToR switch; Fiber: ToR to aggregation

 Blade Switches – Copper or fiber: access to aggregation

 Distributed Access Fabric – Copper or fiber: in-rack patch; Fiber: access fabric to fabric switch

Page 28

EHM UIO Environments

Page 29

UCS and Network Environments


LAN & SAN Deployment Considerations

 Deployment Options

 Uplinks per Fabric Extender

Fan-out, Oversubscription & Bandwidth

 Uplinks per Fabric Interconnect

Flavor of Expansion Module

 Fabric Interconnect Connectivity Point

Fabric Interconnect is the Access Layer

Should connect to L2/L3 Boundary switch


Other Considerations Details

 Mezz Cards: CNAs

 Up to 10G FC (FCoE)

 Ethernet traffic could take full capacity if needed

 Fabric Interconnect FC Uplinks

 4G FC: 8 or 4 Ports and N-port channels

 Uplinks per Fabric Interconnect

2 or 4 FC uplinks per Fabric

2 or 3 10GE uplinks to each upstream switch

Page 30

UCS Fabric Interconnect Operation Modes

Switch Mode & End-Host-Mode

Switch Mode

 Follow STP design best practices

 Provides L2 switching functionality

 Uplink capacity based on port-channels

End-Host Mode

 Switch looks like a host

 Switch still performs L2 switching

 MAC addresses active on 1 link at a time

 MAC addresses pinned to uplinks

 Local MAC learning – not on uplinks

 Forms a loop-free topology

 UCS Manager syncs the Fabric Interconnects

 Uplink Capacity

 6140: 12-way (12 uplinks)

 6120: 6-way (6 uplinks)
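The End-Host-Mode rules above can be illustrated with a short Python sketch. The hash used to pick an uplink is an assumption for the example (in practice pinning is per host port and managed by the system); what matters is that each MAC is active on exactly one uplink and nothing is learned on the uplinks, which keeps the topology loop-free without STP.

import hashlib

def pin_mac(mac, uplinks):
    """Deterministically pin a server MAC to one border uplink (assumed hash)."""
    digest = hashlib.md5(mac.encode()).digest()
    return uplinks[digest[0] % len(uplinks)]

uplinks = [f"Eth1/{n}" for n in range(1, 7)]      # 6120: up to 6 uplinks ("6-way")
for mac in ("00:25:b5:00:00:01", "00:25:b5:00:00:02", "00:25:b5:00:00:03"):
    print(mac, "->", pin_mac(mac, uplinks))       # MAC is active on this uplink only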

Page 31

UCS on Ethernet Environments

Fabric Interconnect & Network Topologies

Classic STP Topologies

Fabric Interconnect

 Switch runs STP

 Participates in the STP topology

 Follow STP design best practices

Upstream Devices: Any L3/L2 boundary switch

A combination of VPC/VSS and End-Host-Mode provides optimized network environments by reducing MAC scaling constraints, increasing available bandwidth and lowering the processing load on aggregation switches.

Page 32

UCS on Ethernet Environments

Connectivity Point


Fabric Interconnect to ToR Access Switches

 Fabric Interconnect in End-Host Mode – No STP

 Leverage Total Uplink Capacity: 60G or 120G

 L2 topology remains 2-tier

Scalability

 6140 pair: 10 chassis = 80 compute nodes – 3.3:1 subscription

 5040 pair: 6 UCS systems = 480 compute nodes – 15:1 subscription

 7018 pair: 13 5040 pairs: 6,240 compute nodes

 Enclosure: 80 GE – Compute Node: 10GE Attached

Switch Mode

Fabric Interconnect to Aggregation Switches

 Fabric Interconnect in switch mode

 Leverage Total Uplink Capacity: 60G or 80G

 L2 topology remains 2-tier

Scalability

 6140 pair: 10 chassis = 80 compute nodes – 5:1 subscription

 7018 pair: 14 6140 pairs: 1,120 compute nodes

 Enclosure 80GE – Compute node: 10GE Attached
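The subscription ratios quoted for both modes follow from simple division. A minimal sketch, assuming 80GE-attached chassis as stated and taking the uplink capacities from the text (120G per Fabric Interconnect in End-Host Mode, 80G in switch mode, two interconnects per pair):

def subscription(chassis, chassis_gbps, uplink_gbps_per_pair):
    """Total server-facing bandwidth divided by northbound uplink bandwidth."""
    return (chassis * chassis_gbps) / uplink_gbps_per_pair

print(round(subscription(10, 80, 2 * 120), 1))   # End-Host Mode: 3.3 -> ~3.3:1
print(round(subscription(10, 80, 2 * 80), 1))    # Switch mode:   5.0 -> 5:1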
