

Module 2: Concepts of Server Clusters

Contents

Overview

Introduction to Server Clusters

Multimedia: Microsoft Windows 2000 Cluster Service

Key Concepts of a Server Cluster

Demonstration: Cluster Concepts

Choosing a Server Cluster Configuration

Applications and Services on Server Clusters

Review

The names of companies, products, people, characters, and/or data mentioned herein are fictitious and are in no way intended to represent any real individual, company, product, or event, unless otherwise noted. Complying with all applicable copyright laws is the responsibility of the user. No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without the express written permission of Microsoft Corporation. If, however, your only means of access is electronic, permission to print one copy is hereby granted.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

© 2000 Microsoft Corporation. All rights reserved.

Microsoft, Active Directory, BackOffice, JScript, PowerPoint, Visual Basic, Visual Studio, Win32, Windows, and Windows NT are either registered trademarks or trademarks of Microsoft Corporation in the U.S.A. and/or other countries.

Other product and company names mentioned herein may be the trademarks of their respective owners.

Program Manager: Don Thompson

Product Manager: Greg Bulette

Instructional Designers: April Andrien, Priscilla Johnston, Diana Jahrling

Subject Matter Experts: Jack Creasey, Jeff Johnson

Technical Contributor: James Cochran

Classroom Automation: Lorrin Smith-Bates

Graphic Designer: Andrea Heuston (Artitudes Layout & Design)

Editing Manager: Lynette Skinner

Editor: Elizabeth Reese

Copy Editor: Bill Jones (S&T Consulting)

Production Manager: Miracle Davis

Build Manager: Julie Challenger

Print Production: Irene Barnett (S&T Consulting)

CD Production: Eric Wagoner

Test Manager: Eric R. Myers

Test Lead: Robertson Lee (Volt Technical)

Creative Director: David Mahlmann

Media Consultation: Scott Serna

Illustration: Andrea Heuston (Artitudes Layout & Design)

Localization Manager: Rick Terek

Operations Coordinator: John Williams

Manufacturing Support: Laura King; Kathy Hershey

Lead Product Manager, Release Management: Bo Galford

Lead Technology Manager: Sid Benavente

Lead Product Manager, Content Development: Ken Rosen

Group Manager, Courseware Infrastructure: David Bramble

Group Product Manager, Content Development: Julie Truax

Director, Training & Certification Courseware Development: Dean Murray

General Manager: Robert Stewart


Instructor Notes

This module provides students with a brief overview of the different types of server clusters and their key benefits of availability and scalability. A short video gives an overview of how Cluster service functions and introduces the key terms and concepts, which are explained in the Key Concepts of a Server Cluster section of the module. Students are then introduced to four different cluster configuration options. The last section explains how both cluster-aware and generic services and applications run in a server cluster, including an explanation of how to identify performance limitations caused by these resources.

After completing this module, students will be able to:

 Explain the features of clustering technologies

 Define the key terms and concepts of a server cluster

 Choose a server cluster configuration

 Describe how Cluster service supports applications and services

Materials and Preparation

This section provides the materials and preparation tasks that you need to teach this module

Required Materials

To teach this module, you need the following materials:

 Microsoft® PowerPoint® file 2087A_02.ppt

 Servercluster.avi file on the Instructor CD

Preparation Tasks

To prepare for this module, you should:

 Read the materials for this module and anticipate questions students may ask

 Preview the servercluster.avi and the review questions and prepare additional questions as necessary

 Practice the demonstration

 Study the review questions and prepare alternative answers for discussion

 Read the Appendix

Presentation: 90 Minutes

Lab: 00 Minutes


Demonstration

This section provides demonstration procedures that will not fit in the margin notes or are not appropriate for the student notes

Cluster Concepts

 To prepare for the demonstration

1 Run the demonstration enough times so you can perform the demonstration without referring to the material

2 Classroom setup must be complete

3 The Terminal Services client needs to be installed on the London computer

4 The Cluster Administrator needs to be installed on the London computer

In this demonstration, you will reinforce the concepts of server clusters and show the students different name resolution capabilities for clients accessing resources from the cluster

Demonstration 1

 To start Cluster Administrator from the Run command and view the Cluster Group Owner

1 On the Start menu, click Run

2 In the Run command dialog box, type Cluadmin.exe -noreconnect

3 Cluster Administrator opens, and in the Open Connection to Cluster dialog box, type MyCluster and then click Open

4 Show the students the different groups and resources

5 Point out that the two servers running the cluster are named NodeA and NodeB

6 Expand Groups, and point out the owner of Cluster Group. Leave Cluster Administrator open (a command-line equivalent for viewing the groups and their owners is sketched below)
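For reference, the same information can usually be gathered from a command prompt with the cluster.exe tool installed with Cluster service on Windows 2000 Advanced Server. The flags below are a minimal sketch from memory and should be verified with cluster /?; the cluster and node names (MyCluster, NodeA, NodeB) are the ones used in this classroom setup.

    REM List the nodes in the cluster and their status
    cluster /cluster:MyCluster node /status

    REM List the groups and the node that currently owns each one
    cluster /cluster:MyCluster group /status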

Demonstration 2

 To create a public folder share from a Terminal Services session

1 On the Start menu, point to Programs, point to Administrative Tools, and then click Terminal Services Connections

2 Right-click Terminal Services Connections, and then click Add new connection…

3 In the Add New Connection dialog box, fill out the following information and then click OK

4 Server name or IP address: NodeA

5 Connection name: NodeA

6 Perform the previous step and replace NodeA with NodeB

7 Right-click the Node that is the owner of Cluster Group, and then click Connect

8 In the Log On to Windows dialog box fill out the following information and click OK


9 User Name: Administrator@nwtraders.msft

10 Password: password

11 On the desktop, double-click My Computer

12 In My Computer, double-click drive W:

13 In the drive W: window, on the File menu, point to New, and then click Folder

14 Name the folder Public

15 Close the Terminal Services Connections MMC (a command-prompt alternative for creating the folder is sketched below)
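As a quick alternative inside the same Terminal Services session, the folder could also be created from a command prompt. This is only a sketch; it assumes drive W: is the cluster disk owned by the node you are connected to.

    REM Create the folder that will later be shared as Public
    mkdir W:\Public

    REM Confirm the folder exists on the cluster disk
    dir W:\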

Demonstration 3

 To create a File Share resource

1 From Cluster Administrator, expand Groups, and then click Cluster Group

2 Right-click Cluster Group, select New, and then click Resource

3 In the New Resource dialog box fill out the following and then click Next

Name: Public Share

Description: Public Share on MyCluster

Resource type: File Share

Group: Cluster Group

4 In the Possible Owners dialog box, click Next

5 In the Dependencies dialog box, add the resource dependencies required for the share (the cluster disk that holds the Public folder), and then click Next

6 In the File Share Parameters dialog box, enter the share parameters for the Public folder created earlier (for example, Share name: Public; Path: W:\Public) along with the following, and then click Finish

Comment: Public File Share on MyCluster

7 Click OK to confirm that the resource was created successfully

8 Right-click Public Share, and then click Bring Online (an equivalent sequence using the cluster.exe command line is sketched after these steps)
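The same resource could also be created and brought online from the command line with cluster.exe. This is a hedged sketch rather than the demonstration script: the flags and the File Share private property names (Path, ShareName, Remark) are given from memory and should be checked with cluster resource /? before use, and the disk resource name is a placeholder.

    REM Create a File Share resource in the Cluster Group
    cluster /cluster:MyCluster resource "Public Share" /create /group:"Cluster Group" /type:"File Share"

    REM Point the share at the Public folder on the cluster disk
    cluster /cluster:MyCluster resource "Public Share" /priv Path=W:\Public
    cluster /cluster:MyCluster resource "Public Share" /priv ShareName=Public
    cluster /cluster:MyCluster resource "Public Share" /priv Remark="Public File Share on MyCluster"

    REM Add a dependency on the disk resource that holds the folder
    REM ("Disk W:" is a placeholder; use the actual disk resource name)
    cluster /cluster:MyCluster resource "Public Share" /adddep:"Disk W:"

    REM Bring the new resource online
    cluster /cluster:MyCluster resource "Public Share" /online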

Demonstration 4

 To test WINS Name Resolution for the Public Share

1 On the Start menu, click Run

2 In the Run dialog box, type \\mycluster\public

3 In Microsoft Windows® Explorer, view the contents of the public folder

4 In Windows Explorer, on the File menu, point to New, and then click Folder

5 Name the folder Sales

6 Close Windows Explorer (a command-line check of the NetBIOS name and share is sketched below)
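NetBIOS (WINS) resolution of the virtual server name can also be verified from a command prompt. These are standard Windows 2000 commands; the virtual server name mycluster matches this demonstration.

    REM Resolve the NetBIOS name and list the shares on the virtual server
    nbtstat -a mycluster
    net view \\mycluster

    REM List the contents of the Public share
    dir \\mycluster\public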


Demonstration 5

 To test DNS Name Resolution

1 On the Start menu, click Run

2 In the Run dialog box, type \\mycluster.nwtraders.msft\public

3 When Windows Explorer opens, view the contents of the public folder (a command-line check of DNS resolution is sketched below)
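DNS resolution of the fully qualified virtual server name can be checked the same way from a command prompt; nslookup and ping are standard tools, and the name mycluster.nwtraders.msft matches the classroom domain.

    REM Confirm that DNS resolves the virtual server's fully qualified name
    nslookup mycluster.nwtraders.msft
    ping mycluster.nwtraders.msft

    REM Access the share by its fully qualified name
    dir \\mycluster.nwtraders.msft\public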

Demonstration 6

 To publish a Shared Folder in Microsoft Active Directory directory service

1 On the Start menu, point to Programs, point to Administrative Tools, and then click Active Directory Users and Computers

2 On the Tree tab, expand nwtraders.msft

3 Right-click Users, select New, and then click Shared Folder

4 In the New Object – Shared Folder dialog box, fill out the following and then click OK

Name: Public Share on MyCluster

Network path (\\server\share): \\mycluster\public or \\mycluster.nwtraders.msft\public

5 Close Active Directory Users and Computers

6 On your desktop, double-click My Network Places

7 In My Network Places, double-click Entire Network

8 In the Entire Network window, click entire contents on the left side of the screen

9 In the Entire Network window, double-click Directory

10 In the Directory window, double-click nwtraders

11 In the ntds://nwtraders.msft window, double-click Users

12 In the ntds://nwtraders.msft/Users window, double-click Public

13 Windows Explorer opens the contents of the public share on mycluster

Demonstration 7

 To demonstrate a failover of the Public Share

1 On the Start menu, point to Programs, point to Administrative Tools, and then click Cluster Administrator

2 If prompted to connect to a cluster, type MyCluster and then click Open

3 In Cluster Administrator, expand Groups, right-click Cluster Group, and then click Move Group

4 Show the students how the owner changes from NodeX to NodeY, where X is the original node controlling the Cluster Group and Y is the node that will take control of the Cluster Group (a command-line equivalent for moving the group is sketched below)
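Moving a group can also be initiated from the command line, which makes the ownership change easy to show before and after. The flags are a hedged sketch of cluster.exe usage (verify with cluster group /?); MyCluster and NodeB are the names used in this classroom.

    REM Show the current owner of the Cluster Group
    cluster /cluster:MyCluster group "Cluster Group" /status

    REM Move the group to the other node (substitute the node that does not currently own it)
    cluster /cluster:MyCluster group "Cluster Group" /moveto:NodeB

    REM Confirm the new owner
    cluster /cluster:MyCluster group "Cluster Group" /status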


Demonstration 8

 To test WINS Name Resolution after failover

1 On the Start menu, click Run

2 In the Run dialog box, type \\mycluster\public

3 Windows Explorer opens and you can view the contents of the public folder

Demonstration 9

 To test DNS Name Resolution after failover

1 On the Start menu, click Run

2 In the Run dialog box, type \\mycluster.nwtraders.msft\public

3 Windows Explorer opens and you can view the contents of the public folder

Demonstration 10

 To test Active Directory Shared Folders after failover

1 On your desktop, double-click My Network Places

2 In My Network Places, double-click Entire Network

3 In the Entire Network window, click entire contents on the left side of the screen

4 In the Entire Network window, double-click Directory

5 In the Directory window, double-click nwtraders

6 In the ntds://nwtraders.msft window, double-click Users

7 In the ntds://nwtraders.msft/Users window, double-click Public

8 Windows Explorer opens the contents of the public share on mycluster


Multimedia Presentation

This section provides multimedia presentation procedures that do not fit in the margin notes or are not appropriate for the student notes

Microsoft Windows 2000 Cluster Service

 To prepare for the multimedia presentation

1 Preview the video and note where the information covered appears in the module (both in the list of definitions and in the greater detail pages that follow)

2 Add questions about the video and server clusters that may be especially relevant to your audience

3 Make sure that you control the questions and discussions so that students do not expect the animation to be the equivalent of the entire module contents. Its purpose is to provide a broad overview to orient students to the materials that will follow

Module Strategy

Use the following strategy to present this module:

 Introduction to Server Clusters: The intent of this introduction is to give students a history of server cluster techniques and explain the differences between the model that Cluster service uses and two other options. It also provides an opportunity to emphasize the key benefits of server clusters: availability and scalability

• Clustering Techniques: Note the difference between a shared everything model and a shared nothing model, and how Cluster service utilizes the shared nothing model

• Availability and Scalability: Students should understand the differences between availability and scalability and how Cluster service improves availability and scalability

 Multimedia: Introduction to Microsoft Windows 2000 Cluster Service: Emphasize the shared nothing model and how it relates to the video, and how the application’s data is stored on the cluster disk


 Key Concepts of a Server Cluster: The list of key concepts that opens this section is designed to provide a brief description of the concepts that students will need to know to successfully install and administer a server cluster. The information in this section is foundational to the rest of the Cluster service portion of the course. Take time to process questions and check for understanding

• Cluster Disks: The cluster disks, also known as shared disks, are based on a shared nothing model. Only one node at a time has access to the disk

• Quorum Resource: The quorum is a vital part of Cluster service. Students need to understand what is stored in the quorum, how the nodes interact with the quorum through arbitration, and the data a node can get after a restart from the quorum recovery logs

• Cluster Communications: Cluster service communicates with clients over a public network, between nodes over a private network, and can use a mixed network for a private network failover

• Groups and Resources: Students need to understand that groups are a logical collection of resources. You can add many resources to a group. You can take resources offline, but they may have dependencies that will also bring other resources offline

• Resource Dependency Trees: Consultants often use diagrams of dependency trees to help their customers understand the concept of dependencies and how important they are to Cluster service

• Virtual Servers: A virtual server consists of a virtual IP address and a virtual name resource. Clients gain access to cluster resources through virtual servers. Students need to understand the relationship between the resources and the virtual servers

• Virtual Server Name Resolution: Clients access a virtual server as if it were a physical server on the network. Stress the importance of proper name resolution so that clients can always access the virtual server no matter which node is controlling the virtual server

• Failover and Failback: The key concept to keep applications and resources available is the ability for Cluster service to fail over a group from one node to another

 Demonstration: Cluster Concepts: This demonstration reinforces the concepts presented to the students in this section. Key points of the demo are name resolution to a virtual server and failover of a resource


 Choosing a Server Cluster Configuration: The table on the first page of this section provides a reference point for the descriptions of the four configurations which follow. Ask students for examples of how they might use each of the configurations in their environments, or why they would not

• Active/Passive Configuration: Only one node is doing work in the cluster. The other node is waiting for the first node to fail

• Active/Active Configuration: Both nodes are performing work in the cluster, but have the capacity to handle all of the resources in case one node fails

• Hybrid Configuration: If a node is doing work outside the cluster, it is referred to as a hybrid configuration, in either an active/active or active/passive configuration

• Single Node Virtual Server: This configuration is ideal for server consolidation because one physical server can control many virtual servers

 Applications and Services on Server Clusters: This section covers the information that students will need to decide which applications and services they will run in a server cluster. Make sure students understand the difference between cluster-aware and cluster-unaware applications and services. File and print shares benefit especially from the failover feature of Cluster service. The material on identifying performance limitations is not intended to be a complete planning guide for allocating resources in a server cluster, but should be explained as an issue that students will need to consider when installing Cluster service and adding services and applications to existing server clusters

• Applications: Students need to know the difference between cluster-aware and cluster-unaware applications. To run on Cluster service, you must configure cluster-unaware applications as generic resource types

• Services: The services that come with Microsoft Windows 2000 that can run on a server cluster are DFS, DHCP, and WINS. Cluster-aware and cluster-unaware services have the same characteristics as covered in the page on applications

• File and Print Shares: An excellent use for Cluster service is for highly available file and print shares

• Identifying Performance Limitations: Students need to understand that the dynamics of a node’s performance could change depending on which groups the node controls

Lab Setup

There are no lab setup requirements that affect replication or customization


Overview

 Introduction to Server Clusters

 Key Concepts of a Server Cluster

 Choosing a Server Cluster Configuration

 Applications and Services on Server Clusters


This module provides an explanation of server cluster terms and key concepts. Topics include considerations for choosing cluster configuration options and determining which applications and services will be included in the server cluster. Information that is unique to the installation of Microsoft® Cluster service is covered, such as naming and addressing conventions and how resources and groups function within a server cluster.

After completing this module, you will be able to:

 Explain the features of clustering technologies

 Define the key terms and concepts of a server cluster

 Choose a server cluster configuration

 Describe how Cluster service supports applications and services

In this topic we will talk about the features and key concepts of server clusters.


 Introduction to Server Clusters


A server cluster is a group of computers and storage devices that work together and can be accessed by clients as a single system. The individual computers in the cluster are referred to as nodes, and they act together to provide automatic recovery from failure of clustered services and applications.

There are two types of network communications in a server cluster. The nodes communicate with each other over a high performance, reliable network, and share one or more common storage devices. Clients communicate with logical servers, referred to as virtual servers, to gain access to grouped resources, such as file or print shares, services such as Windows Internet Name Service (WINS), and applications like Microsoft Exchange Server.

When a client connects to the virtual server, the server routes the request to the node controlling the requested resource, service, or application. If the controlling node fails, any clustered services or applications running on the failed node will restart on a surviving designated node.

There are three types of clustering techniques commonly used: shared everything, mirrored servers, and shared nothing. Microsoft Cluster service uses the shared nothing model.

You can configure server clusters to address both availability and scalability issues. The failover capability of Microsoft Cluster service makes resources more available than in a non-clustered environment. It is also an economical way to scale up when you need greater performance.

Topic Objective: To introduce the concept and benefits of clustering technologies.

Lead-in: A server cluster is a group of computers and storage devices that work together and are accessed by clients as a single system.


Clustering Techniques

 Shared Everything

 Mirrored Servers

 Shared Nothing

There are a variety of cluster implementation models that are used widely in the computer industry. Common models are shared everything, mirrored servers, and shared nothing. It is possible for a cluster to support both the shared everything model and the shared nothing model. Typically, applications that require only limited shared access to data work best in the shared everything model. Applications that require maximum scalability will benefit from the shared nothing cluster model.

Shared Everything Model

In the shared everything, or shared device, model, software running on any computer in the cluster can gain access to any hardware resource connected to any computer in the cluster (for example, a hard drive, random access memory (RAM), and CPU).

Shared everything server clusters permit every server to access every disk. Allowing access to all of the disks originally required expensive cabling and switches, plus specialized software and applications. If two applications require access to the same data, much like a symmetric multiprocessor (SMP) computer, the cluster must synchronize access to the data. In most shared device cluster implementations, a component called a Distributed Lock Manager (DLM) is used to handle this synchronization.

The Distributed Lock Manager (DLM)

The Distributed Lock Manager (DLM) is a service running on the cluster that keeps track of resources within the cluster. If multiple systems or applications attempt to reference a single resource, the DLM recognizes and resolves the conflict. However, using a DLM introduces a certain amount of overhead into the system in the form of additional message traffic between nodes of the cluster, in addition to the performance loss due to serialized access to hardware resources. Shared everything clustering also has inherent limits on scalability, because DLM contention grows geometrically as you add servers to the cluster.

Topic Objective: To identify the differences between three server cluster models.

Lead-in: There are three commonly used server cluster models.


Mirrored Servers

An alternative to the shared everything and shared nothing models is to run software that copies the operating system and the data to a backup server. This technique mirrors every change from one server to a copy of the data on at least one other server. This technique is commonly used when the locations of the servers are too far apart for the other cluster solutions. The data is kept on a backup server at a disaster recovery site and is synchronized with a primary server.

However, a mirrored server solution cannot deliver the scalability benefits of clusters. Mirrored servers may never deliver as high a level of availability and manageability as shared-disk clustering, because there is always a finite amount of time during the mirroring operation in which the data at both servers is not identical.

Shared Nothing Model

The shared nothing model, also known as the partitioned data model, is designed to avoid the overhead of the DLM in the shared everything model. In this model, each node of the cluster owns a subset of the hardware resources that make up the cluster. As a result, only one node can own and access a hardware resource at a time. A shared nothing cluster has software that can transfer ownership to another node in the event of a failure. The other node takes ownership of the hardware resource so that the cluster can still access it.

The shared nothing model is asymmetric. The cluster workload is broken down into functionally separate units of work that different systems perform independently. For example, Microsoft SQL Server™ may run on one node at the same time as Exchange is running on the other.

In this model, requests from client applications are automatically routed to the system that owns the resource. This routing extends to server applications that are running on a cluster. For example, if a cluster application such as Internet Information Services (IIS) needs to access a SQL Server database on another node, the node it is running on passes the request for the data to the other node. Remote procedure call (RPC) provides the connectivity between processes that are running on different nodes.

A shared nothing cluster provides the same high level of availability as a shared everything cluster, and potentially higher scalability, because it does not have the inherent bottleneck of a DLM. An added advantage is that it works with standard applications because there are no special disk access requirements. Examples of shared nothing clustering solutions include Tandem NonStop, Informix Online/XPS, and Microsoft Windows 2000 Cluster service.

Cluster service uses the shared nothing model. By default, Cluster service does not allow simultaneous access from both nodes to the shared disks or any resource.

Note: Cluster service can support the shared device model as long as the application supplies a DLM.


Availability and Scalability

 Availability

 Scalability

Microsoft Cluster service makes resources, such as services and applications, more available by providing for restart and failover of the resource. Another benefit of Cluster service is that it provides greater scalability of the resource, because you can separate applications and services to run on different servers.

Availability

When a system or component in the cluster fails, the cluster software responds by dispersing the work from the failed system to the remaining systems in the cluster.

Cluster service improves the availability of client/server applications by increasing the availability of server resources. Using Cluster service, you can set up applications on multiple nodes in a cluster. If one node fails, the applications on the failed node are available on the other node. Throughout this process, client communications with applications usually continue with little or no interruption. In most cases, the interruption in service is detected in seconds, and services can be available again in less than a minute (depending on how long it takes to restart the application).

Clustering provides high availability with static load balancing, but it is not a fault tolerant solution. Fault tolerant solutions offer error-free, nonstop availability, usually by keeping a backup of the primary system. This backup system remains idle and unused until a failure occurs, which makes this an expensive solution.

Topic Objective: To describe the two key benefits of Cluster service.

Lead-in: The failover capability of Cluster service makes resources more available than in a nonclustered environment. It is also an economical way to scale up when you need greater performance.


Scalability

When the overall load exceeds the capabilities of the systems in the cluster, instead of replacing an existing computer with a new one with greater capacity, you can add additional hardware components to increase the node’s performance while maintaining availability of the applications that are running on the cluster. Using Microsoft clustering technology, it is possible to incrementally add smaller, standard systems to the cluster as needed to meet overall processing power requirements.

Clusters are highly scalable; you can add CPU, input/output (I/O), storage, and application resources incrementally to efficiently expand capacity. A highly scalable solution creates reliable access to system resources and data, and protects your investment in both hardware and software resources. Server clusters are affordable because they can be built with commodity hardware (high-volume components that are relatively inexpensive).


Multimedia: Microsoft Windows 2000 Cluster Service


In this video you will learn the basic functionality of Cluster service. At the end of this video, you should be able to answer the following questions.

What is a node?

One of two or more servers, connected by a shared bus, that is running Cluster service.

Where is the application data stored?

On the cluster disk.

Topic Objective: To introduce the animation, which depicts the functions and terms of server clusters.

Lead-in: This video shows an overview of Cluster service.


What is a private network used for in a server cluster?

It provides intracluster communications between the nodes; these periodic messages are called heartbeats.

What happens when an application fails?

Cluster service tries to restart the application on the same node. If the application fails to restart, control of the resource is automatically transferred to the other node.


 Key Concepts of a Server Cluster

(Slide diagram: a client on the public network connects to virtual servers on a two-node server cluster. Node A and Node B are joined by a private network and share a quorum disk and a cluster disk (Disk 1); a group of resources, such as a file share and a print share, is exposed through a virtual server.)

Server cluster architecture consists of physical cluster components and logical cluster resources. Microsoft Cluster service is the software that manages all of the cluster-specific activity.

Physical components provide data storage and processing for the logical cluster resources. Physical components are nodes, cluster disks, and communication networks. Logical cluster resources are groups of resources, such as Internet Protocol (IP) addresses and virtual server names, and services such as WINS. Clients interact with the logical cluster resources.

Nodes

Nodes are the units of management for the server cluster. They are also referred to as systems, and the terms are used interchangeably. A node can be online or offline, depending on whether it is currently in communication with the other cluster nodes.

Note: Windows 2000 Advanced Server supports two-node server clusters; Windows 2000 Datacenter Server supports four-node server clusters.

Cluster Disks

Cluster disks are shared hard drives to which both server cluster nodes attach by means of a shared bus. You store data for file and print shares, applications, resources, and services on the shared disks.

Topic Objective: To identify the key concepts of a server cluster.

Lead-in: A server cluster has physical components and logical resources.

Delivery Tip: This page is intended to give a brief overview, with a visual illustration, of the key concepts of server clusters. With the exception of nodes, each item is covered in greater detail in the following pages.


Quorum Resource

The quorum resource plays a vital role in allowing a node to form a cluster and in maintaining consistency of the cluster configuration for all nodes. The quorum resource holds the cluster management data and recovery log, and arbitrates between nodes to determine which node controls the cluster. The quorum resource resides on a shared disk. It is best to use a dedicated cluster disk for the quorum resource, so that it will not be affected by the failover policies of other resources, or by the space that other applications require. It is recommended that the quorum be on a disk partition of at least 500 MB.

Cluster Communications

A server cluster communicates on a public, private, or mixed network. The public network is used for client access to the cluster. The private network is used for intracluster communications, also referred to as node-to-node communications. The mixed network can be used for either type of cluster communications.

One of the types of communications on the private network monitors the health of each node in the cluster. Each node periodically exchanges IP packets with the other node in the cluster to determine if both nodes are operational. This process is referred to as sending heartbeats.

Resources

Resources are the basic unit that Cluster service manages. Examples of resources are physical hardware devices, such as disk drives, or logical items, such as IP addresses, network names, applications, and services. A cluster resource can run only on a single node at any time, and is identified as online when it is available for a client to use.

Groups

Groups are a collection of resources that Cluster service manages as a single unit for configuration purposes. Operations that are performed on groups, such as taking groups offline or moving them to another node, affect all of the resources that are contained within that group. Ideally, a group will contain all of the elements that are needed to run a specific application, and for client systems to connect to the application. (A sketch of group operations from the command line follows.)
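To make the idea of operating on a group as a unit concrete, the following is a minimal, hedged sketch using the cluster.exe command-line tool (names such as MyCluster and Cluster Group are the ones used in this module; verify the exact flags with cluster group /?). Taking the group offline takes every resource it contains offline as well.

    REM Take the whole group, and therefore every resource in it, offline
    cluster /cluster:MyCluster group "Cluster Group" /offline

    REM Bring the group and all of its resources back online
    cluster /cluster:MyCluster group "Cluster Group" /online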

Virtual Servers

Virtual servers have server names that appear as physical servers to clients. Cluster service uses a physical server to host one or more virtual servers. Each virtual server has an IP address and a network name that are published to clients on the network. Users access applications or services on virtual servers in the same way that they would if the application or service were on a physical server.

Failover and Failback

Failover is the process of moving a group of resources from one node to another in case of a failure of a node, or of one of the resources in the group. Failback is the process of returning a group of resources to the node on which it was running before the failover occurred. (A command-line sketch of a manual failover and of the failback policy follows.)
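A manual failover (used later in the demonstration) and the failback behavior of a group can both be driven from cluster.exe. This is a hedged sketch: the AutoFailbackType group property and its values are given from memory and should be confirmed against the Windows 2000 documentation before relying on them.

    REM Manually fail the group over to the other node
    cluster /cluster:MyCluster group "Cluster Group" /moveto:NodeB

    REM Allow the group to fail back automatically to its preferred owner
    REM (AutoFailbackType: 0 = prevent failback, 1 = allow failback -- assumed values)
    cluster /cluster:MyCluster group "Cluster Group" /prop AutoFailbackType=1

    REM Review the group's properties, including failback settings
    cluster /cluster:MyCluster group "Cluster Group" /prop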


Cluster Disks

(Slide diagram: Node A and Node B attach by a shared bus to the cluster disks, Disk 1 through Disk 4, and the quorum disk.)

Each node must have a connection to a shared storage area where shared cluster data, such as configuration data, is stored. This shared storage area is referred to as the cluster disk. The cluster can gain access to a cluster disk through a Small Computer System Interface (SCSI) bus or a Fibre Channel bus. In addition, services and applications that the cluster provides should keep shared data, such as Web pages, on the cluster disk on the shared bus.

Cluster service is based on the shared nothing model of clustering. The shared nothing model allows the Windows 2000 cluster file system model to support the native NTFS file system, rather than requiring a dedicated cluster file system.

Note: The cluster disks must be NTFS and basic disks.

A single cluster member controls each file system partition at any instant in time. However, because a node places a SCSI reserve on a cluster disk rather than on a partition, the same node must own all of the partitions on the same physical disk at any given time. Each node can reserve a separate disk on the same shared bus, so you can divide the cluster disks on the bus between the nodes in the cluster.

For high-end configurations, you can achieve additional I/O scaling through distributed striping technology such as RAID 5. Using distributed striping technology means that, below a file system partition on a single node, that partition can actually be a stripe set whose physical disks span multiple disks. Such striping must be hardware RAID; Cluster service does not support any software fault tolerant RAID arrays.

Topic Objective: To explain the use of cluster disks for storing shared cluster data.

Lead-in: The nodes in a cluster access data from a shared storage area.

Delivery Tip: SCSI and Fibre Channel comparisons are covered in Course 2087A, Module 3, “Preparing for Cluster Service Installation.”


Quorum Resource

 Data Storage

 Arbitration


Each cluster has a special resource known as the quorum resource. You specify an initial location for the quorum resource when you install the first node of a cluster. You can use the cluster administration tools to change the quorum location to a different storage resource.

The quorum resource contains cluster configuration files and provides two vital functions: data storage and arbitration. Only one node at a time controls the quorum. Upon startup of the cluster, Cluster service uses the quorum resource recovery logs for node updates.

For example, if Node B is offline and Node A makes a change to the cluster, the change is saved in the registry of Node A and also to the cluster configuration files on the quorum. If Node A goes offline and Node B starts, Node B will be updated from the cluster configuration files on the quorum.

Data Storage

The quorum resource is vital to the successful operation of a cluster because it stores cluster management data, such as the configuration database and recovery logs for changes that are made to cluster data. It must be available when you form the cluster, and whenever you change the configuration database. All of the nodes of the cluster have access to the quorum resource by means of the owning node.

Note: To ensure the availability of the cluster, it is recommended that the quorum be on a Redundant Array of Independent Disks (RAID) 5 array.

Topic Objective: To describe the function of the quorum resource in a server cluster.

Lead-in: The quorum resource is unique to server clusters.


Arbitration

The Cluster service uses the quorum resource to decide which node owns the cluster. Arbitration refers to the decision-making function of the quorum resource if both cluster nodes independently try to take control of the cluster. Consider the following situation in a two-node cluster: the networks that are providing communication between Nodes A and B fail. Each node assumes that the other node has failed, and attempts to operate the cluster as the remaining node. Arbitration determines which node owns the quorum. The node that does not own the quorum must take its resources offline. The node that controls the quorum resource then brings all of the cluster resources online.

Quorum Ownership

Only one node can control the quorum. When a node restarts, Cluster service determines whether the owner of the quorum is online. If there is no owner of the quorum, Cluster service assigns ownership to the starting node. If Cluster service finds that another node is online and owns the quorum resource, it will join the starting node to the cluster, and will not assign ownership of the quorum to this node.

Updates for Nodes Coming Online

When a node that has been offline rejoins a cluster, Cluster service must update the node's private copy of the cluster database with any changes it may not have received while it was offline. When a node rejoins a cluster, Cluster service can retrieve the data from the other active nodes. However, when both nodes are offline, the first node to come online will form a cluster and will need to retrieve any possible changes. Cluster service uses the recovery logs of the quorum resource to update any changes to the node's cluster database.

Caution: Do not modify the access permissions on the disk that contains the quorum resource. Cluster service must have full access to the quorum log. Cluster service uses the quorum log file to write all of the cluster state and configuration changes that cannot be updated if the other node is offline. For this reason, you should never restrict either node’s access to the \MSCS folder on the quorum disk, which contains the quorum log. (A command-line sketch for inspecting the quorum resource follows.)
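For reference, the quorum resource and its log location can be inspected (and, if necessary, changed) from the command line. This is a hedged sketch of cluster.exe usage from memory; confirm the /quorumresource syntax with cluster /? before using it on a production cluster, and note that the disk resource name below is hypothetical.

    REM Show which resource currently acts as the quorum resource,
    REM along with the path to the quorum log and the maximum log size
    cluster /cluster:MyCluster /quorumresource

    REM Move the quorum to a different disk resource ("Disk Q:" is a placeholder name)
    cluster /cluster:MyCluster /quorumresource:"Disk Q:" /path:Q:\MSCS /maxlogsize:4096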


Cluster Communications

 Private Network

 Public Network


It is strongly recommended that a cluster have more than one network connection. A single network connection threatens the cluster with a single point of failure. There are three options for network configurations: private, public, and mixed. Each network configuration requires its own dedicated network card.

Private Network

Cluster nodes need to be consistently in communication over a network to ensure that both nodes are online. Cluster service can utilize a private network that is separate from client communications. Once a connection is configured as a private network, it can only be used for internal cluster communications, and is known as a private network or interconnect. The private network will be the default route for node-to-node communication. The cluster cannot use a private network for client-to-node communication.

You should plan to have at least two network connections for cluster communications.

Delivery Tip: Heartbeats and other types of cluster communications are covered in detail in Course 2087A, Module 3, “Preparing for Cluster Service Installation.”


Mixed Network

Another configuration option is to create a network that is used for both private and public communication. This is called a mixed network. Using a mixed network does not change the recommendation for two networks.

Important: The recommended configuration for server clusters is a dedicated private network for node-to-node communication and a mixed network. The mixed network acts as a backup connection for node-to-node communication should the private network fail. This configuration avoids having any single point of network failure.


Groups and Resources

(Slide diagram: three groups of resources. Group 1: \\Server1, 10.0.0.4, File Share, Printer Share. Group 2: \\Cluster1, 10.0.0.3, Logical Disk. Group 3: \\Server2, 10.0.0.6, Application.)

A Microsoft clustered solution can contain many resources. For administrative purposes, you can logically assign resources to groups. Some examples of resources are applications, services, disks, file shares, print shares, Transmission Control Protocol/Internet Protocol (TCP/IP) addresses, and network names. You may create multiple groups within the cluster so that you can distribute resources among nodes in the cluster. The ability to distribute groups independently allows more than one cluster node to handle the workload.

Groups

A group can contain many resources, but can only belong to one physical disk. A physical disk can contain many groups. Any node in the cluster can own and manage groups of resources.

A group can be online on only one node at any time. All resources in a group will therefore move between nodes as a unit. Groups are the basic units of failover and failback. The node that is hosting a group must have sufficient capacity to run all of the resources in that group.

If you wish to set up several server applications, for example SQL Server, Exchange, and IIS, to run on the same cluster, you should consider having one group for each application, complete with their own virtual server. Otherwise, if all of the applications are in the same group, they have to run on the same node at the same time, so no load distribution across the cluster is possible.

In the event of a failure within a group, the cluster software transfers the entire group of resources to a remaining node in the cluster. The network name, address, and other resources for the moved group remain within the group after the transfer. Therefore, clients on the network may still access the same resources by the same network name and IP address.

Topic Objective: To describe the relationship between groups and resources in a server cluster.


Resources

A resource represents certain functionality that is offered on the cluster. It may be physical, for example a hard disk, or logical, for example an IP address. Resources are the basic management and failure units of Cluster service.

Resources may, under control of Cluster service, migrate to another node as part of a group failover. If Cluster service detects that a single resource has failed on a node, it may then move the whole group to the other node.

Cluster service uses resource monitors to track the status of the resources. Cluster service will attempt to restart or migrate resources when they fail or when one of the resources that they depend on fails.

Resource States

Cluster service uses five resource states to manage the health of the cluster resources

The resource states are as follows:

 Offline – A resource is unavailable for use by a client or another resource

 Online – A resource is available for use by a client or another resource

 Online Pending – The resource is in the process of being brought online

 Offline Pending – The resource is in the process of being brought offline

 Failed – The service has tried to bring the resource online but it will not start

Resource state changes can occur either manually (when you use the administration tools to make a state transition) or automatically (during the failover process). When a group is failed over, Cluster service alters the state of each resource according to its dependencies on the other resources in the group. (A command-line sketch for viewing resource states follows.)
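These states are visible in Cluster Administrator and can also be read from the command line. The commands below are a minimal sketch with cluster.exe (flags from memory; check cluster resource /? for the exact syntax), using the Public Share resource created in the demonstration.

    REM List every resource in the cluster with its current state and owning node
    cluster /cluster:MyCluster resource /status

    REM Check a single resource; the State column shows Online, Offline,
    REM Online Pending, Offline Pending, or Failed
    cluster /cluster:MyCluster resource "Public Share" /status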
