All storage drivers must be digitally signed and certified for use with Windows Server 2008. Many storage devices certified for Windows Server 2003 may not work with Windows Server 2008 and either simply cannot be used for failover cluster shared storage or may require a firmware and driver upgrade to be supported. One main reason for this is that all failover shared storage must comply with the SCSI-3 architecture model SAM-2. This includes any and all legacy and SAS controllers, fiber host bus adapters, and iSCSI hardware- and software-based initiators and targets. If the cluster attempts to perform an action on a LUN or shared disk and the attempt causes an interruption in communication to the other nodes in the cluster or any other system connected to the shared storage device, data corruption can occur, and the entire cluster and each SAN-connected system may lose connectivity to the storage.
When LUNs are presented to failover cluster nodes, each LUN must be presented to each node in the cluster. Also, when the shared storage is accessed by the cluster and other systems, the LUNs must be masked or presented only to the cluster nodes and the shared storage device controllers to ensure that no other systems can access or disrupt the cluster communication. There are strict requirements for shared storage support, especially with failover clusters. Storage area networks (SANs) or other types of shared storage must meet the following requirements:
- All fiber, SAS, and iSCSI host bus adapters (HBAs) and Ethernet cards used with iSCSI software initiators must have (or obtain) the Designed for Microsoft Windows logo for Windows Server 2008 and have suitable signed device drivers.
- SAS, fiber, and iSCSI HBAs must use StorPort device drivers to provide targeted LUN resets and other functions inherent to the StorPort driver specification. SCSIport was at one point supported for two-node clusters, but if a StorPort driver is available, it should be used to ensure support from the hardware vendors and Microsoft.
- All shared-storage HBAs and backend storage devices, including iSCSI targets, fiber, and SAS storage arrays, must support SCSI-3 standards and must also support persistent bindings or reservations of LUNs.
- All shared-storage HBAs must be deployed with matching firmware and driver versions. Failover clusters using shared storage require a stable infrastructure, and applying the latest storage controller driver to outdated HBA firmware can cause undesirable situations and may disrupt data access.
- All nodes in the cluster should contain the same HBAs and use the same version of drivers and firmware. Each cluster node should be an exact duplicate of each other node when it comes to hardware selection, configuration, and driver and firmware revisions. This allows for a more reliable configuration and simplifies management and standardization. (A scripted consistency check appears after this list.)
- When iSCSI software initiators are used to connect to iSCSI software- or hardware-based targets, the network adapter used for iSCSI communication must be connected to a dedicated switch, cannot be used for any cluster communication, and cannot be a teamed network adapter.
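Driver and firmware consistency across nodes can be spot-checked from a script rather than by hand. The following Python sketch compares signed driver versions between prospective nodes by querying WMI through the built-in wmic utility; the node names are placeholders, and the approach assumes remote WMI access and administrator rights on every node.

# Sketch: compare signed driver versions across prospective cluster nodes.
# Assumes remote WMI access is enabled and the account running the script
# has administrator rights on every node; node names are placeholders.
import subprocess
from collections import defaultdict

NODES = ["NODE1", "NODE2"]  # hypothetical cluster node names

def driver_versions(node):
    """Return {device_name: driver_version} for signed drivers on a node."""
    out = subprocess.run(
        ["wmic", "/node:" + node, "path", "Win32_PnPSignedDriver",
         "get", "DeviceName,DriverVersion", "/format:csv"],
        capture_output=True, text=True, check=True).stdout
    versions = {}
    for line in out.splitlines():
        parts = line.strip().split(",")
        # CSV rows are: Node,DeviceName,DriverVersion (header row is skipped)
        if len(parts) == 3 and parts[2] and parts[1] != "DeviceName":
            versions[parts[1]] = parts[2]
    return versions

per_device = defaultdict(dict)
for node in NODES:
    for device, version in driver_versions(node).items():
        per_device[device][node] = version

# Report any device whose driver version differs between nodes.
for device, by_node in sorted(per_device.items()):
    if len(set(by_node.values())) > 1:
        print("MISMATCH:", device, by_node)

A report of mismatches is only a starting point; the vendor-recommended firmware and driver pairing still has to be confirmed manually.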
In addition, for Microsoft to officially support failover clusters and shared storage, the entire configuration must be tested as a whole system before it will be considered a "Windows Server 2008 Failover Cluster Supported Configuration." The whole system includes the server brand and model, local disk configuration, HBA, and network card controller firmware and driver version (and, if applicable, iSCSI software initiator software, storage array, and storage array controller firmware or SAN operating system version).
The point to keep in mind is that if a company really wants to consider using failover clusters, they should research and find a suitable solution to meet their budget. If a tested and supported solution cannot be found within their price range, they should consider alternative solutions that can restore systems in about an hour or a few hours (if they cannot budget for a restore that takes just a few minutes). The truth is that failover clusters are not for everyone, they are not for the faint of heart, and they are not within every organization's IT budget.
Even after reading all this, some administrators will still want to deploy test failover cluster configurations to gain knowledge and experience with the features and functionality. They will want to learn how to deploy and how to manage failover clusters, and they will want to learn how to train staff and present prototype solutions to management. For those administrators, various low-cost shared-storage alternatives are available, including the Windows iSCSI initiator and a software-based iSCSI target. In case a problem is encountered or data is lost or corrupted, be aware that these alternatives are not supported by Microsoft.
SAS Storage Arrays
Serial Attached SCSI (SAS) disks are one of the newest additions to the disk market. SAS storage arrays can provide organizations with affordable, entry-level, hardware-based direct attached storage arrays suitable for Windows Server 2008 clusters. SAS storage arrays are commonly limited to four hosts, but some models support extenders to add additional hosts as required. One of the major issues with direct attached storage (not specific to SAS) is that replication of the data within the storage is usually not achievable without involving one of the host systems and software.
Fiber Channel Storage Arrays
With Fiber Channel (FC) HBAs, Windows Server 2008 can access both shared and nonshared disks residing on a SAN connected to a common FC switch. This allows both the shared-storage and operating system volumes to be located on the SAN, if desired, to provide diskless servers. In many cases, however, diskless storage may not be desirable if the operating system performs many paging actions, because the cache on the storage controllers can be used up very fast and can cause delays in disk read and write operations for dedicated cluster storage. If diskless servers are desired, however, the SAN must support this option and be configured to present the OS-dedicated LUNs to only a single host exclusively. The LUNs defined for shared cluster storage must be zoned and presented to every node in the cluster, and to no other systems. In many cases, the LUN zoning or masking is configured on the fiber switch that connects the cluster nodes and the shared-storage device. This is a distinct difference between direct access storage and FC or iSCSI shared storage: both FC and iSCSI require a common fiber or Ethernet switch to establish and maintain connections between the hosts and the storage.
A properly configured FC zone for a cluster includes the World Wide Port Number (WWPN) of each cluster host's FC HBAs and the WWPN of the HBA controllers from the shared-storage device. If either the server or the storage device uses multiple HBAs to connect to a single or multiple FC switches to provide failover or load-balancing functionality, this is known as multipath I/O (MPIO), and a qualified driver for MPIO management and communication must be used. Also, the function of either MPIO failover or MPIO load balancing must be verified as approved for Windows Server 2008. Consult the shared-storage vendor (including the fiber switch vendor) for documentation and supported configurations. In addition, check the cluster hardware compatibility list (HCL) on the Microsoft website to find approved configurations.
iSCSI Storage
When organizations want to use iSCSI storage for Windows Server 2008 failover clusters, security and network isolation are highly recommended. iSCSI uses an initiator, or the host that requires access to the LUNs or iSCSI targets. Targets are located or hosted on iSCSI target portals. Using the target portal interface, you must configure the target to be accessed by multiple initiators in a cluster configuration. Both the iSCSI initiators and target portals come in software- and hardware-based models, but both models use IP networks for communication between the initiators and the targets. The targets will need to be presented to Windows as basic disks. When standard network cards will be used for iSCSI communication on Windows Server 2008 systems, the built-in Windows Server 2008 iSCSI initiator can be used, as long as the iSCSI target can support the authentication and security options provided, if used.
Regardless of whether you choose the Microsoft software-based iSCSI initiator or hardware-based initiators or targets, iSCSI communication should be deployed on isolated network segments and preferably on dedicated network switches. Furthermore, the LUNs presented to the failover cluster should be masked from any systems that are not nodes participating in the cluster, using authentication and IPsec communication where possible. Within the Windows Server 2008 operating system, the iSCSI HBA or designated network card should not be used for any cluster communication and cannot be deployed using network teaming software, or it will not be supported by Microsoft.
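A quick way to confirm the isolation and reachability of the iSCSI segment is to test the target portal from the dedicated iSCSI adapter before configuring the initiator. The following Python sketch opens a TCP connection to the standard iSCSI port (3260) from a specific local address; the addresses shown are placeholders for an assumed isolated iSCSI network.

# Sketch: confirm the iSCSI target portal is reachable from the dedicated
# iSCSI network adapter. The addresses are placeholders; adjust them to
# match the isolated iSCSI segment in your environment.
import socket

ISCSI_NIC_IP = "192.168.50.11"            # hypothetical IP bound to the iSCSI adapter
TARGET_PORTAL = ("192.168.50.100", 3260)  # hypothetical target portal, default iSCSI port

def portal_reachable(local_ip, portal, timeout=5):
    """Open a TCP connection to the portal, sourced from a specific local address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.bind((local_ip, 0))   # source the connection from the iSCSI adapter's address
        s.connect(portal)
        return True
    except OSError:
        return False
    finally:
        s.close()

print("iSCSI portal reachable:", portal_reachable(ISCSI_NIC_IP, TARGET_PORTAL))

If the portal is reachable from any adapter other than the dedicated iSCSI one, the network isolation described above has not been achieved.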
By now, you should understand that Microsoft wants to support only those organizations that deploy failover clusters on tested and approved entire systems. In many cases, however, failover clusters can still be deployed and will function; after all, you can use the Create a Cluster Wizard to deploy a cluster that is not in a supported configuration.
NOTE
When deploying a failover cluster, pay close attention to the results of the Validate a Configuration Wizard to verify that the system has passed all storage tests and that a supported configuration is deployed.
Multipath I/O
Windows Server 2008 supports multipath I/O to external storage devices such as SANs and iSCSI targets when multiple HBAs are used in the local system or by the shared storage. MPIO can be used to provide failover access to disk storage in case of a controller or HBA failure, but some drivers also support load balancing across HBAs in both standalone and failover cluster deployments. Windows Server 2008 provides a built-in MPIO driver that can be leveraged when the manufacturer conforms to the necessary specifications to allow for the use of this built-in driver.
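To confirm that MPIO is actually claiming the shared LUNs, the disk summary can be captured before and after a deliberate path failure. The following Python sketch shells out to mpclaim.exe, the utility installed with the MPIO feature; the -s -d switches used here are an assumption to verify against your MPIO documentation.

# Sketch: list MPIO-claimed disks on the local node before and after pulling a
# path, to confirm failover behavior. Assumes the MPIO feature is installed and
# that mpclaim.exe supports the "-s -d" switches to show claimed disks.
import subprocess

def mpio_disks():
    """Return the raw mpclaim disk summary for the local node."""
    result = subprocess.run(["mpclaim.exe", "-s", "-d"],
                            capture_output=True, text=True, check=True)
    return result.stdout

print("MPIO disk summary before path failure test:")
print(mpio_disks())
# Disable one HBA port or unplug one path, wait for path recovery, then run
# mpio_disks() again and confirm the disks remain present on the surviving path.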
Volume Shadow Copy for Shared Storage Volumes
The Volume Shadow Copy Service (VSS) is supported on shared-storage volumes. VSS can take a point-in-time snapshot of an entire volume, enabling administrators and users to recover data from a previous version. Furthermore, failover clusters and the entire Windows backup architecture use VSS to store backup data. Many of today's services and applications that are certified to work on Windows Server 2008 failover clusters are VSS compliant, and careful consideration should be made when choosing an alternative backup system, unless the system is provided by the shared-storage manufacturer and certified to work in conjunction with VSS, Windows Server 2008, and the service or application running on the failover cluster.
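Before committing to a VSS-based backup approach for the clustered workloads, it is worth confirming that the VSS writers and providers on each node are healthy. The following Python sketch is one way to do that by wrapping the built-in vssadmin command; the output parsing is intentionally loose and only flags writers that do not report a stable state.

# Sketch: confirm that VSS providers and writers on a cluster node are healthy
# before relying on a VSS-based backup of the shared-storage volumes.
# vssadmin ships with Windows Server 2008; run this in an elevated context.
import subprocess

def vss_report():
    writers = subprocess.run(["vssadmin", "list", "writers"],
                             capture_output=True, text=True, check=True).stdout
    providers = subprocess.run(["vssadmin", "list", "providers"],
                               capture_output=True, text=True, check=True).stdout
    return writers, providers

writers, providers = vss_report()
print(providers)
# Flag any writer whose State line does not report Stable.
for block in writers.split("Writer name:"):
    if "State:" in block and "Stable" not in block:
        print("Writer not stable:", block.splitlines()[0].strip())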
Failover Cluster Node Operating System Selection
Hyper-V requires the 64-bit version of Windows Server 2008 to run on the host server. To do host-level failover clustering, the 64-bit version of Windows Server 2008 must be either the Enterprise Edition or the Datacenter Edition.
Deploying a Failover Cluster for Hyper-V Hosts
The Windows Server 2008 failover cluster feature is not installed on a Hyper-V host system by default and must be installed before failover clusters can be deployed. Alternatively, for administrative workstations, the remote server management features can be installed, which include the Failover Cluster Management snap-in, but the failover cluster feature will need to be installed on all nodes that will participate in the failover cluster. Even before installing the failover cluster feature, several steps should be taken on each node of the cluster to help deploy a reliable failover cluster. Before deploying a failover cluster, perform the following steps on each node that will be a member of the failover cluster:
- Configure fault-tolerant volumes or LUNs using local disks or SAN attached storage for the operating system volume.
- Configure at least two network cards, one for client and cluster communication and one for dedicated cluster communication.
- For iSCSI shared storage, configure an additional, dedicated network adapter or hardware-based iSCSI HBA.
- Rename each network card connection for easy identification within the Cluster Management console after the failover cluster is created. For example, rename Local Area Connection to PUBLIC, Local Area Connection 2 to iSCSI, and Local Area Connection 3 to HEARTBEAT, as required and possible. Also, if network teaming will be used, configure the team first, excluding teaming from iSCSI connections, and rename each physical network adapter in the team to TEAMMEMBER1 and TEAMMEMBER2. The virtual team adapter should then get the name of PUBLIC or HEARTBEAT.
- Configure all necessary IPv4 and IPv6 addresses as static configurations.
- Verify that any and all HBAs and other storage controllers are running the proper firmware and matched driver version suitable for Windows Server 2008 failover clusters.
- If shared storage will be used, plan to use at least two separate LUNs, one to serve as the witness disk and one to serve as the cluster disk for a high-availability service or application group.
- If applications or services not included with Windows Server 2008 will be deployed in the failover cluster, as a best practice, add an additional fault-tolerant array or LUN to the system to store the application installation and service files.
- Ensure that proper LUN masking and zoning has been configured at the FC or Ethernet switch level for FC or iSCSI shared-storage communication, suitable for failover clustering. Each node in the failover cluster, along with the HBAs of the shared-storage device, should have exclusive access to the LUNs presented to the failover cluster.
- If multiple HBAs will be used in each failover node or in the shared-storage device, ensure that a suitable MPIO driver has been installed. The Microsoft Windows Server 2008 MPIO feature can be used to provide this function if approved by the HBA, switch, and storage device vendors and Microsoft.
- Shut down all nodes except one, and on that node configure the shared-storage LUNs as Windows basic disks, format each as a single partition/volume for the entire span of the disk, and define an appropriate drive letter and volume label. Shut down the node used to set up the disks, bring each other node up one at a time, and verify that each LUN is available. If necessary, configure the appropriate drive letter if it does not match what was configured on the first node.
- As required, test MPIO for load balancing and failover using the appropriate diagnostic or monitoring tool to ensure proper operation on each node, one at a time.
- Designate a domain user account to be used for failover cluster management, and add this account to the local Administrators group on each cluster node. In the domain, grant this account the Create Computer Accounts right at the domain level to ensure that when the administrative and high-availability "service or application" groups are created, the necessary computer accounts can be created in the domain.
- Create a spreadsheet with the network names, IP addresses, and cluster disks that will be used for the administrative cluster and the high-availability "service or application" group or groups that will be deployed in the failover cluster. Each service or application group will require a separate network name and IPv4 address. If IPv6 is used, the address can be added separately in addition to the IPv4 address, or a custom or generic service or application group will need to be created. (A scripted example of such a planning sheet appears after this list.)
- Probably most important, install the Hyper-V role on the server so that the server system is ready and configured to be a Hyper-V host server. See Chapter 4, "Installing Windows 2008 Server and the Hyper-V Role," for instructions about installing the Hyper-V role and making sure the Hyper-V host is working properly before proceeding with the cluster configuration.
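The planning sheet called out in the preceding list can be produced by hand or generated from a short script. The following Python sketch writes a starter CSV; every group name, network name, IPv4 address, and disk label in it is a placeholder to be replaced with your own values.

# Sketch: generate the cluster planning sheet described above as a CSV file.
# All names, addresses, and disk labels below are illustrative placeholders.
import csv

rows = [
    # group, network name, IPv4 address, cluster disk
    ("Cluster administrative access point", "HVCLUSTER1", "192.168.1.50", "Witness Disk (Q:)"),
    ("Hyper-V service or application group", "HVGROUP1", "192.168.1.51", "Cluster Disk 1 (V:)"),
]

with open("failover-cluster-plan.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Group", "Network Name", "IPv4 Address", "Cluster Disk"])
    writer.writerows(rows)

print("Wrote failover-cluster-plan.csv with", len(rows), "rows")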
After completing the tasks in the preceding list, you can install the failover cluster. Failover clusters are deployed as follows:
1. Preconfigure the nodes, as listed previously.
2. Install the failover cluster feature.
3. Run the Validate a Configuration Wizard and review the results to ensure that all tests pass successfully. If any tests fail, the configuration will not be supported by Microsoft and can be prone to several different types of issues and instability.
4. Run the Create a Cluster Wizard to actually deploy the administrative cluster.
5. Customize the failover cluster properties.
6. Run the High Availability Wizard to create a high-availability service or application group within the failover cluster for the Hyper-V virtualization role.
7. Test the failover cluster configuration, and back it up.
Installing the Failover Cluster Feature on a Hyper-V Host
Before a failover cluster can be deployed, the necessary feature must be installed. To install the failover cluster feature, perform the following steps:
1. Log on to the Windows Server 2008 cluster node with an account with administrator privileges.
2. Click Start, All Programs, Administrative Tools, and select Server Manager.
3. When Server Manager opens, in the tree pane select the Features node.
4. In the Tasks pane, click the Add Features link.
5. In the Add Features window, select Failover Clustering and click Next.
6. When the installation completes, click the Close button to complete the installation and return to Server Manager.
7. Close Server Manager and install the Failover Cluster feature on each of the remaining cluster nodes.
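On nodes where stepping through Server Manager is inconvenient, such as when preparing many hosts, the same feature can be added from the command line. The following Python sketch wraps ServerManagerCmd.exe, which ships with Windows Server 2008; the Failover-Clustering feature identifier used here is an assumption you should confirm with ServerManagerCmd -query before relying on it.

# Sketch: script the feature installation on a node instead of using the
# Server Manager GUI. Assumes ServerManagerCmd.exe is on the path and that the
# feature identifier is Failover-Clustering; run elevated on each node.
import subprocess

result = subprocess.run(
    ["ServerManagerCmd.exe", "-install", "Failover-Clustering"],
    capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print("Installation failed:", result.stderr)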
FIGURE 12.1 Adding the servers to be validated by the Validate a Configuration Wizard
Running the Validate a Configuration Wizard
Failover Cluster Management is the new MMC snap-in used to administer the failover cluster feature. After the feature is installed, the next step is to run the Validate a Configuration Wizard from the Tasks pane of the Failover Cluster Management console. All nodes should be up and running when the wizard is run. To run the Validate a Configuration Wizard, perform the following steps:
1. Log on to one of the Windows Server 2008 cluster nodes with an account with administrator privileges over all nodes in the cluster.
2. Click Start, All Programs, Administrative Tools, and select Failover Cluster Management.
3. When the Failover Cluster Management console opens, click the Validate a Configuration link in the Actions pane.
4. When the Validate a Configuration Wizard opens, click Next on the Before You Begin page.
5. In the Select Servers or a Cluster page, enter the name of a cluster node and click the Add button. Repeat this process until all nodes are added to the list, as shown in Figure 12.1, and click Next to continue.
6. In the Testing Options page, read the details that explain the requirements for all tests to pass to be supported by Microsoft. Select the Run All Tests (Recommended) radio button, and then click Next to continue.
7. In the Confirmation page, review the list of servers that will be tested and the list of tests that will be performed, and then click Next to begin testing the servers.
8. When the tests complete, the Summary window displays the results and whether the configuration is suitable for failover clustering, as shown in Figure 12.2. Click Finish to close the Validate a Configuration Wizard. If the test failed, click the View Report button to review detailed results and determine which test failed and why.
FIGURE 12.2 A successful result of the Validate a Configuration Wizard is required for Microsoft failover cluster support
Even if the Validate a Configuration Wizard does not pass every test, you may still be able to create a cluster (depending on the test). After the Validate a Configuration Wizard completes successfully, the cluster can be created.
Creating the Hyper-V Host Failover Cluster
When the Hyper-V host failover cluster is first created, all nodes in the cluster should be up and running. The exception to that rule is when failover clusters use direct attached storage such as SAS devices that require a process of creating the cluster on a single node and adding other nodes one at a time. For clusters that will not use shared storage or clusters that will connect to shared storage using iSCSI or Fiber Channel connections, all nodes should be powered on during cluster creation. To create the failover cluster, complete the following steps:
1. Log on to one of the Windows Server 2008 cluster nodes with an account with administrator privileges over all nodes in the cluster.
2. Click Start, All Programs, Administrative Tools, and select Failover Cluster Management.
3. When the Failover Cluster Management console opens, click the Create a Cluster link in the Actions pane.
4. When the Create Cluster Wizard opens, click Next on the Before You Begin page.
5. In the Select Servers page, enter the name of each cluster node and click Add. When all the nodes are listed, click the Next button to continue.
FIGURE 12.3 Defining the network name and IPv4 address for the failover cluster
6. In the Access Point for Administering the Cluster page, type in the name of the cluster, complete the IPv4 address as shown in Figure 12.3, and click Next.
7. In the Confirmation page, review the settings, and then click Next to create the cluster.
8. In the Summary page, review the results of the cluster-creation process and click Finish to return to the Failover Cluster Management console. If there are any errors, you can click the View Report button to reveal the detailed cluster-creation report.
9. Back in the Failover Cluster Management console, select the cluster name in the tree pane. In the Tasks pane, review the configuration of the cluster.
10. In the tree pane, select and expand Nodes to list all the cluster nodes.
11. Select Storage and review the cluster storage in the Tasks pane listed under Summary of Storage, as shown in Figure 12.4.
12. Expand Networks in the tree pane to review the list of networks. Select each network and review the names of the adapters in each network.
13. After confirming that the cluster is complete, close the Failover Cluster Management console and log off of the cluster node.
FIGURE 12.4 Displaying the dedicated cluster storage
After the cluster has been created, you should perform several additional tasks before creating any service or application groups with the High Availability Wizard. These tasks can include, but may not require, customizing the cluster networks, adding storage to the cluster, adding nodes to the cluster, and changing the cluster quorum model.
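After the wizard finishes, a simple check that the cluster access point defined in step 6 resolves and responds can catch DNS or IP conflicts early. The following Python sketch performs that check; the cluster name and expected address are placeholders that should match whatever was entered in the wizard.

# Sketch: confirm the cluster access point resolves in DNS and answers on the
# network. The cluster name and IPv4 address below are placeholders.
import socket
import subprocess

CLUSTER_NAME = "HVCLUSTER1"   # hypothetical cluster network name
EXPECTED_IP = "192.168.1.50"  # hypothetical cluster IPv4 address

try:
    resolved = socket.gethostbyname(CLUSTER_NAME)
except socket.gaierror:
    raise SystemExit(CLUSTER_NAME + " does not resolve in DNS yet")

print("Resolved", CLUSTER_NAME, "to", resolved, "(expected", EXPECTED_IP + ")")

# Basic reachability check; on Windows, -n 2 sends two echo requests.
subprocess.run(["ping", "-n", "2", resolved], check=False)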
Configuring Cluster Networks
After the cluster has been created, you should complete several tasks to improve cluster management. One of these tasks includes customizing the cluster networks. Each node in the cluster contains network adapters whose connections may have already been renamed to describe a network or to easily identify which network a particular network adapter belongs to. Once the nodes are added to the failover cluster, for each network card in a cluster node there will be a corresponding cluster network. Each cluster network will be named Cluster Network 1, Cluster Network 2, and so forth for each network. Each network can be renamed and can also be configured for use by the cluster and clients, for internal cluster use only, or the network can be excluded from any cluster use. Networks and network adapters used for iSCSI communication must be excluded from cluster usage. To customize the cluster networks, complete the following steps:
1. Log on to one of the Windows Server 2008 cluster nodes with an account with administrator privileges over all nodes in the cluster.
2. Click Start, All Programs, Administrative Tools, and select Failover Cluster Management.
3. When the Failover Cluster Management console opens, if necessary type in the name of the local cluster node to connect to the cluster.
4. When the Failover Cluster Management console connects to the cluster, select and expand the cluster name.
5. Select and expand Networks in the tree pane and select Cluster Network 1 (for example).
6. In the Tasks pane, review the names of the network adapters in the network, as shown in Figure 12.5, for the iSCSI network adapters that are members of Cluster Network 1.