Exploring Windows Server 2008 - Part 27



Table 7.3 outlines the counters necessary to monitor memory and pagefile usage, along with a description of each.

TABLE 7.3 Important Counters and Descriptions Related to Memory Behavior

Memory: Committed Bytes
  Monitors how much memory (in bytes) has been allocated by the processes. As this number increases above available RAM, so does the size of the pagefile, because paging has increased.

Memory: Pages/sec
  Displays the number of pages that are read from or written to the disk.

Memory: Pages Output/sec
  Displays virtual memory pages written to the pagefile per second. Monitor this counter to identify paging as a bottleneck.

Memory: Page Faults/sec
  Reports both soft and hard faults.

Process: Working Set, _Total
  Displays the amount of virtual memory that is actually in use.

Paging File: %pagefile in use
  Reports the percentage of the paging file that is actually in use. This counter is used to determine whether the Windows pagefile is a potential bottleneck. If this counter remains above 50% or 75% consistently, consider increasing the pagefile size or moving the pagefile to a different disk.
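The 50%/75% pagefile guidance above can be sketched as a simple decision helper. This is a minimal illustration, not a Microsoft API: the function name, messages, and sample values are invented for the example; only the thresholds come from the table.

```python
# Hypothetical helper mapping the Paging File "%pagefile in use" counter
# (Table 7.3) to an action. Thresholds (50%/75%) come from the table;
# the function name and messages are invented for illustration.
def pagefile_recommendation(percent_in_use: float) -> str:
    if percent_in_use >= 75:
        return "grow the pagefile or move it to a separate disk"
    if percent_in_use >= 50:
        return "watch closely; consider increasing pagefile size"
    return "ok"

print(pagefile_recommendation(40))  # ok
print(pagefile_recommendation(60))  # watch closely; consider increasing pagefile size
print(pagefile_recommendation(80))  # grow the pagefile or move it to a separate disk
```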

By default, the Memory section of the Resource Overview in the Reliability and Performance Monitor, shown in Figure 7.17, provides a good high-level view of current memory activity. For more advanced monitoring of memory and pagefile activity, use the Performance Monitor component of the Reliability and Performance Monitor.

Systems experience page faults when a process requires code or data that it can't find in its working set. A working set is the amount of memory that is committed to a particular process. When this happens, the process has to retrieve the code or data from another part of physical memory (referred to as a soft fault) or, in the worst case, has to retrieve it from the disk subsystem (a hard fault). Systems today can handle a large number of soft faults without significant performance hits. However, because hard faults require disk subsystem access, they can cause the process to wait significantly, which can drag performance to a crawl. Memory and disk subsystem access speeds differ by several orders of magnitude, even with the fastest hard drives available. The Memory section of the Resource Overview in the Reliability and Performance Monitor includes columns that display working sets and hard faults by default.
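A back-of-envelope model shows why hard faults drag performance down while soft faults barely register. The latency figures below are rough assumptions chosen for illustration (RAM in the hundreds of nanoseconds, a disk-backed fault around 10 ms), not measurements from any particular system.

```python
# Back-of-envelope model of why hard faults dominate. The latency
# figures are assumptions for illustration, not measured values.
RAM_NS = 100                # assumed RAM access latency (ns)
SOFT_FAULT_NS = 2_000       # assumed soft-fault cost: resolved in RAM
HARD_FAULT_NS = 10_000_000  # assumed hard-fault cost: ~10 ms disk access

def avg_access_ns(soft_rate: float, hard_rate: float) -> float:
    """Weighted average cost per memory reference, where soft_rate and
    hard_rate are the fractions of references that fault each way."""
    hit_rate = 1.0 - soft_rate - hard_rate
    return (hit_rate * RAM_NS
            + soft_rate * SOFT_FAULT_NS
            + hard_rate * HARD_FAULT_NS)

# 0.1% soft faults barely move the average; adding just 0.01% hard
# faults makes every reference roughly 10x more expensive on average.
print(avg_access_ns(0.001, 0.0))     # ~101.9 ns
print(avg_access_ns(0.001, 0.0001))  # ~1101.9 ns
```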

FIGURE 7.17 Memory section of the Resource Overview

The Page Faults/sec counter reports both soft and hard faults. It's not uncommon to see this counter displaying rather large numbers. Depending on the workload placed on the system, this counter can display several hundred faults per second. When it gets beyond several hundred page faults per second for long durations, begin checking other memory counters to identify whether a bottleneck exists.

Probably the most important memory counter is Pages/sec. It reveals the number of pages read from or written to disk and is, therefore, a direct representation of the number of hard page faults the system is experiencing. Microsoft recommends upgrading the amount of memory in systems that are seeing Pages/sec values consistently averaging more than five pages per second. In actuality, you'll begin noticing slower performance when this value is consistently higher than 20. So, it's important to carefully watch this counter as it nudges higher than 10 pages per second.

NOTE

The Pages/sec counter is also particularly useful in determining whether a system is thrashing. Thrashing is a term used to describe systems experiencing more than 100 pages per second. Thrashing should never be allowed to occur on Windows 2008 systems because the reliance on the disk subsystem to resolve memory faults greatly affects how efficiently the system can sustain workloads.
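The Pages/sec thresholds scattered through this discussion (Microsoft's 5/sec upgrade guideline, the practical 20/sec slowdown point, and the 100/sec thrashing mark) can be combined into one illustrative classifier. The labels and function name are invented for the sketch; only the thresholds come from the text.

```python
# Illustrative classifier combining the Pages/sec guidance from the text:
# Microsoft's upgrade guideline (5/sec), noticeable slowdown (20/sec),
# and thrashing (100/sec). Labels are invented for the sketch.
def classify_pages_per_sec(pages_per_sec: float) -> str:
    if pages_per_sec > 100:
        return "thrashing"
    if pages_per_sec > 20:
        return "noticeably slow; add RAM"
    if pages_per_sec > 5:
        return "watch; above the upgrade guideline"
    return "healthy"

for value in (3, 12, 45, 250):
    print(value, classify_pages_per_sec(value))
```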

FIGURE 7.18 Virtual Memory configuration options

System memory (RAM) is limited in size, and Windows supplements the use of RAM with virtual memory, which is not as limited. Windows will begin paging to disk when all RAM is being consumed, which, in turn, frees RAM for new applications and processes. Virtual memory resides in the pagefile.sys file, which is usually located in the root of the system drive. Each disk can contain a pagefile. The location and size of the pagefile are configured under the Virtual Memory section, shown in Figure 7.18.

To access the Performance Options window, complete the following steps:

1. Click Start.
2. Right-click Computer and select Properties.
3. Click the Advanced Settings link on the left.
4. When the System Properties window opens, click the Settings button under the Performance section.
5. Select the Advanced tab.
6. Click Change under Virtual Memory.

TIP

Windows will normally automatically handle and increase the size of pagefile.sys as needed. In some cases, however, you might want to increase performance and manage virtual memory settings yourself. Keeping the default pagefile on the system drive and adding a second pagefile to another hard disk can significantly improve performance.

Spanning virtual memory across multiple disks or just placing the pagefile.sys on another, less-used disk will also allow Windows to run faster. Just ensure that the other disk isn't slower than the disk pagefile.sys is currently on. The more physical memory a system has, the more virtual memory will be allocated.

Analyzing Processor Usage

Most often, the processor resource is the first one analyzed when a noticeable decrease occurs in system performance. For capacity-analysis purposes, you should monitor two counters: % Processor Time and Interrupts/sec.

The % Processor Time counter indicates the percentage of overall processor utilization. If more than one processor resides on the system, an instance for each one is included along with a total (combined) value counter. If this counter averages a usage rate of 50% or greater for long durations, first consult other system counters to identify any processes that might be improperly using the processors, or consider upgrading the processor or processors. Generally speaking, consistent utilization in the 50% range doesn't necessarily adversely affect how the system handles given workloads. When the average processor utilization spills over the 65% or higher range, performance might become intolerable. If you have multiple processors installed in the system, use the % Total Processor Time counter to determine the average usage of all processors.

The Interrupts/sec counter is also a good guide of processor health. It indicates the number of device interrupts (either hardware or software driven) that the processor is handling per second. Like the Page Faults/sec counter mentioned in the section "Monitoring System Memory and Pagefile Usage," this counter might display very high numbers (in the thousands) without significantly impacting how the system handles workloads.

Conditions that could indicate a processor bottleneck include the following:

- The average of % Processor Time is consistently more than 60% to 70%. In addition, spikes that occur frequently at 90% or greater could also indicate a bottleneck even if the average drops below the 60% to 70% mark.
- The maximum of % Processor Time is consistently more than 90%.
- The average of the System performance counter Context Switches/sec is consistently over 20,000.
- The System performance counter Processor Queue Length is consistently greater than 2.
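The conditions above can be sketched as one combined check. The thresholds come from the list; the function itself, its spike heuristic (flagging when a quarter of the samples hit 90% or more), and the sample data are assumptions made for illustration.

```python
# Sketch of the processor-bottleneck conditions above. Thresholds come
# from the text; the spike heuristic (a quarter of samples at 90%+) and
# the sample data are assumptions made for illustration.
def cpu_bottleneck(cpu_samples, ctx_switches_per_sec, run_queue_len):
    avg = sum(cpu_samples) / len(cpu_samples)
    spike_frac = sum(s >= 90 for s in cpu_samples) / len(cpu_samples)
    reasons = []
    if avg > 60:
        reasons.append("average % Processor Time above 60-70%")
    if spike_frac >= 0.25:
        reasons.append("frequent spikes at 90% or greater")
    if ctx_switches_per_sec > 20_000:
        reasons.append("Context Switches/sec consistently over 20,000")
    if run_queue_len > 2:
        reasons.append("Processor Queue Length greater than 2")
    return reasons

# Average CPU is only 58.4%, but the spikes and context switches
# still flag two of the four conditions.
print(cpu_bottleneck([30, 35, 95, 92, 40], 25_000, 1))
```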

By default, the CPU section of the Resource Overview in the Reliability and Performance Monitor, shown in Figure 7.19, provides a good high-level view of current processor activity. For more advanced monitoring of processors, use the Performance Monitor component with the counters discussed previously.

FIGURE 7.19 CPU section of the Resource Overview

Evaluating the Disk Subsystem

Hard disk drives and hard disk controllers are the two main components of the disk subsystem. The two objects that gauge hard disk performance are the physical disk and the logical disk. Although the disk subsystem components are becoming more and more powerful, they are often a common bottleneck because their speeds are orders of magnitude slower than those of other resources. The effects, however, can be minimal and maybe even unnoticeable, depending on the system configuration.

To support the Resource Overview's Disk section, the physical and logical disk counters are enabled by default in Windows 2008. The Disk section of the Resource Overview in the Reliability and Performance Monitor, shown in Figure 7.20, provides a good high-level view of current physical and logical disk activity (combined). For more advanced monitoring of disk activity, use the Performance Monitor component with the desired counters found in the Physical Disk and Logical Disk sections.

Monitoring with the physical and logical disk objects does come with a small price. Each object requires a little resource overhead when you use them for monitoring. As a result, you might want to keep them disabled unless you are going to use them for monitoring purposes.

FIGURE 7.20 Disk section of the Resource Overview

So, what specific disk subsystem counters should be monitored? The most informative counters for the disk subsystem are % Disk Time and Avg. Disk Queue Length. The % Disk Time counter monitors the time that the selected physical or logical drive spends servicing read and write requests. The Avg. Disk Queue Length counter monitors the number of requests not yet serviced on the physical or logical drive. The Avg. Disk Queue Length value is an interval average; it is a mathematical representation of the number of delays the drive is experiencing. If the delay is frequently greater than 2, the disks are not equipped to service the workload, and delays in performance might occur.
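The interval-average idea behind Avg. Disk Queue Length can be illustrated with a few sampled queue depths. This is a sketch only: real counter data comes from Performance Monitor, and the sample values below are invented; the greater-than-2 threshold is the one from the text.

```python
# Sketch of the interval-average idea behind Avg. Disk Queue Length:
# average the sampled queue depths and flag sustained values above 2.
# Real data comes from Performance Monitor; the samples are invented.
def disk_overloaded(queue_samples, threshold=2.0):
    avg = sum(queue_samples) / len(queue_samples)
    return avg, avg > threshold

avg, overloaded = disk_overloaded([1.0, 3.5, 4.0, 2.5])
print(avg, overloaded)  # 2.75 True
```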

Monitoring the Network Subsystem

The network subsystem is by far one of the most difficult subsystems to monitor because of the many different variables. The number of protocols used in the network, NICs, network-based applications, topologies, subnetting, and more play vital roles in the network, but they also add to its complexity when you're trying to determine bottlenecks. Each network environment has different variables; therefore, the counters that you'll want to monitor will vary.

The information that you'll want to gain from monitoring the network pertains to network activity and throughput. You can find this information with the Performance Monitor alone, but it will be difficult at best. Instead, it's important to use other tools, such as Network Monitor, discussed earlier in this chapter in the section "Network Monitor," in conjunction with the Reliability and Performance Monitor to get the best representation of network performance possible. You might also consider using third-party network analysis tools such as network sniffers to ease monitoring and analysis efforts. Using these tools simultaneously can broaden the scope of monitoring and more accurately depict what is happening on the wire.

TABLE 7.4 Network-Based Service Counters Used to Monitor Network Traffic

Network Interface: Current Bandwidth
  Displays used bandwidth for the selected network adapter.

Server: Bytes Total/sec
  Monitors the network traffic generated by the Server service.

Because the TCP/IP suite is the underlying set of protocols for a Windows 2008 network subsystem, this discussion of capacity analysis focuses on this protocol suite.

NOTE

Windows 2008 and Windows Vista deliver enhancements to the existing quality of service (QoS) network traffic-shaping solution that is available for XP and Windows Server 2003. QoS uses Group Policy to shape and give priority to network traffic without recoding applications or making major changes to the network. Network traffic can be "shaped" based on the application sending the data, TCP or UDP addresses (source or destination), TCP or UDP protocols, and the ports used by TCP or UDP, or any combination thereof. You can find more information about QoS at Microsoft TechNet: http://technet.microsoft.com/en-us/network/bb530836.aspx

Several different network performance objects relate to TCP/IP, including ICMP, IPv4, IPv6, Network Interface, TCPv4, UDPv6, and more. Other counters such as FTP Server and WINS Server are added after these services are installed. Because entire books are dedicated to optimizing TCP/IP, this section focuses on a few important counters that you should monitor for capacity-analysis purposes.

First, examining error counters, such as Network Interface: Packets Received Errors or Packets Outbound Errors, is extremely useful in determining whether traffic is easily traversing the network. A greater number of errors indicates that packets must be resent, causing more network traffic. If a high number of errors persists on the network, throughput will suffer. This can be caused by a bad NIC, unreliable links, and so on.

If network throughput appears to be slowing because of excessive traffic, keep a close watch on the traffic being generated by network-based services such as the ones described in Table 7.4. Figure 7.21 shows these items being recorded in Performance Monitor.

FIGURE 7.21 Network-based counters in Performance Monitor

TABLE 7.4 (continued)

Redirector: Bytes Total/sec
  Processes data bytes received for statistical calculations.

NBT Connection: Bytes Total/sec
  Monitors the network traffic generated by NetBIOS over TCP connections.

Optimizing Performance by Server Roles

In addition to monitoring the common bottlenecks (memory, processor, disk subsystem, and network subsystem), be aware that the functional roles of the server influence what other counters you should monitor. The following sections outline some of the most common roles for Windows 2008 that require the use of additional performance counters for analyzing system behavior, establishing baselines, and ensuring system availability and scalability.

Microsoft also makes several other tools available that will analyze systems and recommend changes. For example, the Microsoft Baseline Configuration Analyzer (MBCA) identifies configuration issues, overtaxed hardware, and other items that would have a direct impact on system performance, and makes recommendations to rectify those issues. Ensuring a system is properly configured to deliver services for the role it supports is essential before performance monitoring and capacity planning can be taken seriously.

FIGURE 7.22 Performance Monitor counters for virtualization

Virtual Servers

Deployment of virtual servers and consolidation of hardware is becoming more and more prevalent in the business world. When multiple servers are running in a virtual environment on a single physical hardware platform, performance monitoring and tuning becomes essential to maximize the density of the virtual systems. If three or four virtual servers are running on a system and the memory and processors aren't allocated to the virtual guest session that could use the resources, virtual host resources aren't being utilized efficiently. In addition to monitoring the common items of memory, disk, network, and CPU, two performance counters related to virtual sessions are added when virtualization is running on the Windows 2008 host. Figure 7.22 shows these counters.

The performance counters related to virtualization are as follows:

virtual server

The virtual session object and its counters are available only when a virtual machine is running. Counters can be applied to all running virtual sessions or to a specific virtual session.


Summary

Capacity planning and performance analysis are critical tasks in ensuring that systems are running efficiently and effectively in the network environment. Too much capacity allocated to systems means resources are being wasted, which in the long run can cause an organization to overspend on its IT budget and not get full value out of IT spending. Too little capacity in system operations, and performance suffers in serving users and creates a hardship on servers that can ultimately cause system failure.

By properly analyzing the operational functions of a network, a network administrator can consolidate servers or virtualize servers to gain more density in system resources. This consolidation can free up physical servers that can ultimately be used for other purposes to provide high availability of IT resources, such as for disaster recovery, as failover servers, or as cluster servers.

Although it's easy to get caught up in daily administration and firefighting, it's important to step back and begin capacity-analysis and performance-optimization processes and procedures. These processes and procedures can minimize the environment's complexity, help IT personnel gain control over the environment, assist in anticipating future resource requirements, and, ultimately, reduce costs and keep users of the network happy.

Best Practices

The following are best practices from this chapter:

- Spend time performing capacity analysis to save time troubleshooting and firefighting.
- Use capacity-analysis processes to help weed out the unknowns.
- Establish systemwide policies and procedures to begin to proactively manage your system.
- After establishing systemwide policies and procedures, start characterizing system workloads.
- Use performance metrics and other variables such as workload characterization, vendor requirements or recommendations, industry-recognized benchmarks, and the data that you collect to establish a baseline.
- Use benchmark results only as a guideline or starting point.
- Use the Task Manager or the Resource Overview in the Reliability and Performance Monitor to quickly view performance.
- Use the Reliability and Performance Monitor to capture performance data on a regular basis.
- Consider using System Center Operations Manager or Microsoft and third-party products to assist with performance monitoring, capacity and data analysis, and reporting.
- Carefully choose what to monitor so that the information doesn't become unwieldy.
