Microsoft SQL Server 2008 R2 Unleashed - P140

where st.stor_id between 'B100' and 'B199'
go

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms

... output deleted ...

(1077 row(s) affected)
Table 'sales_noclust'. Scan count 100, logical reads 1383, physical reads 0,
read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'stores'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0,
lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 0 ms, elapsed time = 187 ms

Here, you can see that the total execution time, denoted by the elapsed time, was relatively low and not significantly higher than the CPU time. This is due to the lack of any physical reads and the fact that all activity is performed in memory.

NOTE

In some situations, you might notice that the parse and compile time for a query is displayed twice. This happens when the query plan is added to the plan cache for possible reuse. The first set of information output is the actual parse and compile before placing the plan in cache, and the second set of information output appears when SQL Server retrieves the plan from cache. Subsequent executions still show the same two sets of output, but the parse and compile time is 0 when the plan is reused because a query plan is not being compiled.

If elapsed time is much higher than CPU time, the query had to wait for something, either I/O or locks. If you want to see the effect of physical versus logical I/Os on the performance of a query, you need to flush the pages accessed by the query from memory. You can use the DBCC DROPCLEANBUFFERS command to clear all clean buffer pages out of memory. Listing 36.6 shows an example of clearing the pages from cache and rerunning the query with the STATISTICS IO and STATISTICS TIME options enabled.


TIP

To ensure that none of the table's pages are left in cache, make sure all pages are marked as clean before running the DBCC DROPCLEANBUFFERS command. A buffer is dirty if it contains a data row modification that has either not been committed yet or has not been written out to disk yet. To clear the greatest number of buffer pages from cache memory, make sure all work is committed, checkpoint the database to force all modified pages to be written out to disk, and then execute the DBCC DROPCLEANBUFFERS command.

CAUTION

The DBCC DROPCLEANBUFFERS command should be executed in a test or development environment only. Flushing all data pages from cache memory in a production environment can have a significantly adverse impact on system performance.

LISTING 36.6 An Example of Clearing the Clean Pages from Cache to Generate Physical I/Os

USE bigpubs2008
go
CHECKPOINT
go
DBCC DROPCLEANBUFFERS
go
SET STATISTICS IO ON
SET STATISTICS TIME ON
go
select st.stor_name, ord_date, qty
   from stores st join sales_noclust s on st.stor_id = s.stor_id
   where st.stor_id between 'B100' and 'B199'
go

SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 1 ms

... output deleted ...

(1077 row(s) affected)
Table 'sales_noclust'. Scan count 100, logical reads 1383, physical reads 6,
read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'stores'. Scan count 1, logical reads 3, physical reads 2, read-ahead reads 0,
lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
   CPU time = 0 ms, elapsed time = 282 ms

Notice that this time around, even though the reported CPU time was the same, the elapsed time was 282 milliseconds due to the physical I/Os that had to be performed during this execution.

You can use the STATISTICS TIME and STATISTICS IO options together in this way as a useful tool for benchmarking and comparing performance when modifying queries or indexes.

Using datediff() to Measure Runtime

Although the STATISTICS TIME option works fine for displaying the runtime of a single query, it is not as useful for displaying the total CPU time and elapsed time for a stored procedure. The STATISTICS TIME option generates time statistics for every command executed within the stored procedure. This makes it difficult to read the output and determine the total elapsed time for the entire stored procedure.

Another way to display runtime for a stored procedure is to capture the current system time right before it starts, capture the current system time as it completes, and display the difference between the two, specifying the appropriate-sized datepart parameter to the datediff() function, depending on how long your procedures typically run. For example, if a procedure takes minutes to complete, you probably want to display the difference in seconds or minutes, rather than milliseconds. If the time to complete is in seconds, you likely want to specify a datepart of seconds or milliseconds. Listing 36.7 displays an example of using this approach.

LISTING 36.7 Using datediff() to Determine Stored Procedure Runtime

set statistics time off
set statistics io off
go
declare @start datetime
select @start = getdate()
exec sp_help
select datediff(ms, @start, getdate()) as 'runtime(ms)'
go

... output deleted ...

runtime(ms)
-----------
3263

STATISTICS PROFILE

The SET STATISTICS PROFILE option is similar to the SET SHOWPLAN_ALL option but allows the query to actually execute. It returns the same execution plan information displayed with the SET SHOWPLAN_ALL statement, with the addition of two columns that display actual execution information. The Rows column displays the actual number of rows returned in the execution step, and the Executes column shows the actual number of executions for the step. The Rows column can be compared to the EstimateRows column, and the Executes column can be compared to the EstimateExecutions column to determine the accuracy of the execution plan estimates.

You can set the STATISTICS PROFILE option for individual query sessions. In an SSMS query window, you type the following statement:

SET STATISTICS PROFILE ON

GO
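For example, the following sketch (using the bigpubs2008 tables referenced earlier in this chapter; the exact query is illustrative only) executes a query with the option enabled. The plan rows returned after the result set include the actual Rows and Executes values, which you can compare against EstimateRows and EstimateExecutions:

SET STATISTICS PROFILE ON
GO
-- run the query to be analyzed; its plan rows follow the result set
select st.stor_name, count(*) as num_sales
   from stores st join sales_noclust s on st.stor_id = s.stor_id
   group by st.stor_name
GO
SET STATISTICS PROFILE OFF
GO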

NOTE

The SET STATISTICS PROFILE option has been deprecated and may be removed in a future version of SQL Server. It is recommended that you switch to using the SET STATISTICS XML option instead.

STATISTICS XML

Similar to the STATISTICS PROFILE option, the SET STATISTICS XML option allows a query to execute while also returning the execution plan information. The execution plan information returned is similar to the XML document displayed with the SET SHOWPLAN_XML statement.

To set the STATISTICS XML option for individual query sessions in SSMS or another query tool, you type the following statement:

SET STATISTICS XML ON

GO


NOTE

With all the fancy graphical tools available, why would you want to use the text-based analysis tools? Although the graphical tools are useful for analyzing individual queries one at a time, they can be a bit tedious if you have to perform analysis on a number of queries. As an alternative, you can put all the queries you want to analyze in a script file and set the appropriate options to get the query plan and statistics output you want to see. You can then run the script through a tool such as sqlcmd and route the output to a file. You can then quickly scan the file or use an editor's Find utility to look for the obvious potential performance issues, such as table scans or long-running queries. Next, you can copy the individual problem queries you identify from the output file into SSMS, where you can perform a more thorough analysis on them.
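As a rough sketch, such a run might look like the following from a command prompt (the server name and file names here are placeholders, not from the book); -E uses Windows authentication, -i names the input script, and -o routes the output to a file:

sqlcmd -S MYSERVER -E -d bigpubs2008 -i query_analysis.sql -o query_analysis.out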

You could also set up a job to run this SQL script periodically to constantly capture and save performance statistics. This gives you a means to keep a history of the query performance and execution plans over time. This information can be used to compare performance differences as the data volumes and SQL Server activity levels change over time.

Another advantage of the textual query plan output over the graphical query plans is that for very complex queries, the graphical plan tends to get very big and spread out, so much that it's difficult to read and follow. The textual output is somewhat more compact and easier to see all at once.

Query Analysis with SQL Server Profiler

SQL Server Profiler serves as another powerful tool available for query analysis. When you must monitor a broad range of queries and database activity and analyze the performance, it is difficult to analyze all those queries manually. For example, if you have a number of stored procedures to analyze, how would you know which ones to focus on as problem procedures? You would have to identify sample parameters for all of them and manually execute them individually to see which ones were running too slowly, and then, after they were identified, do some query analysis on them.

With SQL Server Profiler, you can simply define a trace to capture performance-related statistics on the fly while the system is being used normally. This way, you can capture a representative sample of the type of activity your database will receive and capture statistics for the stored procedures as they are being executed with real data values. Also, to avoid having to look at everything, you can set a filter on the Duration column so that it displays only items with a runtime longer than the specified threshold.

The events you want to capture to analyze query performance are listed under the Performance events. They include Showplan All, Showplan Statistics Profile, Showplan Text, Showplan Text (Unencoded), Showplan XML, Showplan XML for Query Compile, and Showplan XML Statistics Profile. The data columns that you want to be sure to include when capturing the showplan events are TextData, CPU, StartTime, Duration, Reads, and Writes. Also, for the Showplan Statistics and Showplan All events, you must select the BinaryData data column.

Capturing the showplan performance information with SQL Server Profiler provides you with all the same information you can capture with all the other individual tools discussed in this chapter. You can easily save the trace information to a file or table for replaying the sequence to test index or configuration changes, or simply for historical analysis. If you choose any of the Showplan XML options, you have the option of saving the XML Showplan events separately from the overall trace file. You can choose to save all XML Showplan events in a single file or a separate file for each event (see Figure 36.15). You can then load the Showplan XML file into SSMS to view the graphical execution plans and perform your query analysis.
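If the trace itself has been saved to a file, a sketch like the following (the file path is only a placeholder) loads it into a result set with the built-in sys.fn_trace_gettable function and filters for the longest-running statements. Note that trace files in SQL Server 2005 and later record Duration in microseconds:

SELECT TextData, CPU, Reads, Writes, Duration, StartTime
   FROM sys.fn_trace_gettable('C:\traces\query_analysis.trc', default)
   WHERE Duration > 500000   -- longer than half a second
   ORDER BY Duration DESC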

When you run a SQL Server Profiler trace with the Showplan XML event enabled, SQL Server Profiler displays the graphical execution plans captured in the bottom display panel of the Profiler window when you select a record with a Showplan XML EventClass. The graphical execution plans displayed in SQL Server Profiler are just like the ones displayed in SSMS, and they also include the same detailed information available via the ToolTips. Figure 36.16 shows an example of a graphical execution plan being displayed in SQL Server Profiler.

FIGURE 36.15 Saving XML Showplan events to a single file


For more information on using SQL Server Profiler, see Chapter 6, “SQL Server Profiler.”

NOTE

Because of the capability to view the graphical execution plans in SQL Server Profiler, as well as the capability to save the XML Showplan events to a separate file, which you can bring into SSMS for analysis, the XML Showplan events provide a significant benefit over the other, older-style showplan events provided. As a matter of fact, these other showplan events are provided primarily for backward-compatibility purposes. In a future version of SQL Server, the Showplan All, Showplan Statistics Profile, Showplan Text, and Showplan Text (Unencoded) event classes will be deprecated. It is recommended that you switch to using the newer XML event classes instead.

Summary

Between the features of SSMS and the text-based query analysis tools, SQL Server 2008 provides a number of powerful utilities to help you analyze and understand how your queries are performing and also help you develop a better understanding of how queries in general are processed and optimized in SQL Server 2008. Such an understanding can help ensure that the queries you develop will be optimized more effectively by SQL Server.

The tools discussed in this chapter are useful for analyzing individual query performance. However, in a multiuser environment, query performance is often affected by more than just how a single query is optimized. One of those factors is locking contention. Chapter 37, "Locking and Performance," delves into locking in SQL Server and its impact on query and application performance.

FIGURE 36.16 Displaying an XML Showplan event in SQL Server Profiler


Locking and Performance

IN THIS CHAPTER

What's New in Locking and Performance
The Need for Locking
Transaction Isolation Levels in SQL Server
The Lock Manager
Monitoring Lock Activity in SQL Server
SQL Server Lock Types
SQL Server Lock Granularity
Lock Compatibility
Locking Contention and Deadlocks
Table Hints for Locking
Optimistic Locking

This chapter examines locking and its impact on transactions and performance in SQL Server. It also reviews locking hints that you can specify in queries to override SQL Server's default locking behavior.

What's New in Locking and Performance

SQL Server 2008 doesn't provide any significant changes in locking behavior or features over what was provided in SQL Server 2005 (such as Snapshot Isolation and improved lock and deadlock monitoring). The main new feature in SQL Server 2008 is the capability to control lock escalation behavior at the table level. The new LOCK_ESCALATION table option allows you to enable or disable table-level lock escalation. This new feature can reduce contention and improve concurrency, especially for partitioned tables.
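As a brief sketch of the syntax (the table name is simply one of the sample tables used earlier in this chapter; any table works), the option is set with ALTER TABLE. TABLE is the pre-2008 default behavior, AUTO allows escalation at the partition level for partitioned tables, and DISABLE prevents escalation to the table level in most cases:

ALTER TABLE dbo.sales_noclust SET (LOCK_ESCALATION = AUTO)
GO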

One other change for SQL Server 2008 is the deprecation of the Locks configuration setting. This option, while still visible and settable in sp_configure, is simply ignored by SQL Server 2008. Also deprecated in SQL Server 2008 is the timestamp data type. It has been replaced with the rowversion data type. For more information on using the rowversion data type, see the "Optimistic Locking" section in this chapter.
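A minimal sketch of rowversion-based optimistic locking might look like the following (the table and column names are illustrative only, not from the book). The UPDATE succeeds only if the row version captured when the row was read is still current:

CREATE TABLE dbo.customer_demo
(
   cust_id    int          NOT NULL PRIMARY KEY,
   cust_name  varchar(40)  NOT NULL,
   row_ver    rowversion   NOT NULL
)
GO

DECLARE @saved_ver binary(8), @name varchar(40)

-- read the row and remember its current version
SELECT @name = cust_name, @saved_ver = row_ver
   FROM dbo.customer_demo
   WHERE cust_id = 1

-- apply the change only if no other process has modified the row since it was read
UPDATE dbo.customer_demo
   SET cust_name = 'New Name'
   WHERE cust_id = 1
     AND row_ver = @saved_ver

IF @@ROWCOUNT = 0
   PRINT 'Row was changed by another process; update not applied.'
GO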


The Need for Locking

In any multiuser database, there must be a consistent set of rules for making changes to the data. For a true transaction-processing database, the database management system (DBMS) is responsible for resolving potential conflicts between two different processes that are attempting to change the same piece of information at the same time. Such a situation cannot be allowed to occur because the consistency of the transactions could not be guaranteed. For example, if two users were to change the same data at approximately the same time, whose change would be propagated? Theoretically, the results would be unpredictable because the answer is dependent on whose transaction completed last. Because most applications try to avoid "unpredictability" with data wherever possible (imagine a banking system returning "unpredictable" results, and you get the idea), some method must be available to guarantee sequential and consistent data changes.

Any relational database must support the ACID properties for transactions, as discussed in

Chapter 31, “Transaction Management and the Transaction Log”:

Atomicity

Consistency

Isolation

Durability

These ACID properties ensure that data changes in a database are correctly collected together and that the data is going to be left in a consistent state that corresponds with the actions being taken.

The main role of locking is to provide the isolation that transactions need. Isolation ensures that individual transactions don't interfere with one another and that a given transaction does not read or modify the data being modified by another transaction. In addition, the isolation that locking provides helps ensure consistency within transactions. Without locking, consistent transaction processing is impossible. Transactions are logical units of work that rely on a constant state of data, almost a "snapshot in time" of what they are modifying, to guarantee their successful completion.

Although locking provides isolation for transactions and helps ensure their integrity, it can also have a significant impact on the performance of the system. To keep your system performing well, you want to keep transactions as short, concise, and noninterfering as possible. This chapter explores the locking features of SQL Server that provide isolation for transactions. You'll come to understand the performance impact of the various levels and types of locks in SQL Server and how to define transactions to minimize locking performance problems.

Transaction Isolation Levels in SQL Server

Isolation levels determine the extent to which data being accessed or modified in one transaction is isolated from data being accessed or modified in other concurrent transactions. Ideally, each transaction would be completely isolated from all other transactions, but for practical and performance reasons, this might not always be the case. In a concurrent environment in the absence of locking and isolation, the following four scenarios can happen:

Lost update—In this scenario, no isolation is provided to a transaction from other transactions. Multiple transactions can read the same copy of data and modify it. The last transaction to modify the data set prevails, and the changes made by all other transactions are lost.

Dirty reads—In this scenario, one transaction can read data that is being modified by other transactions. The data read by the first transaction is inconsistent because the other transaction might choose to roll back the changes. (A brief T-SQL sketch of a dirty read appears after this list.)

Nonrepeatable reads—In this scenario, which is somewhat similar to zero isolation, a transaction reads the data twice, but before the second read occurs, another transaction modifies the data; therefore, the values returned by the first read are different from those of the second read. Because the reads are not guaranteed to be repeatable each time, this scenario is called nonrepeatable reads.

Phantom reads—This scenario is similar to nonrepeatable reads. However, instead of the actual rows that were read changing before the transaction is complete, additional rows are added to the table, resulting in a different set of rows being read the second time. Consider a scenario in which Transaction A reads rows with key values within the range of 1 through 5 and returns three rows with key values 1, 3, and 5. Before Transaction A reads the data again within the transaction, Transaction B adds two more rows with the key values 2 and 4 and commits the changes. Assuming that Transaction A and Transaction B both can run independently without blocking each other, when Transaction A runs the query a second time, it now gets five rows with key values 1, 2, 3, 4, and 5. This phenomenon is called phantom reads because in the second pass, you get records you did not expect to retrieve.
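As a minimal sketch of the dirty read scenario (the table is illustrative only, not from the book), the following statements are run in two separate sessions. Session 2's read under the READ UNCOMMITTED isolation level sees a balance change that Session 1 subsequently rolls back:

-- Session 1: modify a row but do not commit yet
BEGIN TRAN
UPDATE dbo.account_demo
   SET balance = balance - 500
   WHERE acct_id = 1
-- (pause here before rolling back)

-- Session 2: read the uncommitted change
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT acct_id, balance
   FROM dbo.account_demo
   WHERE acct_id = 1      -- returns the reduced balance: a dirty read

-- Session 1: undo the change; Session 2 has read data that was never committed
ROLLBACK TRAN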

Ideally, a DBMS must provide levels of isolation to prevent these types of scenarios. Sometimes, for practical and performance reasons, databases relax some of the rules. The American National Standards Institute (ANSI) has defined four transaction isolation levels, each providing a different degree of isolation to cover the previous scenarios. ANSI SQL-92 defines the following four standards for transaction isolation:

Read Uncommitted (Level 0)

Read Committed (Level 1)

Repeatable Read (Level 2)

Serializable (Level 3)

SQL Server 2008 supports all the ANSI isolation levels; in addition, it supports two transaction isolation levels that use row versioning. One is an alternative implementation of Read Committed isolation called Read Committed Snapshot, and the other is the Snapshot transaction isolation level.
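As a quick sketch (the database name is only a placeholder for the database you are working in), the session-level isolation level is chosen with SET TRANSACTION ISOLATION LEVEL, while the two row-versioning-based options are enabled at the database level:

-- session-level setting
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
GO

-- enable the row-versioning-based levels for a database
ALTER DATABASE bigpubs2008 SET READ_COMMITTED_SNAPSHOT ON
ALTER DATABASE bigpubs2008 SET ALLOW_SNAPSHOT_ISOLATION ON
GO

-- a session can then request snapshot isolation explicitly
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
GO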
