Microsoft SQL Server 2008 R2 Unleashed - P161



CHAPTER 41 A Performance and Tuning Methodology

Avoid transaction nesting issues in your stored procedures by developing a consistent error-handling strategy for failed transactions or other errors that occur in transactions within your stored procedures. Implement that strategy consistently across all procedures and applications. Within stored procedures that might be nested, you need to check whether the procedure is already being called from within a transaction before issuing another BEGIN TRAN statement. If a transaction is already active, you can issue a SAVE TRAN statement so that the procedure can roll back only the work that it has performed and allow the calling procedure that initiated the transaction to determine whether to continue or abort the overall transaction.
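A minimal sketch of this pattern follows; it is an illustrative example rather than one from the book, and the procedure and table names are hypothetical:

CREATE PROCEDURE dbo.UpdateInventory
    @ProductID int,
    @QtyDelta int
AS
BEGIN
    DECLARE @startedTran bit = 0;

    IF @@TRANCOUNT = 0
    BEGIN
        -- No outer transaction: this procedure owns the transaction
        BEGIN TRAN;
        SET @startedTran = 1;
    END
    ELSE
        -- Called from within an existing transaction: set a savepoint
        -- so that only this procedure's work can be rolled back
        SAVE TRAN InventorySave;

    BEGIN TRY
        UPDATE dbo.Inventory
        SET Quantity = Quantity + @QtyDelta
        WHERE ProductID = @ProductID;

        IF @startedTran = 1
            COMMIT TRAN;
    END TRY
    BEGIN CATCH
        IF @startedTran = 1
            ROLLBACK TRAN;                   -- roll back the entire transaction
        ELSE
            ROLLBACK TRAN InventorySave;     -- roll back only this procedure's work

        -- Re-raise the error so the caller can decide whether to continue or abort
        DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);
    END CATCH
END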

Break up large, complex stored procedures into smaller, more manageable stored procedures. Try to create very modular pieces of code that are easily reused and/or nested.

For more information on using and optimizing stored procedures, see Chapter 28,

“Creating and Managing Stored Procedures.”

Coding Efficient Transactions and Minimizing Locking Contention

Poorly written or inefficient transactions can have a detrimental effect on concurrency of access to data and overall application performance. To reduce locking contention for resources, you should keep transactions as short and efficient as possible. During development, you might not even notice that a problem exists; the problem might become noticeable only after the system load is increased and multiple users are executing transactions simultaneously.

Following are some guidelines to consider when coding transactions to minimize locking

contention and improve application performance:

Do not return result sets within a transaction. Doing so prolongs the transaction unnecessarily. Perform all data retrieval and analysis outside the transaction.

Never prompt for user input during a transaction. If you do, you lose all control over the duration of the transaction. (Even the best programmers miss this one on occasion.) On the failure of a transaction, be sure to issue the rollback before putting up a message box telling the user that a problem occurred.

Use optimistic locking or snapshot isolation. If user input is unavoidable between data retrieval and modification and you need to handle the possibility of another user modifying the data values read, leverage the necessary locking strategy (or isolation level) to guarantee that no other user corrupts this data. A simple technique such as re-reading and comparing the data at update time, rather than holding locks on the resource, is often sufficient, as the sketch below illustrates.
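The following is a minimal sketch of the re-read-and-compare approach using a rowversion column; the table and column names are hypothetical and not from any sample database in this book:

-- Read phase: capture the row's current version along with its data
DECLARE @OriginalVersion binary(8);

SELECT @OriginalVersion = RowVer
FROM dbo.Customers
WHERE CustomerID = 42;

-- ... user reviews and edits the data outside any transaction ...

-- Write phase: update only if no one else changed the row in the meantime
UPDATE dbo.Customers
SET Phone = '503-555-0199'
WHERE CustomerID = 42
  AND RowVer = @OriginalVersion;

IF @@ROWCOUNT = 0
    PRINT 'Row was modified by another user; re-read and retry.';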

Keep statements that comprise a transaction in a single batch to eliminate unnecessary delays caused by network input/output between the initial BEGIN TRAN statement and the subsequent COMMIT TRAN commands. Additionally, keeping the BEGIN TRAN and COMMIT/ROLLBACK statements within the same batch helps avoid the possibility of leaving transactions open should the COMMIT/ROLLBACK statement not be issued in a subsequent batch.
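For example, in this minimal, hypothetical sketch the BEGIN TRAN, the data modifications, and the COMMIT TRAN all travel to the server as one batch:

BEGIN TRAN;

UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;

COMMIT TRAN;
GO  -- the entire transaction is submitted in a single batch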

Trang 2

Consider coding transactions entirely within stored procedures. Stored procedures typically run faster than commands executed from a batch. In addition, because they are server resident, stored procedures reduce the amount of network I/O that occurs during execution of the transaction, resulting in faster completion of the transaction.

Keep transactions as short and concise as possible. The shorter the period of time locks are held, the less chance for lock contention. Keep commands that are not essential to the unit of work being managed by the transaction (for example, assignment selects, retrieval of updated or inserted rows) outside the transaction.

Use the lowest level of locking isolation required by each process. For example, if dirty reads are acceptable and accurate results are not imperative, consider using transaction isolation level 0 (Read Uncommitted). Use the Repeatable Read or Serializable isolation levels only if absolutely necessary.
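As a quick, hedged illustration of the lowest isolation level (the table name is hypothetical):

-- Allow dirty reads for an approximate reporting query where exact
-- accuracy is not required
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT COUNT(*) AS ApproxOrderCount
FROM dbo.Orders;

-- Restore the default isolation level for subsequent work
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;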

For more information on managing transactions and minimizing locking contention, see

Chapters 37, “Locking and Performance,” and 31, “Transaction Management and the

Transaction Log.”

Application Design Guidelines

Locking/deadlock considerations—These considerations are often the most misunderstood part of SQL Server implementations. Start by standardizing on update, insert, and delete order for all applications that modify data. You do not want to design in locking or deadlocking issues because of inconsistent resource locking orders that result in a “deadly embrace.” For a more in-depth discussion on locking and deadlocking and recommendations for avoiding locking performance issues, see Chapter 37.
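A minimal illustration of this consistent-ordering guideline, using hypothetical tables: because both units of work always modify dbo.Accounts first and dbo.AccountAudit second, they acquire locks in the same order and cannot deadlock against each other over these two tables.

-- Unit of work A
BEGIN TRAN;
UPDATE dbo.Accounts SET Balance = Balance - 50 WHERE AccountID = 1;
INSERT dbo.AccountAudit (AccountID, ChangeAmount) VALUES (1, -50);
COMMIT TRAN;

-- Unit of work B (same table order as A)
BEGIN TRAN;
UPDATE dbo.Accounts SET Balance = Balance + 50 WHERE AccountID = 2;
INSERT dbo.AccountAudit (AccountID, ChangeAmount) VALUES (2, 50);
COMMIT TRAN;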

Stateless application design—To scale out, your application needs to take advantage of load-balancing tiers, application server clustering, and other scaleout options. If you don’t force the application or database to carry state, you will have much more success in your scaleout plans.

Remote Procedure Calls/linked servers—Often data can be accessed via linked server connections rather than by redundantly copying or replicating data into a database. You can take advantage of this capability with SQL Server to reduce the redundant storage of data and eliminate synchronization issues between redundant data stores. Because Remote Procedure Calls are being deprecated in SQL Server, you should stay away from them.

Transactional integrity—There is no excuse for sacrificing transactional integrity for performance. The extra overhead (and possible performance impact) comes with holding resources (and locks) until the transaction commit point to ensure data integrity. However, if you keep the logical unit of work (the business transaction) as small as possible, you can usually minimize the impact. In other words, you should keep your transaction sizes small and tight.


Distributed Data Guidelines

Distribute for disaster recovery—Those organizations that have a disaster recovery requirement that they would like to fulfill with distributed data can use several options. One is traditional bit-level stretch clustering (using third-party products such as from Symantec) to your disaster recovery site. Another is simple log shipping to a secondary data center at some interval. Keep in mind, though, that log shipping will be deprecated at some point. Other options include database mirroring (asynchronous mode), periodic full database backups that are sent to another site and restored to a standby server, and a few variations of data replication.

Distribute to satisfy partitioned data accesses—If you have very discrete and separate data access by some natural key such as geography or product types, it is often easy to have a huge performance increase by distributing or partitioning your tables to serve these accesses. Data replication options such as peer-to-peer and multiple publishers fit this well when you also need to isolate the data to separate servers and even on separate continents. Chapter 19, “Replication,” describes all replication variations.

Distribute for performance—Taking the isolation approach a bit further, you can devise a variety of SQL Server configurations that greatly isolate entire classes of data access, such as reporting access isolated away from online transactional processing, and so on. Classic SQL Server–based methods for this now include the use of database mirroring and snapshots on the mirror, a few of the data replication options, and others. Check both Chapters 20, “Database Mirroring,” and 32, “Database Snapshots,” for details.

High-Availability Guidelines

Understand your high-availability (HA) needs first—More important than applying a single technical solution to achieve high availability is to actually decide what you really need. You should evaluate exactly what your HA requirements are with a formal assessment, including the cost to the company if you do not have HA for your application. See Chapter 18, “SQL Server High Availability,” for a complete depiction of your needs and options.

Know your options for different levels of HA achievement—With SQL Server, there are several ways to achieve nearly the same level of high availability, including SQL clustering, data replication, database mirroring, log shipping, and so on. But deciding on the right one often depends on many other variables. Again, refer to Chapter 18 for details, or pick up Microsoft SQL Server High Availability from Sams Publishing as soon as you can to see how to do a complete HA assessment and technology deployment. This book covers it all!

Be aware of sacrifices for HA at the expense of performance—High availability often comes at the expense of performance. As an example, if you use database mirroring in its high availability/automatic failover configuration, you actually end up with slower transaction processing. This can hurt if your SLAs are for subsecond transactions. Be extremely careful here. Apply the HA solution that matches your entire application’s service-level agreements.

We have listed many guidelines for you to consider. Our hope is that you run through them for every SQL Server–based system you build. Use them as a checklist so that you catch the big design issues early and are designing in performance from the start.

Tools of the Performance and Tuning Trade

If you are going off to war (the performance and tuning war), you should not come empty-handed. Bring all your heaviest artillery. In other words, make sure you have plenty of performance and tuning tools to help you diagnose and resolve your issues (to fight your war). One tool we are providing you is the formal performance and tuning methodology outlined in this chapter. But methodologies are only part of the process. There are tools you can use out of the box from Microsoft as well as plenty of third-party tools that will help you shed much light on any issues you may be having. In the following sections, we outline a few of both so you can see various methods of getting to the heart of your performance and tuning problems. Some tools are highly graphic and easy to use; others are more text-based and require much more effort and organizing. But you need to come prepared to fight the war. Do not wait until you have a performance problem in your production environment to learn how to use one of these tools or to buy a performance and tuning tool. Get what you need upfront.

Microsoft Out-of-the-Box

Microsoft continues to offer some built-in capabilities around performance and tuning with tools such as SQL Server Profiler, Data Collection Services, Performance Monitor counters that monitor many of the execution aspects of SQL Server, and plenty of SQL options at the server level to ensure optimal execution. We mention a few here, but other chapters in this book describe these offerings in greater detail:

SQL Server Profiler—This rock-solid Microsoft-bundled offering is slowly getting better and better. As you can see in Figure 41.6, the SQL statements across a SQL Server instance are captured along with the execution statistics and other performance handles. You can save your traces and even import them into SQL Server tables for analysis by sorting the raw SQL code into the Top 10 (or 100) worst-performing queries. We include SQL statements on the CD of this book to help you manipulate the raw queries and organize them into a usable order (such as the top 100 worst queries).
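As a hedged sketch (not the scripts from the CD), a saved trace file can be loaded into a table with sys.fn_trace_gettable and then ranked; the file path and table name here are hypothetical:

-- Load a saved trace file into a table
SELECT *
INTO dbo.ProfilerTrace
FROM sys.fn_trace_gettable('C:\Traces\MyTrace.trc', DEFAULT);

-- Rank the 100 worst-performing statements by duration
SELECT TOP (100)
       TextData,
       Duration,
       CPU,
       Reads,
       Writes
FROM dbo.ProfilerTrace
WHERE EventClass IN (10, 12)   -- RPC:Completed, SQL:BatchCompleted
ORDER BY Duration DESC;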


Other Microsoft tools—As mentioned previously, you can also use other tools such as Perfmon counters to isolate locking, deadlocking, memory utilization, CPU utilization, cache hit ratios, physical and logical disk I/Os, disk queue lengths, and a host of others. This includes counters for database mirroring execution, data replication execution, and many others. Even DBCC is still a viable tool for helping track down pesky things like excessive page splits that play havoc on performance. Chapters 39 and 49 take you deep into these out-of-the-box capabilities for SQL Server.
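Many of the same counters can also be sampled from within T-SQL through the sys.dm_os_performance_counters view; the following is a minimal, illustrative query, and the counters chosen are just examples:

-- Sample a few SQL Server Perfmon counters from inside the server
SELECT object_name,
       counter_name,
       instance_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page Splits/sec',
                       'Buffer cache hit ratio',
                       'Lock Waits/sec');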

Third-Party Performance and Tuning Tools

There are a number of performance monitoring and tuning tools available from third-party vendors. Here, we list a few that we have some personal experience with:

Precise 8.5—For database and other tier monitoring in one package, Precise TPM (formerly Precise i3 from Symantec but spun out a few years ago) is one of the best out there. It’s a bit pricey, but for larger organizations that have vast implementations, investing in this type of holistic toolset can pay dividends. See www.precise.com for a current release of database and J2EE monitoring capabilities.

SQL Shot—This tool uses a different approach from Microsoft. In particular, SQL Shot bubbles up all SQL Server–based execution information into a cockpit of visuals (or graphics). Figure 41.7 shows how easy it is to see trouble spots in your SQL Server–based system by using SQL Shot’s main GUI. See www.dbarchitechs.com for a current release of the SQL Server 2008 R2 version.

FIGURE 41.6 SQL Server Profiler tracing SQL statements


FIGURE 41.7 Graphic depiction of SQL Server performance issues using SQL Shot

Proactive DBA SQL Capture—This tool provides no-impact database monitoring of end-user response times and SQL activity. SQL Capture works by “sniffing” network SQL conversations between clients and SQL Server, gathering a wide variety of metrics on the SQL executed by clients. Capturing can occur right on the target server for convenient, low-impact monitoring or on a separate machine for true, no-impact capturing of database activity. You can log all or only selected SQL details to a repository database of your choice and/or to operating system flat files for later viewing and analysis. See www.proactivedba.com for more detailed information on the current release.

Idera SQL diagnostic manager—This performance monitoring and diagnostics solution can proactively alert administrators to health, performance, or availability problems within their SQL Server environment, all from a central console. In addition to real-time monitoring and analysis, current versions also provide the History Browser, which allows you to diagnose historical SQL Server performance problems by selecting a historical point in time to view. See www.idera.com for information on the current release.

Again, come to the war with all your weapons. This is critical work, and you need to make great performance and tuning decisions, be able to isolate issues quickly, and uncover even the most complex problems.


Summary

In this chapter, you saw the difference a formal performance and tuning methodology can make when applied to completely new SQL Server–based implementations. We also showed you a modified version for attacking performance issues you might have with your existing SQL Server deployment. You should use the one that fits your needs best and guarantees you a great-running SQL Server implementation. But you need to use something that you can follow to the letter so that nothing falls between the cracks. We also outlined a series of performance and tuning guidelines for you that correspond with all the major layers of your SQL Server environment. You should take these along with you to every design review, code walkthrough, or solution architecture checkpoint. Make sure you consider these guidelines and factor them into all you do. Designing in performance takes deliberate actions and attention to the bigger picture. You have to know what questions to ask and what guidelines to follow. The guidelines presented in this chapter should serve you well. And lastly, we highlighted some of the tools by Microsoft and others to help you with the daunting task of tuning your SQL Server–based implementation. If you always come to the war with heavy artillery, you’ll get great results!

Chapter 42, “What’s New for Transact-SQL in SQL Server 2008,” highlights new features and changes to some existing ones to help you stay current with the ever-expanding Transact-SQL offering.


CHAPTER 42 What’s New for Transact-SQL in SQL Server 2008

MERGE Statement
Insert over DML
GROUP BY Clause Enhancements
Variable Assignment in DECLARE Statement
Compound Assignment Operators
Row Constructors
New date and time Data Types and Functions
Table-Valued Parameters
Hierarchyid Data Type
Using FILESTREAM Storage
Sparse Columns
Spatial Data Types
Change Data Capture
Change Tracking

Although SQL Server 2008 introduces some new features and changes to the Transact-SQL (T-SQL) language that provide additional capabilities, there is not a significant number of new features over what was available in 2005. T-SQL does offer the following new features:

Insert over DML

Variable assignment in DECLARE statement

Compound assignment operators

Row Constructors

Table-valued parameters

FILESTREAM Storage

Sparse Columns

Spatial Data Types

Change Data Capture

Change Tracking


NOTE

If you are making the leap from SQL Server 2000 (or earlier) to SQL Server 2008 or SQL Server 2008 R2, you may not be familiar with a number of T-SQL enhancements introduced in SQL Server 2005. Some of these enhancements are used in the examples in this chapter. If you are looking for an introduction to the new T-SQL features introduced in SQL Server 2005, check out the “In Case You Missed It” section in Chapter 43, “Transact-SQL Programming Guidelines, Tips, and Tricks,” which is provided on the CD included with this book.

NOTE

Unless stated otherwise, all examples in this chapter use tables in the bigpubs2008 database.

MERGE Statement

In versions of SQL Server prior to SQL Server 2008, if you had a set of data rows in a source table that you wanted to synchronize with a target table, you had to perform at least three operations: one scan of the source table to find matching rows to update in the target table, another scan of the source table to find nonmatching rows to insert into the target table, and a third scan to find rows in the target table not contained in the source table that needed to be deleted. SQL Server 2008, however, introduces the MERGE statement. With the MERGE statement, you can synchronize two tables by inserting, updating, or deleting rows in one table based on differences found in the other table, all in just a single statement, minimizing the number of times that rows in the source and target tables need to be processed. The MERGE statement can also be used for performing conditional inserts or updates of rows in a target table from a source table.

The MERGE statement consists of several main clauses:

The MERGE clause specifies the table or view that is the target of the insert, update, and delete operations.

The USING clause specifies the data source being joined with the target.

The ON clause specifies the join conditions that determine how the target and source match.

The WHEN MATCHED clause specifies the update or delete operations to perform when rows of the target table match rows in the source table and any additional search conditions are met.

The WHEN NOT MATCHED BY TARGET clause specifies the insert operations to perform when a row in the source table does not have a match in the target table.

The WHEN NOT MATCHED BY SOURCE clause specifies the update or delete operations to perform when rows of the target table do not have matches in the source table.

The OUTPUT clause returns a row for each row in the target table that is inserted, updated, or deleted.

The basic syntax of the MERGE statement is as follows:

[ WITH common_table_expression [ ,...n ] ]
MERGE
    [ TOP ( N ) [ PERCENT ] ]
    [ INTO ] target_table [ [ AS ] table_alias ]
    USING table_or_view_name [ [ AS ] table_alias ]
    ON merge_search_condition
    [ WHEN MATCHED [ AND search_condition ]
        THEN { UPDATE SET set_clause | DELETE } ] [ ...n ]
    [ WHEN NOT MATCHED [ BY TARGET ] [ AND search_condition ]
        THEN { INSERT [ ( column_list ) ]
               { VALUES ( values_list ) | DEFAULT VALUES } } ]
    [ WHEN NOT MATCHED BY SOURCE [ AND search_condition ]
        THEN { UPDATE SET set_clause | DELETE } ] [ ...n ]
    [ OUTPUT column_name | scalar_expression
        [ INTO { @table_variable | output_table } [ ( column_list ) ] ] [ ,...n ] ] ;

The WHEN clauses specify the actions to take on the rows identified by the conditions specified in the ON clause. The conditions specified in the ON clause determine the full result set that will be operated on. Additional filtering to restrict the affected rows can be specified in the WHEN clauses. Multiple WHEN clauses with different search conditions can be specified. However, if there is a WHEN MATCHED clause that includes a search condition, it must be specified before all other WHEN MATCHED clauses.

Note that the MERGE command must be terminated with a semicolon (;). Otherwise, you receive a syntax error.

When you run a MERGE statement, rows in the source are matched with rows in the target based on the join predicate that you specify in the ON clause. The rows are processed in a single pass, and one insert, update, or delete operation is performed per input row depending on the WHEN clauses specified. The WHEN clauses determine which of the following matches exist in the result set:

A matched pair consisting of one row from the target and one from the source as a result of the matching condition in the WHEN MATCHED clause

A row from the source that has no matching row in the target as a result of the condition specified in the WHEN NOT MATCHED BY TARGET clause

A row from the target that has no corresponding row in the source as a result of the condition specified in the WHEN NOT MATCHED BY SOURCE clause
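As a quick, hedged illustration of this behavior (the stock and staging tables here are hypothetical, not tables from the bigpubs2008 database), the following single statement updates matching rows, inserts missing ones, and deletes rows that no longer exist in the source:

MERGE INTO dbo.Stock AS tgt
USING dbo.StockStaging AS src
    ON tgt.ProductID = src.ProductID
WHEN MATCHED AND tgt.Quantity <> src.Quantity THEN
    UPDATE SET tgt.Quantity = src.Quantity
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductID, Quantity)
    VALUES (src.ProductID, src.Quantity)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE
OUTPUT $action, inserted.ProductID, deleted.ProductID;  -- reports the action taken per row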
