DECLARE @retry INT;
SET @retry = 1;
WHILE @retry = 1
BEGIN
  BEGIN TRY
    SET @retry = 0;
    BEGIN TRANSACTION;
      UPDATE HumanResources.Department
        SET Name = 'qq'
        WHERE DepartmentID = 2;
      UPDATE HumanResources.Department
        SET Name = 'x'
        WHERE DepartmentID = 1;
    COMMIT TRANSACTION;
  END TRY
  BEGIN CATCH
    IF ERROR_NUMBER() = 1205
      BEGIN
        PRINT ERROR_MESSAGE();
        SET @retry = 1;
      END;
    ROLLBACK TRANSACTION;
  END CATCH
END;

Instead of letting SQL Server decide which transaction will be the "deadlock victim," a transaction can "volunteer" to serve as the deadlock victim. That is, the transaction with the lowest deadlock priority will be rolled back first. If the deadlock priorities are the same, SQL Server falls back on the rollback cost to determine which transaction to roll back. The following code inside a transaction will inform SQL Server that the transaction should be rolled back in case of a deadlock:

SET DEADLOCK_PRIORITY LOW;

The setting actually allows for a range of values from -10 to 10, as well as NORMAL (0), LOW (-5), and HIGH (5).
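For example, a long-running reporting batch might volunteer itself as the victim using a numeric priority; the query in this sketch is only illustrative:

```sql
-- This batch volunteers to be the deadlock victim (-5 is equivalent to LOW).
SET DEADLOCK_PRIORITY -5;

BEGIN TRANSACTION;
  -- Hypothetical reporting query: any transaction that deadlocks with this
  -- one while holding a higher deadlock priority will survive the deadlock.
  SELECT COUNT(*) FROM HumanResources.Department;
COMMIT TRANSACTION;

-- Restore the default priority for subsequent work on this connection.
SET DEADLOCK_PRIORITY NORMAL;
```

Because the setting is scoped to the connection, resetting it afterward keeps later transactions from unintentionally volunteering as victims.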
Minimizing deadlocks
Even though deadlocks can be detected and handled, it's better to avoid them altogether. The following practices will help prevent deadlocks:

■ Set the server setting for maximum degree of parallelism (maxdop) to 1.

■ Keep a transaction short and to the point. Any code that doesn't have to be in the transaction should be left out of it.

■ Never code a transaction to depend on user input.

■ Try to write batches and procedures so that they obtain locks in the same order (for example, TableA, then TableB, then TableC). This way, one procedure will wait for the next, avoiding a deadlock.

■ Plan the physical schema to keep data that might be selected simultaneously close on the data page by normalizing the schema and carefully selecting the clustered indexes. Reducing the spread of the locks will help prevent lock escalation. Smaller locks help prevent lock contention.

■ Ensure that locking is done at the lowest level. This includes locks held at the following levels: database, object, page, and key. The lower the lock level, the more locks can be held without contention.

■ Don't increase the isolation level unless it's necessary. A stricter isolation level increases the duration of the locks and the types of locks held during the transaction.
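As a sketch of the consistent-lock-order practice, consider two hypothetical procedures (the names and no-op updates are invented for this example) that both touch HumanResources.Department and then Person.Person. Because both acquire locks in the same table order, neither can hold the second table's lock while waiting on the first, so the circular wait that defines a deadlock cannot form:

```sql
-- Illustrative only: both procedures update Department first, then Person.
CREATE PROCEDURE dbo.ProcA AS
BEGIN
  BEGIN TRANSACTION;
    UPDATE HumanResources.Department SET Name = Name WHERE DepartmentID = 1;
    UPDATE Person.Person SET Suffix = Suffix WHERE BusinessEntityID = 1;
  COMMIT TRANSACTION;
END;
GO
CREATE PROCEDURE dbo.ProcB AS
BEGIN
  BEGIN TRANSACTION;
    -- Same order as ProcA: Department, then Person.
    UPDATE HumanResources.Department SET Name = Name WHERE DepartmentID = 2;
    UPDATE Person.Person SET Suffix = Suffix WHERE BusinessEntityID = 2;
  COMMIT TRANSACTION;
END;
```

If ProcB instead updated Person first and Department second, the two procedures could each grab one table and block forever waiting for the other.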
Understanding SQL Server Locking
SQL Server implements the ACID isolation property with locks that protect a transaction's rows from being affected by another transaction. SQL Server locks are not just a "page lock on" and "page lock off" scheme, but rather a series of lock levels. Before they can be controlled, they must be understood.

If you've written manual locking schemes for other database engines to overcome their locking deficiencies (as I have), you may feel as though you still need to control the locks. Let me assure you that the SQL Server lock manager can be trusted. Nevertheless, SQL Server exposes several methods for controlling locks.

Within SQL Server, you can informally picture two processes: a query processor and a lock manager. The goal of the lock manager is to maintain transactional integrity as efficiently as possible by creating and dropping locks.
Every lock has the following three properties:
■ Granularity: The size of the lock.

■ Mode: The type of lock.

■ Duration: The length of time the lock is held, which is determined by the isolation level of the transaction.

Locks are not impossible to view, but some tricks make viewing the current set of locks easier. In addition, lock contention, or the compatibility of various locks to coexist with or block other locks, can adversely affect performance if it's not understood and controlled.
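One such trick is querying the sys.dm_tran_locks dynamic management view, which lists every lock currently granted or waited on; a minimal sketch:

```sql
-- List current locks with their granularity (resource_type), mode, and status.
SELECT
    dtl.request_session_id,
    dtl.resource_type,     -- granularity: DATABASE, OBJECT, PAGE, KEY, ...
    dtl.request_mode,      -- mode: S, U, X, IS, IX, SIX, Sch-S, ...
    dtl.request_status     -- GRANT, WAIT, or CONVERT
FROM sys.dm_tran_locks AS dtl
WHERE dtl.resource_database_id = DB_ID()
ORDER BY dtl.request_session_id;
```

Rows with a request_status of WAIT identify sessions currently blocked by lock contention, which makes this view a natural first stop when diagnosing blocking.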
Lock granularity
The portion of the data controlled by a lock can vary from only one row to the entire database, as shown in Table 66-1. Several combinations of locks, depending on the lock granularity, could satisfy a locking requirement.

TABLE 66-1

Lock Granularity

Row Lock        Locks a single row. This is the smallest lock available. SQL Server does not lock columns.
Page Lock       Locks a page, or 8 KB. One or more rows may exist on a single page.
Extent Lock     Locks an extent of eight pages, or 64 KB.
Table Lock      Locks the entire table.
Database Lock   Locks the entire database. This lock is used primarily during schema changes.
Key Lock        Locks nodes on an index.
For best performance, the SQL Server lock manager tries to balance the size of the lock against the number of locks. The struggle is between concurrency (smaller locks allow more transactions to access the data) and performance (fewer locks are faster, as each lock requires memory in the system to hold the information about the lock).

SQL Server automatically manages the granularity of locks by trying to keep the lock size small, escalating to a higher level only when the number of locks grows large or the lock manager comes under memory pressure.
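In SQL Server 2008, escalation behavior can also be influenced per table with ALTER TABLE; a sketch, assuming the AdventureWorks Department table:

```sql
-- Prevent row/page locks on this table from ever escalating to a table lock.
-- TABLE (always escalate to table) and AUTO (partition-aware) are the other
-- options; DISABLE trades extra lock-memory overhead for better concurrency.
ALTER TABLE HumanResources.Department
  SET (LOCK_ESCALATION = DISABLE);
```

This is a targeted tool for hot tables where an escalated table lock would block many concurrent transactions; for most tables the default behavior is appropriate.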
Lock mode
Locks not only have granularity, or size, but also a mode that determines their purpose. SQL Server has a rich set of lock modes (such as shared, update, and exclusive). Failing to understand lock modes will almost guarantee that you develop a poorly performing database.
Lock contention
The interaction and compatibility of the locks play a vital role in SQL Server's transactional integrity and performance. Certain lock modes block other lock modes, as detailed in Table 66-2. For example, if transaction 1 has a shared lock (S) and transaction 2 requests an exclusive lock (X), then the request is denied, because a shared lock blocks an exclusive lock.

Keep in mind that exclusive locks are ignored unless the page in memory has been updated, i.e., is dirty.
Shared lock (S)
By far the most common and most abused lock, a shared lock (listed as an "S" in SQL Server) is a simple "read lock." If a transaction has a shared lock, it's saying, "I'm looking at this data." Multiple transactions are allowed to view the same data, as long as no one else already has an incompatible lock.
TABLE 66-2

Lock Compatibility

                                      T2 Requests:
T1 has:                               IS    S     U     IX    SIX   X
Intent shared (IS)                    Yes   Yes   Yes   Yes   Yes   No
Shared (S)                            Yes   Yes   Yes   No    No    No
Update (U)                            Yes   Yes   No    No    No    No
Intent exclusive (IX)                 Yes   No    No    Yes   No    No
Shared with intent exclusive (SIX)    Yes   No    No    No    No    No
Exclusive (X)                         No    No    No    No    No    No
Exclusive lock (X)
An exclusive lock means that the transaction is performing a write to the data. As the name implies, an exclusive lock means that only one transaction may hold an exclusive lock on the data at one time, and that no other transactions may view the data during the exclusive lock.
Update lock (U)
An update lock can be confusing. It's not applied while a transaction is performing an update (that's an exclusive lock). Instead, the update lock means that the transaction is getting ready to perform an exclusive lock and is currently scanning the data to determine the row(s) it wants for that lock. Think of the update lock as a shared lock that's about to morph into an exclusive lock.

To help prevent deadlocks, only one transaction may hold an update lock on a given resource at any given time.
Intent locks (various)
An intent lock is a yellow flag, or warning lock, that alerts other transactions to the fact that something more is going on. The primary purpose of an intent lock is to improve performance. Because an intent lock is used for all types of locks and for all lock granularities, SQL Server has many types of intent locks. The following is a sampling of the intent locks:

■ Intent Shared Lock (IS)

■ Shared with Intent Exclusive Lock (SIX)

■ Intent Exclusive Lock (IX)

Intent locks serve to stake a claim for a shared or exclusive lock without actually being a shared or exclusive lock. In doing so, they solve two performance problems: hierarchical locking and permanent lock block.
Without intent locks, if transaction 1 holds a shared lock on a row and transaction 2 wants to grab an exclusive lock on the table, then transaction 2 needs to check for table locks, extent locks, page locks, row locks, and key locks.

Instead, SQL Server uses intent locks to propagate a lock to higher levels of the data's hierarchy. When transaction 1 gains a row lock, it also places an intent lock on the row's page and table.

The intent locks move the overhead of locking from the transaction needing to check for a lock to the transaction placing the lock. The transaction placing the lock needs to place three or four locks, i.e., key, page, object, or database. The transaction checking only needs to check for locks that contend with the three or four locks it needs to place. That one-time write of three locks potentially saves hundreds of searches later as other transactions check for locks.
Jim Gray (memorialized in Chapter 1) was the brains behind this optimization.
The intent locks also prevent a serious shared-lock contention problem: what I call "permanent lock block." As long as a transaction has a shared lock, another transaction can't gain an exclusive lock. What would happen if someone grabbed a shared lock every five seconds and held it for 10 seconds while a transaction was waiting for an exclusive lock? The UPDATE transaction could theoretically wait forever.

However, once the transaction has an intent exclusive lock (IX), no other transaction can grab a shared lock. The intent exclusive lock isn't a full exclusive lock, but it lays claim to gaining an exclusive lock in the future.
Schema lock (Sch-M, Sch-S)
Schema locks protect the database schema. SQL Server applies a schema stability (Sch-S) lock during any query to prevent data definition language (DDL) commands from changing the schema underneath the query.

A schema modification lock (Sch-M) is applied only when SQL Server is adjusting the physical schema. If SQL Server is in the middle of adding a column to a table, then the schema lock will prevent any other transactions from viewing or modifying the data during the schema-modification operation.
Controlling lock timeouts
If a transaction is waiting for a lock, it will continue to wait until the lock is available. By default, no timeout exists; a transaction can theoretically wait forever.

Fortunately, you can set the lock timeout using the SET LOCK_TIMEOUT connection option. Set the option to a number of milliseconds, or set it to infinity (the default) by setting it to -1. Setting LOCK_TIMEOUT to 0 means that the transaction will instantly give up if any lock contention occurs at all. The application will be very fast, and very ineffective.

The following query sets the lock timeout to two seconds (2,000 milliseconds):

SET LOCK_TIMEOUT 2000;

When a transaction does time out while waiting to gain a lock, a 1222 error is raised.
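That error can be trapped with TRY...CATCH; a sketch (the message and the decision to merely print it are illustrative):

```sql
SET LOCK_TIMEOUT 2000;  -- give up on any lock wait after two seconds

BEGIN TRY
  BEGIN TRANSACTION;
  UPDATE HumanResources.Department
    SET Name = 'Engineering'
    WHERE DepartmentID = 1;
  COMMIT TRANSACTION;
END TRY
BEGIN CATCH
  IF ERROR_NUMBER() = 1222   -- "Lock request time out period exceeded"
    PRINT 'Lock timeout; consider retrying the transaction.';
  IF @@TRANCOUNT > 0
    ROLLBACK TRANSACTION;
END CATCH;
```

The @@TRANCOUNT check keeps the rollback safe whether the timeout struck inside or outside the explicit transaction.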
Best Practice

I recommend setting a lock timeout in the connection. The length of the wait you should specify depends on the typical performance of the database. I usually set a five-second timeout.
Lock duration
The third lock property, lock duration, is determined by the transaction isolation level of the transactions involved: the more stringent the isolation, the longer the locks will be held. SQL Server implements four lock-based transaction isolation levels (transaction isolation levels are detailed in the next section).
Index-level locking restrictions
Isolation levels and locking hints are applied from the connection and query perspective. The only way to control locks from the table perspective is to restrict the granularity of locks on a per-index basis. Using the ALTER INDEX command, row locks and/or page locks may be disabled for a particular index, as follows:

ALTER INDEX AK_Department_Name
  ON HumanResources.Department
  SET (ALLOW_PAGE_LOCKS = OFF);

This is useful for a couple of specific purposes. If a table frequently causes waiting because of page locks, setting ALLOW_PAGE_LOCKS to OFF will force row locks. The decreased scope of the lock will improve concurrency. In addition, if a table is seldom updated but frequently read, then row-level and page-level locks are inappropriate. Allowing only table locks is suitable during the majority of table accesses. For the infrequent update, a table-exclusive lock is not a big issue.
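For the seldom-updated lookup table case, both granularities can be disabled in a single statement, leaving only table-level locks; the index name in this sketch is hypothetical (use sp_help against the table to find the real primary key index name):

```sql
-- Hypothetical index name: disabling both row and page locks on the
-- primary key index forces table-level locking for this lookup table.
ALTER INDEX PK_ProductCategory_79A81403
  ON Production.ProductCategory
  SET (ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF);
```

The trade-off is deliberate: one cheap table lock per access instead of many fine-grained locks, which suits tables that are read constantly but written rarely.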
Sp_indexoption is for fine-tuning the data schema; that's why it works at the index level. To restrict the locks on a table's primary key, use sp_help tablename to find the specific name for the primary key index.

The following commands configure the ProductCategory table as an infrequently updated lookup table. First, sp_help reports the name of the primary key index:

sp_help ProductCategory;

Result (abridged):

index_name                   index_description                                      index_keys
---------------------------  -----------------------------------------------------  -----------------
PK_ProductCategory_79A81403  nonclustered, unique, primary key located on PRIMARY   ProductCategoryID

Having identified the actual name of the primary key index, the ALTER INDEX command can be set as shown previously.
Transaction Isolation Levels
Any study of how transactions affect performance must include transactional integrity, which refers to the quality, or fidelity, of the transaction. Three types of problems violate transactional integrity: dirty reads, nonrepeatable reads, and phantom rows.

The level of isolation, or the height of the fence between transactions, can be adjusted to control which transactional faults are permitted. The ANSI SQL-92 committee specifies four isolation levels: read uncommitted, read committed, repeatable read, and serializable.

SQL Server 2005 introduced two additional row-versioning levels, which enable two forms of optimistic transaction isolation: snapshot and read committed snapshot. All six transaction isolation levels are listed in Table 66-3 and detailed in this section.
TABLE 66-3

ANSI-92 Isolation Levels

The columns show: the isolation level (set for the connection); the table hint that overrides the connection's transaction isolation level; whether a dirty read (seeing another transaction's noncommitted changes) is possible; whether a nonrepeatable read (seeing another transaction's committed changes) is possible; whether a phantom row (seeing additional rows selected by the WHERE clause as a result of another transaction) is possible; and whether reader/writer blocking (a write transaction blocking a read transaction) occurs.

Isolation Level                       Table Hint        Dirty      Non-Repeatable  Phantom    Reader/Writer
                                                        Read       Read            Row        Blocking
Read Uncommitted (least restrictive)  NoLock,           Possible   Possible        Possible   Yes
                                      ReadUncommitted
Read Committed (SQL Server default;   ReadCommitted     Prevented  Possible        Possible   Yes
  moderately restrictive)
Repeatable Read                       RepeatableRead    Prevented  Prevented       Possible   Yes
Serializable (most restrictive)       Serializable      Prevented  Prevented       Prevented  Yes
Read Committed Snapshot               (none)            Prevented  Possible        Possible   No
Snapshot                              (none)            Prevented  Prevented       Prevented  No
Internally, SQL Server uses locks for isolation (except for the snapshot isolation levels), and the transaction isolation level determines the duration of the shared lock or exclusive lock for the transaction, as listed in Table 66-4.
TABLE 66-4

Isolation Levels and Lock Duration

Isolation Level    Share-Lock Duration              Exclusive-Lock Duration
Read Uncommitted   None                             Held only long enough to prevent physical
                                                    corruption; otherwise, exclusive locks are
                                                    neither applied nor honored.
Read Committed     Held while the transaction is    Held until TRANSACTION COMMIT.
                   reading the data.
Repeatable Read    Held until TRANSACTION COMMIT.   Held until TRANSACTION COMMIT.
Serializable       Held until TRANSACTION COMMIT.   Held until TRANSACTION COMMIT. The
                                                    exclusive lock also uses a key lock (also
                                                    called a range lock) to prevent inserts.
Setting the transaction isolation level
The transaction isolation level can be set at the connection level using the SET command. Setting the transaction isolation level affects all statements for the duration of the connection, or until the transaction isolation level is changed again (note that you cannot switch to snapshot isolation in the middle of a transaction):

SET TRANSACTION ISOLATION LEVEL
  READ COMMITTED;
To view the current connection's transaction isolation level, use DBCC USEROPTIONS, or query sys.dm_exec_sessions:

SELECT TIL.Description
FROM sys.dm_exec_sessions dmv
  JOIN (VALUES
      (1, 'Read Uncommitted'),
      (2, 'Read Committed'),
      (3, 'Repeatable Read'),
      (4, 'Serializable')
    ) AS TIL(ID, Description)
    ON dmv.transaction_isolation_level = TIL.ID
WHERE session_id = @@SPID;
Result:

Read Committed

Alternately, the transaction isolation level for a single DML statement can be set by using table-lock hints in the FROM clause (WITH is optional). These override the current connection's transaction isolation level and apply the hint on a per-table basis. For example, in the next code sample, the Department table is actually accessed using a read uncommitted transaction isolation level, not the connection's repeatable read transaction isolation level:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT Name
FROM HumanResources.Department WITH (NOLOCK)
WHERE DepartmentID = 1;
Level 1 – Read uncommitted and the dirty read
The lowest level of isolation, read uncommitted, is nothing more than a line drawn in the sand. It doesn't really provide any isolation between transactions, and it allows all three transactional faults.

A dirty read, when one transaction can read uncommitted changes made by another transaction, is possibly the most egregious transaction isolation fault. It is illustrated in Figure 66-7.
FIGURE 66-7

A dirty read occurs when transaction 2 can see transaction 1's uncommitted changes.
To demonstrate the read uncommitted transaction isolation level and the dirty read it allows, the following code uses two connections, creating two transactions: transaction 1 is on the left, and transaction 2 is on the right. The second transaction will see the first transaction's update before that update is committed:

Transaction 1

USE AdventureWorks2008;

BEGIN TRANSACTION;

UPDATE HumanResources.Department
  SET Name = 'Transaction Fault'
  WHERE DepartmentID = 1;

In a separate Query Editor window (refer to Figure 66-1), execute another transaction in its own connection window. (Use the Query tab context menu and New Vertical Tab Group to split the windows.) This transaction will set its transaction isolation level to permit dirty reads. Only the second transaction needs to be set to read uncommitted for transaction 2 to experience a dirty read:
Transaction 2
USE AdventureWorks2008;
SET TRANSACTION ISOLATION LEVEL
READ UNCOMMITTED;
SELECT Name
FROM HumanResources.Department
WHERE DepartmentID = 1;
Result:

Name
--------------------------------------------------
Transaction Fault

Transaction 1 hasn't yet committed the transaction, but transaction 2 was able to read "Transaction Fault." That's a dirty read violation of transactional integrity.
To finish the task, the first transaction will roll back that transaction:
Transaction 1
ROLLBACK TRANSACTION
Best Practice
Never use read uncommitted or the WITH (NOLOCK) table hint. It's often argued that read uncommitted is OK for a reporting database, the rationale being that dirty reads won't matter because there's little updating and/or the data is not changing. If that's the case, then the reporting locks are only share locks, which won't block anyway. Another argument is that users don't mind seeing inconsistent data. However, it's often the case that users don't understand what "seeing inconsistent data" means.

There are other issues with reading uncommitted data, related to how the SQL engine optimizes such a read, that can result in your query reading the same data more than once.