If you were to open a third or fourth transaction, they would all still see the original value, "The Bald Knight." Even after the second transaction committed the change, the first transaction would still see the original value, "The Bald Knight." This is the same behavior as serializable isolation, but without the blocking that occurs with serializable isolation. Any new transaction would see the updated value.
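The previous example can be reconstructed along the following lines. This is a minimal sketch, assuming snapshot isolation has already been enabled on the Aesop sample database (ALTER DATABASE Aesop SET ALLOW_SNAPSHOT_ISOLATION ON) and assuming a Fable table with FableID and Title columns; the revised title value is arbitrary.

-- Connection 1: start a snapshot transaction and read the fable
USE Aesop;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT Title FROM Fable WHERE FableID = 1;   -- returns 'The Bald Knight'

-- Connection 2: change and commit the same row
USE Aesop;
UPDATE Fable SET Title = 'The Bald Knight (revised)' WHERE FableID = 1;

-- Connection 1: still inside its snapshot transaction
SELECT Title FROM Fable WHERE FableID = 1;   -- still 'The Bald Knight'
COMMIT TRANSACTION;

-- Any transaction started after the commit sees the updated title.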
Using read committed snapshot isolation
Read committed snapshot isolation is enabled using a similar syntax:
ALTER DATABASE Aesop SET READ_COMMITTED_SNAPSHOT ON
Like snapshot isolation, read committed snapshot isolation uses row versioning to stave off locking and blocking issues. In the previous example, transaction 1 would see transaction 2's update once it was committed.

The difference from snapshot isolation is that you don't specify a new isolation level. This just changes the behavior of the standard read committed isolation level, which means you shouldn't have to change your application to benefit from it.
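Under read committed snapshot isolation, an ordinary read committed transaction simply returns the last committed row version instead of waiting on a writer's exclusive lock. A minimal sketch, using the same assumed Fable table as before:

-- Connection 2: hold an uncommitted update on the row
BEGIN TRANSACTION;
UPDATE Fable SET Title = 'The Bald Knight (rev 2)' WHERE FableID = 1;

-- Connection 1: default read committed; no isolation-level change required
SELECT Title FROM Fable WHERE FableID = 1;
-- With READ_COMMITTED_SNAPSHOT ON this returns the last committed title
-- immediately; with it OFF, the SELECT would block until connection 2 commits.

-- Connection 2
COMMIT TRANSACTION;

-- Connection 1: a new read now returns the updated title
SELECT Title FROM Fable WHERE FableID = 1;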
Handling write conflicts
Transactions that write to data within a snapshot isolation transaction can be blocked by a previous uncommitted write transaction. This blocking won't cause the new transaction to wait; instead, it generates an error. Be sure to use TRY...CATCH to handle these errors, wait a split second, and try again.
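A minimal retry sketch follows. It assumes the update-conflict error number raised under snapshot isolation (3960), an arbitrary retry count and delay, and the Product row used elsewhere in this chapter.

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

DECLARE @Retry INT = 3;
DECLARE @Msg NVARCHAR(2048);

WHILE @Retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE Product
           SET ProductName = 'Snapshot retry test'
         WHERE ProductCode = '1001';
        COMMIT TRANSACTION;
        SET @Retry = 0;                       -- success; stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 3960 AND @Retry > 1
        BEGIN
            SET @Retry = @Retry - 1;          -- write conflict; retry
            WAITFOR DELAY '00:00:00.100';     -- wait a split second
        END
        ELSE
        BEGIN
            SET @Retry = 0;
            SET @Msg = ERROR_MESSAGE();
            RAISERROR(@Msg, 16, 1);           -- re-raise anything else
        END
    END CATCH
END;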
Using locking hints
Locking hints enable you to make minute adjustments to the locking strategy. Whereas the isolation level affects the entire connection, locking hints are specific to one table within one query (see Table 66-5).

The WITH (locking hint) option is placed after the table in the FROM clause of the query. You can specify multiple locking hints by separating them with commas.

The following query uses a locking hint in the FROM clause of an UPDATE query to prevent the lock manager from escalating the granularity of the locks:
USE OBXKites;

UPDATE Product
  SET ProductName = ProductName + ' Updated'
  FROM Product WITH (RowLock);
If a query includes subqueries, don't forget that each query's table references will generate locks and can be controlled by a locking hint.
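For example, the outer table and a subquery's table can each carry their own hint list, with multiple hints separated by commas. This sketch assumes the OBXKites ProductCategory table and its ProductCategoryID and ProductCategoryName columns:

USE OBXKites;

UPDATE Product WITH (RowLock, XLock)
   SET ProductName = ProductName + ' Updated'
 WHERE ProductCategoryID IN
       (SELECT ProductCategoryID
          FROM ProductCategory WITH (NoLock)
         WHERE ProductCategoryName = 'Kite');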
TABLE 66-5
Locking Hints

Locking Hint      Description
ReadUnCommitted   Isolation level. Doesn't apply or honor locks. Same as no lock.
ReadCommitted     Isolation level. Uses the default transaction-isolation level.
RepeatableRead    Isolation level. Holds share and exclusive locks until COMMIT TRANSACTION.
Serializable      Isolation level. Applies the serializable transaction-isolation-level durations to the table, which holds the shared lock until the transaction is complete.
TablockX          Forces an exclusive lock on the table. This prevents any other transaction from working with the table.
UpdLock           Uses an update lock, which blocks any other reads or writes of the data between the initial read and a write operation. This can be used to keep the locks taken by a SELECT statement within a serializable-isolation transaction from causing deadlocks.
Application Locks
SQL Server uses a very sophisticated locking scheme. Sometimes a process or a resource other than data requires locking. For example, a procedure might need to run that would be adversely affected if another user started another instance of the same procedure.
Several years ago, I wrote a program that routed cables for nuclear power plant designs. After the geometry of the plant (what's where) was entered and tested, the design engineers entered the cable-source equipment, destination equipment, and type of cable to be used. Once several cables were entered, a procedure wormed each cable through the cable trays so that cables were as short as possible. The procedure also considered cable fail-safe routes and separated incompatible cables. While I enjoyed writing that database, if multiple instances of the worm procedure ran simultaneously, each instance attempted to route the cables, and the data became fouled. An application lock is the perfect solution to that type of problem.
Application locks open up the whole world of SQL Server locks for custom uses within applications. Instead of using data as a locked resource, application locks use any named user resource declared in the sp_GetAppLock stored procedure.
Application locks must be obtained within a transaction. As with the locks the engine puts on the database resources, you can specify the lock mode (Shared, Update, Exclusive, IntentExclusive, or IntentShared). The return code indicates whether or not the procedure was successful in obtaining the lock, as follows:

■ 0: Lock was obtained normally.
■ 1: Lock was obtained after another procedure released it.
■ -1: Lock request failed (timeout).
■ -2: Lock request failed (canceled).
■ -3: Lock request failed (deadlock).
■ -999: Lock request failed (other error).

The sp_ReleaseAppLock stored procedure releases the lock. The following code shows how the application lock can be used in a batch or procedure:
BEGIN TRANSACTION;
DECLARE @ShareOK INT;
EXEC @ShareOK = sp_GetAppLock
    @Resource = 'CableWorm',
    @LockMode = 'Exclusive';
IF @ShareOK < 0
    PRINT 'Error handling code goes here';
EXEC sp_ReleaseAppLock @Resource = 'CableWorm';
COMMIT TRANSACTION;
GO
When the application locks are viewed using Enterprise Manager or sp_Lock, the lock appears as an "APP"-type lock. The following is an abbreviated listing of sp_lock executed at the same time as the previous code:

sp_lock

Result:
Note two minor differences in the way application locks are handled by SQL Server:

■ Deadlocks are not automatically detected.
■ If a transaction gets a lock several times, it has to release that lock the same number of times.
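A quick sketch of the second point, reusing the 'CableWorm' resource name from the earlier example:

BEGIN TRANSACTION;

EXEC sp_GetAppLock @Resource = 'CableWorm', @LockMode = 'Exclusive';
EXEC sp_GetAppLock @Resource = 'CableWorm', @LockMode = 'Exclusive';

-- The lock was granted twice, so it must be released twice:
EXEC sp_ReleaseAppLock @Resource = 'CableWorm';
EXEC sp_ReleaseAppLock @Resource = 'CableWorm';

COMMIT TRANSACTION;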
Application Locking Design
Aside from SQL Server locks, another locking issue deserves to be addressed. How the client application deals with multi-user contention is important to both the user's experience and the integrity of the data.
Implementing optimistic locking
The two basic means of dealing with multi-user access are optimistic locking and pessimistic locking. The one you use determines the coding methods of the application.
Optimistic locking assumes that no one else will attempt to change the data while a user is working on the data in a form. Therefore, you can read the data and then later go back and update the data based on what you originally read. Optimistic locking does not apply locks while a user is working with data in the front-end application. The disadvantage of optimistic locking is that multiple users can read and write the data because they aren't blocked from doing so by locks, which can result in lost updates.
The pessimistic (or "Murphy") method takes a different approach: If anything can go wrong, it will. When a user is working on some data, a pessimistic locking scheme locks that data until the user is finished with it.

While pessimistic locking may work in small workgroup applications on desktop databases, large client/server applications require higher levels of concurrency. If SQL Server locks are held while a user is viewing the data in an application, performance will be unreasonably slow.

The accepted best practice is to implement an optimistic locking scheme using minimal SQL Server locks, as well as a method for preventing lost updates.
Lost updates
A lost update occurs when two users edit the same row, complete their edits, and save the data, and the second user's update overwrites the first user's update. For example:
1. Joe opens Product 1001, a 21-inch box kite, in the front-end application. SQL Server applies a shared lock for a split second while retrieving the data.

2. Sue also opens Product 1001 using the front-end application.

3. Joe and Sue both make edits to the box-kite data. Joe rephrases the product description, and Sue fixes the product category.

4. Joe saves the data in the application, which sends an update to SQL Server. The UPDATE command replaces the old product description with Joe's new description.

5. Sue presses the "save and close" button, and her data is sent to SQL Server in another UPDATE statement. The product category is now fixed, but the old description was in Sue's form, so Joe's new description was overwritten with the old description.

6. Joe discovers the error and complains to the IT vice president during the next round of golf about the unreliability of that new SQL Server–based database.
Because lost updates only occur when two users edit the same row at the same time, the problem might not occur for months. Nonetheless, it's a flaw in the transactional integrity of the database and should be prevented.
Minimizing lost updates
If the application is going to use an optimistic locking scheme, you can minimize the chance that a lost update will occur, as well as minimize the effects of a lost update, using the following methods:
■ Normalize the database so that it has many long, narrow tables. With fewer columns in a row, the chance of a lost update is reduced. For example, the OBXKites database has a separate table for prices. A user can work on product pricing and not interfere with another user working on other product data.
■ If the UPDATE statement is being constructed by the front-end application, have it check the controls and send an update for only those columns that are actually changed by the user (see the sketch following this list). This technique alone would prevent the lost update described in the previous example of Joe's and Sue's updates, as well as most lost updates in the real world. As an added benefit, it reduces client/server traffic and the workload on SQL Server.
■ If an optimistic locking scheme is not preventing lost updates, the application is using a "he who writes last, writes best" scheme. Although lost updates may occur, a data-audit trail can minimize the effect by exposing updates to the same row within minutes and tracking the data changes.
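To illustrate the second bullet, the following sketch shows the UPDATE a front-end might send when only the category changed. The ProductCategory lookup table, its columns, and the category value are assumptions for the example:

-- Sue's save sends only the column she actually edited,
-- leaving Joe's new ProductDescription untouched.
UPDATE Product
   SET ProductCategoryID =
       (SELECT ProductCategoryID
          FROM ProductCategory
         WHERE ProductCategoryName = 'Kite')
 WHERE ProductCode = '1001';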
Preventing lost updates
A stronger solution to the lost update problem than just minimizing the effect is to block lost updates when the data has changed since it was read. This can be done in two ways. The more complicated version checks the current value of each column against the value that was originally read. Although it can be very complicated, it offers very fine-grained control over doing partial updates.
The second way is to use the RowVersion method. The rowversion data type, previously known as timestamp in earlier versions of SQL Server, automatically provides a new value every time the row is updated. By comparing the RowVersion value retrieved during the row select and the RowVersion value at the time of update, it's trivial for code to detect whether the row has been changed and a lost update would occur.

The RowVersion method can be used in SELECT and UPDATE statements by adding the RowVersion value in the WHERE clause of the UPDATE statement.
The following sequence demonstrates the RowVersion technique using two user updates. Both users begin by opening the 21-inch box kite in the front-end application. Both SELECT statements retrieve the same RowVersion value:

SELECT RowVersion, ProductName
FROM Product
WHERE ProductCode = '1001'

Result:

RowVersion           ProductName
-------------------  ----------------------
0x0000000000000077   Basic Box Kite 21 inch
Both front-end applications can grab the data and populate the form. Joe edits the ProductName to "Joe's Update." When Joe is ready to update the database, the "save and close" button executes the following SQL statement:

UPDATE Product
  SET ProductName = 'Joe''s Update'
  WHERE ProductCode = '1001'
    AND RowVersion = 0x0000000000000077
Once SQL Server has processed Joe's update, it automatically updates the RowVersion value as well. Checking the row again, Joe sees that his edit took effect:

SELECT RowVersion, ProductName
FROM Product
WHERE ProductCode = '1001'

Result:

RowVersion           ProductName
-------------------  ------------
0x00000000000000B9   Joe's Update
If the update procedure checks to see whether any rows were affected, it can detect that Joe’s edit was
accepted:
SELECT @@ROWCOUNT
Result:
1
Although the RowVersion column's value was changed, Sue's front-end application isn't aware of the new value. When Sue attempts to save her edit, the UPDATE statement won't find any rows meeting that criterion:

UPDATE Product
  SET ProductName = 'Sue''s Update'
  WHERE ProductCode = '1001'
    AND RowVersion = 0x0000000000000077
If the update procedure checks to see whether any rows were affected, it can detect that Sue’s edit was
ignored:
SELECT @@ROWCOUNT
Result:
0
This method can also be incorporated into applications driven by stored procedures. The FETCH or GET stored procedure returns the RowVersion along with the rest of the data for the row. When the application is ready to update and calls the UPDATE stored procedure, it includes the RowVersion as one of the required parameters. The UPDATE stored procedure can then check the RowVersion and raise an error if the two don't match. If the method is sophisticated, the stored procedure or the front-end application can check the audit trail to see whether or not the columns updated would cause a lost update and report the changes to the last user in the error dialog.
Transaction-Log Architecture
SQL Server's design meets the transactional-integrity ACID properties, largely because of its write-ahead transaction log. The write-ahead transaction log ensures the durability of every transaction.
Transaction log sequence
Every data-modification operation goes through the same sequence, in which it writes first to the transaction log and then to the data file. The following sections describe the 12 steps in a transaction.
Database beginning state
Before the transaction begins, the database is in a consistent state. All indexes are complete and point to the correct row. The data meets all the enforced rules for data integrity. Every foreign key points to a valid primary key.

Some data pages are likely already cached in memory. Additional data pages or index pages are copied into memory as needed. Here are the steps of a transaction:
1. The database is in a consistent state.
Data-modification command
The transaction is initiated by a submitted query, batch, or stored procedure, as shown in Figure 66-10.

2. The code issues a BEGIN TRANSACTION command. Even when the DML command is a stand-alone command without a BEGIN TRANSACTION and a COMMIT TRANSACTION, it is still handled as a transaction.

3. The code issues a single DML INSERT, UPDATE, or DELETE command, or a series of them.

To give you an example of the transaction log in action, the following code initiates a transaction and then submits two UPDATE commands:
USE OBXKites;
BEGIN TRANSACTION;
UPDATE Product
  SET ProductDescription = 'Transaction Log Test A',
      DiscontinueDate = '12/31/2003'
  WHERE Code = '1001';
UPDATE Product
  SET ProductDescription = 'Transaction Log Test B',
      DiscontinueDate = '4/1/2003'
  WHERE Code = '1002';

Notice that the transaction has not yet been committed.
FIGURE 66-10
The SQL DML commands are performed in memory as part of a transaction.
4. The query optimization plan is either generated or pulled from memory. Any required locks are applied, and the data modifications, including index updates, page splits, and any other required system operations, are performed in memory. At this point the data pages in memory are different from those that are stored in the data file.

The following section continues the chronological walk through the process.
Transaction log recorded
The most important aspect of the transaction log is that all data modifications are written to it and confirmed prior to being written to the data file (refer to Figure 66-10).
Best Practice
The write-ahead nature of the transaction log is what makes it critical that the transaction log be stored on a different disk subsystem from the data file. If they are stored separately and either disk subsystem fails, then the database will still be intact, and you will be able to recover it to the split second before the failure. Conversely, if they are on the same drive, a drive failure will require you to restore from the last backup. If the transaction log fails, it can't be recovered from the data file, so it's a best practice to invest in redundancy for the T-Log files along with regular T-Log backups.
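For example, a database can be created with the data file and transaction log on separate drives. The file paths here are placeholders only:

CREATE DATABASE OBXKites
    ON PRIMARY
       ( NAME = OBXKites_data,
         FILENAME = 'D:\SQLData\OBXKites.mdf' )   -- data file on one disk subsystem
    LOG ON
       ( NAME = OBXKites_log,
         FILENAME = 'L:\SQLLog\OBXKites.ldf' );   -- T-Log on a different disk subsystem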
5. The data modifications are written to the transaction log.

6. The transaction log DML entries are confirmed. This ensures that the log entries are in fact written to the transaction log.
Transaction commit
When the sequence of tasks is complete, the COMMIT TRANSACTION command closes the transaction. Even this task is written to the transaction log, as shown in Figure 66-11.
FIGURE 66-11
The commit transaction command launches another insert into the transaction log.
7. The following code closes the transaction:

COMMIT TRANSACTION

To watch transactions post to the transaction log, watch the Transaction screencast on www.sqlserverbible.com.
8. The COMMIT entry is written to the transaction log.

9. The transaction-log COMMIT entry is confirmed (see Figure 66-12).
If you're interested in digging deeper into the transaction log, you might want to research ::fn_dblog(startlsn, endlsn), an undocumented system function that reads the log. Also, Change Data Capture leverages the transaction log, so there are some new functions to work with the transaction log and LSNs, as described in Chapter 60, "Change Data Capture."
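For example, a quick look at the active log might pass NULL for both LSN boundaries. The column names below are commonly available, but because fn_dblog is undocumented, the output can vary by version:

SELECT [Current LSN], Operation, Context, [Transaction ID]
  FROM ::fn_dblog(NULL, NULL);   -- NULL, NULL = no starting or ending LSN filter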
Data-file update
With the transaction safely stored in the transaction log, the last operation is to write the data modification to the data file, as shown in Figure 66-13.
FIGURE 66-12
Viewing committed transactions in the transaction log using ApexSQL Log, a third-party product.
FIGURE 66-13
As one of the last steps, the data modification is written to the data file.