Level 2 – Read committed
SQL Server's default transaction isolation level, read committed (described previously), is like a nice,
polite white-picket fence between two good neighbors. It prevents dirty reads, but doesn't bog the
system down with excessive lock contention. For this reason, it's SQL Server's default isolation level and
an ideal choice for most OLTP projects.
Best Practice
Unless there’s a specific reason to escalate the transaction isolation level, I strongly recommend that you
keep your transactions at read committed.
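If you're not sure what level a session is actually running at, a quick check helps; the following is a minimal sketch using the standard DBCC USEROPTIONS command and the sys.dm_exec_sessions DMV (which reports read committed as the value 2):

-- List the current session's SET options, including the isolation level
DBCC USEROPTIONS;

-- Or query the session DMV (transaction_isolation_level 2 = read committed)
SELECT session_id, transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;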
Level 3 – Repeatable read
The third level of isolation, repeatable read, is like a 10-foot chain-link fence with barbed wire on top.
There's a significant difference between read committed's white-picket fence and repeatable read. Read
committed only has to lock the transaction that's doing the writing. To ensure that reads are consistent,
the reading transaction's share locks have to be held as well.
First, here's a walk-through of a nonrepeatable read fault.
Nonrepeatable read fault
A nonrepeatable read is similar to a dirty read, but a nonrepeatable read occurs when a transaction can
see the committed updates from another transaction (see Figure 66-8). True isolation means that one
transaction never affects another transaction. If the isolation is complete, then no data changes from
outside the transaction should be seen by the transaction. Reading a row inside a transaction should
produce the same results every time. If reading a row twice results in different values, that's a
nonrepeatable read transaction fault.
FIGURE 66-8
A nonrepeatable read transaction fault occurs when transaction 2 selects the same data twice and sees
different values.
[Figure: transaction 1's update and commit fall between transaction 2's two selects, so the second select retrieves the updated value.]
To demonstrate a nonrepeatable read, the following sequence sets up two concurrent transactions.
Transaction 2, on the right side, is in the default read committed transaction isolation level, which
allows the nonrepeatable read fault.
Assuming an unaltered copy of AdventureWorks2008, transaction 2 begins a logical transaction and
then reads the department name as "Engineering":
Transaction 2
USE AdventureWorks2008;
SET TRANSACTION ISOLATION LEVEL
READ COMMITTED;
BEGIN TRANSACTION;
SELECT Name
FROM HumanResources.Department
WHERE DepartmentID = 1;
Result:

Name
------------------------------
Engineering
Transaction 1 on the left side now updates the department name to "Non-Repeatable Read":
Transaction 1
USE AdventureWorks2008;
UPDATE HumanResources.Department
SET Name = 'Non-Repeatable Read'
WHERE DepartmentID = 1;
Transaction 2, back on the right side, reads the row again. If it sees the value updated by transaction 1,
that will be a nonrepeatable read transaction fault:
SELECT Name
FROM HumanResources.Department
WHERE DepartmentID = 1;
COMMIT TRANSACTION;
Result:

Name
------------------------------
Non-Repeatable Read
Sure enough, transaction 2's read was not repeatable.
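Because the next walk-through again assumes an unaltered copy of AdventureWorks2008, undo transaction 1's update first; a simple reset such as the following restores the original department name:

-- Restore the original department name before rerunning the scripts
USE AdventureWorks2008;
UPDATE HumanResources.Department
SET Name = 'Engineering'
WHERE DepartmentID = 1;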
Preventing the fault
Rerunning the same scripts with transaction 2's transaction isolation level set to repeatable read results
in very different behavior (assuming an unaltered copy of AdventureWorks2008):
Transaction 2
USE AdventureWorks2008;
SET TRANSACTION ISOLATION LEVEL
REPEATABLE READ;
BEGIN TRANSACTION;
SELECT Name
FROM HumanResources.Department
WHERE DepartmentID = 1;
Result:

Name
------------------------------
Engineering
Transaction 1 on the left side now updates the department name to "Non-Repeatable Read":
Transaction 1
USE AdventureWorks2008;
UPDATE HumanResources.Department
SET Name = 'Non-Repeatable Read'
WHERE DepartmentID = 1;
Here's the first major difference: There's no "1 row(s) affected" message, which indicates that the update is
paused, waiting for transaction 2's share lock.
Transaction 2, back on the right side, reads the row again. If it sees the value updated by transaction 1,
then that's a nonrepeatable read transaction fault:
SELECT Name
FROM HumanResources.Department
WHERE DepartmentID = 1;
Result:

Name
------------------------------
Engineering
But the result is not the updated value from transaction 1. Instead, the original value is still in place.
The read was repeatable, and the nonrepeatable read fault has been prevented.
When transaction 2 completes the transaction, it releases the share lock:
COMMIT TRANSACTION;
Immediately, transaction 1, on the left side, is now free to complete its update, and the "1 row(s)
affected" message appears in the Messages pane.
Repeatable read protects against the selected rows being updated, but it doesn't protect against new
rows being added to or deleted from the selected range. Therefore, you could get a different set of
results if new rows are added or deleted. To avoid this, use the serializable transaction isolation
level.
Best Practice
Repeatable read has significant overhead, but it's perfect for situations when a transaction must be able to
read the data multiple times, perhaps performing calculations, and guarantee that no other transaction
updates the data during those calculations.
The key to using repeatable read is applying it conservatively: if an application requires repeatable read in
some cases, be careful to set repeatable read only for those transactions that require it. Leave all the other
transactions at the default read committed transaction isolation level.
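As a sketch of that pattern (using the AdventureWorks2008 Sales.SalesOrderDetail table and an existing order ID purely as an illustration), a transaction might read a total, perform intermediate calculations, and read the total again, trusting that the rows it read cannot be changed underneath it:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;

DECLARE @FirstTotal money, @SecondTotal money;

SELECT @FirstTotal = SUM(LineTotal)
FROM Sales.SalesOrderDetail
WHERE SalesOrderID = 43659;

-- ... intermediate calculations here ...

SELECT @SecondTotal = SUM(LineTotal)
FROM Sales.SalesOrderDetail
WHERE SalesOrderID = 43659;

-- Repeatable read guarantees that no other transaction updated or deleted
-- the rows read above; newly inserted rows (phantoms) are still possible,
-- as the next sections explain.
COMMIT TRANSACTION;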
Level 4 – Serializable
This most restrictive isolation level prevents all transactional faults and is like a high-security prison
wall. Serializable protects against all three transactional faults: dirty reads, nonrepeatable reads, and
phantom rows.
Just as serialized inventory means that each item is uniquely identified and accounted for, the
serialized transaction isolation level means that each row in every select's result set is accounted for; and if
that select is reissued, then the result will not include any row additions or deletions made by any other
transaction.
This mode is useful for databases for which absolute transactional integrity is more important than
performance. Banking, accounting, and high-contention sales databases, such as the stock market, typically
use serialized isolation.
Best Practice
Use the serialized transaction level when performing multiple aggregations on a ranged set of rows and
there's a risk that another connection might add or remove rows from that range during the transaction.
An example might be when a transaction is reconciling multiple accounts and must ensure that no other
transaction inserts within the same range as the adjustments during the reconciliation.
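A hedged sketch of that scenario, with a hypothetical ledger table standing in for the accounts being reconciled, might look like this; the WHERE clause defines the key range that serializable protects until the commit:

-- Hypothetical table and range, for illustration only
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

SELECT SUM(Amount) AS RangeTotal
FROM dbo.LedgerEntry
WHERE AccountID BETWEEN 100 AND 199;

-- ... reconcile and post adjustments here; no other connection can insert,
-- update, or delete rows in the AccountID 100-199 range until the commit ...

COMMIT TRANSACTION;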
Phantom rows
The least severe transactional-integrity fault is a phantom row, which means that one transaction's insert,
update, or delete causes different rows to be returned in another transaction, as shown in Figure 66-9.
FIGURE 66-9
When the rows returned by a select are altered by another transaction, the phenomenon is called a
phantom row.
[Figure: transaction 1's insert and commit fall between transaction 2's two selects; the first select returns 4 rows, the second returns 6 rows, including new additional rows.]
Beginning with a clean copy of AdventureWorks2008, transaction 2 selects all the rows in a specific
range (Name BETWEEN 'A' AND 'G'):
Transaction 2
USE AdventureWorks2008;
SET TRANSACTION ISOLATION LEVEL
REPEATABLE READ;
BEGIN TRANSACTION;
SELECT DepartmentID AS DeptID, Name
FROM HumanResources.Department
WHERE Name BETWEEN 'A' AND 'G';
Result:

(the department rows whose names fall between 'A' and 'G')
Transaction 1 now inserts a new row into the range selected by transaction 2:
Transaction 1
-- Insert a row in the range
INSERT HumanResources.Department (Name, GroupName)
VALUES ('ABC Dept', 'Test Dept');
When transaction 2 selects the same range again, if 'ABC Dept' is in the result list, then a phantom
row transaction fault occurred:
Transaction 2
-- re-selecting the same range
SELECT DepartmentID AS DeptID, Name
FROM HumanResources.Department
WHERE Name BETWEEN 'A' AND 'G';
COMMIT TRANSACTION;
Result:

(the same range, now including the new 'ABC Dept' row)
Sure enough, 'ABC Dept' is in the result list, and that's the phantom row.
Serialized transaction isolation level
The highest transaction isolation level can defend the transaction against the phantom row.
Transaction 2 will first insert a sample row, 'Amazing FX Dept', so transaction 1 will have a row that
can be deleted without worrying about referential integrity issues. It then sets the transaction isolation
level, begins a transaction, and reads a range of data:
Transaction 2
USE AdventureWorks2008;
-- insert test row for deletion
INSERT HumanResources.Department
(Name, GroupName)
VALUES
('Amazing FX Dept', 'Test Dept');
SET TRANSACTION ISOLATION LEVEL
SERIALIZABLE;
BEGIN TRANSACTION;
SELECT DepartmentID AS DeptID, Name
FROM HumanResources.Department
WHERE Name BETWEEN 'A' AND 'G';
Result:

(six rows)
Transaction 2's SELECT returned six rows.
With transaction 2 inside a transaction and the serialized transaction isolation level protecting the range of
names from 'A' to 'G', transaction 1 will attempt to insert into, update within, and delete from that range:
Transaction 1
-- Insert a row in the range
INSERT HumanResources.Department (Name, GroupName)
VALUES ('ABC Dept', 'Test Dept');

-- Update Dept into the range
UPDATE HumanResources.Department
SET Name = 'ABC Test'
WHERE DepartmentID = 1; -- Engineering

-- Delete Dept from range
DELETE HumanResources.Department
WHERE DepartmentID = 17; -- Amazing FX Dept
The significant point here is that none of transaction 1's DML commands produced a
"1 row(s) affected" message.
Transaction 2 now reselects the same range:
Transaction 2
SELECT DepartmentID AS DeptID, Name
FROM HumanResources.Department
WHERE Name BETWEEN 'A' AND 'G';
Result:

(the same six rows as before)
The SELECT returns the same six rows with the same values. Transactional integrity is intact, and the
phantom row fault has been thwarted.
Transaction 1 is still on hold, waiting for transaction 2 to complete its transaction:
COMMIT TRANSACTION
As soon as transaction 2 issues a commit transaction and releases its locks, transaction 1 is free to make
its changes, and three "1 row(s) affected" messages appear in transaction 1's Messages pane:
SELECT *
FROM HumanResources.Department
Result:

(all rows in HumanResources.Department)
Selecting the range after transaction 2 is committed and transaction 1 has made its updates reveals
the inserted and updated rows added to the range. In addition, the Amazing FX department has
disappeared.
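The demo leaves test data behind; if you want to return AdventureWorks2008 to its original state, a cleanup along these lines undoes transaction 1's changes (the 'Amazing FX Dept' row was only a test row, so there's nothing to restore for it):

USE AdventureWorks2008;
-- Restore the original name of DepartmentID 1 (renamed to 'ABC Test' above)
UPDATE HumanResources.Department
SET Name = 'Engineering'
WHERE DepartmentID = 1;
-- Remove the rows inserted during the demonstrations
DELETE HumanResources.Department
WHERE Name = 'ABC Dept';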
Concurrency and the serialized isolation level are not friends, because to get the protection needed
for the serialized isolation level, more locks are required; worse, those locks have to be on
key ranges to prevent someone else from inserting rows. If you don't have the correct
indexes, the only way SQL Server can prevent phantoms is to lock the table. Locking the table is obviously not
good for concurrency. For this reason, if you need to use serialized transactions, you must ensure that
you have the correct indexes in order to avoid table locks.
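One way to see what serializable is actually holding is to look at the lock DMV from another connection while the serializable transaction is still open; key-range locks appear with a request_mode beginning with 'Range', whereas a shared or exclusive OBJECT lock on the table suggests SQL Server fell back to locking the whole table (a diagnostic sketch):

-- Run from a separate connection while the serializable transaction is open
SELECT request_session_id,
  resource_type,
  request_mode,
  request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('AdventureWorks2008')
ORDER BY request_session_id, resource_type;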
Snapshot isolations
Traditionally, writers block readers, and readers block writers, but version-based isolations are a
completely different twist. When version-based isolations are enabled, if a transaction modifies
data (irrespective of the isolation level), a pre-modification version of the data is stored. This allows
other transactions to read the original version of the data even while the original transaction is in an
uncommitted state.
Therefore, snapshot isolation eliminates writer-versus-reader contention. Nevertheless, contention isn't
completely gone; you still have writers conflicting with writers. If a second writer attempts to update
a resource that's already being updated, the second writer is blocked.
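Under snapshot isolation specifically, that blocked writer doesn't simply wait its turn: if the first writer commits, the second writer's update fails with an update-conflict error (error 3960), and its transaction must be rolled back and retried. Here's a hedged sketch of handling that, using the Aesop sample database that appears later in this section:

USE Aesop;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRY
  BEGIN TRANSACTION;
  UPDATE Fable
    SET Title = 'Second Writer'
    WHERE FableID = 2;
  COMMIT TRANSACTION;
END TRY
BEGIN CATCH
  IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
  IF ERROR_NUMBER() = 3960
    PRINT 'Snapshot update conflict: re-read the row and retry the update.';
  ELSE
    RAISERROR('Unexpected error during snapshot update.', 16, 1);
END CATCH;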
There are two version-based isolations, snapshot isolation and read committed snapshot isolation:
■ Snapshot isolation: Operates like serializable isolation without the locking and blocking issues. The same select within a transaction will see the before image of the data.
■ Read committed snapshot isolation: Sees any committed data, similar to SQL Server's default isolation level of read committed. However, importantly, it doesn't place any shared locks on the data being read. Enabling it is shown in the sketch that follows.
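Read committed snapshot isolation isn't something a session opts into; it's a database option. A minimal sketch of turning it on, shown against the Aesop sample database used later in this section (like ALLOW_SNAPSHOT_ISOLATION, switching it on needs the database free of other connections):

ALTER DATABASE Aesop
SET READ_COMMITTED_SNAPSHOT ON;

-- verify the setting
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'Aesop';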
Best Practice
Oracle's default transaction behavior is just like snapshot isolation, which is why some DBAs moving
up to SQL Server love snapshot isolation, and why some assume snapshot isolation must somehow be better
than traditional transaction isolation levels.
It's true that snapshot isolation can eliminate some locking and blocking issues and therefore improve
performance, given the right hardware.
However, the best practice is as follows: If you choose snapshot isolation, it should be an architecture issue,
not a performance issue. If another transaction is updating the data, should the user wait for the new data, or
should the user see the before image of the data? For many applications, returning the before image would
paint a false picture.
Enabling row versioning
Snapshot actually leverages SQL Server's row-versioning technology, which copies any row being
updated into TempDB. Configuring snapshot isolation, therefore, requires first enabling row versioning
for the database. Besides the TempDB load, row versioning also adds a 14-byte row identifier to each
row. This extra data is added to the row when the row is modified, if it hasn't been added previously. It
is used to store the pointer to the versioned row.
Because snapshot isolation uses row versioning, which writes copies of the rows to TempDB, this can put an incredible load on TempDB. If you enable the row-version-based isolations,
be prepared to watch TempDB and perhaps locate TempDB's data and transaction logs on their own disk
subsystems.
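To gauge how much version data the row-version-based isolations are generating, the version store itself can be queried; this is a monitoring sketch in which each row of sys.dm_tran_version_store is one versioned record held in TempDB:

-- Rough size of the version store
SELECT COUNT(*) AS version_records,
  SUM(record_length_first_part_in_bytes
    + record_length_second_part_in_bytes) / 1048576.0 AS approx_mb
FROM sys.dm_tran_version_store;

-- Which rowsets are generating the most version data
SELECT database_id, rowset_id, aggregated_record_length_in_bytes
FROM sys.dm_tran_top_version_generators
ORDER BY aggregated_record_length_in_bytes DESC;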
Row versioning alters the row structure so that a copy of the row can be sent to TempDB.
The following code enables snapshot isolation. To alter the database and turn on snapshot isolation,
there can be no other connections to the database:
USE Aesop;

ALTER DATABASE Aesop
SET ALLOW_SNAPSHOT_ISOLATION ON;

-- check snapshot isolation
SELECT name,
  snapshot_isolation_state,
  snapshot_isolation_state_desc
FROM sys.databases
WHERE database_id = DB_ID();
Transaction 1 now begins a reading transaction, leaving the transaction open (uncommitted):
USE Aesop
SET TRANSACTION ISOLATION LEVEL Snapshot;
BEGIN TRAN
SELECT Title
FROM FABLE
WHERE FableID = 2
Result:

Title
------------------------------
The Bald Knight
A second transaction begins an update to the same row that the first transaction has open:
USE Aesop;
SET TRANSACTION ISOLATION LEVEL Snapshot;
BEGIN TRAN
UPDATE Fable
SET Title = 'Rocking with Snapshots'
WHERE FableID = 2;
SELECT * FROM FABLE WHERE FableID = 2
Result:

Title
------------------------------
Rocking with Snapshots
This is pretty amazing. The second transaction is able to update the row even though the first
transaction is still open. Going back to the first transaction, it will still see the original data:
SELECT Title
FROM FABLE
WHERE FableID = 2
Result:

Title
------------------------------
The Bald Knight