Aug 31 2009
 

When is an index scan not a scan?  When it’s a seek!

Consider the following query:

USE AdventureWorks
GO
SELECT * FROM Production.TransactionHistory

As there is no filter on the query, you would expect to see an index scan.  Turn on the Actual Execution Plan, run SET STATISTICS IO ON, and have a look at the results:

[Screenshot: execution plan showing an index scan]

(113443 row(s) affected)
Table 'TransactionHistory'. Scan count 1, logical reads 792, 
physical reads 0, read-ahead reads 0, lob logical reads 0, 
lob physical reads 0, lob read-ahead reads 0.

Just as expected.  Let’s now consider the following query:

SELECT * FROM Production.TransactionHistory WHERE TransactionID >= 1
(113443 row(s) affected)
Table 'TransactionHistory'. Scan count 1, logical reads 792, 
physical reads 0, read-ahead reads 0, lob logical reads 0, 
lob physical reads 0, lob read-ahead reads 0.

[Screenshot: execution plan showing an index seek]

The query returns the same set of rows, and has identical performance, but shows up in the execution plan as a seek!

The take-home message here is that just because you can see an index seek in your query plan does not mean that performance is great.  It simply means that the table was accessed using a filter you’ve placed on the query.  A lesser, but sometimes overlooked, message is that a seek can “scan” a long range of values if every value in the range matches the filter.
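One way to see this is to compare a genuinely selective seek with the range “seek” above (the single-row query here is my own example, added for contrast) – both show up as seeks in the plan, but the logical reads tell very different stories:

SET STATISTICS IO ON
-- A selective seek: only a few logical reads (pick any single TransactionID)
SELECT * FROM Production.TransactionHistory WHERE TransactionID = 100000
-- Still shown as a seek, but it reads the same 792 pages as the scan
SELECT * FROM Production.TransactionHistory WHERE TransactionID >= 1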

Aug 27 2009
 

The Snapshot Isolation Level and Read Committed Snapshot features were a big improvement when SQL Server 2005 came out – finally, SQL Server had optimistic concurrency as well as pessimistic locking!  These features allow writers not to block readers, and readers not to block writers – instead, readers see the most recently committed version of the row, ignoring the fact that it’s currently being modified.  This sounds great on first inspection – long-running transactions won’t block other transactions! If you accidentally have a BEGIN TRANSACTION that you haven’t followed up with a COMMIT or a ROLLBACK, other users won’t be blocked.
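For reference, the two features are switched on separately – Snapshot Isolation has to be requested by each transaction once the database allows it, while Read Committed Snapshot silently changes every READ COMMITTED transaction.  A quick sketch of both switches (YourDatabase is a placeholder; only the second switch is used in the example further down):

-- Snapshot Isolation: allow it at the database level...
ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
-- ...then each session must ask for it explicitly
SET TRANSACTION ISOLATION LEVEL SNAPSHOT

-- Read Committed Snapshot: changes the behaviour of every READ COMMITTED
-- transaction in the database, with no change to application code
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON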

It recently came up in discussion that you can just turn on Read Committed Snapshot, and a lot of your blocking will be a thing of the past.  While this is true, I strongly discourage turning on Read Committed Snapshot for existing applications unless you know exactly how your application will respond to it.

Read Committed Snapshot is a modification to the Read Committed isolation level that uses row versioning to read the previously committed value.  The trouble comes when you realise that Read Committed is the default isolation level, and once you’ve turned Read Committed Snapshot on, every single read committed transaction will use row versioning.

If the developers of your application expected Read Committed to behave in the traditional way (a reasonable assumption back when they wrote the application and SQL Server 2000 was the latest version), then changing Read Committed’s behaviour can have unintended consequences.
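If you’re not sure what an existing database is already set to, sys.databases will tell you (this check is my addition, not part of the original discussion):

SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
FROM sys.databases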

Let’s look at an example.  We’ll create a new database containing a single table, Inventory.  This is a simple stock control system, and we’ll be looking at the procedure for checking out an item.  It’s worth noting that there are better and safer ways to write the stock checkout procedure so that it doesn’t have this issue (such as using Repeatable Read), but it’s quite possible that the developers of your application wrote something similar, and it worked “good enough”.

CREATE DATABASE ReadCommittedSnapshotTest
GO

USE ReadCommittedSnapshotTest
GO

CREATE TABLE Inventory (
      ItemID int NOT NULL 
    , LocationID int NOT NULL
    , Qty int
    , PRIMARY KEY (ItemID, LocationID)
)
CREATE NONCLUSTERED INDEX ncixLocationItem ON Inventory(LocationID, ItemID)
GO

Next, we’ll insert some data.

SET NOCOUNT ON
INSERT INTO Inventory (ItemID, LocationID, Qty)
VALUES(rand() * 1000 + 1, 1, rand() * 40 + 1)
INSERT INTO Inventory (ItemID, LocationID, Qty)
VALUES(rand() * 1000 + 1, 2, rand() * 40 + 1)
INSERT INTO Inventory (ItemID, LocationID, Qty)
VALUES(rand() * 1000 + 1, 3, rand() * 40 + 1)
GO 2000
-- Ignore any key violation errors - we'll still get enough data
INSERT INTO Inventory (ItemID, LocationID, Qty) VALUES (796, 1, 5)
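Before moving on, it’s worth confirming the seed row we care about is in place (a quick sanity check of my own – if the random inserts happened to collide with ItemID 796, the quantity may not be 5, and you can simply set it back):

SELECT ItemID, LocationID, Qty
FROM Inventory
WHERE ItemID = 796 AND LocationID = 1
-- Expecting Qty = 5; if not:
-- UPDATE Inventory SET Qty = 5 WHERE ItemID = 796 AND LocationID = 1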

For the next part, we need to run the following query on two separate sessions at the same time.  I’ve added a 10-second delay in processing the transaction – you can imagine that there may be other processing required to complete the checkout, and this may take a second or two.  Ten seconds is a little unrealistic, but provides enough time to run the query in the other window.

The way this transaction works is that we begin a transaction and read the number of items currently in stock.  If there are at least as many as we want to take, we update the Inventory.

BEGIN TRANSACTION
DECLARE @QtyRequired int, @QtyRemain int
SELECT @QtyRequired = 4
SELECT @QtyRemain = SUM(QTY) FROM Inventory WHERE ItemID = 796 AND LocationID = 1
IF @QtyRemain - @QtyRequired >= 0 
BEGIN
    UPDATE Inventory SET Qty = Qty - @QtyRequired
    WHERE ItemID = 796 AND LocationID = 1
    -- Do other stuff in other tables or databases to check out the item
    WAITFOR DELAY '00:00:10'
    SELECT 'Checkout complete'
END
ELSE
    SELECT 'Not enough qty!'
COMMIT TRANSACTION

SELECT * FROM Inventory WHERE ItemID = 796

If you run the two queries together, you’ll notice that both queries take about 10 seconds to run, and the one that ran first will report “Checkout complete”, and the other will report “Not enough qty!”.  This is good – the second query was blocked until the first was finished.
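If you want to watch the blocking happen while the first transaction is sleeping, a third session can peek at sys.dm_exec_requests (this monitoring query is mine, not part of the original walkthrough):

SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0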

Let’s turn on READ COMMITTED SNAPSHOT and see what the effect will be.  First we’ll replace the items so we can run the same query again.

-- Replace the inventory
UPDATE Inventory SET Qty = 5 WHERE ItemID = 796 AND LocationID = 1

-- Turn on READ_COMMITTED_SNAPSHOT
ALTER DATABASE ReadCommittedSnapshotTest
SET READ_COMMITTED_SNAPSHOT ON
-- You may need to disconnect the other session for this to complete.

Run the two queries again, side by side.  This time, it still takes 10 seconds to run both queries, but the final SELECT now shows a quantity of -3.  Oh dear.

As mentioned, using Repeatable Read would help solve this issue, as you would be guaranteed that the data couldn’t change between the initial stock count check and the update.  Because the check and the update are VERY close together, you may never see this issue with the code above under READ COMMITTED, but it can still happen – which is what makes it “good enough” code.  The difference is that the problem has only a narrow window of a few milliseconds in which to occur under READ COMMITTED, but a full 10 seconds under READ COMMITTED SNAPSHOT.
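As a rough sketch of that Repeatable Read approach (my adaptation of the checkout script above, not code from the original post – the WAITFOR and the final SELECT are trimmed):

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION
DECLARE @QtyRequired int, @QtyRemain int
SELECT @QtyRequired = 4
-- Under REPEATABLE READ the shared locks taken by this read are held until
-- COMMIT, so another session cannot change the quantity between the check
-- and the update.  (Two simultaneous checkouts may now deadlock instead,
-- and the deadlock victim will need to retry.)
SELECT @QtyRemain = SUM(QTY) FROM Inventory WHERE ItemID = 796 AND LocationID = 1
IF @QtyRemain - @QtyRequired >= 0
BEGIN
    UPDATE Inventory SET Qty = Qty - @QtyRequired
    WHERE ItemID = 796 AND LocationID = 1
    SELECT 'Checkout complete'
END
ELSE
    SELECT 'Not enough qty!'
COMMIT TRANSACTION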

The conclusion of this example is that your application may have unexpected results if you blindly turn on READ COMMITTED SNAPSHOT.  It is a great feature of SQL Server 2005, but unless you know exactly what the effects will be, I don’t recommend turning it on for existing systems.