Today I am demonstrating a deadlock condition I came across after I had accidentally raised the isolation level to SERIALIZABLE. We can replay the condition with the simplest table possible:
CREATE TABLE dbo.test (Id INT, Val VARCHAR(10));
INSERT INTO dbo.test VALUES (1, 'aa');
Now let’s open SSMS (by the way, since version 2016 there is an installer decoupled from SQL Server, available here) with two separate tabs. Note that each tab has its own connection. Then execute the following statement in both tabs to raise the isolation level:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
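If you want to double-check that the setting actually took effect in a given tab, each session can inspect itself. This is just a sanity check, not part of the repro:

```sql
-- Lists the current session's SET options, including a row
-- "isolation level  serializable".
DBCC USEROPTIONS;

-- Alternatively, query the session DMV directly
-- (transaction_isolation_level = 4 means SERIALIZABLE).
SELECT transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```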
The following code tries to resemble an application-level function which does a bunch of possibly time-consuming things, emulated here with the
WAITFOR instruction. The point is that the transaction does both a
SELECT and an
UPDATE on the same table, with those time-consuming things in between.
BEGIN TRANSACTION;
SELECT * FROM dbo.test;
WAITFOR DELAY '00:00:10';
RAISERROR ('after wait', 10, 1) WITH NOWAIT;
UPDATE dbo.test SET Val = 'cccccc' WHERE Id = 1;
COMMIT;
Let’s put the code in both tabs and then execute the first tab followed by the second. After waiting more than 10 seconds, which is the delay in the code, we will observe an error message in the first tab:
Msg 1205, Level 13, State 56, Line 8
Transaction (Process ID 54) was deadlocked on lock resources with another
process and has been chosen as the deadlock victim. Rerun the transaction.
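To watch the locks that cause this, while both tabs are still sitting in their WAITFOR window you can open a third connection and query the lock DMV. A diagnostic sketch, assuming the query runs in the same database as dbo.test:

```sql
-- Shows which session holds or waits for which lock. Under
-- SERIALIZABLE the SELECT's shared locks are held until the
-- transaction ends, so both sessions appear with GRANTed shared
-- locks, and once the UPDATEs run you see their lock requests
-- with request_status = 'WAIT'.
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();
```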
This situation occurred in a web application, where concurrent execution of methods is pretty common. For an application developer it is very easy to be tricked into thinking that by setting the
SERIALIZABLE isolation level we magically get sequential execution of our SQL code. But this is wrong. Setting the
SERIALIZABLE level does not automatically give code wrapped in a transaction the behavior of the
lock statement known from C# (technically,
lock is a monitor).
I would advise having a closer look at the instructions wrapped in the transaction. In a real application the execution flow is much more "polluted" with ORM calls, but my simplified code above models the common scenario of reads followed by writes. What happens here is that SQL Server takes a reader lock on the table when it executes the
SELECT. When we execute the code in the second session, one more reader lock is taken on the table. Now when the first session passes the
WAITFOR and comes to the
UPDATE, it needs to take a writer lock and waits (I am purposely using generic vocabulary instead of SQL Server specific terms; these locks inside the database engine all have their own names). We observe that the first tab waits more than 10 seconds. This is because when it reaches its
UPDATE it needs a writer lock, but it is blocked by the reader lock taken by the
SELECT in the second tab. Conversely, the second tab’s
UPDATE waits for the reader lock taken by the
SELECT in the first tab. This is a deadlock, which fortunately is detected by the engine.
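The victim’s error message explicitly says "Rerun the transaction", and the usual reaction is a retry, either in application code or server-side. A server-side sketch, with a hypothetical retry count of my choosing:

```sql
-- Retry the whole transaction a few times if we lose a deadlock.
DECLARE @retry INT = 3;
WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        SELECT * FROM dbo.test;
        UPDATE dbo.test SET Val = 'cccccc' WHERE Id = 1;
        COMMIT;
        SET @retry = 0;              -- success, stop looping
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK;
        IF ERROR_NUMBER() = 1205
            SET @retry = @retry - 1; -- deadlock victim: try again
        ELSE
            THROW;                   -- anything else is not ours to swallow
    END CATCH
END
```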
The problem is caused by the lock taken with the
SELECT instruction under the
SERIALIZABLE isolation level, which is held until the transaction ends. Under
READ COMMITTED, which is the default level, the shared lock is released as soon as the statement completes, so no lock is held at this point.
I am writing about this for the following reasons:
This is a very simple scenario from the application point of view: read some data, update the data, do some things in between, and have all of this wrapped in a transaction.
It is very easy to make the wrong assumption that the
SERIALIZABLE level guarantees that our SQL code will be executed sequentially. It only guarantees that if the transactions complete, their observable effects will be as if they had executed sequentially, i.e. one after another. It is still your job to make them actually complete and not run into a deadlock.
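One way to make them complete in this scenario (my suggestion, one option among several) is to declare the write intent up front with an UPDLOCK table hint. Update locks are mutually exclusive, so the second session simply queues on the SELECT instead of deadlocking later on the UPDATE:

```sql
-- Take an update lock already at read time. Only one session can
-- hold it, so concurrent transactions serialize at the SELECT and
-- the read-then-write pattern no longer deadlocks.
BEGIN TRANSACTION;
SELECT * FROM dbo.test WITH (UPDLOCK);
WAITFOR DELAY '00:00:10';
UPDATE dbo.test SET Val = 'cccccc' WHERE Id = 1;
COMMIT;
```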