An Error Occurred While Reading From the Store Provider's Data Reader Deadlock
"Transaction was deadlocked" error occurs when ii or more sessions are waiting to become a lock on a resources which has already locked past another session in the same blocking chain. As a result, none of the sessions can be completed and SQL Server has to arbitrate to solve this problem. It gets rid of the deadlock by automatically choosing one of the sessions as a victim and kills it allowing the other session to continue. In such instance, the client receives the following error bulletin:
Transaction (Process ID) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
and the killed session is rolled back. As a rule, the victim is the session that requires the least amount of overhead to roll back.
Why Do SQL Server Deadlocks Happen?
To understand how the "Transaction (Process ID) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction" error happens, let's consider a very simple example.
Let's create two tables, "t1" and "t2", each containing just one integer column:
CREATE TABLE t1 (id int)
CREATE TABLE t2 (id int)
and fill them with some data:
INSERT INTO t1 (id) SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3
GO
INSERT INTO t2 (id) SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3
Now, suppose we started a transaction that deletes rows with id=2 from t1:
BEGIN TRAN
DELETE FROM t1 WHERE id = 2
Then, assume that another transaction, in a second session, is going to delete the same rows from both tables:
BEGIN TRAN
DELETE FROM t2 WHERE id = 2
DELETE FROM t1 WHERE id = 2
It has to wait for the first transaction to complete and release the lock on table t1.
Now, assume that the first transaction tries to delete the same row from the second table:
DELETE FROM t2 WHERE id = 2
After executing this statement you should receive the following error message:
Transaction (Process ID) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
It is caused by a situation in which the first transaction is waiting for the second one (to release t2) while, at the same time, the second transaction is waiting for the first one (to release t1).
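Because the error message explicitly asks to rerun the transaction, the calling code can catch error 1205 and retry. Here is a minimal T-SQL sketch of such retry logic, reusing the t1 and t2 tables from the example above; the retry count and delay are arbitrary choices, and THROW requires SQL Server 2012 or later:

DECLARE @retries int = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        DELETE FROM t2 WHERE id = 2;
        DELETE FROM t1 WHERE id = 2;
        COMMIT TRAN;
        BREAK;                              -- success, leave the loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRAN;
        IF ERROR_NUMBER() <> 1205 THROW;    -- anything other than a deadlock is re-raised
        SET @retries -= 1;                  -- we were the deadlock victim: pause and rerun
        WAITFOR DELAY '00:00:01';
    END CATCH
END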
How to Analyze Deadlock Graphs
A deadlock graph is a block of information showing which resources and sessions are involved in a deadlock. It helps to understand why the deadlock happened.
Before SQL Server 2008, in order to capture this data you had to set up a server-side trace or enable trace flags and then wait until the deadlock occurred. Starting with SQL Server 2008 everything is much easier. You can retrieve deadlock graphs retrospectively from the "system_health" Extended Events session. To do this, go to "Management" > "Extended Events" > "Sessions" > "system_health" > "package0.event_file" and click "View Target Data…"
Thousands of events will be shown in the opened window. There you can find the deadlock reports, which are marked as "xml_deadlock_report". Let's choose the one we've just simulated
and look at its deadlock graph details (in the form of XML), consisting of a resources section and a processes section.
The resources section displays the lists of all the resources that were involved in the deadlock:
It shows what the processes were fighting over and what types of locks they were taking. It has two or more entries. Each entry has a description of the resource followed by the lists of the processes that held a lock or requested a lock on that resource. Locks in this section mainly relate to a key, a RID, a page or a table.
After the resources section, let's turn to the processes section to find out what those processes were doing.
The processes section displays the details of all the processes that were involved in the deadlock.
This section contains entries about the threads involved in the deadlock and provides crucial information such as host names, login names, isolation levels, times, session settings and so on. But the most valuable information is the isolation level under which each query was running and the details of the statement that caused the deadlock.
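If you prefer T-SQL to the SSMS UI, the same xml_deadlock_report events can be read directly from the system_health event file. The query below is a sketch that assumes the default file target ("system_health*.xel" in the SQL Server log directory):

;WITH xe AS (
    SELECT CAST(event_data AS XML) AS x
    FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
)
SELECT
    x.value('(event/@timestamp)[1]', 'datetime2')            AS event_time,
    x.query('event/data[@name="xml_report"]/value/deadlock') AS deadlock_graph
FROM xe
WHERE x.value('(event/@name)[1]', 'nvarchar(100)') = 'xml_deadlock_report'
ORDER BY event_time DESC;

The deadlock_graph column contains the same XML you would see in the "View Target Data…" window.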
How to Choose a Deadlock Victim
If you can't avoid deadlocks, there is an option to specify which process should die when a deadlock occurs. SQL Server chooses a deadlock victim based on two factors: the DEADLOCK_PRIORITY set for each session and the amount of work which SQL Server has to do in order to roll back the transaction.
The DEADLOCK_PRIORITY option can be set by a user to HIGH, NORMAL, LOW or to an integer value from -10 to 10. By default, DEADLOCK_PRIORITY is set to NORMAL (0). Use the following syntax to set the deadlock priority:
SET DEADLOCK_PRIORITY { LOW | NORMAL | HIGH | <numeric-priority> | @deadlock_var | @deadlock_intvar }
<numeric-priority> ::= { -10 | -9 | -8 | … | 0 | … | 8 | 9 | 10 }
For example, a session with NORMAL deadlock priority will be chosen as the deadlock victim if it is involved in a deadlock chain with other sessions whose deadlock priority is set to HIGH or to an integer value greater than 0. And it will survive if the other sessions have LOW deadlock priority or an integer value less than zero.
LOW is equal to -5, NORMAL is the same as 0, and HIGH equals 5. In other words, run either of the following commands to set the deadlock priority to NORMAL:
SET DEADLOCK_PRIORITY NORMAL;
GO
or
SET DEADLOCK_PRIORITY 0;
GO
To check the deadlock priority of the current session you can use the following query:
SELECT session_id, DEADLOCK_PRIORITY FROM sys.dm_exec_sessions WHERE SESSION_ID = @@SPID
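For instance, going back to the t1/t2 example, if the first session raises its own priority, the second session (left at the default NORMAL) will be chosen as the victim when the deadlock occurs. A sketch:

-- Session 1: make this session an unlikely deadlock victim.
SET DEADLOCK_PRIORITY HIGH;     -- same effect as SET DEADLOCK_PRIORITY 5;
BEGIN TRAN;
DELETE FROM t1 WHERE id = 2;
-- ...Session 2 starts its transaction and deletes from t2, then from t1...
DELETE FROM t2 WHERE id = 2;
-- If the two sessions deadlock here, Session 2 (NORMAL priority) is killed,
-- regardless of which transaction is cheaper to roll back.
COMMIT TRAN;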
How to Avoid Deadlocks in SQL Server
As a developer, you need to design database modules in a way that minimizes the risk of deadlocks. Here are some useful tips on how to do that:
Make sure the applications access all shared objects in the same order
Consider the following two applications (bad practice):
APPLICATION 1 | APPLICATION 2 |
1. Begin Transaction | 1. Begin Transaction |
2. Update Part Table | 2. Update Supplier Table |
3. Update Supplier Table | 3. Update Part Table |
4. Commit Transaction | 4. Commit Transaction |
These two applications may deadlock often. If both are about to execute step 3, they may each end up blocked by the other, because they both need access to an object that the other application locked in step 2 and will not release until the end of the transaction.
Please see the following correction of the above example, changing the order of the statements (good practice):
APPLICATION 1 | APPLICATION 2 |
1. Begin Transaction | 1. Begin Transaction |
2. Update Supplier Table | 2. Update Supplier Table |
3. Update Part Table | 3. Update Part Table |
4. Commit Transaction | 4. Commit Transaction |
It is a very good idea to define a programming policy that specifies the order in which database objects and resources have to be accessed by the applications (it is also a good policy to release locks in the opposite order to that in which the applications acquired them).
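A minimal T-SQL sketch of such a policy is shown below; the Supplier and Part tables and their columns (SupplierId, LastOrderDate, PartId, Qty) are hypothetical placeholders. Every module that needs both tables touches Supplier first, then Part:

-- Every procedure that updates both tables follows the same agreed order:
-- Supplier first, then Part - so two such procedures cannot deadlock over these tables.
CREATE PROCEDURE dbo.usp_AdjustStock
    @SupplierId int,
    @PartId     int
AS
BEGIN
    BEGIN TRAN;
        UPDATE Supplier SET LastOrderDate = GETDATE() WHERE SupplierId = @SupplierId;
        UPDATE Part     SET Qty = Qty - 1              WHERE PartId    = @PartId;
    COMMIT TRAN;
END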
Keep transactions short and simple
Please consider the previous example:
the applications have two statements in a transaction (bad example):
APPLICATION 1 | APPLICATION 2 |
1. Begin Transaction | 1. Begin Transaction |
2. Update Supplier | 2. Update Part |
5. Update Part | 5. Update Supplier |
6. Commit Transaction | 6. Commit Transaction |
Please consider the following changes in the above example (good example):
APPLICATION 1 | APPLICATION 2 |
1. Begin Transaction | 1. Begin Transaction |
2. Update Supplier | 2. Update Part |
3. Commit Transaction | 3. Commit Transaction |
4. Begin Transaction | 4. Begin Transaction |
5. Update Part | 5. Update Supplier |
6. Commit Transaction | 6. Commit Transaction |
In this case there is only one update at a time in each transaction, the transactions are very short, and there will be no deadlocks here at all.
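In T-SQL the same change can look like this (a sketch with the same hypothetical Supplier and Part columns as above); each update commits before the next lock is requested:

-- Short transaction #1: only Supplier is locked.
BEGIN TRAN;
UPDATE Supplier SET LastOrderDate = GETDATE() WHERE SupplierId = 1;
COMMIT TRAN;

-- Short transaction #2: only Part is locked.
BEGIN TRAN;
UPDATE Part SET Qty = Qty - 1 WHERE PartId = 1;
COMMIT TRAN;

Note that splitting the work this way is only acceptable when the two updates do not have to succeed or fail together.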
Make sure the applications use the minimum necessary transaction isolation level.
The lower the isolation level, the lower the possibility of deadlocks (although the higher the possibility of data integrity violations).
For example, when the lowest possible isolation level (READ UNCOMMITTED) is used, deadlocks between readers and writers largely disappear, because readers take no shared locks. However, in this case you have to take special care of data integrity, as the READ UNCOMMITTED isolation level allows a transaction to read a table before another transaction finishes writing to it (i.e. before the write commits). Some data can therefore be read before an update finishes, so you have to be careful about reading possibly outdated or inconsistent data.
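For example, the isolation level can be lowered for a whole session or for a single table reference; how far you can go depends on how much inconsistency the application tolerates. A sketch using the t1 table from the earlier example:

-- Lower the isolation level for the whole session...
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM t1;

-- ...or only for one table reference, via the NOLOCK hint (equivalent to READ UNCOMMITTED).
SELECT COUNT(*) FROM t1 WITH (NOLOCK);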
Use "manual" lock/unlock possibilities to lock/unlock objects by yourself, not leaving information technology to the system, i.e. non using loftier level transaction isolation levels
Consider once again the above two applications (bad practice):
APPLICATION 1 | APPLICATION 2 |
1. Begin Transaction | 1. Begin Transaction |
2. Update Part Table | 2. Update Supplier Table |
3. Update Supplier Table | 3. Update Part Table |
4. Commit Transaction | 4. Commit Transaction |
Here the situation is prone to deadlocks, but if all of the 'update' statements are wrapped with special lock/unlock procedures, then there will be no deadlocks.
Consider the applications changed this way (good practice).
APPLICATION 1 | APPLICATION 2 |
Begin Transaction | Begin Transaction |
'Manual Lock' of Part | 'Manual Lock' of Supplier |
Update Part | Update Supplier |
'Manual Release' of Part | 'Manual Release' of Supplier |
'Manual Lock' of Supplier | 'Manual Lock' of Part |
Update Supplier | Update Part |
'Manual Release' of Supplier | 'Manual Release' of Part |
Commit Transaction | Commit Transaction |
To implement 'Manual Lock' use the sp_getapplock procedure.
For 'Manual Release' use the sp_releaseapplock procedure.
In the case above there will be no deadlocks (and no 'lost updates', etc.) even using the lowest transaction isolation level, as each transaction does not ask for access to another object before releasing the previous one.
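Below is a minimal sketch of the 'Manual Lock' / 'Manual Release' steps for the Part branch of the table above; the Part table and its columns (PartId, Qty) are placeholders, and the lock name is just an agreed-upon string:

BEGIN TRAN;

DECLARE @result int;
-- Acquire an application lock whose name stands for the object we are about to modify.
EXEC @result = sp_getapplock @Resource    = 'Part',
                             @LockMode    = 'Exclusive',
                             @LockOwner   = 'Transaction',
                             @LockTimeout = 10000;      -- wait up to 10 seconds

IF @result >= 0    -- 0 = granted immediately, 1 = granted after waiting
BEGIN
    UPDATE Part SET Qty = Qty - 1 WHERE PartId = 1;     -- the actual work

    -- Release the lock before moving on to the next object
    -- (the Supplier branch would follow the same pattern).
    EXEC sp_releaseapplock @Resource = 'Part', @LockOwner = 'Transaction';
END

COMMIT TRAN;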
If the REPEATABLE READ or SERIALIZABLE isolation levels are required, and two applications use the same database object very often, use the UPDLOCK hint
Consider the following two transactions (bad practice):
APPLICATION 1 | APPLICATION 2 |
1. Begin Transaction | 1. Begin Transaction |
2. Read Part table (S-lock #1 is set by the Read) | 2. Read Part table (S-lock #2 is set by the Read) |
3. Change the Part table (waiting for S-lock #2 to be released) | 3. Change the Part table (waiting for S-lock #1 to be released) |
4. Commit Transaction | 4. Commit Transaction |
Here the two transactions are just checking some data before changing it (e.g. counting the number of records in a table before inserting a new record into it).
On step 2 both transactions apply an S-lock to the same database object, and then on step 3 they both wait for the release of the other's S-lock in order to change something in the object (e.g. insert a row into the table).
Please see the changes in the two transactions (good practice):
APPLICATION 1 | APPLICATION 2 |
1. Begin Transaction | 1. Begin Transaction |
2. Select Part table with UPDLOCK optimizer hint | 2. Select Part table with UPDLOCK optimizer hint |
3. Access Part table | 3. Access Part table |
4. Commit Transaction | 4. Commit Transaction |
The advice is to start all relevant transactions with a SELECT statement that uses the UPDLOCK optimizer hint, so that the two applications deal with the same database object one after the other, in turn:
SELECT * FROM Part WITH (UPDLOCK)
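Put together, each transaction first takes the update lock and only then does its check and change. A sketch (the INSERT columns PartId and Qty are hypothetical):

BEGIN TRAN;

-- UPDLOCK takes update (U) locks instead of shared (S) locks, and only one
-- transaction at a time can hold a U-lock on the same rows - so the second
-- transaction simply waits here instead of deadlocking later.
DECLARE @cnt int;
SELECT @cnt = COUNT(*) FROM Part WITH (UPDLOCK);

IF @cnt < 100
    INSERT INTO Part (PartId, Qty) VALUES (1, 0);

COMMIT TRAN;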
Other advice on how to avoid deadlocks in SQL Server
There is also other general advice related to the matter, such as:
- use a normalized database design,
- use bound connections and sessions,
- reduce lock time (e.g. don't allow user input during transactions),
- avoid cursors,
- use row versioning-based isolation levels (see the sketch below),
- use the ROWLOCK optimizer hint,
etc.
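For example, row versioning-based isolation can be switched on at the database level. A sketch, assuming a database named MyDatabase (the ALTER DATABASE statements need the database to be free of other active connections):

-- Readers get a versioned snapshot instead of taking shared locks,
-- which removes most reader-writer blocking and the deadlocks it causes.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- Optionally also allow the SNAPSHOT isolation level for individual transactions.
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;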
Source: https://academy.sqlbak.com/transaction-process-id-was-deadlocked-on-lock-resources-with-another-process-and-has-been-chosen-as-the-deadlock-victim-msg-1205/