Deadlock when adding multiple entities #11428
I do not know why it is behaving that way; just some notes:
Second, you should check whether you can use snapshot isolation instead of read uncommitted. And third, EF does not get along very well with batch inserts and updates; I would recommend including EF Plus to handle these specific inserts.
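For context, "snapshot instead of uncommitted" could look like the following sketch, assuming SQL Server with snapshot isolation enabled on the database; `MyDbContext`, `Widgets`, and `widgets` are hypothetical placeholders, not names from the attached repro:

```csharp
using System.Transactions;

// Snapshot isolation must first be enabled on the database (T-SQL, run once):
//   ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
var options = new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot };

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var context = new MyDbContext())  // hypothetical context
{
    context.Widgets.AddRange(widgets);   // 'widgets' is a placeholder collection
    context.SaveChanges();               // enlists in the ambient snapshot transaction
    scope.Complete();                    // commit; disposing without Complete() rolls back
}
```

Readers under snapshot isolation see a consistent version of the data instead of taking shared locks, which reduces (but does not eliminate) lock contention between concurrent writers.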
Hi Antonio, thanks for your tip on TransactionScope. As stated in the code, I don't think that statement has any effect on the problem being reported.
I think EF should be able to perform well in the code I have provided.
I know it should be capable. How many elements are you trying to add? Remember that EF will add the entity, then request the last inserted id, and keep the entity tracked until the context is closed, making the context slower after each insert, like here. On the other hand, this should result in a very slow insertion rather than the deadlock you are experiencing (unless you are trying to insert a lot of objects). So, if your list is large, or if it is important that the operation is quick, I still recommend trying EF Plus. If your list is small and you do not care about performance, your solution should work, but I cannot see anything that causes the problem; any help from the people who really know EF would be appreciated!
550 per task x 6 tasks.
Ok, I can say for sure: while that is a short-lived context in terms of code, it is still a very long-lived context, since it is used for 550 insertions (I guess one context per thread), where each insertion will cause EF to:
The cache and the entity tracking will cost you a lot, and from what I see they are not necessary here. However you approach it, bulk insert is a hard problem; you can find an implementation a little more complete than yours on Stack Overflow. I would not recommend following your approach (unless you are doing very few insertions), since it will cause a lot of overhead on your app and on your DB. Still, I repeat, I have no idea why your code works with the workaround and breaks without it, but I am almost sure both approaches are inefficient and will lead to further problems under heavy or even normal load. Take what I say with care; I am no expert, just a user trying to help. I hope someone with better credentials can help us clear this out.
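As a reference point, a minimal sketch of the chunked-insert pattern discussed above, assuming EF Core against SQL Server; `MyDbContext` and `Widget` are hypothetical placeholders, and the chunk size is an arbitrary illustration:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class ChunkedInsert
{
    // Insert in fixed-size chunks, one short-lived context per chunk, with
    // automatic change detection disabled to avoid per-entity tracking overhead.
    public static void InsertAll(IReadOnlyList<Widget> entities, int chunkSize = 100)
    {
        for (var i = 0; i < entities.Count; i += chunkSize)
        {
            using (var context = new MyDbContext())  // hypothetical context
            {
                context.ChangeTracker.AutoDetectChangesEnabled = false;
                context.Set<Widget>().AddRange(entities.Skip(i).Take(chunkSize));
                context.SaveChanges();  // one batch per chunk, then the tracker is discarded
            }
        }
    }
}
```

Disposing the context after each chunk keeps the change tracker small, so insertion time stays roughly constant per chunk instead of degrading as the tracked set grows.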
@divega - To write about lock escalation.
@sam-wheat @antonioortizpola, @smitpatel and I played a bit with the repro and realized that what you are seeing is the effect of a feature in SQL Server called lock escalation, which is the process of converting many fine-grained locks (e.g. row locks) into coarser-grained locks, e.g. page locks or table locks. There is some information about how to diagnose deadlocks and the effects of lock escalation at https://docs.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide#deadlocking. But the short answer is that you can avoid this lock escalation in two ways:
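The list of options appears to have been lost in the page capture. As an illustration only (not necessarily the exact options given above), two common mitigations for lock escalation with EF Core on SQL Server are capping the batch size and disabling escalation on the table; `MyDbContext` and the connection string and table names below are placeholders:

```csharp
using Microsoft.EntityFrameworkCore;

public class MyDbContext : DbContext  // hypothetical context for illustration
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseSqlServer(
            "Server=.;Database=MyDb;Trusted_Connection=True;",  // placeholder
            sqlOptions => sqlOptions.MaxBatchSize(100));        // placeholder value
}

// Alternatively, lock escalation can be disabled per table in T-SQL:
//   ALTER TABLE dbo.Widgets SET (LOCK_ESCALATION = DISABLE);
```

Smaller batches mean each statement takes fewer row locks at a time, staying below the threshold at which SQL Server escalates to a table lock.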
Note for triage: I am leaving this open to have a chance to investigate/discuss if there is anything we should do to prevent lock escalation from happening by default.
Removing milestone so that it shows up during triage.
@smitpatel and @divega to do a binary search to find the magic number for max batch size that allows this repro to work.
The magic number shifts as the size of the table grows.
I modified my original upload to do a more formal test of batch size versus task count.
@smitpatel @AndriySvyryd can you check with the data from @sam-wheat if there is a number that makes sense, or at least enough information to decide on some other action plan? I am not going to be able to check myself for at least a week.
Discussed this in triage and it is looking like 128 could be a reasonable value to use, but we don't think this change needs to be made for 2.1, so moving to the backlog.
What is the current default value please?
@sam-wheat There is no default value, we just create the biggest batches possible.
Related to #9270.
I have a multithreaded application that needs to insert bulk records into the same table from different places. After running continuously for 2 hours, my application gets a connection timeout issue (Stack Trace). I suspect this may be caused by a deadlock on a particular table; I verified in SQL that the table is deadlocked. I have tried the following solutions:
When running the multithreaded application it says the MSDTC server is unavailable. Note: I started the Distributed Transaction Coordinator service, but I still get "your environment does not support distributed transactions". I tried SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED, but I am still getting the same issue after 2 hours of continuous running. I added SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED between the BeginTransaction and the commit in my C# code, but no luck. Could someone please help me overcome this?
What version was this fixed in?
@SimonCropp It hasn't been fixed yet, you need to set MaxBatchSize manually.
@AndriySvyryd thanks. In that case, should this issue be re-opened?
@SimonCropp No, this is a duplicate of #9270
@AndriySvyryd thanks again. Would you mind placing the "this is a duplicate of #9270" at the top of the issue description?
@SimonCropp We get a lot of duplicates. They are closed and tagged with the
I don't know if this is a problem with EF or if I am just doing it wrong (more often the latter).
If I am doing it wrong, I would like to know why the code works when I save each entity individually, as opposed to saving all of them at once outside the loop. As shown in the attached project, setting the isolation level appears to have no effect, nor does removing the lookup that happens for each row.
InsertTest.zip