Hi.
Just wondering if any of you have encountered issues when multiple users insert their data into a single table concurrently. The table has a unique identity column named ID, which auto-increments its value whenever a record is inserted. If users insert data simultaneously, say 20 concurrent users, will it cause a deadlock within the table itself?
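For reference, the table looks roughly like this (everything other than the ID column is illustrative, not the actual schema):

CREATE TABLE dbo.UserData
(
    ID      INT IDENTITY(1,1) NOT NULL PRIMARY KEY, -- auto-increments on every insert
    Payload NVARCHAR(200) NOT NULL
);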

What I have tried:

I tried to simulate 3 concurrent sessions inserting 10,000 unique records each; nothing problematic occurred, though processing was slower than I anticipated. But our production server supports up to 200 users, with an expected maximum of around 100 concurrent sessions, and I don't have the resources to stress test my app at that scale. Hope you can point me in the right direction. Thanks! :)
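For what it's worth, each simulated session ran a loop along these lines (table and column names are placeholders):

SET NOCOUNT ON;
DECLARE @i INT = 0;
WHILE @i < 10000
BEGIN
    -- @@SPID tags each row with the session that inserted it
    INSERT INTO dbo.UserData (Payload)
    VALUES (CONCAT('session-', @@SPID, '-row-', @i));
    SET @i += 1;
END;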
Posted
Updated 22-Feb-18 16:59pm
Comments
PIEBALDconsult 22-Feb-18 21:34pm    
Only if you do it wrong.
Boy Balantoy 22-Feb-18 21:53pm    
Sorry, but I don't get the sense of this answer. Can you kindly elaborate on how I might be able to "do it wrong"? That would be much appreciated.

Any server-based SQL database engine handles concurrent connections and operations by itself. You don't have to do anything in your code to make this work.

In fact, you actually have to go out of your way to screw things up, like assigning your own primary key values in code instead of letting SQL Server do it. And man, noobs just love to go out of their way, screw things up, and do everything the hard way.
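To make that concrete, here is a sketch against a hypothetical table. The first statement is the right way: the IDENTITY column hands out keys atomically, so concurrent sessions never collide. The second is the classic way to do it wrong, assuming ID were a plain INT column you assign yourself:

-- Right: let SQL Server assign the key; safe under any concurrency.
INSERT INTO dbo.UserData (Payload) VALUES (N'hello');

-- Wrong: two sessions can read the same MAX(ID) and both try to
-- insert MAX + 1; one fails with a duplicate key error (or, with no
-- unique constraint, you silently get duplicate IDs).
DECLARE @NextId INT = (SELECT MAX(ID) FROM dbo.UserData) + 1;
INSERT INTO dbo.UserData (ID, Payload) VALUES (@NextId, N'hello');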
 
 
Deadlock is a different concept. In its simplest form, a deadlock occurs if:

- user A updates row 11 in table 1
- user B updates row 22 in table 2
- user B tries to update row 11 in table 1. Now it has to wait because the row is locked by user A
- user A tries to update row 22 in table 2. Now it has to wait because the row is locked by user B

The above is a deadlock: both users are waiting for a resource held by the other, and neither will release its own resources while it waits.
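Scripted out, with hypothetical tables, the sequence looks like this when the two halves run in two separate sessions:

-- Session A
BEGIN TRAN;
UPDATE Table1 SET Col = 1 WHERE ID = 11;  -- A locks row 11

    -- Session B
    BEGIN TRAN;
    UPDATE Table2 SET Col = 1 WHERE ID = 22;  -- B locks row 22
    UPDATE Table1 SET Col = 2 WHERE ID = 11;  -- B now waits for A

-- Session A again
UPDATE Table2 SET Col = 2 WHERE ID = 22;  -- A now waits for B: deadlock

SQL Server detects the cycle and kills one of the two sessions with error 1205; the other continues.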

As to the question itself: if you insert into a table, you should have no problems even if multiple users insert into the same table at the same time. This is a common scenario in databases.

However, there are some problems you might encounter (the query after this list shows one way to check for the first two):
- operations slow down because of lock waits; this can happen even when the rows are not duplicates, since for example index locks may cause waits
- transaction log handling becomes the bottleneck if the log is not on a fast device
- memory handling slows operations if too big a portion of memory is filled with dirty pages that cannot be flushed to disk fast enough
- and so on...
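If you want to see whether lock or log waits are actually biting, SQL Server exposes cumulative wait statistics; a quick, read-only check looks like this:

SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK%'       -- lock waits
   OR wait_type = 'WRITELOG'      -- transaction log flush waits
ORDER BY wait_time_ms DESC;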

Most of the potential problems are related to server-side resources, so ensuring that the resources (memory, CPU, disk speed, ...) are adequate should take care of most concerns. If the operations are mainly updates, be careful with the number of indexes; do not over-index unless it is really needed.

Perhaps the biggest question comes down to the length of individual transactions. Transactions must enforce data integrity, but keeping them as short as possible helps. This is something to take into consideration when designing the client side. For example, if a single transaction contains multiple inserts, do not execute them one by one from the client. Instead, create a stored procedure that takes an array as a parameter and inserts all the rows in a single call, as sketched below.
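In SQL Server terms the "array" is a table-valued parameter. A minimal sketch, with illustrative type, procedure, and table names:

-- A table type describing the rows to insert.
CREATE TYPE dbo.UserDataRows AS TABLE (Payload NVARCHAR(200) NOT NULL);
GO

-- One call, one transaction, all rows inserted together.
CREATE PROCEDURE dbo.InsertUserData
    @Rows dbo.UserDataRows READONLY
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.UserData (Payload)
    SELECT Payload FROM @Rows;
END;
GO

-- Usage from T-SQL; a client application would pass the parameter directly.
DECLARE @r dbo.UserDataRows;
INSERT INTO @r VALUES (N'row 1'), (N'row 2'), (N'row 3');
EXEC dbo.InsertUserData @Rows = @r;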
 
 
Comments
Boy Balantoy 5-Aug-18 23:44pm    
Thanks for this mate. Really appreciated. :)
