There is no such concept as "lock assignment", and a lock is not something that can be "busy" or "free". The actual concepts are quite different from what you might imagine.
In C++/CLI (.NET) it can be lock or System.Threading.ReaderWriterLockSlim; in native C++ it can be CriticalSection or Mutex (see http://www.curly-brace.com/lock.html, for example).
There is only one right programming practice: sandwiching the access to a shared resource between acquiring (P) and releasing (V) some thread synchronization object of one of the types mentioned above, a lock. "P" and "V" are the original terms of Edsger Wybe Dijkstra (http://en.wikipedia.org/wiki/Edsger_W._Dijkstra), abbreviations of the Dutch words for "wait" and "release".
So, in pseudo-code, it looks like this:

method AccessSharedResource() {
    Lock.P();
    try {
        WorkWithSharedResource();
    } finally {
        Lock.V();
    }
} // end method AccessSharedResource
The function AccessSharedResource is run by more than one thread at a time, and all threads use the same instance of the lock, Lock. The goal of this construct is to allow all threads to call WorkWithSharedResource, but only one thread at a time. Also, all the threads that have to wait for their turn consume zero CPU time.
All those primitives work on threads. Some synchronize threads of the same process; others can do the same for threads belonging to different processes (a named Mutex can do that; it is named because a system-wide unique identifier is needed to identify the synchronization object across processes, since processes are isolated and all references/pointers/handles only make sense within their own process: address spaces are isolated, so the same numeric value of an address means a different address in different processes). Anyway, in all cases, those synchronization primitives affect only threads.
A call to P is blocking. If there is only one thread, this call returns immediately, but the lock is acquired by that thread. Subsequent threads are blocked at this call: the OS switches them off and keeps them in a special wait state, wasting no CPU time. Practically, the OS never schedules waiting threads back to execution until one of them is awakened. A thread can be awakened by special means such as abort (.NET; there is also a similar technique in Windows related to the exception seeding method, but it would take a whole big article to explain how it works), by timeout expiration, and, most importantly, by a call to V performed by another thread. Practically, on this call the OS immediately looks at the queue where the threads waiting on the same instance of the lock are accumulated, picks the first one in the queue, returns it to a working state and schedules it for execution. The new thread enters the area between P and V and, in this case, calls WorkWithSharedResource. The remaining waiting threads stay in the wait state until the next one is woken up.
In this way, the OS guarantees that all threads eventually pass, tries to maximize throughput, and guarantees that only one thread at a time accesses the shared resource.
—SA