
Thread Synchronization Constructs Used in the CLR

4 Jun 2009, CPOL, 17 min read
This article is meant to sort through and explain some of the complexities in threading.

Abstract

A process represents an instance of a running program and, as such, has its own address space, security token, and at least one thread that runs within it. A thread is a unit of execution context. Processes do not run; threads run. For obvious reasons, most processes have several threads that run within the container of the process. Threads can simplify program design and implementation while improving performance. As explained in the article “Performing Asynchronous Operations”, applications achieve the highest performance when their threads are not waiting for operations to complete. The ideal is therefore to implement methods that operate on their own data and to avoid writing methods that access any kind of shared data.

Stated loosely, a process makes resources (such as files, ports, Registry keys, etc.) available to the threads that run within it. When the process opens a resource, an entry is added to the process's handle table, and the threads then use that handle number to reference the resource, ideally in an orderly fashion. If one thread requests exclusive access to a resource, other threads that need access to that resource cannot get their work done. At the same time, you can’t just let any thread access any resource at any time: this would be similar to reading a book while the text is being changed.

An important point to note is that thread synchronization is not the same as thread scheduling. Advances in microprocessor technology have enabled L1 and L2 caching and pipelining, and system code execution is not linear: a system thread of execution can be preempted in favor of an operation that is considered to have a higher priority. Caching as much executable code as possible saves the CPU's control unit from having to fetch each instruction from main memory, that is, placing the address held in the instruction pointer on the address bus, reading the instruction from that memory location, and then decoding the opcode into the bit fields the CPU understands.

Thread synchronization and how it can hurt performance

Thread synchronization is required when two or more threads might access a shared resource at the same time. A resource, again, can be as simple as a block of memory or a single object, or it can be much more complex, like a collection that contains thousands of objects, each of which may contain other objects as well. Sometimes it is also necessary to treat multiple objects as a single resource for the purposes of thread synchronization. For example, your application might need to add or remove items to or from a queue and update a count of items in the queue in a thread-safe manner. In this example, the queue and the count must always be manipulated together in an atomic fashion. Here, atomic means that the two updates complete as a single, indivisible unit, so no other thread can observe the queue and the count in a half-updated state. Admittedly, thread synchronization has little to do with what the program is actually trying to accomplish. In fact, writing thread synchronization code is difficult, and doing it incorrectly can leave resources in inconsistent states, causing unpredictable behavior. Furthermore, adding thread synchronization to your code makes the code run slower, hurting performance and reducing scalability.
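As a minimal sketch of this queue-plus-count example (the class and field names below are hypothetical, not from the original article), a single private lock object can guard both pieces of state so they are always updated together:

C#
using System;
using System.Collections.Generic;

// Hypothetical type: the queue and its item count are always updated
// together while the same private lock is held.
internal sealed class CountedQueue<T> {
   private readonly Queue<T> m_queue = new Queue<T>();
   private readonly Object   m_lock  = new Object();
   private Int32             m_count = 0;

   public void Enqueue(T item) {
      lock (m_lock) {            // no other thread can observe the queue
         m_queue.Enqueue(item);  // and the count in a half-updated state
         m_count++;
      }
   }

   public Boolean TryDequeue(out T item) {
      lock (m_lock) {
         if (m_count == 0) { item = default(T); return false; }
         item = m_queue.Dequeue();
         m_count--;
         return true;
      }
   }
}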

Scalability is a frequently used term; here it means that the system's throughput grows as more execution contexts (threads, processor cores) are added. At this point, anyone who has ever dealt with thread synchronization knows where this article is heading: synchronization objects, and in a way, they are correct. The Windows operating system, however, is a preemptive, multitasking operating system, while the .NET Framework’s Common Language Runtime is a virtual execution system. Many of the CLR’s thread synchronization constructs are actually just object-oriented class wrappers around Win32 thread synchronization constructs. Before discussing those constructs, however, some important points need to be made clear. The first is that if you are building a reusable class library (a DLL), you should make sure that the type’s static methods are thread safe, so that if multiple threads call a type’s static methods concurrently, there is no way for the type’s static state to become corrupted. Thread synchronization constructs accomplish this.

Now, recall the modern CPU's on-chip cache memory. Accessing this memory is fast compared to accessing the motherboard's memory. The first time a thread reads a value in memory, the CPU fetches the desired value from the motherboard's memory and stores it in the CPU’s on-chip cache. In fact, the CPU often fetches the surrounding bytes (called a cache line) at the same time, because applications typically read bytes that are near each other in memory. When one of those surrounding bytes is later read, it may already be in the cache; if so, the motherboard's memory is not accessed, and performance is high. When a thread writes to memory, the CPU modifies the byte in its cache and does not immediately write the modified value back to motherboard memory, again improving performance. So problem solved, huh? Not quite. While CPU caches improve performance, those same caches can make multiple threads see different values for the same field at the same time.

A field is a named variable that refers to a typed data slot stored on an instance of a type, while static fields are accessed through the type itself rather than through an instance (recall building a reusable library). Instance fields are unique to each object, and static fields are stored per type, per application domain. Since we now know how CPU caches can affect your application’s behavior, we must find out what the CLR offers to take control over cache coherency. Being a virtual execution system, one of the goals of the CLR is to help developers avoid writing code that is tied to a particular microprocessor architecture; at the same time, keeping variables in CPU registers is an important code and performance optimization.
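As a hedged sketch of that first point about reusable libraries (the type and member names below are hypothetical), a private static lock object can keep a type's static state safe when multiple threads call its static methods concurrently:

C#
using System;

// Hypothetical reusable-library type: its static state is guarded by a
// private static lock object so concurrent callers cannot corrupt it.
public static class IdGenerator {
   private static readonly Object s_lock   = new Object();
   private static Int64           s_lastId = 0;

   // Safe for any number of threads to call at the same time.
   public static Int64 NextId() {
      lock (s_lock) {
         return ++s_lastId;
      }
   }
}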

Thread Synchronization Objects via Win32

Windows provides four objects designed for thread and process synchronization. Three of these objects – mutexes, semaphores, and events – are kernel objects and have handles. The fourth object, the CRITICAL_SECTION object, is commonly used to protect shared resources among threads by guaranteeing exclusive access. The way a thread waits for a critical section to become available differs according to the CPU design: on single-processor machines, the thread goes into a wait state (a kernel transition), whereas on multiprocessor machines, the thread may spin a number of times in the hope that the critical section becomes available. Taking into account the previously mentioned cache problems and the issues involved in building reusable DLLs, variables shared by all threads should be static or in global storage, declared volatile, and protected with a CRITICAL_SECTION object. The most common way to abstract a CPU-specific implementation is to provide a method, and the System.Threading.Thread class offers several static methods:

C#
public static byte VolatileRead(ref byte address);
public static double VolatileRead(ref double address);
public static short VolatileRead(ref short address);
public static int VolatileRead(ref int address);
public static long VolatileRead(ref long address);
public static IntPtr VolatileRead(ref IntPtr address);
public static object VolatileRead(ref object address);
public static sbyte VolatileRead(ref sbyte address);
public static float VolatileRead(ref float address);
public static ushort VolatileRead(ref ushort address);
public static uint VolatileRead(ref uint address);
public static ulong VolatileRead(ref ulong address);
public static UIntPtr VolatileRead(ref UIntPtr address);
public static void VolatileWrite(ref byte address, byte value);
public static void VolatileWrite(ref double address, double value);
public static void VolatileWrite(ref short address, short value);
public static void VolatileWrite(ref int address, int value);
public static void VolatileWrite(ref long address, long value);
public static void VolatileWrite(ref IntPtr address, IntPtr value);
public static void VolatileWrite(ref object address, object value);
public static void VolatileWrite(ref sbyte address, sbyte value);
public static void VolatileWrite(ref float address, float value);
public static void VolatileWrite(ref ushort address, ushort value);
public static void VolatileWrite(ref uint address, uint value);
public static void VolatileWrite(ref ulong address, ulong value);
public static void VolatileWrite(ref UIntPtr address, UIntPtr value);
public static void MemoryBarrier();

All of the VolatileRead methods perform a read with acquire semantics: they read the value referred to by the address argument and invalidate the CPU’s cache, forcing future reads to come from main memory. The C# volatile keyword can be applied to static or instance fields of a class, but not to fields of every type (for example, Int64, Double, and user-defined value types cannot be marked volatile). The modifier changes all accesses of the field to volatile reads or writes. Jeffrey Richter, author of “CLR via C#”, discourages its use because most accesses of a field do not need to be volatile, and marking the field volatile hurts performance on a modern CPU like the IA64. Also, the C# compiler warns when a volatile field is passed by reference (warning CS0420). Shown below is some sample code that is solely meant to illustrate the use of the volatile keyword. The reason the Thread class supports volatile reads from and writes to fields is that volatile reads and writes are intended to prevent the caching of data within a CPU from causing data inconsistencies in threaded applications. Instead of using the Thread class’s VolatileRead and VolatileWrite methods (or the C# volatile keyword), prefer the Interlocked class’s methods or other higher-level synchronization constructs. Here is code that uses the volatile keyword:

C#
using System;
using System.Threading;

public static class Program {

   public static void Main() {
   }
}

internal sealed class CacheCoherencyProblem {
   private Byte  m_initialized = 0;
   private Int32 m_value = 0;

   // This method is executed by one thread
   private void Thread1() {
      m_value = 5;
      m_initialized = 1;
   }

   // This method is executed by another thread
   private void Thread2() {
      if (m_initialized == 1) {
         // This may execute and display 0
         Console.WriteLine(m_value);
      }
   }
}

internal sealed class VolatileMethods {
   private Byte  m_initialized = 0;
   private Int32 m_value = 0;

   // This method is executed by one thread
   private void Thread1() {
      m_value = 5;
      Thread.VolatileWrite(ref m_initialized, 1);
   }

   // This method is executed by another thread
   private void Thread2() {
      if (Thread.VolatileRead(ref m_initialized) == 1) {
         // If we get here, 5 will be displayed
         Console.WriteLine(m_value);
      }
   }
}

internal sealed class VolatileField {
   private volatile Byte m_initialized = 0;
   private Int32 m_value = 0;

   // This method is executed by one thread
   private void Thread1() {
      m_value = 5;
      m_initialized = 1;
   }

   // This method is executed by another thread
   private void Thread2() {
      if (m_initialized == 1) {
         // If we get here, 5 will be displayed
         Console.WriteLine(m_value);
      }
   }
   private void CantPassVolatileFieldByReference() {
      // Warning CS0420: a reference to a volatile field 
      // will not be treated as volatile
      Boolean success = Byte.TryParse("123", out m_initialized);
   }
}
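Since the text above recommends the Interlocked class over volatile reads and writes, here is a hedged sketch of the same initialization-flag pattern written with Interlocked methods (the class name is hypothetical, and the flag's type is changed to Int32 because Interlocked does not operate on Byte):

C#
using System;
using System.Threading;

internal sealed class InterlockedMethods {
   private Int32 m_initialized = 0;
   private Int32 m_value = 0;

   // This method is executed by one thread
   private void Thread1() {
      m_value = 5;
      // Full-fence write: m_value is published before the flag is seen as set
      Interlocked.Exchange(ref m_initialized, 1);
   }

   // This method is executed by another thread
   private void Thread2() {
      // Full-fence read of the flag (CompareExchange with equal values just reads it)
      if (Interlocked.CompareExchange(ref m_initialized, 0, 0) == 1) {
         // If we get here, 5 will be displayed
         Console.WriteLine(m_value);
      }
   }
}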

Note: code in the article can be tedious to copy by hand. Go to the .NET Framework SDK command prompt and redirect console input into a file, say, c:\..\.NET> type con > file.cs. Paste the code there, press Ctrl-Z, and then press Enter.

Now compile the code. (con means console, an old DOS device name that dates back to the CP/M manuals.)
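For example, assuming the sample above is saved as file.cs (the file name is arbitrary), the whole sequence at the SDK command prompt looks roughly like this:

c:\..\.NET> type con > file.cs
(paste the code, press Ctrl-Z, then press Enter)
c:\..\.NET> csc file.cs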

Brief Reminder about Using the Thread Pool

Using synchronization constructs involves breaking your program into parts that run on multiple threads; those threads must access resources in an orderly manner and must do so atomically. The life cycle of a thread begins with creating it, having it do some work, and then destroying it. The ThreadPool class instead reuses existing threads from a managed pool: when a work item finishes, its thread is returned to the pool for reuse. The thread pool is accessed via the ThreadPool class. Use the ThreadPool class in situations such as the following:

  • When you want to create and destroy threads in the easiest manner possible.
  • When your application's performance does not depend on a thread running at a specific (for example, the highest) priority, since thread pool threads run at the default priority.

Here is an example of using the ThreadPool class:

C#
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;

namespace ThreadPool1
{
    class Program
    {
        static int interval;

        static void Main(string[] args)
        {
            Console.Write("Interval to display results at?> ");
            interval = int.Parse(Console.ReadLine());

            ThreadPool.QueueUserWorkItem(new WaitCallback(StartMethod));
            Thread.Sleep(100);
            ThreadPool.QueueUserWorkItem(new WaitCallback(StartMethod));
            Console.ReadLine();

        }

        static void StartMethod(Object stateInfo)
        {
            DisplayNumbers("Thread " + DateTime.Now.Millisecond.ToString());
            Console.WriteLine("Thread Finished");
        }

        static void DisplayNumbers(string GivenThreadName)
        {
            Console.WriteLine("Starting thread: " + GivenThreadName);

            for (int i = 1; i <= 8 * interval; i++)
            {
                if (i % interval == 0)
                {
                    Console.WriteLine("Count has reached " + i);
                    Thread.Sleep(1000);
                }
            }
        }
    }
}

Here is the output:

Interval to display results at?> 5
Starting thread: Thread 311
Count has reached 5
Starting thread: Thread 420
Count has reached 5
Count has reached 10
Count has reached 10
Count has reached 15
Count has reached 15
Count has reached 20
Count has reached 20
Count has reached 25
Count has reached 25
Count has reached 30
Count has reached 30
Count has reached 35
Count has reached 35
Count has reached 40
Count has reached 40
Thread Finished
Thread Finished

Notice that we imported the System.Threading namespace, which defines the ThreadPool class along with the methods and data members that comprise it. Also notice that, rather than creating an instance of the Thread class, we called the ThreadPool.QueueUserWorkItem() method. After the first call to QueueUserWorkItem, the main thread sleeps for 100 milliseconds before a second thread from the managed pool is used, by means of a second call to QueueUserWorkItem. When using the WaitCallback delegate, you also have the ability to pass in a parameter, as the snippet below illustrates (a variant of StartMethod that actually uses this argument is sketched after the snippet):

C#
ThreadPool.QueueUserWorkItem(new WaitCallback(StartMethod), "First Thread");
Thread.Sleep(100);
ThreadPool.QueueUserWorkItem(new WaitCallback(StartMethod), "Second Thread");
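A hedged sketch of a StartMethod variant (not part of the original listing) that reads the state argument passed through QueueUserWorkItem; it is meant as a drop-in replacement for StartMethod in the program above:

C#
static void StartMethod(Object stateInfo)
{
    // The second argument passed to QueueUserWorkItem arrives here.
    string givenName = (string)stateInfo;
    DisplayNumbers(givenName);              // same helper as in the listing above
    Console.WriteLine(givenName + " Finished");
}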

Creating, Starting, and Interacting between Threads

Creating a Thread

The advantage of threading is the ability to create applications that use more than one thread of execution. For example, a process can have a user interface thread that manages interactions with the user and worker threads that perform other tasks while the user interface thread waits for user input. To be acquainted with working with threads, you must become acquainted with the Thread class. The Thread class is used to create and start threads. Below is a listing of the Thread class’ most important properties:

  • IsAlive: gets a value indicating whether the thread is currently executing
  • IsBackground: gets or sets whether the thread runs as a background thread
  • IsThreadPoolThread: gets whether the thread is a thread in the thread pool
  • ManagedThreadId: gets a unique identifier for the managed thread
  • Name: gets or sets a name associated with the thread
  • Priority: gets or sets the priority of a thread
  • ThreadState: gets the ThreadState for the thread

Below is a listing of the Thread class’ methods. Note that these listings do not include the static thread properties and static thread methods:

  • Abort: raises a ThreadAbortException on the thread to indicate that the thread should be aborted
  • Interrupt: raises a ThreadInterruptedException when a thread is in a blocking state
  • Join: blocks the calling thread until the thread terminates
  • Start: causes the thread to be scheduled for execution

The code below demonstrates how to create and start a thread, and shows the interaction between two threads running simultaneously within the same process. Note that you don't have to stop or free the thread, as this is done automatically by the CLR. The program begins by creating an object of type Alpha (oAlpha) and a Thread (oThread) that references the Beta method of the Alpha class. The thread is then started. The IsAlive property of the thread allows the program to wait until the thread is initialized (created, allocated, and so on). The main thread then calls the static Thread.Sleep method, which tells the calling thread to give up its time slice and stop executing for a certain number of milliseconds. oThread is then aborted and joined. Joining a thread makes the main thread wait for it to die or for a specified time to expire. The reader should notice that this code follows these steps:

  1. Create a method that takes no arguments and does not return any data
  2. Create a new ThreadStart delegate, and specify the method in step 1
  3. Create a new Thread object, specifying the ThreadStart object created in step 2
  4. Call Thread.Start

Here is the example code:

C#
using System;
using System.Threading;
public class Alpha
{
   // This method will be called when the thread is started
   public void Beta()
   {
      while (true)
      {
         Console.WriteLine("Alpha.Beta is running in its own thread.");
      }
  }
};
public class Simple
{
   public static int Main()
   {
      Console.WriteLine("Thread Start/Stop/Join Sample");
      Alpha oAlpha = new Alpha();
      // Create the thread object, passing in the Alpha.Beta method
      // via a ThreadStart delegate. This does not start the thread.
      Thread oThread = new Thread(new ThreadStart(oAlpha.Beta));

      // Start the thread
      oThread.Start();

      // Spin for a while waiting for the started thread to become
      // alive:
      while (!oThread.IsAlive);
      
      // Put the Main thread to sleep for 1 millisecond to allow oThread
      // to do some work:
      Thread.Sleep(1);
      
      // Request that oThread be stopped
      oThread.Abort();
      
      // Wait until oThread finishes. Join also has overloads
      // that take a millisecond interval or a TimeSpan object.
      oThread.Join();
      Console.WriteLine();
      Console.WriteLine("Alpha.Beta has finished");
       try 
      {
         Console.WriteLine("Try to restart the Alpha.Beta thread");
         oThread.Start();
      }
      catch (ThreadStateException) 
      {
         Console.Write("ThreadStateException trying to restart Alpha.Beta. ");
         Console.WriteLine("Expected since aborted threads cannot be restarted.");
      }
      return 0;
   }
}

Hopefully, the beginner in .NET threading can see how the properties and methods are used, and how a method is wrapped in a ThreadStart delegate and run on its own thread. Here is the output of this code:

Thread Start/Stop/Join Sample
Alpha.Beta is running in its own thread.
Alpha.Beta is running in its own thread.
Alpha.Beta is running in its own thread.
Alpha.Beta is running in its own thread.
Alpha.Beta is running in its own thread.
Alpha.Beta is running in its own thread.
Alpha.Beta is running in its own thread.
Alpha.Beta is running in its own thread.
Alpha.Beta is running in its own thread.
Alpha.Beta is running in its own thread.
Alpha.Beta is running in its own thread.
Alpha.Beta is running in its own thread.
Alpha.Beta is running in its own thread.
Alpha.Beta is running in its own thread.
Alpha.Beta is running in its own thread.

Alpha.Beta has finished
Try to restart the Alpha.Beta thread
ThreadStateException trying to restart Alpha.Beta. 
Expected since aborted threads cannot be restarted.

Synchronizing Two Threads: A Producer and a Consumer

Thread synchronization can be accomplished using the C# lock keyword and the Pulse method of the Monitor class. The Pulse method notifies a thread in the waiting queue of a change in the locked object's state. The example creates a Cell object that has two methods: ReadFromCell and WriteToCell. Two other objects are created from the classes CellProd and CellCons; each has a ThreadRun method whose job is to call WriteToCell (the producer) or ReadFromCell (the consumer). Synchronization is accomplished by waiting for "pulses" from the Monitor class, which come in order. That is, first an item is produced (the consumer at this point is waiting for a pulse), then a pulse occurs, then the consumer consumes the production (while the producer is waiting for a pulse), and so on.

C#
using System;
using System.Threading;

public class MonitorSample
{
   public static void Main(String[] args)
   {
      int result = 0;   // Result initialized to say there is no error
      Cell cell = new Cell( );

      CellProd prod = new CellProd(cell, 20);  // Use cell for storage, 
                                               // produce 20 items
      CellCons cons = new CellCons(cell, 20);  // Use cell for storage, 
                                               // consume 20 items

      Thread producer = new Thread(new ThreadStart(prod.ThreadRun));
      Thread consumer = new Thread(new ThreadStart(cons.ThreadRun));
      // Threads producer and consumer have been created, 
      // but not started at this point.

      try
      {
         producer.Start( );
         consumer.Start( );

         producer.Join( );   // Join both threads with no timeout
                             // Run both until done.
         consumer.Join( );  
      // threads producer and consumer have finished at this point.
      }
      catch (ThreadStateException e)
      {
         Console.WriteLine(e);  // Display text of exception
         result = 1;            // Result says there was an error
      }
      catch (ThreadInterruptedException e)
      {
         Console.WriteLine(e);  // This exception means that the thread
                                // was interrupted during a Wait
         result = 1;            // Result says there was an error
      }
      // Even though Main returns void, this provides a return code to 
      // the parent process.
      Environment.ExitCode = result;
   }
}

public class CellProd
{
   Cell cell;         // Field to hold cell object to be used
   int quantity = 1;  // Field for how many items to produce in cell

   public CellProd(Cell box, int request)
   {
      cell = box;          // Pass in what cell object to be used
      quantity = request;  // Pass in how many items to produce in cell
   }
   public void ThreadRun( )
   {
      for(int looper=1; looper<=quantity; looper++)
         cell.WriteToCell(looper);  // "producing"
   }
}

public class CellCons
{
   Cell cell;         // Field to hold cell object to be used
   int quantity = 1;  // Field for how many items to consume from cell

   public CellCons(Cell box, int request)
   {
      cell = box;          // Pass in what cell object to be used
      quantity = request;  // Pass in how many items to consume from cell
   }
   public void ThreadRun( )
   {
      int valReturned;
      for(int looper=1; looper<=quantity; looper++)
      // Consume the result by placing it in valReturned.
         valReturned=cell.ReadFromCell( );
   }
}

public class Cell
{
   int cellContents;         // Cell contents
   bool readerFlag = false;  // State flag
   public int ReadFromCell( )
   {
      lock(this)   // Enter synchronization block
      {
         if (!readerFlag)
         {            // Wait until Cell.WriteToCell is done producing
            try
            {
               // Waits for the Monitor.Pulse in WriteToCell
               Monitor.Wait(this);
            }
            catch (SynchronizationLockException e)
            {
               Console.WriteLine(e);
            }
            catch (ThreadInterruptedException e)
            {
               Console.WriteLine(e);
            }
         }
         Console.WriteLine("Consume: {0}",cellContents);
         readerFlag = false;    // Reset the state flag to say consuming
                                // is done.
         Monitor.Pulse(this);   // Pulse tells Cell.WriteToCell that
                                // Cell.ReadFromCell is done.
      }   // Exit synchronization block
      return cellContents;
   }
   
   public void WriteToCell(int n)
   {
      lock(this)  // Enter synchronization block
      {
         if (readerFlag)
         {      // Wait until Cell.ReadFromCell is done consuming.
            try
            {
               Monitor.Wait(this);   // Wait for the Monitor.Pulse in
                                     // ReadFromCell
            }
            catch (SynchronizationLockException e)
            {
               Console.WriteLine(e);
            }
            catch (ThreadInterruptedException e)
            {
               Console.WriteLine(e);
            }
         }
         cellContents = n;
         Console.WriteLine("Produce: {0}",cellContents);
         readerFlag = true;    // Reset the state flag to say producing
                               // is done
         Monitor.Pulse(this);  // Pulse tells Cell.ReadFromCell that 
                               // Cell.WriteToCell is done.
      }   // Exit synchronization block
   }
}

The output:

Produce: 1
Consume: 1
Produce: 2
Consume: 2
Produce: 3
Consume: 3
Produce: 4
Consume: 4
Produce: 5
Consume: 5
Produce: 6
Consume: 6
. . . . . . 

Produce: 20
Consume: 20

The comments placed in the code above are meant to illustrate the use of the Monitor class and sync blocks. In the Win32 API, the CRITICAL_SECTION structure (a synchronization object that is not a kernel object and therefore has no handle) offers a fast and efficient way to synchronize threads for mutually exclusive access to a shared resource when the threads are running in a single process. The CLR does not offer a CRITICAL_SECTION object, but it does offer a similar mechanism allowing mutually exclusive access to a resource among a set of threads running in the same process. This mechanism is made possible via the System.Threading.Monitor class (as seen in the code above) and sync blocks.
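The lock statements in the code above lock on the object itself (this). As a hedged sketch (with hypothetical type and member names), here is the same mutual-exclusion pattern written against the Monitor class directly, using a private lock object; the C# lock statement expands into an equivalent Enter/try/finally/Exit pattern:

C#
using System;
using System.Threading;

internal sealed class Counter {
   private readonly Object m_lock = new Object();  // private sync object
   private Int32 m_count = 0;

   public void Increment() {
      Monitor.Enter(m_lock);       // acquire the lock (uses the object's sync block)
      try {
         m_count++;                // protected region
      }
      finally {
         Monitor.Exit(m_lock);     // always release, even if an exception occurs
      }
   }
}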

How Does this Construct Work?

Since the Common Language Runtime is an object-oriented platform, a developer constructs objects and then calls the type’s members in order to manipulate those objects. Sometimes these objects are manipulated by multiple threads, and to ensure that the object’s state will not become corrupted, thread synchronization must be performed. The idea is that every object in the heap has a data structure, similar to a CRITICAL_SECTION object, that can be used as a thread synchronization lock. The Framework Class Library then provides methods that accept a reference to an object, and these methods use the object’s data structure to take and release the thread synchronization lock. Recall that whenever an object is created on the heap, it gets two additional overhead fields associated with it: the type object pointer and the sync block index. The type object pointer contains the memory address of the type’s type object. The sync block index contains an integer index into the array of sync blocks. When an object is constructed, the object's SyncBlockIndex is initialized to a negative value to indicate that it doesn't refer to any SyncBlock at all. Then, when a method is called to synchronize access to the object, the CLR finds a free SyncBlock in its cache and sets the object's SyncBlockIndex to refer to the SyncBlock. In other words, SyncBlocks are associated with an object on the fly when the object needs the synchronization fields. When no more threads are synchronizing access to the object, the object's SyncBlockIndex is reset to a negative number, and the SyncBlock is free to be associated with another object in the future.

[Figure: CLR data structures (type method tables and the SyncBlock cache) alongside the managed heap, showing ObjectA, ObjectB, and ObjectC with their sync block indexes]

In the CLR Data Structures section of the figure, you can see that there is one data structure for every type that the system knows about; you can also see the set of SyncBlock structures. In the managed heap section of the figure, you can see that three objects, ObjectA, ObjectB, and ObjectC, were created. Each object's MethodTablePointer field refers to the type's method table. From the method table, you can tell each object's type. So, we can easily see that ObjectA and ObjectB are instances of the SomeType type while ObjectC is an instance of the AnotherType type.

You'll notice that ObjectA's SyncBlockIndex overhead field is set to 0. This indicates that SyncBlock #0 is currently being used by ObjectA. On the other hand, ObjectB's SyncBlockIndex field is set to -1 indicating that ObjectB doesn't have a SyncBlock associated with it for its use. Finally, ObjectC's SyncBlockIndex field is set to 2 indicating that it is using SyncBlock #2. In the example I've presented here, SyncBlock #1 is not in use, and may be associated with some object in the future.

So, logically, you see that every object in the heap has a SyncBlock associated with it that can be used for fast, exclusive thread synchronization. Physically, however, the SyncBlock structures are only associated with the object when they are needed, and are disassociated from an object when they are no longer needed. This means that the memory usage is efficient. By the way, the SyncBlock cache is able to create more SyncBlocks if necessary so you shouldn't worry about the system running out of them if many objects are being synchronized simultaneously. (Reference: Jeffrey Richter: CLR via C#, and Thread-Safe Synchronization, written by Jeffrey Richter, MSDN).

Using a Mutex Object

You can use a mutex object to protect a shared resource from simultaneous access by multiple threads or processes. The state of a mutex object is either set to signaled, when it is not owned by any thread, or non-signaled, when it is owned. Only one thread at a time can own a mutex object. For example, to prevent two threads from writing to shared memory at the same time, each thread waits for ownership of a mutex object before executing the code that accesses the memory. After writing to the shared memory, the thread releases the mutex object.

The purpose of the Mutex class is to provide a locking mechanism, and it works in much the same way as the Monitor class. The major difference is that the Mutex class can lock data across AppDomain and process boundaries. To use a mutex, follow these steps (a short sketch follows the list):

  1. Create an instance of the Mutex class to be shared across many threads.
  2. Inside a new thread, create an if statement calling the Mutex class’s WaitOne method to wait until the lock is available.
  3. Create a try/finally block inside the if statement (optional).
  4. Inside the try portion, do the needed work while you hold the mutex and therefore have exclusive access to the protected resource.
  5. In the finally part of the try/finally block, release the Mutex by calling Mutex.ReleaseMutex.
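A minimal sketch of those five steps (the names are hypothetical; in practice the Mutex instance would be shared by all the threads that need the protected resource):

C#
using System;
using System.Threading;

internal sealed class MutexSteps {
   // Step 1: one Mutex instance shared across the threads.
   private static readonly Mutex s_mutex = new Mutex();

   public static void Worker() {
      // Step 2: wait until the lock is available.
      if (s_mutex.WaitOne()) {
         try {    // Step 3: try/finally inside the if
            // Step 4: do the work while holding the mutex.
            Console.WriteLine("Inside the protected region.");
         }
         finally {
            // Step 5: always release the mutex.
            s_mutex.ReleaseMutex();
         }
      }
   }
}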

Here is a simple example, followed by a more practical example:

C#
using System;
using System.Threading;
public sealed class Program {
    public static void Main(string[]  args) {
        Mutex oneMutex = null;
        const string MutexName = "RUNMEONLYONCE";
        try   // try and open the mutex
        {
          oneMutex = Mutex.OpenExisting(MutexName);
        }
        catch (WaitHandleCannotBeOpenedException)
        {
        }
        if (oneMutex == null)
        {
           oneMutex = new Mutex(true, MutexName);
        }
        else
        {
          oneMutex.Close();
          return;
        }
        Console.WriteLine("My Application");
        Console.Read();
    }
}

This example code demonstrates how to use the Mutex, AutoResetEvent, and WaitHandle classes together across several threads. It also demonstrates the methods used to wait on and release mutex objects.

C#
using System;
using System.Threading;

public class MutexSample
{
   static Mutex gM1;
   static Mutex gM2;
   const int ITERS = 100;
   static AutoResetEvent Event1 = new AutoResetEvent(false);
   static AutoResetEvent Event2 = new AutoResetEvent(false);
   static AutoResetEvent Event3 = new AutoResetEvent(false);
   static AutoResetEvent Event4 = new AutoResetEvent(false);
   
   public static void Main(String[] args)
   {
      Console.WriteLine("Mutex Sample ...");
      // Create Mutex initialOwned, with name of "MyMutex".
      gM1 = new Mutex(true,"MyMutex");
      // Create Mutex initialOwned, with no name.
      gM2 = new Mutex(true);
      Console.WriteLine(" - Main Owns gM1 and gM2");

      AutoResetEvent[] evs = new AutoResetEvent[4];
      evs[0] = Event1;    // Event for t1
      evs[1] = Event2;    // Event for t2
      evs[2] = Event3;    // Event for t3
      evs[3] = Event4;    // Event for t4

      MutexSample tm = new MutexSample( );
      Thread t1 = new Thread(new ThreadStart(tm.t1Start));
      Thread t2 = new Thread(new ThreadStart(tm.t2Start));
      Thread t3 = new Thread(new ThreadStart(tm.t3Start));
      Thread t4 = new Thread(new ThreadStart(tm.t4Start));
      t1.Start( );   // Does Mutex.WaitAll(Mutex[] of gM1 and gM2)
      t2.Start( );   // Does Mutex.WaitOne(Mutex gM1)
      t3.Start( );   // Does Mutex.WaitAny(Mutex[] of gM1 and gM2)
      t4.Start( );   // Does Mutex.WaitOne(Mutex gM2)

      Thread.Sleep(2000);
      Console.WriteLine(" - Main releases gM1");
      gM1.ReleaseMutex( );  // t2 and t3 will end and signal

      Thread.Sleep(1000);
      Console.WriteLine(" - Main releases gM2");
      gM2.ReleaseMutex( );  // t1 and t4 will end and signal

      // Waiting until all four threads signal that they are done.
      WaitHandle.WaitAll(evs); 
      Console.WriteLine("... Mutex Sample");
   }

   public void t1Start( )
   {
      Console.WriteLine("t1Start started,  Mutex.WaitAll(Mutex[])");
      Mutex[] gMs = new Mutex[2];
      gMs[0] = gM1;  // Create and load an array of Mutex for WaitAll call
      gMs[1] = gM2;
      Mutex.WaitAll(gMs);  // Waits until both gM1 and gM2 are released
      Thread.Sleep(2000);
      Console.WriteLine("t1Start finished, Mutex.WaitAll(Mutex[]) satisfied");
      Event1.Set( );      // AutoResetEvent.Set() flagging method is done
   }

   public void t2Start( )
   {
      Console.WriteLine("t2Start started,  gM1.WaitOne( )");
      gM1.WaitOne( );    // Waits until Mutex gM1 is released
      Console.WriteLine("t2Start finished, gM1.WaitOne( ) satisfied");
      Event2.Set( );     // AutoResetEvent.Set() flagging method is done
   }

   public void t3Start( )
   {
      Console.WriteLine("t3Start started,  Mutex.WaitAny(Mutex[])");
      Mutex[] gMs = new Mutex[2];
      gMs[0] = gM1;  // Create and load an array of Mutex for WaitAny call
      gMs[1] = gM2;
      Mutex.WaitAny(gMs);  // Waits until either Mutex is released
      Console.WriteLine("t3Start finished, Mutex.WaitAny(Mutex[])");
      Event3.Set( );       // AutoResetEvent.Set() flagging method is done
   }

   public void t4Start( )
   {
      Console.WriteLine("t4Start started,  gM2.WaitOne( )");
      gM2.WaitOne( );   // Waits until Mutex gM2 is released
      Console.WriteLine("t4Start finished, gM2.WaitOne( )");
      Event4.Set( );    // AutoResetEvent.Set() flagging method is done
   }
}

This code might behave differently on different systems. Here is the output that I get on my machine:

Mutex Sample ...
 - Main Owns gM1 and gM2
t1Start started,  Mutex.WaitAll(Mutex[])
t2Start started,  gM1.WaitOne( )
t3Start started,  Mutex.WaitAny(Mutex[])
t4Start started,  gM2.WaitOne( )
 - Main releases gM1
t3Start finished, Mutex.WaitAny(Mutex[])
 - Main releases gM2
t4Start finished, gM2.WaitOne( )
... Mutex Sample

Many contend that successful applications built in the future will take advantage of hyper-threading and multi-core processor technologies by breaking the application into multithreaded parts. Thread synchronization constructs are difficult, and any .NET developer should maintain a keen focus on the Asynchronous Programming Model. Finally, parts of this article have been referenced from Jeffrey Richter's book, CLR via C#. I recommend this book highly. It is a good read.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

