This tip discusses why the volatile keyword is used with the double check locking pattern.
The Need for Volatile with the Double Check Locking Pattern for C#
There have been many articles written about double check locking, such as whether it is safe to use on .NET or other platforms/compilers/languages, and whether the volatile keyword is needed.
Double check locking is safe on the .NET platform running on x86 hardware (if implemented correctly), though it may not be in other languages/platforms/hardware due to their memory models.
I'll cover here the old style double check locking pattern and whether volatile is needed, and then the newer .NET Lazy<T> initialization that avoids having to worry about this.
Below is an example of the traditional style of double check locking.
public class Singleton
{
    private static volatile Singleton _instance = null;
    private static object _locker = new object();

    public static Singleton GetValue()
    {
        if (_instance == null)
        {
            lock (_locker)
            {
                if (_instance == null)
                {
                    _instance = new Singleton();
                }
            }
        }
        return _instance;
    }
}
The simple definition of volatile from Microsoft is as follows:
"The volatile keyword indicates that a field might be modified by multiple threads that are executing at the same time. Fields that are declared volatile are not subject to compiler optimizations that assume access by a single thread. This ensures that the most up-to-date value is present in the field at all times. The volatile modifier is usually used for a field that is accessed by multiple threads without using the lock statement to serialize access."
As you'll see in this post (Understand the Impact of Low-Lock Techniques in Multithreaded Apps), technically, if you are running on an x86 machine with the Microsoft version of .NET, this would "work" without using volatile.
The basic explanation of why it "works" in specific cases without volatile is that the x86 memory model and the .NET memory model combined are strong enough to allow it to work at the moment with the current implementations of x86 CPUs and the .NET compiler/JIT. Many people point to this article as proof that volatile is not needed, since it seems to be from a very authoritative source: Vance Morrison, a lead on the Microsoft JIT team.
In this article, he even shows a way to completely remove locks from a typical double check locking implementation, though he does note that you could end up with multiple objects that way for a singleton. (He doesn't mention that this means your singleton shouldn't maintain any state if used that way, since information held by the extra instances that get discarded/collected would be lost.)
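The lock-free variant he describes can be sketched with Interlocked.CompareExchange. This is my own illustration of the technique, not code from the article (the class name LockFreeSingleton is mine): if two threads race, both may construct an instance, but only one wins the publish and the loser's object is discarded, which is exactly why such a singleton shouldn't hold state.

```csharp
using System.Threading;

public class LockFreeSingleton
{
    private static LockFreeSingleton _instance;

    private LockFreeSingleton() { }

    public static LockFreeSingleton Instance
    {
        get
        {
            if (_instance == null)
            {
                // Race allowed: a losing thread's freshly constructed object
                // is simply dropped. CompareExchange publishes with a full
                // fence, relying on the stronger .NET Framework memory model
                // (the same caveat Morrison raises applies here).
                Interlocked.CompareExchange(ref _instance, new LockFreeSingleton(), null);
            }
            return _instance;
        }
    }
}
```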
At the end of the article, Vance Morrison makes it clearer about these general low lock techniques and volatile usage: "A Final Word of Warning - I am only guessing at the x86 memory model from observed behavior on existing processors. Thus low-lock techniques are also fragile because hardware and compilers can get more aggressive over time. Here are some strategies to minimize the impact of this fragility on your code. First, whenever possible, avoid low-lock techniques. (...) Finally, assume the weakest memory model possible, using volatile declarations instead of relying on implicit guarantees."
If you need more convincing for non-MS implementations: if you want to follow the ECMA spec and be sure this will work for all memory models/hardware that follow it, it is best to add the volatile keyword. Read this.
The ECMA group wrote a spec that allows a more flexible memory model than Microsoft presently implements, to allow for more optimizations on different platforms. If you follow the guidelines of the ECMA spec and use volatile, you are much more likely to work across more platforms and be future proof to some extent.
If you need any further convincing, a more recent article clarifies volatile usage: while the pattern may work now on x86 and Itanium, it doesn't on ARM, and may not work on x86/Itanium or other architectures in the future (The C# Memory Model in Theory and Practice).
Summary - Recommendation for Old Style Double Check Locking
If you are going to use this older style of double check locking, either use the volatile keyword or use the VolatileRead and VolatileWrite functions for any access done outside of a lock/synchronization construct.
There are a few reasons:
- It's the safest standard thing to do as this will match that ECMA spec to allow running on a wider variety of hardware/platforms if you might ever need that (i.e., Mono).
- Many of the articles (except The C# Memory Model in Theory and Practice) don't take into account other factors, such as what happens if the field is touched elsewhere, or if the JIT/compiler starts to support enregistration of class fields (putting them into CPU registers as an optimization; this is currently done for local variables, and it has been suggested that fields might get the same treatment in the future). Will Microsoft ever change this? Hard to say. I suspect much multithreaded code would break if fields were allowed to be put in CPU registers without some really intelligent compiler implementations. But we know that on some platforms the nature of the hardware alone can make this tricky, so better safe than sorry: use the volatile keyword or the appropriate volatile read/write functions.
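As a sketch of the second option, here is the same pattern with the field left non-volatile and the out-of-lock accesses routed through Volatile.Read/Volatile.Write (available from .NET 4.5; on older frameworks Thread.VolatileRead/Thread.VolatileWrite play a similar role):

```csharp
using System.Threading;

public class Singleton
{
    private static Singleton _instance;           // deliberately not marked volatile
    private static readonly object _locker = new object();

    private Singleton() { }

    public static Singleton GetValue()
    {
        // Acquire-semantics read replaces the volatile field read outside the lock.
        if (Volatile.Read(ref _instance) == null)
        {
            lock (_locker)
            {
                if (_instance == null)
                {
                    // Release-semantics write publishes the fully constructed object.
                    Volatile.Write(ref _instance, new Singleton());
                }
            }
        }
        return _instance;
    }
}
```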
The New way - Here is an alternative example using the lazy init provided by the .NET Framework to avoid having to worry about possible issues with double check locking.
Microsoft decided to make it easier for people implementing the lazy initialization pattern with the new Lazy<T> type, so that they do not need to worry about the internals that make it happen.
public class Singleton
{
private static readonly Lazy<Singleton> _instance
= new Lazy<Singleton>(() => new Singleton());
private Singleton()
{
}
public static Singleton Instance
{
get
{
return _instance.Value;
}
}
}
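By default, Lazy<T> uses LazyThreadSafetyMode.ExecutionAndPublication, so the factory delegate runs at most once even under contention and every caller sees the same instance. A small self-contained sketch of that behavior (using a plain Lazy<object> rather than the Singleton class above):

```csharp
using System;
using System.Threading.Tasks;

public static class LazyDemo
{
    // Same shape as the Singleton above: the factory runs at most once,
    // because Lazy<T> defaults to LazyThreadSafetyMode.ExecutionAndPublication.
    private static readonly Lazy<object> _instance = new Lazy<object>(() => new object());

    public static void Main()
    {
        var tasks = new[]
        {
            Task.Run(() => _instance.Value),
            Task.Run(() => _instance.Value)
        };
        Task.WaitAll(tasks);

        // Both tasks observed the same object; initialization happened once.
        Console.WriteLine(ReferenceEquals(tasks[0].Result, tasks[1].Result)); // True
    }
}
```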
Example from http://geekswithblogs.net/BlackRabbitCoder/archive/2010/05/19/c-system.lazylttgt-and-the-singleton-design-pattern.aspx.