
The Precision Battle: Demonstrating the Importance of Decimals over Doubles in .NET

Exploring the precision differences between .NET's decimal and double types in financial calculations
In .NET, the choice of data type can significantly impact the precision of arithmetic operations, especially in financial contexts. This article walks through a hands-on demonstration comparing the decimal and double types. It shows the subtle but crucial discrepancies that emerge when a less precise data type is used for repetitive arithmetic, and why decimal is the better choice for precise financial calculations.

Introduction

When dealing with financial computations, a seemingly innocuous choice, such as selecting a data type, can have major implications. This article delves into the consequential difference between decimal and double data types in .NET, particularly in scenarios that require utmost precision.

Background

Both decimal and double are floating-point data types in .NET, but they are designed for different purposes. The double type is a 64-bit binary (IEEE 754) double-precision format, suitable for scientific calculations where a small approximation error is acceptable. The decimal type, on the other hand, is a 128-bit type that stores numbers in base 10 and is intended for financial and monetary calculations where precision is crucial.
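
To make that difference concrete, here is a minimal illustration (separate from the demo below) of how the two types handle a classic comparison:

C#
// Minimal illustration: 0.1 and 0.2 have no exact binary representation,
// so their double sum is not exactly 0.3. decimal works in base 10, so it is.
Console.WriteLine(0.1 + 0.2 == 0.3);    // False
Console.WriteLine(0.1m + 0.2m == 0.3m); // True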

Using the Code

To showcase the difference between the two types, we simulate an operation in which a small amount, 0.01, is added repeatedly over 10,000 iterations.

C#
namespace DemoConsole
{
    internal class Program
    {
        static void Main(string[] args)
        {
            // Simulating a financial operation where we're adding 
            // a small amount multiple times
            const int iterations = 10000;
            const decimal decimalValue = 0.01M; // M suffix denotes a decimal
            const double doubleValue = 0.01;

            decimal totalDecimal = 0M;
            double totalDouble = 0.0;

            for (int i = 0; i < iterations; i++)
            {
                totalDecimal += decimalValue;
                totalDouble += doubleValue;
            }

            Console.WriteLine($"Using decimal after 
                             {iterations} iterations: {totalDecimal}");
            Console.WriteLine($"Using double after 
                             {iterations} iterations: {totalDouble}");

            if ((double)totalDecimal != totalDouble)
            {
                Console.WriteLine("The values are not the same!");
            }
            else
            {
                Console.WriteLine("The values are the same.");
            }

            Console.ReadKey();
        }
    }
}

After running the above code, the output is as follows:

Using decimal after 10000 iterations: 100,00
Using double after 10000 iterations: 100,00000000001425
The values are not the same!

As the results show, the double total deviates from the expected value by approximately 0.00000000001425, highlighting the inaccuracies that can creep in even with seemingly simple arithmetic. (The comma in the output is the decimal separator of the machine's regional settings.)

This output clearly illustrates the difference in precision. While the decimal type accurately computes the result as 100, the double type introduces a small but noticeable error. In financial operations, even such minute discrepancies can be significant.
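
If you are forced to work with double anyway, for instance because an external API only exposes double, a common mitigation is to compare values within a tolerance rather than with ==. The sketch below is only illustrative; the tolerance of 1e-9 is an arbitrary choice and should be tuned to the domain:

C#
// Sketch: tolerance-based comparison of doubles. The tolerance is an
// arbitrary illustrative value, not a universal constant.
const double tolerance = 1e-9;
double expected = 100.0;
double actual = 100.00000000001425; // the accumulated double total from above
bool effectivelyEqual = Math.Abs(expected - actual) < tolerance;
Console.WriteLine(effectivelyEqual); // True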

Note: The M suffix in the decimalValue constant is used to denote a decimal literal in C#.

If you wish to explore the code or contribute, you can find the source code on GitHub.

Points of Interest

It's worth noting that the underlying in-memory representations of the two types explain this disparity. The double type uses a binary (base-2) representation, which cannot represent many decimal fractions such as 0.01 exactly and therefore introduces rounding errors; the decimal type stores values in base 10, which avoids these errors for typical monetary amounts.
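
For the curious, the raw storage of each type can be inspected directly. The following sketch (not part of the original demo) dumps how 0.01 is actually held in memory:

C#
// decimal.GetBits returns four ints: a 96-bit integer plus sign/scale flags.
// 0.01m is stored exactly as the integer 1 with a scale of 2 (1 / 10^2).
int[] parts = decimal.GetBits(0.01m);
Console.WriteLine(string.Join(", ", parts)); // 1, 0, 0, 131072

// The double holds the nearest 64-bit binary fraction to 0.01, which is not
// exactly 0.01; this prints its raw IEEE 754 bit pattern.
Console.WriteLine(BitConverter.DoubleToInt64Bits(0.01).ToString("X")); // 3F847AE147AE147B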

History

  • 17th September, 2023: Initial article created showcasing the importance of the decimal data type in .NET for precision-required operations

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Software Developer (Senior)
Netherlands
I am a self-employed software engineer working on .NET Core. I love TDD.

Comments and Discussions

 
Question: What is the cost of accuracy?
Сергій Ярошко, 18-Sep-23 22:22

Question: Multiplication
Kenneth Haugland, 17-Sep-23 12:58
You are right, but this tip/trick could easily be expanded; it's a massive and important topic.

Technically, all numbers stored on a computer are approximations, where epsilon is the smallest representable increment:
x_approx = x_real +/- epsilon

This means that with floating-point precision you cannot know exactly what number the user actually meant to store.

One more thing: in the IEEE standards, all numbers are stored as a mantissa (the significant digits) and an exponent. This is the reason that multiplication is a better option than just summing small numbers, as shown in your example.
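
To illustrate that point with the article's own numbers (a quick sketch, not part of the original tip): a single multiplication rounds once, whereas 10,000 additions each introduce their own rounding error, and those errors accumulate:

C#
// Sketch: one multiplication vs. 10,000 additions of 0.01 in double.
double summed = 0.0;
for (int i = 0; i < 10_000; i++)
{
    summed += 0.01;
}
double multiplied = 0.01 * 10_000;

Console.WriteLine(summed);     // ~100.00000000001425 (accumulated rounding error)
Console.WriteLine(multiplied); // 100 (a single rounding step lands on an exact double)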
