Most floating point values cannot be represented exactly, so the stored value differs slightly from the real value. A double provides about 16 significant decimal digits; counting from the first non-zero digit, any digit more than 16 positions to the right is meaningless (effectively random when printed).
Operations on floating point values introduce further errors. If you add a small value to a much larger one, for example, the precision of the result is dictated by the larger one:
```
  1.000 000 000 000 000 000
+ 0.000 000 000 000 001 xxx yyy
= 1.000 000 000 000 001 zzz
```
The digits marked x and y are lost, and z becomes effectively random (zero in this example).
Each further operation on the result increases the error. That is what you are seeing.
The only solution when the errors are too large is to use a more precise number format. While `long double` might be used, you should check whether it is actually supported on your platform. Microsoft Visual Studio, for example, does not provide an extended type (its `long double` is in fact a plain `double`).
You should also know about (and probably use) the scientific notation for floating point literals. You can use it, for example, to replace the `pow()` calls:

```c
b = 1e-15;
c = 2e-14;
d = 3e-14;
```
Scientific notation can also be used when printing values with `printf`. Such output is often more readable than long runs of leading or trailing zeroes. The `%G` conversion is especially useful, because it switches to scientific notation only for very small and very large numbers:

```c
printf("d: %.16G\n", d);
```
Here the precision is limited to 16 digits, so the meaningless digits are not printed. Try this formatting in your program: if the results then look as expected (because the irrelevant digits are omitted and the output is rounded instead), all is OK.