Usually, double or float is used to represent floating-point numbers. But we ran into a problem when we used them. Let's have a look at the output below: a simple subtraction of two numbers.
Double: 0.3 - 0.2 = 0.09999999999999998
Float: 0.3 - 0.2 = 0.10000001
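A minimal Java sketch that reproduces these results (class and variable names are illustrative):

```java
public class FloatingPointDemo {
    public static void main(String[] args) {
        double d = 0.3 - 0.2;   // not exactly 0.1
        float f = 0.3f - 0.2f;  // not exactly 0.1f either

        System.out.println("Double: 0.3 - 0.2 = " + d); // 0.09999999999999998
        System.out.println("Float: 0.3 - 0.2 = " + f);  // 0.10000001
    }
}
```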
These aren't the results we expected, are they? They are very close, though. If we asserted them against 0.1, we would get a failure.
There is a reason for that: computers face a problem when they have to work with floating-point numbers. It is called a “floating-point rounding error”.
Although the difference between the expected result and the actual result looks negligible, it is enough to significantly impact financial and business systems.
Why do we get such unexpected outputs with double?
- The double/float data types follow the IEEE 754 specification.
- Floating-point numbers cannot precisely represent all real numbers.
For a more detailed overview of the cases where errors and inaccuracies can be introduced, see the accuracy section of the Wikipedia article on floating-point arithmetic.
If you want to compare two floating-point numbers that should, in theory, be equal, you need to allow a certain degree of tolerance. So, are we going to write our tests with a tolerance? Of course NOT.
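For completeness, a tolerance-based comparison looks like the sketch below; the `EPSILON` value and method name are assumptions for this example, not a fixed convention:

```java
public class EpsilonCompare {
    // Tolerance chosen for this example; pick one that fits your domain.
    static final double EPSILON = 1e-9;

    static boolean nearlyEqual(double a, double b) {
        return Math.abs(a - b) < EPSILON;
    }

    public static void main(String[] args) {
        System.out.println(0.3 - 0.2 == 0.1);            // false
        System.out.println(nearlyEqual(0.3 - 0.2, 0.1)); // true
    }
}
```

It works, but every assertion in every test would have to carry this tolerance, which is exactly what we want to avoid.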
What should we use instead of double/float?
So you should never use floating-point types where you need 100% precision. BigDecimal is an exact way of representing decimal numbers that avoids floating-point errors during calculations.
BigDecimal: 0.3 - 0.2 = 0.1
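The same subtraction with BigDecimal can be sketched like this. Note the String constructor: `new BigDecimal(0.3)` would inherit the double's rounding error, so the values are passed as strings.

```java
import java.math.BigDecimal;

public class BigDecimalDemo {
    public static void main(String[] args) {
        // String constructor keeps the values exact.
        BigDecimal result = new BigDecimal("0.3").subtract(new BigDecimal("0.2"));
        System.out.println("BigDecimal: 0.3 - 0.2 = " + result); // 0.1
    }
}
```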
Yes! This is the result we expected, and with it we can make exact assertions in our tests. But I must mention one thing: BigDecimal has the drawbacks of being slower and more cumbersome to write algorithms with.
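One example of that extra care: division must be given an explicit scale and rounding mode, because a quotient like 1/3 has no exact decimal representation and would otherwise throw. A sketch (the scale of 10 is an arbitrary choice for illustration):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class BigDecimalDivision {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal("3");

        // one.divide(three) alone would throw ArithmeticException:
        // the result is a non-terminating decimal.
        BigDecimal third = one.divide(three, 10, RoundingMode.HALF_UP);
        System.out.println(third); // 0.3333333333
    }
}
```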
BigDecimal is well suited to calculations where a high level of accuracy is needed. If you are dealing with financial calculations, currency, or prices, or precision is a must, use BigDecimal. Otherwise, doubles tend to be good enough.