Pure Programmer

Project: Floating Point Error

Demonstrate the limitations of floating point math by computing 1/3 and storing it in a double variable. Then print that variable and the results of multiplying it by 3, 9 and 300, all to sixteen decimal places. Do we get the exact results we expected? How do we get this result when 1/3 is infinitely repeating in its decimal form 0.3333...?

Try a similar experiment by assigning 0.2 to a variable. Then with a loop, add it 1000 times to another summation variable that was initialized with 0.0. Print the two-tenths variable, the final summation variable and the two-tenths variable times 1000, all to sixteen decimal places. Do we get the exact answer of 200.0 that we expect in both cases? Why does multiplication result in a better answer than repeated addition?

See [[Floating-point_arithmetic#Accuracy_problems|Floating Point Arithmetic Accuracy Problems]]

Output
$ node FloatingPointError.js
1/3: 0.3333333333333333
one: 1.0000000000000000
three: 3.0000000000000000
hundred: 100.0000000000000000
twoTenths: 0.2000000000000000
sum: 199.9999999999971863
200: 200.0000000000000000

Solution