Always false float comparisons

Timon Gehr via Digitalmars-d digitalmars-d at puremagic.com
Thu May 19 11:22:48 PDT 2016


On 19.05.2016 08:04, Joakim wrote:
> On Wednesday, 18 May 2016 at 17:10:25 UTC, Timon Gehr wrote:
>> It's not just slightly worse, it can cut the number of useful bits in
>> half or more! It is not unusual, I have actually run into those
>> problems in the past, and it can break an algorithm that is in Phobos
>> today!
>
> I wouldn't call that broken.  Looking at the hex output by replacing %f
> with %A in writefln, it appears the only differences in all those
> results are in the last byte of the significand.

Argh...

// ...

void main(){
     //double[] data=[1e16,1,-9e15];
     import std.range;
     double[] data=1e16~repeat(1.0,100000000).array~(-9e15);
     import std.stdio;
     writefln("%f",sum(data)); // baseline
     writefln("%f",kahan(data)); // kahan
     writefln("%f",kahanBroken(data)); // broken kahan
}
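
(The definitions elided by the "// ..." above are along the following
lines; this is a reconstruction, not the exact code, with kahanBroken
using real temporaries to make explicit the extra x87 precision that
an optimizer may introduce silently:)

double sum(double[] data){
    // naive left-to-right summation, the baseline
    double result=0;
    foreach(x;data) result+=x;
    return result;
}

double kahan(double[] data){
    // standard Kahan summation: c carries the rounding error of each
    // addition over into the next one
    double sum=0,c=0;
    foreach(x;data){
        double y=x-c;
        double t=sum+y;
        c=(t-sum)-y; // in double, this recovers exactly the bits lost in t
        sum=t;
    }
    return sum;
}

double kahanBroken(double[] data){
    // the same algorithm with extended-precision intermediates: now
    // (t-sum)-y is computed exactly, c stays 0, and the compensation
    // is silently lost
    double sum=0,c=0;
    foreach(x;data){
        real y=x-c;
        real t=sum+y;
        c=(t-sum)-y;
        sum=t;
    }
    return sum;
}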


dmd -run kahanDemo.d
1000000000000000.000000
1000000100000000.000000
1000000000000000.000000

dmd -m32 -O -run kahanDemo.d
1000000000000000.000000
1000000000000000.000000
1000000000000000.000000


Better?

Obviously the data I invent by hand has more structure than a real test 
case where it would go wrong. The problems carry over, though.


> As Don's talk pointed out,
> all floating-point calculations will see loss of precision starting there.
> ...


This implicitly assumes a development model where the programmer first 
writes down the computation as it would be correct in the real number 
system and then naively replaces every operation by its rounded 
equivalent and hopes for the best.

It is a useful rule if that is what you're doing. One might be doing 
something else. Consider the following paper for an example where the 
last bit in the significand actually carries useful information for many 
of the values used in the program.

http://www.jaist.ac.jp/~s1410018/papers/qd.pdf
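
(An illustration, not code from the paper: double-double/quad-double
arithmetic is built on error-free transformations such as Knuth's
twoSum, which splits a+b exactly into a rounded sum and a correction
term. It is only correct if every operation rounds to double exactly
once, so hidden extra precision breaks it in the same way it breaks
Kahan summation above:)

import std.stdio;

// Knuth's twoSum: s+err equals a+b exactly; err is precisely the
// information living in and below the last bit of s.
void twoSum(double a,double b,out double s,out double err){
    s=a+b;
    double bv=s-a;         // the part of b that made it into s
    err=(a-(s-bv))+(b-bv); // the part that was rounded away
}

void main(){
    double s,err;
    twoSum(1e16,1.0,s,err);
    writefln("%f + %f",s,err); // 1000000000000000.000000 + 1.000000
}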

> In this case, not increasing precision gets the more accurate result,
> but other examples could be constructed that _heavily_ favor increasing
> precision.

Sure. In such cases, you should use higher precision. What is the 
problem? This is already supported (the compiler is not allowed to use 
lower precision than requested).
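
(A minimal sketch of that: if the extra precision is what you want,
request it in the type instead of hoping the optimizer sneaks it in.
On x86, real is 80-bit x87 extended precision:)

real preciseSum(double[] data){
    real result=0; // accumulate in the highest precision available
    foreach(x;data) result+=x;
    return result;
}

(On x86, where real has a 64-bit significand, this gives
1000000100000000 for the data above with no compensation trick at all.)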

> In fact, almost any real-world, non-toy calculation would
> favor it.
>
> In any case, nobody should depend on the precision out that far being
> accurate or "reliable."


IEEE floating point has well-defined behaviour, and there is absolutely 
nothing wrong with code that delivers more accurate results simply 
because it is aware of the actual semantics of the operations being 
carried out.

