Always false float comparisons

Ola Fosheim Grøstad via Digitalmars-d digitalmars-d at puremagic.com
Mon Jun 6 07:44:55 PDT 2016


On Saturday, 21 May 2016 at 22:05:31 UTC, Timon Gehr wrote:
> On 21.05.2016 20:14, Walter Bright wrote:
>> It's good to list traps for the unwary in FP usage. It's
>> disingenuous to list only problems with one design and pretend
>> there are no traps in another design.
>
> Some designs are much better than others.

Indeed. There are actually _only_ problems with D's take on 
floating point. It even prevents implementing higher-precision 
double-double and quad-double math libraries using 
error-correction techniques that give you 106-bit/212-bit 
mantissas:

C++ w/ 2 x 64-bit adder and conservative settings
--> GOOD ACCURACY / ~106 significant bits:

#include <iostream>

int main()
{
    const double a = 1.23456;
    const double b = 1.3e-18;
    double hi = a+b;
    // error term: the part of b lost when a+b is rounded to 64-bit double
    double lo = -((a - ((a+b) - ((a+b) - a))) - (b + ((a+b) - a)));
    std::cout << hi << std::endl;     // 1.23456
    std::cout << lo << std::endl;     // 1.3e-18 SUCCESS!
    std::cout << (hi-a) << std::endl; // 0
}
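
For reference, the standard building block double-double libraries 
use for this decomposition is Knuth's TwoSum; written out with 
named temporaries (a minimal sketch, variable names are mine) it 
is easier to see that every intermediate has to be rounded to 
64-bit double for the correction term to come out right:

#include <iostream>

int main()
{
    const double a = 1.23456;
    const double b = 1.3e-18;
    double s   = a + b;                      // rounded 64-bit sum (the "hi" part)
    double bb  = s - a;                      // the portion of b that survived the rounding
    double err = (a - (s - bb)) + (b - bb);  // exact rounding error of a + b
    std::cout << s   << std::endl;           // 1.23456
    std::cout << err << std::endl;           // 1.3e-18
}

s + err then represents a + b exactly, which is what gives the 
combined ~106-bit mantissa.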



D w/ 2 x 64/80-bit adder
--> BAD ACCURACY:

import std.stdio;

void main()
{
    const double a = 1.23456;
    const double b = 1.3e-18;
    double hi = a+b;
    double lo = -((a - ((a+b) - ((a+b) - a))) - (b + ((a+b) - a)));
    writeln(hi);   // 1.23456
    writeln(lo);   // 2.60104e-18 FAILURE!
    writeln(hi-a); // 0
}
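
My reading of what goes wrong there (an illustration of the 
failure mode, not a claim about what the compiler actually emits): 
if the intermediates are held at 80-bit precision, ((a+b) - a) 
recovers most of b instead of 0, so the correction term ends up 
near b + b. The effect can be reproduced in C++ by pushing the 
intermediates through long double (assuming an x87-style 80-bit 
long double, e.g. GCC/Clang on x86; where long double is 64 bits 
this prints 1.3e-18 again):

#include <iostream>

int main()
{
    const double a = 1.23456;
    const double b = 1.3e-18;
    // Hold the intermediates at extended precision, as an 80-bit
    // evaluation of the D expression would.
    long double s = (long double)a + b;  // part of b survives the 64-bit mantissa
    long double t = s - a;               // ~b instead of 0
    double lo = (double)(-(((long double)a - (s - t)) - ((long double)b + t)));
    std::cout << lo << std::endl;        // ~2.6e-18: b gets counted twice
}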


Add to this that compiler-backed emulation of 128-bit floats is 
twice as fast as 80-bit floats in hardware... that's how 
incredibly slow 80-bit floats are on modern hardware.

I don't even understand why this is a topic, as there is not one 
single rationale for keeping it the way it is. Not one.



