Always false float comparisons

Walter Bright via Digitalmars-d digitalmars-d at puremagic.com
Sun May 15 23:16:10 PDT 2016


On 5/15/2016 10:13 PM, Manu via Digitalmars-d wrote:
> 1.3f != 1.3 is not accurate, it's wrong.

I'm sorry, there is no way to make FP behave like mathematics. It's its own 
beast, with its own rules.


>> The initial Java spec worked as you desired, and they were pretty much
>> forced to back off of it.
> Ok, why's that?

Because forcing the x87 to work at reduced precision caused a 2x slowdown or 
something like that, making Java uncompetitive (i.e. unusable) for numerics work.


>> They won't match on any code that uses the x87. The standard doesn't require
>> float math to use float instructions, they can (and do) use double
>> instructions for temporaries.
> If it does, then it is careful to make sure the precision expectations
> are maintained.

Have you tested this?

> If you don't '-ffast-math', the FPU code produces a
> IEEE conformant result on reasonable compilers.

Googling 'fp:fast' yields:

"Creates the fastest code in most cases by relaxing the rules for optimizing 
floating-point operations. This enables the compiler to optimize floating-point 
code for speed at the expense of accuracy and correctness. When /fp:fast is 
specified, the compiler may not round correctly at assignment statements, 
typecasts, or function calls, and may not perform rounding of intermediate 
expressions. The compiler may reorder operations or perform algebraic 
transforms—for example, by following associative and distributive rules—without 
regard to the effect on finite precision results. The compiler may change 
operations and operands to single-precision instead of following the C++ type 
promotion rules. Floating-point-specific contraction optimizations are always 
enabled (fp_contract is ON). Floating-point exceptions and FPU environment 
access are disabled (/fp:except- is implied and fenv_access is OFF)."

This doesn't line up with what you said it does?

> We depend on this.

I googled 'fp:precise', which is the VC++ default, and found this:

"Using /fp:precise when fenv_access is ON disables optimizations such as 
compile-time evaluations of floating-point expressions."

How about that? No CTFE! Is that really what you wanted? :-)


> They are certainly selected with the _intent_ that they are less accurate.

This astonishes me. What algorithm requires less accuracy?


> It's not reasonable that a CTFE function may produce a radically
> different result than the same function at runtime.

Yeah, since the less accurate version can suffer from a phenomenon called "total 
loss of precision" where accumulated roundoff errors make the value utter 
garbage. When is this desirable?


>> I'm interested to hear how he was "shafted" by this. This apparently also
>> contradicts the claim that other languages do as you ask.
>
> I've explained prior the cases where this has happened are most often
> invoked by the hardware having a reduced runtime precision than the
> compiler. The only cases I know of where this has happened due to the
> compiler internally is CodeWarrior; an old/dead C++ compiler that
> always sucked and caused us headaches of all kinds.
> The point is, the CTFE behaviour is essentially identical to our
> classic case where the hardware runs a different precision than the
> compiler, and that's built into the language! It's not just an anomaly
> expressed by one particular awkward platform we're required to
> support.

You mentioned there was a "shimmer" effect. With the extremely limited ability 
of C++ compilers to fold constants, I'm left wondering how your code suffered 
from this, and why you would calculate the same value at both compile and run time.


