Always false float comparisons

Ola Fosheim Grøstad via Digitalmars-d digitalmars-d at puremagic.com
Thu May 19 00:51:00 PDT 2016


On Wednesday, 18 May 2016 at 22:16:44 UTC, jmh530 wrote:
> On Wednesday, 18 May 2016 at 21:49:34 UTC, Joseph Rushton 
> Wakeling wrote:
>> On Wednesday, 18 May 2016 at 20:29:27 UTC, Walter Bright wrote:
>>> I do not understand the tolerance for bad results in 
>>> scientific, engineering, medical, or finance applications.
>>
>> I don't think anyone has suggested tolerance for bad results 
>> in any of those applications.
>>
>
> I don't think it's about tolerance for bad results, so much as 
> the ability to make the trade-off between speed and precision 
> when you need to.

It isn't only about speed. It is about correctness as well. 
Compilers should not change the outcome (including precision and 
rounding) unless the programmer has explicitly requested it, and 
when the programmer does, the effects should be well documented 
so that they are clear to the programmer. This is a well-known 
and universally accepted principle in compiler design.
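
To make the point concrete, here is a rough D sketch where the 
explicit casts stand in for a compiler that silently keeps an 
intermediate result in a wider type (an illustration only, not 
any particular compiler's code generation):

    import std.stdio : writeln;

    void main()
    {
        float a = 0.1f, b = 0.2f;
        // Evaluated strictly in float: one rounding to float.
        float strict = a + b;
        // The casts mimic a compiler that silently keeps the
        // intermediate sum in double precision.
        double widened = cast(double) a + cast(double) b;
        writeln(strict == cast(float) widened); // true once rounded back
        writeln(strict == widened);             // false: precision leaked
    }

The numbers are "the same", but the observable outcome of the 
comparison changes, which is exactly what a strict compiler must 
not do behind the programmer's back.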

Take, for instance, the documentation page for a 
professional-level compiler targeting embedded programming:

http://processors.wiki.ti.com/index.php/Floating_Point_Optimization

It gives the programmer explicit control over what kinds of 
deviations the compiler may introduce.

If you look at Intel's compiler, you'll see that they even turn 
off fused multiply-add in strict mode because it skips a single 
rounding between the multiply and the add.

https://software.intel.com/sites/default/files/article/326703/fp-control-2012-08.pdf
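
Roughly, in D (using the C runtime's fma from core.stdc.math, and 
assuming the compiler does not itself contract the plain 
expression into an FMA):

    import core.stdc.math : fma;   // C99 fused multiply-add
    import std.stdio : writefln;

    void main()
    {
        double x = 1.0 + double.epsilon;  // 1 + 2^-52
        double p = x * x;                 // product rounded on its own
        // Separate multiply then subtract: the low bits of x*x
        // were already rounded away, so the difference is zero.
        double separate = x * x - p;
        // A fused multiply-add rounds only once and recovers the
        // bits the standalone multiply discarded (2^-104).
        double fused = fma(x, x, -p);
        writefln("separate: %a", separate);  // 0x0p+0
        writefln("fused   : %a", fused);     // 0x1p-104
    }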

Some languages allow constant folding of literals in 
_expressions_ to use infinite precision, but once the value has 
been bound to a variable it should use that variable's precision, 
and the same rounding mode always has to apply. This means you 
should check the rounding mode before using precomputed values.
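
A rough D sketch of why that matters (the operands are parsed 
from strings purely so the optimizer cannot fold the runtime 
division):

    import std.conv : to;
    import std.math : FloatingPointControl;
    import std.stdio : writefln;

    // Folded by the compiler under round-to-nearest, whatever
    // rounding mode is in effect when the program later runs.
    enum double folded = 1.0 / 3.0;

    void main()
    {
        double one = to!double("1.0");
        double three = to!double("3.0");

        FloatingPointControl fpctrl;
        fpctrl.rounding = FloatingPointControl.roundUp;
        double runtime = one / three; // one ulp above 'folded'

        writefln("folded : %a", folded);
        writefln("runtime: %a", runtime);
    } // fpctrl's destructor restores the original rounding mode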

If you cannot emulate the computation at compile time, then you 
can simply run the computations prior to entering main and put 
the "constant computations" in global variables.

That said, compilers may have some problematic settings enabled 
by default in order to look good in benchmarks/test suites.

In order to be IEEE compliant you cannot even optimize "0.0 - x" 
into "-x", nor can you optimize "x - 0.0" into "x". Such issues 
make compiler vendors provide many different floating-point 
options as command-line flags.
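
For instance, "0.0 - x" and "-x" are distinguishable through the 
sign of zero (a sketch; it assumes the compiler does not itself 
apply the very rewrite in question):

    import std.math : signbit;
    import std.stdio : writefln;

    void main()
    {
        double x = 0.0;
        double a = 0.0 - x;  // +0.0
        double b = -x;       // -0.0
        // a == b holds, yet the results are distinguishable:
        // the rewrite flips the sign bit of zero.
        writefln("0.0 - x -> sign bit %s", signbit(a)); // 0
        writefln("   -x   -> sign bit %s", signbit(b)); // 1
    }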


