Always false float comparisons

Joakim via Digitalmars-d <digitalmars-d at puremagic.com>
Tue May 17 20:01:14 PDT 2016


On Tuesday, 17 May 2016 at 14:59:45 UTC, Ola Fosheim Grøstad 
wrote:
> There are lots of algorithms that will break if you randomly 
> switch precision of different expressions.

There is nothing "random" about increasing precision until the 
end; it follows a well-defined rule.
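
For example, take the same function run through CTFE and at 
runtime (a sketch; whether the two results actually differ 
depends on how much extra precision the implementation carries 
through CTFE):

import std.stdio;

float tenTenths() {
    float s = 0;
    foreach (i; 0 .. 10)
        s += 0.1f;  // 0.1 is not exactly representable in binary
    return s;
}

void main() {
    enum ct = tenTenths();  // forced through CTFE
    auto rt = tenTenths();  // computed at runtime
    // If CTFE evaluates intermediates at higher precision, ct and
    // rt can differ in the last bits, yet both follow the rule.
    writeln(ct == rt);
}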

> Heck, nearly all of the SIMD optimizations on differentials 
> that I use will break if one lane is computed with a different 
> precision than the other lanes. I don't give a rats ass about 
> increased precision. I WANT THE DAMN LANES TO BE IN THE SAME 
> PHASE (or close to it)!! Phase-locking is much more important 
> than value accuracy.

So you're planning on running phase-locking code partially in 
CTFE and partially at runtime, and it's somehow very sensitive 
to precision?  If your "phase-locking" depends on producing 
bit-exact results with floating point, you're doing it wrong.

>> Now, people can always make mistakes in their implementation 
>> and unwittingly depend on lower precision somehow, but that 
>> _should_ fail.
>
> People WILL make mistakes, but if you cannot control precision 
> then you cannot:
>
> 1. create a reference implementation to compare with
>
> 2. unit test floating point code in a reliable way
>
> 3. test for convergence/divergence in feedback loops (which can 
> have _disastrous_ results and could literally ruin your 
> speakers/hearing in the case of audio).

If any of this depends on comparing bit-exact floating-point 
results, you're doing it wrong.
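
That's what error bounds are for.  Testing against a reference 
implementation can look something like this (a sketch; the 
function names and the 1e-6 bound are made up):

import std.math : abs;

// Hypothetical fast path and reference implementation.
double fast(double x)      { return x * (1.0 / 3.0); }
double reference(double x) { return x / 3.0; }

unittest {
    foreach (x; [0.1, 1.0, 42.0, 1e12]) {
        // Compare within an error bound, not bit-exactly: the two
        // may legitimately differ in the last bits even when both
        // are correct.
        assert(abs(fast(x) - reference(x))
               <= 1e-6 * abs(reference(x)));
    }
}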

>> None of this is controversial to me: you shouldn't be 
>> comparing floating-point numbers with anything other than 
>> approxEqual,
>
> I don't agree.
>
> 1. Comparing typed constants for equality should be 
> unproblematic. In D that is broken.

If the constant is calculated rather than written as a literal, 
you should be checking it with approxEqual.
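
For instance (a minimal sketch using std.math.approxEqual with 
its default tolerances):

import std.math : approxEqual;

unittest {
    double a = 0.1, b = 0.2;
    double sum = a + b;  // calculated, not written as a literal
    assert(sum != 0.3);  // bit-exact comparison fails
    assert(approxEqual(sum, 0.3));  // tolerance-based check passes
}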

> 2. Testing for zero is a necessity when doing division.

If the variable being tested is calculated, you should be using 
approxEqual.
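
And that test for zero should really be a tolerance check too, 
since a denominator that is merely near zero blows up just as 
badly (a sketch; the eps value is made up and depends on the 
application):

import std.math : abs;

double safeDiv(double num, double den, double eps = 1e-9) {
    // Guard with an explicit tolerance instead of `den == 0`.
    if (abs(den) < eps)
        throw new Exception("denominator too close to zero");
    return num / den;
}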

> 3. Comparing for equality is the same as subtraction followed 
> by testing for zero.

So what?  You should be using approxEqual.

> So, the rule is: you shouldn't compare at all unless you know 
> the error bounds, but that depends on WHY you are comparing.

No, you should always use error bounds.  Sometimes you can get 
away with checking bit-exact equality, say for constants that you 
defined yourself with no calculation, but it's never a good 
practice.

> However, with constants/sentinels and some methods you do 
> know... Also, with some input you do know that the algorithm 
> WILL fail for certain values at a _GIVEN_ precision. Testing 
> for equality for those values makes a lot of sense, until some 
> a**hole decides to randomly "improve" precision where it was 
> typed to something specific and known.
>
> Take this:
>
> f(x) = 1/(2-x)
>
> Should I not be able to test for the exact value "2" here?

It would make more sense to figure out how large an f(x) you 
need to avoid, say 1e6, and then check approxEqual(x, 2, 2e-6).  
That is much better than avoiding only 2, when an x that is 
arbitrarily close to 2 can also blow up f(x).
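
In code (a sketch; note that approxEqual also applies a default 
absolute tolerance, which only widens the rejected neighbourhood 
and so still keeps f(x) bounded):

import std.math : approxEqual;

// f(x) = 1/(2 - x) blows up for any x near 2, not just at
// exactly 2.  Rejecting a neighbourhood of the pole bounds
// |f(x)|; `x == 2` only dodges one representable value.
double f(double x) {
    assert(!approxEqual(x, 2.0, 2e-6),
           "x too close to the pole at 2");
    return 1.0 / (2.0 - x);
}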

> I don't see why "1.3" typed to a given precision should be 
> different. You want to force me to a more than 3x more 
> expensive test just to satisfy some useless FP semantic that 
> does not provide any real world benefits whatsoever?

Oh, it's real world alright: you should be avoiding more than 
just 2 in your example above.
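
As for 1.3, the comparison in the subject line shows why typed 
literals alone don't save you (a minimal sketch, assuming the 
bare literal is typed double and f keeps float precision at 
runtime):

void main() {
    float f = 1.3;     // 1.3 rounded to 32-bit float precision
    // f is promoted to double and compared against the double
    // rounding of 1.3, which differs, so `f == 1.3` is always
    // false.
    assert(f != 1.3);
}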

>> increasing precision should never bother your algorithm, and a 
>> higher-precision, common soft-float for CTFE will help 
>> cross-compiling and you'll never notice the speed hit.
>
> Randomly increasing precision is never a good idea. Yes, having 
> different precision in different code paths can ruin the quality 
> of rendering and data analysis, and can break algorithms.
>
> Having an infinite-precision untyped real that may downgrade 
> to, say, a 64-bit mantissa is acceptable. Or, in the case of 
> Go, a 256-bit mantissa. That's a different story.
>
> Having single-precision floats that are randomly turned into 
> arbitrary-precision floats is not acceptable. Not at all.

Simply repeating the word "random" over and over again does not 
make it so.

