Always false float comparisons

Timon Gehr via Digitalmars-d <digitalmars-d@puremagic.com>
Wed May 18 10:41:36 PDT 2016


I had written and sent this message three days ago, but it seemingly 
never showed up on the newsgroup. I'm sorry if it seemed that I didn't 
explain myself; I was operating under the assumption that this message 
had been made available to you.


On 14.05.2016 03:26, Walter Bright wrote:
 > On 5/13/2016 5:49 PM, Timon Gehr wrote:
 >> Nonsense. That might be true for your use cases. Others might actually
 >> depend on IEEE 754 semantics in non-trivial ways. Higher precision for
 >> temporaries does not imply higher accuracy for the overall computation.
 >
 > Of course it implies it.
 > ...

No, see below.


 > An anecdote: a colleague of mine was once doing a chained calculation.
 > At every step, he rounded to 2 digits of precision after the decimal
 > point, because 2 digits of precision was enough for anybody. I carried
 > out the same calculation to the max precision of the calculator (10
 > digits). He simply could not understand why his result was off by a
 > factor of 2, which was a couple hundred times his individual roundoff
 > error.
 > ...

Now assume that colleague of yours was doing that chained calculation, 
and his calculator magically added the additional digits behind his back 
(it can do this by caching the last full-precision value for each number 
prefix). He wouldn't even notice that his rounding strategy does not 
work. Sometime later he might then use a calculator that does not do the 
magical enhancing.

 >
 >> E.g., correctness of double-double arithmetic is crucially dependent
 >> on correct rounding semantics for double:
 >> https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format#Double-double_arithmetic
 >>
 >
 > Double-double has its own peculiar issues, and is not relevant to this
 > discussion.
 > ...

It is relevant to this discussion insofar as it can occur in 
algorithms that use double-precision floating-point arithmetic. It 
illustrates a potential issue with implicit enhancement of precision. 
For double-double, there are two values of type double that together 
represent a higher-precision value. (One of them has a shifted exponent, 
such that their mantissa bits do not overlap.)

You have mantissas like:

|------double1------| |------double2--------|


Now assume that the compiler instead uses extended precision; what you 
get is something we might call extended-extended, of the form:

|---------extended1---------| |---------extended2-----------|

Now those values are written back into 64-bit double storage; observe 
which part of the double-double mantissa is lost:


|---------extended1---xxxxxx| |---------extended2-----xxxxxx|

|
v

|------double1------| |------double2--------|


The middle part of the mantissa is thrown away, and we are left with a 
single double's worth of precision plus some noise. Implicitly using 
extended precision for some parts of the computation approximately cuts 
the number of accurate mantissa bits in half. I don't want to have to 
deal with this. Just give me what I ask for.
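
To make this concrete, here is a minimal sketch (in D, not from the 
original post) of the error-free TwoSum transformation that 
double-double arithmetic is built on. It produces exactly the two 
non-overlapping doubles described above, but only if every operation 
below is rounded to IEEE 754 double precision; if the compiler silently 
keeps some of the intermediate sums in extended precision, the error 
term is no longer exact.

import std.stdio;

// Knuth's TwoSum: computes s = fl(a + b) and err such that
// a + b == s + err holds exactly, provided each operation below
// is rounded to double precision (IEEE 754 binary64).
void twoSum(double a, double b, out double s, out double err)
{
    s = a + b;
    double bv = s - a;          // the part of b that made it into s
    double av = s - bv;         // the part of a that made it into s
    err = (a - av) + (b - bv);  // the bits that were rounded away
}

void main()
{
    double hi, lo;
    twoSum(1.0, 1e-20, hi, lo);
    // (hi, lo) is a double-double value: hi carries the leading bits,
    // lo the bits that did not fit into hi's 53-bit mantissa.
    writefln("hi = %.17g  lo = %.17g", hi, lo);
}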


 >
 >> Also, it seems to me that for e.g.
 >> https://en.wikipedia.org/wiki/Kahan_summation_algorithm,
 >> the result can actually be made less precise by adding casts to higher
 >> precision and truncations back to lower precision at appropriate places
 >> in the code.
 >
 > I don't see any support for your claim there.
 > ....

Kahan summation uses the same trick that double-double does (see the 
sketch below), so the above reasoning should apply.
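
For concreteness, here is a minimal Kahan summation sketch (in D, not 
from the original post; the test values are just an illustration). The 
compensation term c is meant to capture exactly what was rounded away 
from the running sum in double precision; if the compiler evaluates 
some of these temporaries in extended precision and truncates back to 
double elsewhere, c no longer matches the actual rounding error and the 
compensation can be defeated.

import std.stdio;

// Kahan (compensated) summation. It relies on every operation below
// being rounded to double, so that c holds exactly the low-order bits
// lost when t is formed.
double kahanSum(const(double)[] xs)
{
    double sum = 0.0;
    double c   = 0.0;        // running compensation
    foreach (x; xs)
    {
        double y = x - c;    // corrected next term
        double t = sum + y;  // big + small: low-order bits of y are lost
        c = (t - sum) - y;   // algebraically zero; numerically the lost bits
        sum = t;
    }
    return sum;
}

void main()
{
    // 0.1 is not exactly representable, so summing it many times makes
    // the difference between naive and compensated summation visible.
    double[] xs;
    xs.length = 10_000_000;
    xs[] = 0.1;

    double naive = 0.0;
    foreach (x; xs)
        naive += x;

    writefln("naive: %.17g", naive);
    writefln("kahan: %.17g", kahanSum(xs));
}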

 >
 >> And even if higher precision helps, what good is a "precision-boost"
 >> that e.g. disappears on 64-bit builds and then creates inconsistent
 >> results?
 >
 > That's why I was thinking of putting in 128 bit floats for the compiler
 > internals.
 > ...

Runtime should do the same as CTFE. Are you suggesting we use 128-bit 
soft-floats at run time for all float types?
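
As a hypothetical illustration of the kind of CTFE/runtime 
inconsistency I mean (in D; whether the two results actually differ 
depends on how much precision the compiler uses during compile-time 
evaluation and on the target):

import std.stdio;

// The same function evaluated once at compile time (CTFE) and once at
// run time. If compile-time evaluation uses more precision than the
// code generated for the target, the two results can differ.
float sumOfTenths()
{
    float s = 0;
    foreach (i; 0 .. 10)
        s += 0.1f;
    return s;
}

void main()
{
    enum float atCompileTime = sumOfTenths(); // forced CTFE
    float atRunTime = sumOfTenths();          // ordinary run-time call
    writefln("CTFE:    %.9g", atCompileTime);
    writefln("runtime: %.9g", atRunTime);
    writeln(atCompileTime == atRunTime ? "consistent" : "inconsistent");
}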


 >
 >> Sometimes reproducibility/predictability is more important than maybe
 >> making fewer rounding errors sometimes. This includes reproducibility
 >> between CTFE and runtime.
 >
 > A more accurate answer should never cause your algorithm to fail.

It's not more accurate, just more precise, and only for some of the 
temporary computations; you don't necessarily know which ones. The way 
the new roundoff errors propagate is chaotic, and might not be what the 
code anticipated.

 > It's like putting better parts in your car causing the car to fail.
 > ...

It's like (possibly repeatedly) interchanging "better" parts and "worse" 
parts while the engine is still running.

Anyway, it should be obvious that this kind of reasoning by analogy does 
not lead anywhere.


 >
 >> Just actually comply with the IEEE floating point standard when using
 >> their terminology. There are algorithms that are designed for it and
 >> that might stop working if the language does not comply.
 >
 > Conjecture.

I have given a concrete example.

 > I've written FP algorithms (from Cody+Waite, for example),
 > and none of them degraded when using more precision.
 > ...

For the entire computation or some random temporaries?

 >
 > Consider that the 8087 has been operating at 80 bits precision by
 > default for 30 years. I've NEVER heard of anyone getting actual bad
 > results from this.

Fine, so you haven't.

 > They have complained about their test suites that
 > tested for less accurate results broke.

What happened is that the test suites broke.


 > They have complained about the
 > speed of x87. And Intel has been trying to get rid of the x87 forever.

It's nice to have 80-bit precision. I just want to explicitly ask for it.


 > Sometimes I wonder if there's a disinformation campaign about more
 > accuracy being bad, because it smacks of nonsense.
 >
 > BTW, I once asked Prof Kahan about this. He flat out told me that the
 > only reason to downgrade precision was if storage was tight or you
 > needed it to run faster. I am not making this up.

Obviously, but I think his comment was about enhancing precision for the 
entire computation front-to-back, not just some parts of it. I can do 
that on my own. I don't need the compiler to second-guess me.




