Always false float comparisons

Timon Gehr via Digitalmars-d digitalmars-d at puremagic.com
Mon Aug 22 18:29:55 PDT 2016


On 22.08.2016 20:26, Joakim wrote:
> Sorry, I stopped reading this thread after my last response, as I felt I
> was wasting too much time on this discussion, so I didn't read your
> response till now.
> ...

No problem. Would have been fine with me if it stayed that way.

> On Saturday, 21 May 2016 at 14:38:20 UTC, Timon Gehr wrote:
>> On 20.05.2016 13:32, Joakim wrote:
>>> Yet you're the one arguing against increasing precision everywhere in
>>> CTFE.
>>> ...
>>
>> High precision is usually good (use high precision, up to arbitrary
>> precision or even symbolic arithmetic whenever it improves your
>> results and you can afford it). *Implicitly* increasing precision
>> during CTFE is bad. *Explicitly* using higher precision during CTFE
>> than at running time /may or may not/ be good. In case it is good,
>> there is often no reason to stop at 80 bits.
>
> It is not "implicitly increasing,"

Yes it is. I don't state anywhere that I want the precision to
increase. The default assumption is that CTFE behaves (as closely as
is reasonably possible) like runtime execution.
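
To make that default concrete, here is a minimal sketch (the names
are mine, and it assumes the implicitly-widening CTFE semantics that
is being proposed):

float cancel(float a)
{
    float b = a + 1.0e-8f; // rounds back to 'a' at true float precision
    return b - a;          // hence 0.0f at true float precision
}

enum ctValue = cancel(1.0f);      // forced compile-time evaluation

void main()
{
    float rtValue = cancel(1.0f); // run-time evaluation
    // Under strict IEEE float semantics both results are 0.0f. If
    // CTFE implicitly keeps intermediates at 80 bits, ctValue becomes
    // roughly 1e-8 and the assertion fails.
    assert(ctValue == rtValue);
}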

> Walter has said it will always be
> done for CTFE, ie it is explicit behavior for all compile-time
> calculation.

Well, you can challenge the definition of words I am using if you want, 
but what's the point?

>  And he agrees with you about not stopping at 80 bits,
> which is why he wanted to increase the precision of compile-time
> calculation even more.
> ...

I'd rather not think of someone reaching that conclusion as agreeing 
with me.

>>>> This example wasn't specifically about CTFE, but just imagine that
>>>> only part of the computation is done at CTFE, all local variables are
>>>> transferred to runtime and the computation is completed there.
>>>
>>> Why would I imagine that?
>>
>> Because that's the most direct way to go from that example to one
>> where implicit precision enhancement during CTFE only is bad.
>
> Obviously, but you still have not said why one would need to do that in
> some real situation, which is what I was asking for.
> ...

It seems you think your use cases are real, but mine are not, so there 
is no way to give you a "real" example. I can just hope that Murphy's 
law strikes and you eventually run into the problems yourself.


>>> And if any part of it is done at runtime using the algorithms you gave,
>>> which you yourself admit works fine if you use the right
>>> higher-precision types,
>>
>> What's "right" about them? That the compiler will not implicitly
>> transform some of them to even higher precision in order to break the
>> algorithm again? (I don't think that is even guaranteed.)
>
> What's right is that their precision is high enough to possibly give you
> the accuracy you want, and increasing their precision will only better
> that.
> ...

I have explained why this is not true. (There is another explanation 
further below.)


>>>>> No, it is intrinsic to any floating-point calculation.
>>>>> ...
>>>>
>>>> How do you even define accuracy if you don't specify an infinitely
>>>> precise reference result?
>>>
>>> There is no such thing as an infinitely precise result.  All one can do
>>> is compute using even higher precision and compare it to lower
>>> precision.
>>> ...
>>
>> If I may ask, how much mathematics have you been exposed to?
>
> I suspect a lot more than you have.

I would not expect anyone familiar with the real number system to make a 
remark like "there is no such thing as an infinitely precise result".

> Note that I'm talking about
> calculation and compute, which can only be done at finite precision.

I wasn't, and it was my post pointing out the implicit assumption
that floating-point algorithms are thought of as operating on real
numbers that started this subthread, if you remember. Then you said
that my point was untrue, without any argumentation, and I asked a
very specific question in order to figure out how you had reached
your conclusion. Then you wrote a comment that didn't address my
question at all and was obviously untrue from where I stood.
Therefore I suspected that we might be using incompatible
terminology, hence I asked how familiar you are with mathematical
language, a question you didn't answer either.

> One can manipulate symbolic math with all kinds of abstractions, but
> once you have to insert arbitrarily but finitely precise inputs and
> _compute_ outputs, you have to round somewhere for any non-trivial
> calculation.
> ...

You don't need to insert any concrete values to make relevant 
definitions and draw conclusions. My question was how you define 
accuracy, because this is crucial for understanding and/or refuting your 
point. It's a reasonable question that you ought to be able to answer if 
you use the term in an argument repeatedly.

>>> That is a very specific case where they're implementing higher-precision
>>> algorithms using lower-precision registers.
>>
>> If I argue in the abstract, people request examples. If I provide
>> examples, people complain that they are too specific.
>
> Yes, and?  The point of providing examples is to illustrate a general
> need with a specific case.  If your specific case is too niche, it is
> not a general need, ie the people you're complaining about can make both
> those statements and still make sense.
> ...

I think the problem is that they don't see the general need from the 
example.

>>> If you're going to all that
>>> trouble, you should know not to blindly run the same code at
>>> compile-time.
>>> ...
>>
>> The point of CTFE is to be able to run any code at compile-time that
>> adheres to a well-defined set of restrictions. Not using floating
>> point is not one of them.
>
> My point is that potentially not being able to use CTFE for
> floating-point calculation that is highly specific to the hardware is a
> perfectly reasonable restriction.
> ...

I meant that the restriction is not enforced by the language definition. 
I.e. it is not a compile-time error to compute with built-in floating 
point types in CTFE.
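
For instance, this is perfectly legal D:

enum float third = 1.0f / 3.0f; // built-in float computed in CTFE, no error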

Anyway, it is unfortunate but true that performance requirements
might make it necessary to allow the results to be slightly
hardware-specific; I agree that some compromises might be necessary.
Arbitrarily using higher precision even in cases where the target
hardware actually supports all features of IEEE floats and doubles
does not seem like a good compromise, though; it's completely
unforced.

>>>>> The only mention of "the last bit" is
>>>>
>>>> This part is actually funny. Thanks for the laugh. :-)
>>>> I was going to say that your text search was too naive, but then I
>>>> double-checked your claim and there are actually two mentions of "the
>>>> last bit", and close by to the other mention, the paper says that "the
>>>> first double a_0 is a double-precision approximation to the number a,
>>>> accurate to almost half an ulp."
>>>
>>> Is there a point to this paragraph?
>>>
>>
>> I don't think further explanations are required here. Maybe be more
>> careful next time.
>
> Not required because you have some unstated assumptions that we are
> supposed to read from your mind?

Because anyone with a suitable pdf reader can verify that "the last bit" 
is mentioned twice inside that pdf document, and that the mention that 
you didn't see supports my point.

> Specifically, you have not said why
> doing the calculation of that "double-precision approximation" at a
> higher precision and then rounding would necessarily throw their
> algorithms off.
> ...

I did somewhere in this thread. (Using ASCII-art graphics even.)

Basically, the number is represented using two doubles with
non-overlapping mantissas. I'll try to explain using a decimal
floating-point type, which may illustrate it better. E.g. assume that
the higher-precision type has a 4-digit mantissa and the
lower-precision type has a 3-digit mantissa:

The computation could have resulted in the double-double (1234e2,
56.78) representing the number 123456.78 (which is the exact sum of
the two components).

If we now round both components to lower precision independently, we
are left with (123e3, 56.8), representing the number 123056.8, which
has only 3 accurate mantissa digits.

If, OTOH, we had used the lower-precision type from the start, we
would get a more accurate result, such as (123e3, 457), representing
the number 123457.

This might be slightly counter-intuitive, but it is not that uncommon
for floating-point algorithms to rely on specifics of the
floating-point representation.

Here, the issue is that the compiler has no way to know the correct
way to transform the set of higher-precision floating-point numbers
into a corresponding set of lower-precision floating-point numbers;
it does not know how the values are actually interpreted by the
program.
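
For completeness, here is the same effect in actual D, narrowing the
components of a double-double to a float pair as a stand-in for the
80-bit to 64-bit case (the helper and the chosen value are just for
illustration):

import std.stdio;

// Renormalize a value into a float pair (hi, lo) whose exact sum
// approximates it to roughly twice float precision.
void splitToFloats(double v, out float hi, out float lo)
{
    hi = cast(float) v;        // nearest float to v
    lo = cast(float) (v - hi); // remainder, rounded to float
}

void main()
{
    // A value needing more than float precision: 1 + 2^-25 + 2^-50.
    // It is exactly representable as a single double, so its
    // double-double representation is simply (v, 0.0).
    double v = 1.0 + 2.0 ^^ -25 + 2.0 ^^ -50;

    // Building the float pair at the lower precision directly:
    float hi, lo;
    splitToFloats(v, hi, lo); // (1.0f, 2^-25): error about 2^-50

    // Rounding the double-double components independently, which is
    // all a compiler could do when narrowing stored values:
    float hiN = cast(float) v;   // 1.0f: the 2^-25 part is lost...
    float loN = cast(float) 0.0; // ...and the low part cannot restore it

    writefln("direct pair error:   %.3g", v - (cast(double) hi + lo));
    writefln("narrowed pair error: %.3g", v - (cast(double) hiN + loN));
}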

>>> But as long as the way CTFE extending precision is
>>> consistently done and clearly communicated,
>>
>> It never will be clearly communicated to everyone and it will also hit
>> people by accident who would have been aware of it.
>>
>> What is so incredibly awesome about /implicit/ 80 bit precision as to
>> justify the loss of control? If I want to use high precision for
>> certain constants precomputed at compile time, I can do it just as
>> well, possibly even at more than 80 bits such as to actually obtain
>> accuracy up to the last bit.
>
> On the other hand, what is so bad about CTFE-calculated constants being
> computed at a higher precision and then rounded down?  Almost any
> algorithm would benefit from that.
> ...

Some will subtly break, for some it won't matter, and the others will
work for reasons mostly hidden to the programmer and might therefore
break later. Sometimes the programmer is aware of the funny language
semantics and exploits them cleverly, deliberately using 'float'
during CTFE to perform 80-bit computations, confusing readers about
the actual precision being used.
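
E.g. (a hypothetical sketch, again assuming implicitly-widening CTFE):

float harmonic()
{
    float s = 0;
    foreach (i; 1 .. 100_001)
        s += 1.0f / i; // accumulated at 80 bits under widening CTFE,
                       // although the source says 'float'
    return s;          // rounded to float only once, at the end
}

enum h = harmonic();   // a reader sees 'float' and misjudges the accuracy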


>> Also, maybe I will need to move the computation to startup at runtime
>> some time in the future because of some CTFE limitation, and then the
>> additional implicit gain from 80 bit precision will be lost and cause
>> a regression. The compiler just has no way to guess what precision is
>> actually needed for each operation.
>
> Another scenario that I find far-fetched.
> ...

Well, it's not. (The CTFE limitation could be e.g. performance.)

Basically, any time a programmer has wrong assumptions about why
their code works correctly, that is slightly dangerous. It's not a
good thing if the compiler tries to outsmart the programmer, because
the compiler is not (supposed to be) smarter than the programmer.

>>> those people can always opt out and do it some other way.
>>> ...
>>
>> Sure, but now they need to _carefully_ maintain different
>> implementations for CTFE and runtime, for an ugly misfeature. It's a
>> silly magic trick that is not actually very useful and prone to errors.
>
> I think the idea is to give compile-time calculations a boost in
> precision and accuracy, thus improving the constants computed at
> compile-time for almost every runtime algorithm.  There may be some
> algorithms that have problems with this, but I think Walter and I are
> saying they're so few not to worry about, ie the benefits greatly
> outweigh the costs.

There are no benefits, because I can just explicitly compute at the
precision I need myself, and I would prefer others to do the same, so
that I have some clues about their reasoning when reading their code.
Give me what I ask for. If you think I asked for the wrong thing,
give me that wrong thing. If it is truly the wrong thing, I will see
it and fix it.
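
Concretely, this is what I mean by asking explicitly (a sketch; the
Taylor series is an arbitrary example):

real expTaylor(real x)
{
    real sum = 1.0L, term = 1.0L;
    foreach (i; 1 .. 30)
    {
        term *= x / i;
        sum += term;
    }
    return sum;
}

// 80-bit precision during CTFE because the source asks for 'real',
// rounded once, visibly, to double.
enum double e1 = expTaylor(1.0L);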

If you still disagree, that's fine, just don't claim that I don't have a 
point, thanks.


