Always false float comparisons

Ola Fosheim Grøstad via Digitalmars-d digitalmars-d at puremagic.com
Thu May 19 04:00:31 PDT 2016


On Thursday, 19 May 2016 at 08:37:55 UTC, Joakim wrote:
> On Thursday, 19 May 2016 at 08:28:22 UTC, Ola Fosheim Grøstad 
> wrote:
>> On Thursday, 19 May 2016 at 06:04:15 UTC, Joakim wrote:
>>> In this case, not increasing precision gets the more accurate 
>>> result, but other examples could be constructed that 
>>> _heavily_ favor increasing precision.  In fact, almost any 
>>> real-world, non-toy calculation would favor it.
>>
>> Please stop saying this. It is very wrong.
>
> I will keep saying it because it is _not_ wrong.

Can you then please read this paper in its entirety before 
continuing to say it. Because changing precision breaks 
properties of the semantics of IEEE floating point.

What Every Computer Scientist Should Know About Floating-Point 
Arithmetic

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html#3377

«Conventional wisdom maintains that extended-based systems must 
produce results that are at least as accurate, if not more 
accurate than those delivered on single/double systems, since the 
former always provide at least as much precision and often more 
than the latter. Trivial examples such as the C program above as 
well as more subtle programs based on the examples discussed 
below show that this wisdom is naive at best: some apparently 
portable programs, which are indeed portable across single/double 
systems, deliver incorrect results on extended-based systems 
precisely because the compiler and hardware conspire to 
occasionally provide more precision than the program expects.»

> And that is what _you_ need to stop saying: there's _nothing 
> unpredictable_ about what D does.  You may find it unintuitive, 
> but that's your problem.

No. It is not my problem as I would never use a system with this 
kind of semantics for anything numerical.

It is a problem for D. Not for me.


> The notion that "error correction" can fix the inevitable 
> degradation of accuracy with each floating-point calculation is 
> just laughable.

Well, it is not laughable to computer scientists that accuracy 
depends on knowledge about precision and rounding... And I am a 
computer scientist, in case you have forgotten...


