Always false float comparisons

Manu via Digitalmars-d digitalmars-d at puremagic.com
Wed May 11 02:24:54 PDT 2016


On 11 May 2016 at 07:47, Walter Bright via Digitalmars-d
<digitalmars-d at puremagic.com> wrote:
> On 5/10/2016 12:31 AM, Manu via Digitalmars-d wrote:
>>
>> Think of it like this: a float doesn't represent a precise point (it's
>> an approximation by definition), so see the float as representing the
>> interval from the exact value it stores to that value + 1 mantissa bit.
>> If you see floats that way, then the natural way to compare them is
>> to demote to the lowest common precision, and it wouldn't be
>> considered erroneous, or even warning-worthy; just documented
>> behaviour.
>
>
> Floating point behavior is so commonplace that I am wary of inventing
> new, unusual semantics for it.

Is it unusual to demote to the lower common precision? I think it's
the only reasonable solution.
It's never reasonable to promote a float, since it has already
suffered precision loss; it can't meaningfully be compared against
anything of higher precision than itself.
What is the problem with the behaviour I suggest?
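
For example, here's a minimal D sketch of the two rules (the 0.1
literals are just illustrative; any value that's inexact in binary
behaves the same way):

    import std.stdio;

    void main()
    {
        float f = 0.1f;   // nearest float to 0.1
        double d = 0.1;   // nearest double to 0.1

        // Promotion (today's behaviour): f is widened to double, and
        // its padded low bits don't match d's extra precision.
        writeln(f == d);             // false

        // Demotion: round d to float precision first, then compare.
        writeln(f == cast(float)d);  // true
    }

The promoted comparison is false even though both variables were
initialised from the same literal; that is exactly the "always false"
trap in the subject line.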

The reason I'm wary of emitting a warning is that people will
encounter it *all the time*, and for a user who doesn't have a
comprehensive understanding of floating point (and probably many who
do), the natural/intuitive response would be to place an explicit
cast of the lower-precision value to the higher-precision type, which
is __exactly the wrong thing to do__.
I don't think the warning improves the situation; it likely just
causes people to write the same incorrect code explicitly.
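
To make the trap concrete, a hypothetical sketch of the "fix" such a
user would write, versus the demotion being argued for here:

    void main()
    {
        float f = 0.1f;
        double d = 0.1;

        // The "fix" a user reaches for: make the promotion explicit.
        // It would silence the warning, but it reproduces exactly the
        // implicit promotion the warning complained about.
        assert(!(cast(double)f == d));   // still unequal

        // Demoting the higher-precision operand matches intent.
        assert(cast(float)d == f);       // equal at float precision
    }

The explicit cast compiles cleanly and would hush any such warning,
yet it computes exactly the same always-false comparison.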

Honestly, who would naturally respond to such a warning by demoting
the higher-precision operand? I don't know that guy, other than those
of us who have just watched Don's talk.
