Always false float comparisons
Joseph Rushton Wakeling via Digitalmars-d
digitalmars-d at puremagic.com
Mon May 16 04:18:45 PDT 2016
On Monday, 16 May 2016 at 10:57:00 UTC, Walter Bright wrote:
> On 5/16/2016 3:14 AM, Joseph Rushton Wakeling wrote:
>> 1.2999999523162841796875
>> 1.3000000000000000444089209850062616169452667236328125
>
> Note the increase in correctness of the result by 10 digits.
As Adam mentioned, you keep saying "correctness" or "accuracy",
when people are consistently talking to you about "consistency"
... :-)
I can always request more precision if I need or want it.
Getting different results for a superficially identical float *
double calculation, because one was performed at compile time and
the other at runtime, is an inconsistency that it might be nicer
to avoid.
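For anyone who wants to reproduce the exact digits quoted above, here's a small sketch (in Python rather than D, since the point is purely about IEEE 754 representation; the struct round-trip stands in for a D `float`):

```python
import struct
from decimal import Decimal

# Round the double literal 1.3 to the nearest 32-bit float and back,
# giving exactly the value a D `float` initialised with 1.3f holds.
f32 = struct.unpack('f', struct.pack('f', 1.3))[0]

# Decimal(x) shows the exact stored value, not a rounded repr.
print(Decimal(f32))   # 1.2999999523162841796875
print(Decimal(1.3))   # 1.3000000000000000444089209850062616169452667236328125
```

The two printed values are exactly the two numbers quoted at the top of this message: the nearest 32-bit and nearest 64-bit approximations of 1.3.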
>> ... which is unintuitive, to say the least;
>
> It isn't any less intuitive than:
>
> f + f + 1.3f
>
> being calculated in 64 or 80 bit precision
It is less intuitive. If someFloat + 1.3f is calculated in 64 or
80 bit precision at runtime, it's still constrained by the fact
that someFloat only provides 32 bits of floating-point input to
the calculation.
If someFloat + 1.3f is calculated instead at compile time, the
reasonable assumption is that someFloat _still_ only brings 32
bits of floating-point input. But as we've seen above, it
doesn't.
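A rough illustration of the point, again in Python (the `to_f32` helper and the choice of 0.1 as a stand-in for someFloat are mine, not from the thread):

```python
import struct

def to_f32(x):
    """Round a Python float (a double) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Runtime-style: someFloat really is a 32-bit value, and only then
# is the sum carried out at higher (double) precision.
runtime = to_f32(0.1) + 1.3

# If the compiler instead constant-folds at full double precision,
# someFloat is never truncated to 32 bits, and the answer differs:
folded = 0.1 + 1.3

print(runtime == folded)  # False
```

In other words, widening the *operation* to 64 or 80 bits is fine; what surprises people is when the 32-bit *operand* silently contributes more than 32 bits of input.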
> or for that matter:
>
> ubyte b = 200;
> ubyte c = 100;
> writeln(b + c);
>
> giving an answer of 300 (instead of 44), which every C/C++/D
> compiler does.
The latter result, at least, is (AIUI) consistent whether the
calculation is done at compile time or at runtime.
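For what it's worth, the C-style integer promotion behind Walter's ubyte example can be sketched like this (Python again; the `& 0xFF` stands in for truncating the result back to 8 bits, which D would require a cast for):

```python
# C/C++/D promote ubyte operands to int before adding, so the
# addition itself never wraps at 8 bits.
b, c = 200, 100

promoted = b + c            # 300: the promoted-to-int result writeln sees
wrapped = (b + c) & 0xFF    # 44: what an 8-bit wraparound would give

print(promoted, wrapped)    # 300 44
```

The promoted result is what `writeln(b + c)` prints, and crucially it is the same whether the addition is folded at compile time or executed at runtime.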