[Issue 360] Compile-time floating-point calculations are sometimes inconsistent

Dave Dave_member at pathlink.com
Fri Sep 22 07:49:10 PDT 2006


Don Clugston wrote:
> Walter Bright wrote:
>> d-bugmail at puremagic.com wrote:
>>> ------- Comment #5 from clugdbug at yahoo.com.au  2006-09-22 02:25 -------
>>> (In reply to comment #4)
>>>> while (j <= (1.0f/STEP_SIZE)) is at double precision,
>>>> writefln((j += 1.0f) <= (1.0f/STEP_SIZE)) is at real precision.
>>> I don't understand where the double precision comes from. Since all
>>> the values are floats, the only precisions that make sense are float
>>> and real.
>>
>> The compiler is allowed to evaluate intermediate results at a greater 
>> precision than that of the operands.
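>>
>> For example (a sketch; the outcome is implementation-dependent):
>>
>>     float a = 0.1f, b = 0.2f, c = 0.3f;
>>     // (a + b) may be evaluated at real precision before the
>>     // compare, so the result can differ from strict float math:
>>     bool eq = (a + b) == c;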
>>
>>> Really, 0.2f should not be the same number as 0.2.
>>
>> 0.2 is not representable exactly, the only question is how much 
>> precision is there in the representation.
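>>
>> For instance (a sketch; the printed digits are approximate):
>>
>>     import std.stdio;
>>     void main()
>>     {
>>         writefln("%.20f", 0.2f); // ~0.20000000298023223877
>>         writefln("%.20f", 0.2);  // ~0.20000000000000001110
>>     }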
>>
>>> When you put the 'f' suffix
>>> on, surely you're asking the compiler to truncate the precision.
>>
>> Not in D. The 'f' suffix only indicates the type.
> 
> And therefore, it only matters in implicit type deduction, and in 
> function overloading. As I discuss below, I'm not sure that it's 
> necessary even there.
> In many cases, it's clearly a programmer error. For example in
> real BAD = 0.2f;
> where the f has absolutely no effect.
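> 
> For overloading, the suffix does currently pick the function. A
> minimal sketch (foo is just a hypothetical pair of overloads):
> 
>     import std.stdio;
>     void foo(float x)  { writefln("float");  }
>     void foo(double x) { writefln("double"); }
>     void main()
>     {
>         foo(0.2f); // exact match: calls foo(float)
>         foo(0.2);  // exact match: calls foo(double)
>     }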
> 
>> The compiler may
>> maintain internally as much precision as possible, for purposes of 
>> constant folding. Committing the actual precision of the result is 
>> done as late as possible.
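>>
>> For example, in a declaration like
>>
>>     float x = 0.1f + 0.2f;
>>
>> the folded sum may be held internally at real precision, with the
>> single rounding to float happening only at the assignment to x.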
>>
>>> It can be expanded to real precision later without problems.
>>> Currently, there's no way to get a low-precision constant at compile
>>> time.
>>
>> You can by putting the constant into a static, non-const variable. 
>> Then it cannot be constant folded.
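>>
>> That is, something like (a sketch; the variable name is arbitrary):
>>
>>     static float fifth = 0.2f; // non-const, so never folded
>>     real r = fifth;            // r gets the float-rounded value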
> 
> Actually, in this case you still want it to be constant folded.
>>
>>> (In fact, you should be able to write real a = 0.2 - 0.2f; to get the
>>> truncation error).
>>
>> Not in D, where the compiler is allowed to evaluate using as much 
>> precision as possible for purposes of constant folding. The vast 
>> majority of calculations benefit from delaying rounding as long as 
>> possible, hence D's bias towards using as much precision as possible.
>>
>> The way to write robust floating point calculations in D is to ensure 
>> that increasing the precision of the calculations will not break the 
>> result.
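>>
>> For example, test with a tolerance rather than exact equality. A
>> minimal sketch (closeEnough is just an illustrative helper):
>>
>>     import std.math;
>>     bool closeEnough(real x, real y)
>>     {
>>         // still true if intermediates carry extra precision
>>         return fabs(x - y) < 1e-6;
>>     }
>>
>> whereas a bare x == y can flip when the compiler keeps more bits.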
>>
>> Early versions of Java insisted that floating point intermediate
>> results always be rounded to the precision of their type. While this
>> ensured consistency of results, it mostly resulted in consistently
>> getting inferior and wrong answers.
> 
> I agree. But it seems that D is currently in a halfway house on this
> issue. Somehow, 'double' is privileged, and I don't think it has any
> right to be.
> 
>     const XXX = 0.123456789123456789123456789f;
>     const YYY = 1 * XXX;
>     const ZZZ = 1.0 * XXX;
> 
>     auto xxx = XXX;
>     auto yyy = YYY;
>     auto zzz = ZZZ;
> 
> // now xxx and yyy are floats, but zzz is a double.
> Multiplying by '1.0' causes a float constant to be promoted to double.
> 
>    real a = xxx;
>    real b = zzz;
>    real c = XXX;
> 
> Now a, b, and c all have different values.
> 
> Whereas the same operation at runtime causes it to be promoted to real.
> 
> Is there any reason why implicit type deduction on a floating point 
> constant doesn't always default to real? After all, you're saying "I 
> don't particularly care what type this is" -- why not default to maximum 
> accuracy?
> 
> Concrete example:
> 
> real a = sqrt(1.1);
> 
> This only gives a double-precision result: 1.1 is typed as double, so
> the double overload of sqrt is selected, and widening the result to
> real afterwards cannot recover the lost bits. You have to write
> real a = sqrt(1.1L);
> instead. It's easier to do the wrong thing than the right thing.
> 
> IMHO, unless you specifically take other steps, implicit type
> deduction should always default to the maximum precision the machine
> supports.

Great point.


