[Issue 360] Compile-time floating-point calculations are sometimes inconsistent

Walter Bright newshound at digitalmars.com
Fri Sep 22 12:20:13 PDT 2006


Don Clugston wrote:
> Walter Bright wrote:
>> Not in D. The 'f' suffix only indicates the type.
> 
> And therefore, it only matters in implicit type deduction, and in 
> function overloading. As I discuss below, I'm not sure that it's 
> necessary even there.
> In many cases, it's clearly a programmer error. For example in
> real BAD = 0.2f;
> where the f has absolutely no effect.

It may come about as a result of source code generation, though, so I'd 
be reluctant to make it an error.


>> You can by putting the constant into a static, non-const variable. 
>> Then it cannot be constant folded.
> 
> Actually, in this case you still want it to be constant folded.

A static variable's value can change, so it can't be constant folded. To 
have it participate in constant folding, it needs to be declared as const.
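For example (a minimal sketch; the names are made up for illustration):

     static double notFolded = 0.2;  // can be reassigned at runtime, so never folded
     const double folded = 0.2;      // known at compile time, participates in folding

     double f() { return notFolded * 3.0; }  // multiply happens at runtime
     double g() { return folded * 3.0; }     // compiler may fold this to a literal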


> I agree. But it seems that D is currently in a halfway house on this 
> issue. Somehow, 'double' is privileged, and I don't think it's got any 
> right to be.
> 
>     const XXX = 0.123456789123456789123456789f;
>     const YYY = 1 * XXX;
>     const ZZZ = 1.0 * XXX;
> 
>    auto xxx = XXX;
>    auto yyy = YYY;
>    auto zzz = ZZZ;
> 
> // now xxx and yyy are floats, but zzz is a double.
> Multiplying by '1.0' causes a float constant to be promoted to double.

That's because 1.0 is a double, and double * float => double.
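A minimal sketch of the rule, checked on the literal types alone:

     static assert(is(typeof(1 * 1.0f) == float));     // int * float    => float
     static assert(is(typeof(1.0 * 1.0f) == double));  // double * float => double
     static assert(is(typeof(1.0L * 1.0f) == real));   // real * float   => real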

>    real a = xxx;
>    real b = zzz;
>    real c = XXX;
> 
> Now a, b, and c all have different values.
> 
> Whereas the same operation at runtime causes it to be promoted to real.
> 
> Is there any reason why implicit type deduction on a floating point 
> constant doesn't always default to real? After all, you're saying "I 
> don't particularly care what type this is" -- why not default to maximum 
> accuracy?
> 
> Concrete example:
> 
> real a = sqrt(1.1);
> 
> This only gives a double precision result. You have to write
> real a = sqrt(1.1L);
> instead.
> It's easier to do the wrong thing, than the right thing.
> 
> IMHO, unless you specifically take other steps, implicit type deduction 
> should always default to the maximum accuracy the machine could do.

It is a good idea, but it isn't that way for the following reasons:

1) It's the way C, C++, and Fortran work. Changing the promotion rules 
would mean that, when translating solid, reliable libraries from those 
languages to D, one would have to be very, very careful.

2) Float and double are expected to be implemented in hardware. Longer 
precisions are often not available. I wanted to make it practical for a 
D implementation on those machines to provide a software long precision 
floating point type, rather than just making real==double. Such a type 
would be very slow compared with double.

3) Real, even in hardware, is significantly slower than double. Doing 
constant folding at max precision at compile time won't affect runtime 
performance, so it is 'free'.
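A sketch of the distinction in point 3 (the function and names are made up 
for illustration; it assumes the initializer is folded at compile time):

     // Evaluated once by the compiler; only the final value ends up in the
     // binary, so folding at maximum precision costs nothing at runtime.
     const real third = 1.0L / 3.0L;

     real scale(real x)
     {
         return x * third;   // only this multiply pays the real-arithmetic cost
     }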


