[Issue 360] Compile-time floating-point calculations are sometimes inconsistent

Don Clugston dac at nospam.com.au
Sat Sep 23 07:26:07 PDT 2006


Walter Bright wrote:
> Don Clugston wrote:
>> Walter Bright wrote:
>>> Not in D. The 'f' suffix only indicates the type.
>>
>> And therefore, it only matters in implicit type deduction, and in 
>> function overloading. As I discuss below, I'm not sure that it's 
>> necessary even there.
>> In many cases, it's clearly a programmer error. For example, in
>>    real BAD = 0.2f;
>> the f has absolutely no effect.
> 
> It may come about as a result of source code generation, though, so I'd 
> be reluctant to make it an error.
> 
> 
>>> You can, by putting the constant into a static, non-const variable. 
>>> Then it cannot be constant folded.
>>
>> Actually, in this case you still want it to be constant folded.
> 
> A static variable's value can change, so it can't be constant folded. To 
> have it participate in constant folding, it needs to be declared as const.

But if it's const, then it no longer keeps float precision! I want both: 
constant folding and float precision.
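Something like this is the tension (just a sketch with made-up names; the 
comments describe the behaviour I'm complaining about, not anything 
guaranteed):

    // With a static, non-const variable the value keeps genuine float
    // precision, but since it could change at runtime it can't take part
    // in constant folding.
    static float notFolded = 0.2f;

    // With const it can be folded into compile-time expressions, but the
    // folding may then be done at higher-than-float precision, so the 'f'
    // ends up having no effect on the value that actually gets used.
    const float folded = 0.2f;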

>> I agree. But it seems that D is currently in a halfway house on this 
>> issue. Somehow, 'double' is privileged, and I don't think it's got any 
>> right to be.
>>
>>    const XXX = 0.123456789123456789123456789f;
>>    const YYY = 1 * XXX;
>>    const ZZZ = 1.0 * XXX;
>>
>>    auto xxx = XXX;
>>    auto yyy = YYY;
>>    auto zzz = ZZZ;
>>
>> // now xxx and yyy are floats, but zzz is a double.
>> Multiplying by '1.0' causes a float constant to be promoted to double.
> 
> That's because 1.0 is a double. A double*float => double.
> 
>>    real a = xxx;
>>    real b = zzz;
>>    real c = XXX;
>>
>> Now a, b, and c all have different values.
>>
>> Whereas the same multiplication at runtime causes the result to be 
>> promoted to real.
>>
>> Is there any reason why implicit type deduction on a floating point 
>> constant doesn't always default to real? After all, you're saying "I 
>> don't particularly care what type this is" -- why not default to 
>> maximum accuracy?
>>
>> Concrete example:
>>
>>    real a = sqrt(1.1);
>>
>> This only gives a double-precision result. You have to write
>>    real a = sqrt(1.1L);
>> instead.
>> It's easier to do the wrong thing than the right thing.
>>
>> IMHO, unless you specifically take other steps, implicit type 
>> deduction should always default to the maximum accuracy the machine 
>> could do.
> 
> It is a good idea, but it isn't that way for these reasons:
> 
> 1) It's the way C, C++, and Fortran work. Changing the promotion rules 
> would mean that, when translating solid, reliable libraries from those 
> languages to D, one would have to be very, very careful.

That's very important. Still, those languages don't have implicit type 
deduction. Also, none of those languages guarantees the accuracy of 
decimal->binary conversions, so there's always some error in decimal 
constants. Incidentally, I recently read that GCC uses something like 
160 bits for constant folding, so it's always going to give results that 
differ from those of other compilers.

Why doesn't D behave like C with respect to 'f' suffixes?
(I.e., do the conversion, then truncate the result to float precision.)
In fact, I can't imagine many cases where you'd actually want a 'float' 
constant instead of a 'real' one.

> 2) Float and double are expected to be implemented in hardware. Longer 
> precisions are often not available. I wanted to make it practical for a 
> D implementation on those machines to provide a software long precision 
> floating point type, rather than just making real==double. Such a type 
> would be very slow compared with double.

Interesting. I thought that 'real' was supposed to be the highest-accuracy 
fast floating-point type, and would therefore be either 64, 80, or 128 
bits. So could it also be a double-double?
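If so, I'd picture something like the sketch below (my guess at what such 
a software type might look like; the struct and function names are made 
up). It's just Knuth's error-free addition step, and note that the trick 
relies on every operation being rounded to exactly double precision, which 
is precisely the sort of thing 80-bit temporaries can break:

    // A double-double stores a value as the unevaluated sum of two doubles.
    struct DoubleDouble
    {
        double hi;   // leading part
        double lo;   // exact rounding error of hi
    }

    // Error-free addition (two-sum): r.hi + r.lo == a + b exactly,
    // provided every operation below is rounded to double precision.
    DoubleDouble twoSum(double a, double b)
    {
        DoubleDouble r;
        r.hi = a + b;
        double bv = r.hi - a;                   // the part of b absorbed into hi
        r.lo = (a - (r.hi - bv)) + (b - bv);    // what the rounding threw away
        return r;
    }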
For me, the huge benefit of the 'real' type is that it guarantees that 
optimisation won't change the results. In C, using doubles, it's quite 
unpredictable when a temporary will be 80 bits, and when it will be 64 
bits. In D, if you stick to real, you're guaranteed that nothing weird 
will happen. I'd hate to lose that.
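For instance (a trivial sketch, assuming an x87-style machine where real 
is the 80-bit type): if everything is declared real, the optimiser has no 
narrower temporaries to choose between, so the result can't quietly change 
between a debug and an optimised build.

    real dot(real[] a, real[] b)
    {
        real sum = 0.0L;
        foreach (i, x; a)
            sum += x * b[i];   // every operand and intermediate is real
        return sum;
    }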

> 3) Real, even in hardware, is significantly slower than double. Doing 
> constant folding at max precision at compile time won't affect runtime 
> performance, so it is 'free'.

In this case, the initial issue remains: in order to write code that 
maintains accuracy regardless of machine precision, it is sometimes 
necessary to specify the precision that should be used for constants.
The original code was an example of the weird things that happen when 
that isn't respected.
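As one (made-up) illustration of why: argument-reduction style code often 
splits a constant into a part that is exact at the working precision plus 
a small correction, and that only works if the first constant really is 
truncated as written.

    // Hypothetical example: splitting ln 2 into a float-exact high part
    // plus the tail that float can't represent.  If the constant folder
    // quietly keeps extra precision in C_HI, then C_LO comes out wrong and
    // the cancellation the technique depends on is lost.
    const float  C_HI = cast(float)0.693147180559945309; // meant to be the nearest float to ln 2
    const double C_LO = 0.693147180559945309 - C_HI;     // the part float can't hold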


