We need to clarify whether 'real' is the 'default floating point type' or not.

Don Clugston dac at nospam.com.au
Mon Mar 3 00:58:08 PST 2008


The spec is not quite clear on what 'real' can be and when it should be used.
For DMD, 'real' is the 80-bit (actually 79-bit) X87 extended floating-point type.
This has the following characteristics:
a. it is the highest precision floating-point type supported by the hardware;
b. it is the highest precision floating-point type which is fast;
c. it is almost as fast as double;
d. it is the precision used internally for all floating-point calculations
(and this makes it the only precision which is anomaly-free);
e. it is an IEEE floating-point type.

In the spec, 'real' is defined as the "largest hardware implemented floating 
point size (Implementation Note: 80 bits for Intel CPUs)", so we can apparently 
rely on (a); but unfortunately, that's not enough to give us usage rules.
Here are some of the important scenarios:

If you are writing exclusively for X87, the usage rule is simple: use float or
double for arrays (for cache efficiency), and use real for everything else.
Using 'real' instead of 'double' almost never costs you anything; it frequently
provides benefits from the increased precision, and it avoids anomalies caused
by characteristic (d).
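
As a minimal sketch of the kind of anomaly meant here -- whether the assert
below actually fires depends on the compiler, optimisation level and register
spills, so treat it as illustrative only:

  void check(double a, double b)
  {
      double sum = a + b;   // rounded to 64 bits when stored to memory
      // On X87, the (a + b) on the next line may be recomputed and held in
      // an 80-bit register, so the two sides can compare unequal even though
      // they are textually identical. Declaring 'sum' as real makes the
      // stored precision match the working precision.
      assert(sum == a + b);
  }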

On AMD64, it's possible to implement a D compiler which only uses SSE floating
point, ignoring the X87 entirely. Then 'real' is 'double'. Points (a)..(e) are
all satisfied, albeit trivially, and it makes no difference whether you use
'real' or 'double'.
But a compiler which primarily uses SSE and also supports the X87 is also
possible. Then 'real' = 80 bits, but point (d) does not apply -- double*double
uses 'double' as the intermediate type.
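
The difference is observable. Here is a hedged example (which result you get
depends entirely on which intermediate precision the compiler actually uses):

  import std.stdio;

  void main()
  {
      double x = 1e308;
      // With 64-bit ('double') intermediates, x * 10.0 overflows to infinity
      // before the division, so y prints as infinity.
      // With 80-bit X87 intermediates, the product stays within the extended
      // exponent range and y prints as roughly 1e308.
      double y = (x * 10.0) / 10.0;
      writefln("%g", y);
  }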

Now consider the SPARC (late models only). This has 'real' = quadruple
precision, 128 bits. In theory it's supported by hardware, but many operations
are emulated in software in response to 'unimplemented opcode' traps.
Consequently, it's slow (10X slower than double?).
If writing for SPARC, the usage rule is: use 'real' when you truly need the 
extra precision. Otherwise, use double.
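
For instance (an illustrative sketch, not a library routine), a long
accumulation is one of the few places where paying for emulated quad
arithmetic can be worthwhile:

  // Accumulate in 'real' (quad precision on late-model SPARC) to absorb
  // rounding error, but keep the array and the result as fast 'double's.
  double mean(double[] samples)
  {
      real sum = 0.0;
      foreach (x; samples)
          sum += x;
      return cast(double)(sum / samples.length);
  }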

On the PowerPC, GDC currently implements 'real' as 'doubledouble' (128 bits) -- 
with a longer mantissa than 'real' on X87, but with an exponent range the same 
as 'double'. This isn't a true IEEE type (subnormal numbers don't work 
properly). Because the PowerPC has an FMA instruction, this type is reasonably
fast (~3X slower than double?). It satisfies (b) but not (c).
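
For reference, the idea behind 'doubledouble' is to store a value as an
unevaluated sum of two doubles, which is why the mantissa grows but the
exponent range (and the subnormal behaviour) stays that of 'double'. A tiny
sketch of the exact-addition building block -- the names are illustrative,
not GDC's internals:

  struct DoubleDouble
  {
      double hi;   // leading part
      double lo;   // rounding error left over from hi
  }

  // Knuth's two-sum: hi + lo equals a + b exactly (barring overflow).
  DoubleDouble twoSum(double a, double b)
  {
      DoubleDouble r;
      r.hi = a + b;
      double bv = r.hi - a;
      r.lo = (a - (r.hi - bv)) + (b - bv);
      return r;
  }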

Q1. Is this 'doubledouble' acceptable as 'real'?

Q2. Is 'real' the 'default type to use, unless you are in speed-critical code',
OR is it 'the type to use when precision is useful and speed is irrelevant',
OR something else?

IMPLICATIONS: If 'real' is allowed to be relatively slow, then 'double' is the 
most important precision, and the math library functions need explicit overloads 
for double precision (since 'double' is guaranteed to be fast on all platforms).
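
As a toy illustration of the shape that implies -- 'cube' is a placeholder
name, not an actual std.math declaration:

  // One overload per precision, so code written against 'double' never
  // pays for an emulated or otherwise slow 'real'.
  float  cube(float x)  { return x * x * x; }
  double cube(double x) { return x * x * x; }
  real   cube(real x)   { return x * x * x; }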

Nothing in the compiler needs to change, but we need some more guarantees in the 
spec for what 'real' can be and how it is allowed to behave.


