Why use float and double instead of real?

BCS none at anon.com
Tue Jun 23 09:01:43 PDT 2009


Hello Witold,

> On Tue, 2009-06-23 at 14:44 +0200, Lars T. Kyllingstad
> wrote:
> 
>> Is there ever any reason to use float or double in calculations? I
>> mean, when does one *not* want maximum precision? Will code using
>> float or double run faster than code using real?
>> 
> Yes, they are faster and smaller, and accurate enough.

IIRC, on most systems real will only be slower as a result of memory I/O 
costs: the wider type means more bytes moved per value. For example, on x86 
the FPU computes everything at 80-bit precision internally anyway, so the 
arithmetic itself costs the same.
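
Something like this (untested sketch) shows the footprint difference; the 
exact real.sizeof you get depends on the platform ABI:

  import std.stdio;

  void main()
  {
      // On x86, real is the x87 80-bit format but gets padded in
      // memory (typically to 12 or 16 bytes), so an array of reals
      // moves more data through the cache than an array of doubles.
      // That memory traffic, not the arithmetic, is the usual cost.
      writefln("float.sizeof  = %s", float.sizeof);
      writefln("double.sizeof = %s", double.sizeof);
      writefln("real.sizeof   = %s", real.sizeof);
  }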

> The float and double types conform to the IEEE 754 standard. The real
> type does not.

I think you are in error here. IIRC, IEEE-754 defines "extended precision" 
formats that work like the basic types but with more bits; that is what 
80-bit reals are. If you force rounding to 64 bits after each op, I think 
things will come out exactly the same as on a 64-bit FPU.
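
A quick (untested) sketch of that rounding-per-op idea. Note that D does 
not strictly guarantee when intermediates get rounded, so this assumes the 
compiler actually stores d back to 64 bits on every iteration:

  import std.stdio;

  void main()
  {
      real   r = 0.0L;  // kept at the FPU's 80-bit precision
      double d = 0.0;   // each store rounds the result to 64 bits

      foreach (i; 0 .. 10_000_000)
      {
          r += 0.1L;
          d += 0.1L;  // same 80-bit addend, rounded on store
      }

      writefln("80-bit sum: %.20g", r);
      writefln("64-bit sum: %.20g", d);
  }

On x86 the two sums should drift apart in the low digits, because only the 
double version rounds after every addition.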

> and many applications (scientific computations, simulations, interval
> arithmetic) absolutely need IEEE 754 semantics (correct rounding, known
> error behaviour, and so on).

> additionally,
> real has varying precision and varying size across platforms,
> or is just not supported.

reals are /always/ supported if the platform supports FP, even if only with 
16-bit FP types.
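
One way to see what real actually maps to on a given platform is to query 
its compile-time properties (again an untested sketch):

  import std.stdio;

  void main()
  {
      // real.mant_dig / real.dig report whatever the platform gives:
      // on x86 expect 64 mantissa bits (the 80-bit format); elsewhere
      // real may simply alias double (53 bits).
      writefln("real: %s mantissa bits, ~%s decimal digits",
               real.mant_dig, real.dig);
  }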



