Why use float and double instead of real?

Jarrett Billingsley jarrett.billingsley at gmail.com
Tue Jun 23 07:57:48 PDT 2009


On Tue, Jun 23, 2009 at 8:44 AM, Lars T. Kyllingstad
<public at kyllingen.nospamnet> wrote:
> Is there ever any reason to use float or double in calculations? I mean,
> when does one *not* want maximum precision? Will code using float or double
> run faster than code using real?

As Witold mentioned, float and double are the only floating-point
types that SSE (and similar SIMD instruction sets on other
architectures) can operate on.  Furthermore, most 3D graphics
hardware works only in single- or even half-precision (16-bit)
floats, so there is no point in using 64- or 80-bit floats in those
cases.
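
A rough way to see the difference is to time the same reduction over
a double array and a real array: on x86 the double loop can be
compiled to SSE instructions, while the real loop has to go through
the legacy x87 unit.  A minimal timing sketch, not a rigorous
benchmark -- it assumes a recent compiler and Phobos
(std.datetime.stopwatch), and the numbers will vary by machine:

    // Compare summing double[] vs. real[].  On x86, the double
    // loop can be vectorized with SSE; the real loop cannot.
    import std.datetime.stopwatch : benchmark;
    import std.stdio;

    void main()
    {
        enum N = 10_000_000;
        auto d = new double[N];
        auto r = new real[N];
        d[] = 1.5;
        r[] = 1.5;

        double dsum = 0;
        real   rsum = 0;
        auto times = benchmark!(
            { foreach (x; d) dsum += x; },
            { foreach (x; r) rsum += x; }
        )(3);
        writeln("double: ", times[0]);
        writeln("real:   ", times[1]);
    }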

Also keep in mind that 'real' is simply defined as the largest
floating-point type the hardware supports.  On x86, that's the
80-bit x87 extended-precision format, but on most other
architectures it's just an alias for double anyway.
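
You can check what your platform gives you by printing the built-in
properties of each type.  A minimal sketch (the exact sizes printed
are platform-dependent):

    // Print size and precision of each floating-point type.
    // On x86, real.sizeof is typically 12 or 16 because the
    // 80-bit format is padded; elsewhere it usually matches
    // double.
    import std.stdio;

    void main()
    {
        writefln("float:  %s bytes, %s decimal digits",
                 float.sizeof, float.dig);
        writefln("double: %s bytes, %s decimal digits",
                 double.sizeof, double.dig);
        writefln("real:   %s bytes, %s decimal digits",
                 real.sizeof, real.dig);
    }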

