Size of the real type
Don Clugston
dac at nospam.com.au
Fri Mar 10 00:19:15 PST 2006
Walter Bright wrote:
> "Jarrett Billingsley" <kb3ctd2 at yahoo.com> wrote in message
> news:dupgi5$g9f$2 at digitaldaemon.com...
>> Well if the only difference is in the alignment, why isn't just the
>> real.alignof field affected? An x86-32 real is 80 bits, period. Or does
>> it have to do with, say, C function name mangling? So a C function that
>> takes one real in Windows would be _Name@80 but in Linux it'd be _Name@96
>> ?
>
> It's 96 bits on linux because gcc on linux pretends that 80 bit reals are
> really 96 bits long. What the alignment is, is a separate question.
> Name mangling does not drive this; the "Windows" calling convention does
> produce different names, as you point out, but that doesn't matter here.
>
> 96 bit convention permeates linux, and since D must be C ABI compatible with
> the host system's default C compiler, 96 bits it is on linux.
>
> If you're looking for mantissa significant bits, etc., use the various
> .properties of float types.
The 128 bit convention makes some kind of sense -- it means an 80-bit
real is binary compatible with the proposed IEEE quad type (it just sets
the last few mantissa bits to zero).
But the 96 bit case makes no sense to me at all.
pragma's DDL lets you (to some extent) mix Linux and Windows .objs.
Eventually, we may need some way to deal with the different padding.
More information about the Digitalmars-d mailing list