128 bit signed and unsigned integer types
Andrei Alexandrescu
SeeWebsiteForEmail at erdani.org
Sun Dec 28 09:23:45 PST 2008
Daniel Keep wrote:
>
>
> bearophile wrote:
>> ...
>>
>> A possible use is runtime testing for overflow: you can perform
>> operations on 64-bit integers with 128-bit precision, and then check
>> whether the result still fits in 64 bits. If not, you can raise an
>> overflow exception (there are probably other ways to test for
>> overflow, but this one seems simple to implement). Likewise, you can
>> use 64 bits to implement safe operations on 32-, 16-, and 8-bit
>> numbers on 64-bit CPUs.
>>
>> Bye,
>> bearophile
>
> It's always annoyed me that the CPU goes to all the trouble of doing
> multiplies in 64-bit, keeping track of overflows, etc., and yet there's
> no way in any language I've ever seen (aside from assembler) to get that
> information.
>
> Personally, rather than working around the problem with 128-bit types,
> I'd prefer to see something (roughly) like this implemented:
>
> bool overflow;
> ubyte high;
>
> ubyte a = 128;
>
> // overflow == false
> pragma(OverflowFlag, overflow) ubyte c = a + a;
> // overflow == true
>
> // high == 0
> pragma(ResultHigh, high) ubyte d = a * 2;
> // high == 1, d == 0
>
> As for 128-bit types themselves, I'm sure *someone* would find a use for
> them, and they'd be their favourite feature. Personally, I prefer
> arbitrary-precision once you start getting that big, but there you go.
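A minimal sketch of the widening check bearophile describes above,
using 64-bit precision for 32-bit operations (the helper name
checkedAdd is mine, not from the thread):

import std.stdio;

// Add two 32-bit ints at 64-bit precision, then verify the exact
// result still fits in 32 bits; raise an exception otherwise.
int checkedAdd(int x, int y)
{
    long wide = cast(long) x + y;   // exact: the 64-bit sum of two ints cannot overflow
    if (wide < int.min || wide > int.max)
        throw new Exception("overflow in 32-bit addition");
    return cast(int) wide;
}

void main()
{
    writeln(checkedAdd(1_000_000_000, 1_000_000_000)); // 2000000000, fits
    writeln(checkedAdd(int.max, 1));                   // throws
}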
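And a sketch of the information Daniel's hypothetical pragmas would
expose, recovered by hand through the same widening trick (the values
mirror his ubyte example):

import std.stdio;

void main()
{
    ubyte a = 128;

    // D promotes ubyte operands to int, so the full product is
    // computed anyway; we only have to split it ourselves.
    int wide = a * 2;                        // 256

    ubyte d       = cast(ubyte) wide;        // low byte:  0
    ubyte high    = cast(ubyte) (wide >> 8); // high byte: 1
    bool overflow = wide > ubyte.max;        // true

    writefln("d = %s, high = %s, overflow = %s", d, high, overflow);
}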
If built-in 128-bit integers can't be made more efficient by moving
them into the core, then the question really is "do you need 128-bit
literals?", because that's all the built-in feature would bring over a
library.
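For concreteness, a hypothetical library type (the name UInt128 and
its layout are mine) showing what the literal question means in
practice; everything except the literal syntax is expressible today:

// A deliberately minimal library type; only the literal syntax
// is out of its reach.
struct UInt128
{
    ulong hi, lo;
}

// Library style: a 128-bit constant assembled from two 64-bit halves.
enum twoPow64 = UInt128(1, 0);   // 2^64

// Built-in style that a core type would enable (D reserves the
// cent/ucent keywords for this, but does not implement them):
// ucent twoPow64 = 18_446_744_073_709_551_616;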
Andrei