Integer overflow and underflow semantics?

Artur Skawina via Digitalmars-d digitalmars-d at puremagic.com
Tue Jul 22 14:22:20 PDT 2014


On 07/22/14 18:39, Iain Buclaw via Digitalmars-d wrote:
> On 22 July 2014 12:40, Artur Skawina via Digitalmars-d
> <digitalmars-d at puremagic.com> wrote:
>> On 07/22/14 08:15, Iain Buclaw via Digitalmars-d wrote:
>>> On 21 Jul 2014 22:10, "Artur Skawina via Digitalmars-d" <digitalmars-d at puremagic.com> wrote:
>>>> For D that is not possible -- if an expression is valid at run-time
>>>> then it should be valid at compile-time (and obviously yield the same
>>>> value).
>>>
>>> ...most of the time.
>>>
>>> CTFE is allowed to evaluate intermediates at arbitrary precision mid-flight when evaluating an expression.
>>
>> That will work for FP, where excess precision is allowed, but will not work
>> for integer arithmetic. Consider code which uses hashing and hash-folding
>> functions which rely on wrapping arithmetic. If you increase the precision
>> then those functions will yield different values. Now a hash value
>> calculated at CT is invalid at RT...
> 
> I can still imagine this occurring if cross-compiling
> from a (non-existent) platform that does integer operations at 128 bits
> to x86, which at runtime is 64-bit.

In D, integer widths are well defined; exposing the larger range
would not be possible, because code can observe the difference:

   static assert (100_000 ^^ 2 != 100_000L ^^ 2);

The int-typed left side wraps modulo 2^32 to 1_410_065_408, while the
long-typed right side is 10_000_000_000; if CTFE evaluated the int
expression at a wider precision, that assert would start failing.
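As an illustration of the hashing argument quoted above (a minimal
sketch; the function and constants are chosen here, not taken from this
thread): a fold that relies on 32-bit wrapping multiplication must yield
the same value whether it runs during CTFE or at run time:

   // FNV-1a-style fold; the multiplication wraps modulo 2^32,
   // and D requires CTFE to wrap exactly the same way.
   uint foldHash(const(char)[] s)
   {
       uint h = 2166136261u;      // offset basis
       foreach (c; s)
       {
           h ^= c;
           h *= 16777619u;        // wraps at 32 bits, at CT and at RT
       }
       return h;
   }

   enum uint ctHash = foldHash("digitalmars-d");     // forced CTFE

   void main()
   {
       assert(ctHash == foldHash("digitalmars-d"));  // must hold at RT too
   }

If CTFE were allowed to keep the intermediate products at a wider width,
ctHash would no longer match the run-time value.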

[Whether requiring specific integer widths was a good idea or not, 
 redefining them /now/ is obviously not a practical option.]
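[A related sketch, again only illustrative: the fixed widths can be
 checked directly, independent of the host the compiler runs on:

   static assert(int.sizeof == 4 && long.sizeof == 8);
]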


artur

