Integer overflow and underflow semantics?

Artur Skawina via Digitalmars-d digitalmars-d at puremagic.com
Tue Jul 22 04:39:55 PDT 2014


On 07/22/14 05:12, via Digitalmars-d wrote:
> On Monday, 21 July 2014 at 21:10:43 UTC, Artur Skawina via Digitalmars-d wrote:
> 
>> For D that is not possible -- if an expression is valid at run-time
>> then it should be valid at compile-time (and obviously yield the same
>> value). Making this aspect of CT evaluation special would make CTFE
>> much less useful and add complexity to the language for very little gain.
> 
> CT and runtime give different results for floats.

Both CT and RT evaluation must yield correct results, where "correct"
means "as specified". If RT FP is allowed to use extra precision (or
is otherwise loosely specified) then this also applies to CT FP.
But integer overflow _is_ defined in D (unlike in, eg, C), so CT has to
obey the exact same rules as RT. Would you really like to use a language
in which 'enum x = (a+b)/2;' and 'immutable x = (a+b)/2;' result in
different values?... And in which functions containing such 'a+b'
expressions, which rely on wrapping arithmetic, are not usable at CT?...
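
A minimal sketch of that point (the function name and values are just
illustrative; 'a+b' is picked so that it wraps):

   int midpoint(int a, int b) { return (a + b) / 2; } // relies on wrapping

   enum ct = midpoint(int.max, 2);          // CTFE: a+b wraps

   unittest
   {
       immutable rt = midpoint(int.max, 2); // run time: a+b wraps identically
       assert(ct == rt);                    // same (negative) value
   }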

> Overflow in the end result without explicit truncation should be considered a bug. Bugs can yield different results.
 
Integer overflow is defined in D. It's not a bug. It can be relied upon.
(Well, now it can, after Iain recently fixed GDC ;) )
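
Eg (a small sketch; the wrap-around below is the defined two's-complement
behavior being relied upon):

   static assert(int.max + 1 == int.min); // CT: defined wrap-around

   unittest
   {
       int x = int.max;
       ++x;
       assert(x == int.min);               // RT: same result
   }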

> Overflow checks on add/sub expressions mess up reordering optimizations. You only care about overflows in the end result.

This would be an argument _against_ introducing the checks.
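
To illustrate (a sketch with hand-picked values): with wrapping arithmetic
the two groupings below are always equal, even though the first one
overflows in an intermediate step; a checked default would have to trap
that intermediate overflow, making the reassociation observable.

   unittest
   {
       int a = int.max, b = 1, c = -1;
       // (a+b) wraps to int.min, then wraps back; b+c is simply 0.
       assert((a + b) + c == a + (b + c));
   }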

> Exact, truncating, masking/wrapping or saturating math results should be explicit. 

That's how it is in D - the arguments are only about the /default/, and in
this case about /using a different default at CT and RT/. Using a non-wrapping
default would be a bad idea (perf implications, both direct and indirect -
the overflow checks would make certain optimizations invalid), and using
different evaluation modes for CT and RT would be, well, insane.
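
Roughly how the explicit, opt-in route looks (a sketch; it assumes
core.checkedint's 'adds' is available in the compiler being used - it may
not be in older releases):

   import core.checkedint : adds;

   unittest
   {
       bool overflow;
       const sum = adds(int.max, 1, overflow); // opt-in checked add
       assert(overflow);                       // overflow reported explicitly
   }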

> Ideally all ctfe would be done as real intervals with rational bounds, then checked against the specified precision of the end result (or numerically solving the whole expression to the specified precision).

Not possible (for integers), unless you'd be ok with getting different
results at CT and RT.

>> Trying to handle just a subset of the problem would make things even
>> worse -- /some/ code would not be CTFE-able and /some/ overflows wouldn't
>> be caught.
>>
>>    int f(int a, int b) { return a*b; }
>>    enum v = f(100_000, 100_000);
> 
> NUMBER f(NUMBER a, NUMBER b) ...

Not sure what you mean here. 'f' is a perfectly fine existing
function, which is used at RT. It needs to be usable at CT as is.
The power of D's CTFE comes from being able to execute normal D
code and not having to use a different dialect.
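
Ie the quoted 'f' works unchanged in both worlds (a sketch; the wrapped
value is what two's-complement multiplication gives):

   int f(int a, int b) { return a*b; }

   enum v = f(100_000, 100_000);         // CTFE: wraps to 1_410_065_408

   unittest
   {
       assert(v == f(100_000, 100_000)); // run-time call wraps identically
   }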

artur
