Signed integer overflow undefined behavior or not?

Don via Digitalmars-d digitalmars-d at puremagic.com
Fri Nov 13 01:09:31 PST 2015


On Friday, 13 November 2015 at 05:47:03 UTC, deadalnix wrote:
> Signed overflow are defined as well, as wraparound.

Can we please, please, please not have that as official policy 
without carefully thinking through the implications?

It is undefined behaviour in C and C++, so we are not constrained 
by backwards compatibility with existing code.

I have never seen a case where signed integer overflow 
happened that was not a bug. In my opinion, making it legal is 
an own goal, an unforced error.

Suppose we made it an error. We'd be in a much better position 
than C. We could easily add a check for integer overflow into 
CTFE. We could allow compilers and static analysis tools to 
implement runtime checks for integer overflow, as well.
Are we certain that we want to disallow this?
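Such a runtime check is straightforward if overflow is an error rather than defined wraparound. A minimal sketch in C (a hypothetical helper, not anything D currently ships): the trick is to test for overflow *before* performing the add, since in C the overflowing add itself is already undefined behaviour.

```c
#include <limits.h>

/* Hypothetical checked add: returns 1 and stores the sum if it is
 * representable, returns 0 if the add would overflow. The comparison
 * against INT_MAX - b / INT_MIN - b never overflows itself. */
int checked_add(int a, int b, int *result)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b))
        return 0;               /* would overflow: report failure */
    *result = a + b;
    return 1;                   /* in range: store the sum */
}
```

A compiler-inserted overflow trap could use the same test at every signed arithmetic operation.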

At the very least, we should change the terminology on that page. 
The word "overflow" should not be used when referring to both 
signed and unsigned types. On that page, it is describing two 
very different phenomena, and gives the impression that it was 
written by somebody who does not understand what they are talking 
about.
The usage of the word "wraps" is sloppy.

That page should state something like:
For any unsigned integral type T, all arithmetic is performed 
modulo (T.max + 1).
Thus, for example, uint.max + 1 == 0.
There is no reason to mention the highly misleading word 
"overflow".
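The modular rule can be demonstrated directly in C, where unsigned arithmetic is likewise defined modulo (UINT_MAX + 1); this is the counterpart of uint.max + 1 == 0 above:

```c
#include <limits.h>

/* Unsigned addition is performed modulo (UINT_MAX + 1) by definition;
 * no "overflow" occurs, the result is simply reduced. */
unsigned wrap_add_u(unsigned a, unsigned b)
{
    return a + b;   /* reduced modulo (UINT_MAX + 1) */
}
```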

For a signed integral type T, T.max + 1 is not representable in 
type T.
Then, we have a choice of either declaring it to be an error 
(as C does, by making it undefined behaviour); or stating that 
the low bits of the infinitely-precise result will be 
interpreted as a two's complement value. For example, T.max + 1 
will be negative.

(Note that unlike the unsigned case, there is no simple 
explanation of what happens).
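The two's-complement wording can be made precise. A sketch in C of a hypothetical wrapping add (fixed-width types, so "the low bits" are well defined): compute the infinitely-precise sum modulo 2^32 in unsigned arithmetic, which is always defined, then reinterpret the result as two's complement without relying on implementation-defined conversions.

```c
#include <stdint.h>

/* Hypothetical wrapping add: the low 32 bits of the infinitely-precise
 * sum, interpreted as a two's complement value. */
int32_t wrap_add(int32_t a, int32_t b)
{
    uint32_t r = (uint32_t)a + (uint32_t)b;  /* sum modulo 2^32 */
    if (r <= (uint32_t)INT32_MAX)
        return (int32_t)r;                   /* already representable */
    /* high bit set: the value is r - 2^32, computed without overflow */
    return (int32_t)(r - 0x80000000u) + INT32_MIN;
}
```

Note how much machinery this takes to state correctly, compared with the one-line modular rule for unsigned types.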

Please let's be precise about this.



