Overflow-safe use of unsigned integral types

Jonathan M Davis jmdavisProg at gmx.com
Sun Nov 10 13:48:40 PST 2013


On Sunday, November 10, 2013 12:10:23 Joseph Rushton Wakeling wrote:
> One of the challenges when working with unsigned types is that automatic
> wraparound and implicit conversion can combine to unpleasant effect.
> 
> Consider e.g.:
> 
>      void foo(ulong n)
>      {
>          writeln(n);
>      }
> 
>      void main()
>      {
>          foo(-3);
>      }
> 
> ... which will output: 18446744073709551613 (or, ulong.max + 1 - 3).
> 
> Is there a recommended way to handle this kind of potential wraparound where
> it is absolutely unacceptable?  I've considered the following trick:
> 
>      void bar(T : ulong)(T n)
>      {
>          static if (isSigned!T)
>          {
>              enforce(n >= 0);    // or assert, depending on your priorities
>          }
>          writeln(n);
>      }
> 
> ... but it would be nice if there was some kind of syntax sugar in place
> that would avoid such a verbose solution.

If you wanted to do that, you could simply do

void bar(T)(T n)
    if(isUnsigned!T)
{
    ...
}

and make it so that only unsigned types (and therefore only positive values) 
are accepted without casting.
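To make that concrete, here is a minimal complete program built around that constraint (the writeln body is carried over from the original example; everything else is just scaffolding to show what compiles and what doesn't):

```d
import std.stdio : writeln;
import std.traits : isUnsigned;

// Only instantiates for unsigned integral types; a signed argument
// simply fails to match the template, at compile time.
void bar(T)(T n)
    if (isUnsigned!T)
{
    writeln(n);
}

void main()
{
    bar(3u);        // fine: uint is unsigned
    bar(ulong(7));  // fine: ulong is unsigned
    // bar(-3);     // compile error: -3 is an int, and isUnsigned!int is false
}
```

Note that this pushes the problem to the call site: a caller with a signed value must cast explicitly, which at least makes the conversion visible in the code.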

> I know that there have been requests for runtime overflow detection that is
> on by default (bearophile?), but it could be good to have some simple way
> to indicate "really, no overflow" even where by default it's not provided.
> 
> (Motivation: suppose that you have some kind of function that takes a size_t
> and uses that to determine an allocation.  If a negative number gets passed
> by accident, the function will thus try to allocate 2^64 - n elements, and
> your computer will have a very happy time...:-)

Honestly, I wouldn't worry about it. It's a good idea to avoid unsigned types 
when you don't need them, precisely to avoid problems like this (though size_t 
is one case where you have to deal with an unsigned type), but in my 
experience, overflow problems like this are rare, and testing usually finds 
them in the rare cases where they do happen. I think that the only reason to 
really be concerned about it is if you feel that you need to be paranoid 
about it for some reason.

But if you really need to protect against it, I think that the only way to 
do it cleanly is to create your own int struct type which protects you - 
similar to a NonNullable struct, but for protecting against integer overflow. 
And I really think that the situation is very similar to that of nullable 
references/pointers. You _can_ run into problems with them, but it's rarely a 
problem unless you do a lot with null and aren't careful about it, and if 
you're paranoid about it, you use NonNullable. It's just that in this case, 
instead of worrying about null, you're worrying about integer overflow. 
Ultimately, I think that it's about the same thing and that the solutions for 
the two are about the same.
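As a sketch of what such a struct might look like (the name `Checked` and its API are purely illustrative, not an existing library type; it throws via enforce where a bare integer would silently wrap):

```d
import std.exception : enforce;
import std.traits : isSigned, isUnsigned;

/// Illustrative overflow-checked wrapper around an unsigned integral type,
/// in the spirit of NonNullable but guarding against wraparound.
struct Checked(T)
    if (isUnsigned!T)
{
    T value;

    this(U)(U n)
    {
        static if (isSigned!U)
            enforce(n >= 0, "negative value assigned to unsigned type");
        value = cast(T) n;
        enforce(value == n, "value out of range"); // round-trip check catches truncation
    }

    // Addition that throws instead of silently wrapping around.
    Checked opBinary(string op : "+")(Checked rhs)
    {
        immutable T sum = cast(T)(value + rhs.value);
        enforce(sum >= value, "overflow in addition");
        return Checked(sum);
    }

    // Subtraction that throws instead of wrapping past zero.
    Checked opBinary(string op : "-")(Checked rhs)
    {
        enforce(value >= rhs.value, "underflow in subtraction");
        return Checked(cast(T)(value - rhs.value));
    }
}
```

With something like that, `Checked!size_t(-3)` throws before an absurd allocation size can escape, and `Checked!ulong(ulong.max) + Checked!ulong(1)` throws rather than wrapping to 0. Whether the checks cost too much is then a per-call-site decision, just as with NonNullable.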

- Jonathan M Davis


More information about the Digitalmars-d-learn mailing list