Dynamic arrays allocation size

Steven Schveighoffer schveiguy at yahoo.com
Tue Mar 26 11:32:13 PDT 2013


On Tue, 26 Mar 2013 14:20:30 -0400, bearophile <bearophileHUGS at lycos.com>  
wrote:

> Steven Schveighoffer:
>
>> If we treated it as an error, then it would be very costly to
>> implement: every operation would have to check for overflow.
>
> I have used similar tests and they are not very costly, not
> significantly more costly than array bounds tests.

Array bounds tests are removed in release builds.  And a failed array
bounds test unequivocally indicates an error.

In many cases, overflowing integers are not a problem, are easily proven
not to occur, or are outright expected.  Such designs would have to fight
the compiler to get efficient code if it insisted on checking every
operation for overflow and possibly throwing errors.
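
Hash functions are a classic case of expected overflow.  Here is a
minimal sketch of 32-bit FNV-1a (the constants are the standard FNV
offset basis and prime), where the multiply wraps by design, and D's
defined two's-complement wrapping makes that legal:

uint fnv1a(const(ubyte)[] data)
{
    uint h = 2_166_136_261;      // FNV offset basis
    foreach (b; data)
    {
        h ^= b;
        h *= 16_777_619;         // FNV prime; wraps on purpose
    }
    return h;
}

A mandatory overflow check would fire on nearly every iteration of that
loop, even though the wraparound is exactly what the algorithm wants.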

>
> In the meantime, Clang has introduced similar run-time tests for C/C++
> code. So C/C++ are now better (more modern, safer) than the D
> language/official compiler in this regard.
>
> (And Issue 4835 is about compile-time constants. CTFE is already plenty
> slow, mostly because of memory allocations. Detecting overflow in
> constants is not going to significantly slow down compilation, and it
> has no effect on the runtime. Even GCC 4.3.4 performs such compile-time
> tests.)

If CTFE behaved differently from real code, that would be a problem.
Again, you should be able to construct the needed checked types as a
struct, usable in both CTFE and real code.
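
For instance, something like this hypothetical CheckedInt (just a
sketch, not an actual Phobos type) runs the same overflow check during
CTFE and at run time:

struct CheckedInt
{
    int value;

    CheckedInt opBinary(string op : "+")(CheckedInt rhs) const
    {
        int sum = value + rhs.value;
        // Two's-complement overflow: both operands share a sign
        // that the result does not.
        if ((value < 0) == (rhs.value < 0) && (sum < 0) != (value < 0))
            assert(0, "integer overflow");
        return CheckedInt(sum);
    }
}

enum three = CheckedInt(1) + CheckedInt(2);  // evaluated in CTFE
static assert(three.value == 3);

Because CTFE executes ordinary D code, the assert fires at compile time
for constants and at run time everywhere else, with no divergence
between the two.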

>
>> The CPU does not assist in this.
>
> x86 CPUs have overflow and carry flags that help.

What I mean is that the check is not free the way null pointer checks
are free: a null dereference traps in hardware via page protection, with
no extra instructions emitted.
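
Even with the flags, you pay an instruction or two plus a branch on
every single operation to act on them.  A sketch in DMD-style x86 inline
asm (illustrative only):

bool addOverflows(int a, int b)
{
    bool overflowed;
    asm
    {
        mov EAX, a;
        add EAX, b;         // sets the overflow flag (OF) on overflow
        seto AL;            // AL = 1 if OF is set
        mov overflowed, AL;
    }
    return overflowed;
}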

Code that is specifically designed to be very fast, and is correctly
designed never to overflow, would be needlessly penalized.

The simple for loop:

for (int i = 0; i < 10; ++i)

would now have to uselessly check i for overflow on every iteration.
This adds up quickly.
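
Conceptually, the increment would have to become something like this (a
source-level sketch; real codegen would test the overflow flag instead,
but the extra branch is there either way):

for (int i = 0; i < 10; )
{
    // ... loop body ...
    int next = i + 1;
    if (next < i)                     // i + 1 wrapped past int.max
        assert(0, "integer overflow");
    i = next;
}

The check can never fire here (i never exceeds 9), yet it would run on
every iteration anyway.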

-Steve

