Maximum array size

Ben Phillips Ben_member at pathlink.com
Wed Apr 26 04:25:39 PDT 2006


In article <e2nj5p$2nj4$1 at digitaldaemon.com>, Ivan Kazmenko says...
>
>Hi again!
>
>me:
>>> What is the 16M limit on the array length for?
>>> I wonder what the 'reason' behind that limit is.
>
>Lionello Lunesu:
>>I've put the same limit in my own array template I use for C++ projects. 
>>  My reasoning was that you shouldn't be using a dynamic array for such 
>>huge memory blocks because of the potentially costly resizes. The limit 
>>helped me catch some bugs where arrays were being used and caused a 
>>significant performance bottleneck. I rewrote those parts with a simple 
>>malloc/free or VirtualAlloc. In D there might be the added complexity 
>>of the garbage collector. Remember that it keeps scanning the allocated 
>>memory for pointers. For these huge blocks it's better to use some other 
>>memory allocation scheme.
>
>I definitely won't want to resize such a huge array, at least not too often. My
>array is static. For resizing, I usually use the technique of doubling the
>capacity whenever the requested index exceeds the current limit.
>
>If my too-many-M array is scanned for pointers, which I do not store there, do I
>have a way of forbidding it?
>
>Dave:
>>Why the need for such a large static array? Can't a dynamic array be used in
>>your code instead?
>
>I could use a dynamic array, yes, but I don't see any benefit since its size is
>fixed. Please explain why it would be better.

Actually, static arrays are fixed in size and dynamic arrays are not.

>
>The current problem which requires such a big array is related to prime numbers
>- it's a program for a programming contest at http://recmath.org/contest/ (you
>know, various contests are the best way to learn something new, if you have
>enough free time, of course). I use a sort of Eratosthenes sieve for prime
>numbers, and the larger the sieve, the faster the program runs (for large
>numbers, it uses the Miller-Rabin primality test, which is significantly slower
>than looking up a bit in an array, even with paging, caching, and other modern
>RAM techniques taken into account). I work with a teammate; we share ideas but
>write different implementations. He uses Delphi, and I decided to try D, but
>currently my implementation sucks because I have no 'legal' way to use an array
>that big.
>
>Current problem aside, there are many ways to use my current amount of RAM,
>which is 768M, indexing it with only a couple of arrays. It's not the first
>time I've faced the restriction, even while working on the current problem!
>
>Ivan Kazmenko.
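As an aside on the sieve Ivan describes: packing one flag per bit means even a modest array covers eight times the range of a byte array, which softens a size limit considerably. A minimal C++ sketch of a bit-packed Eratosthenes sieve (illustrative only, not Ivan's actual code; names are made up for the example):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sieve of Eratosthenes packed one bit per number: 16M bytes of
// storage then covers 128M numbers instead of 16M.
std::vector<std::uint8_t> sieve_bits(std::size_t n) {
    // composite[k>>3] bit (k&7) is set iff k is composite.
    std::vector<std::uint8_t> composite((n >> 3) + 1, 0);
    for (std::size_t i = 2; i * i < n; ++i) {
        if (composite[i >> 3] & (1u << (i & 7))) continue;
        for (std::size_t j = i * i; j < n; j += i)
            composite[j >> 3] |= (1u << (j & 7));
    }
    return composite;
}

bool is_prime(const std::vector<std::uint8_t>& bits, std::size_t k) {
    return k >= 2 && !(bits[k >> 3] & (1u << (k & 7)));
}
```

Because the sieve holds no pointers, a block like this is also exactly the kind of memory one would want a garbage collector to skip when scanning.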

An array is the wrong way to go if you need that much memory. Anyway, the
Eratosthenes sieve is iterative, so you could use a linked list instead. Since
the extra 768M of links (*2 if doubly linked) would increase the memory
requirement by a large amount, you could consider using a custom, hybrid
collection, maybe a linked list of arrays, each with a size of 50000.
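A hybrid collection along those lines might look like the following C++ sketch (illustrative only; the class name and the 50000 chunk size are just taken from the suggestion above, and each chunk is allocated separately so no single allocation is huge):

```cpp
#include <cstddef>
#include <iterator>
#include <list>
#include <vector>

// A hybrid collection: a linked list of fixed-size array chunks.
// Indexing walks the chunk list, but no single allocation exceeds
// CHUNK bytes, and growing never copies existing data the way a
// dynamic-array resize can.
class ChunkedBytes {
    static const std::size_t CHUNK = 50000;
    std::list<std::vector<unsigned char> > chunks_;
    std::size_t size_;
public:
    ChunkedBytes() : size_(0) {}
    std::size_t size() const { return size_; }
    void resize(std::size_t n) {
        // Append zero-filled chunks until n elements are covered.
        while (chunks_.size() * CHUNK < n)
            chunks_.push_back(std::vector<unsigned char>(CHUNK, 0));
        size_ = n;
    }
    unsigned char& operator[](std::size_t i) {
        std::list<std::vector<unsigned char> >::iterator it = chunks_.begin();
        std::advance(it, i / CHUNK);      // walk to the right chunk
        return (*it)[i % CHUNK];          // index within the chunk
    }
};
```

For strictly sequential passes like a sieve, the O(chunks) walk in operator[] could be avoided by iterating chunk by chunk instead of element by element.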

More information about the Digitalmars-d mailing list