Why does array appending add a page size instead of doubling?
Ali Çehreli
acehreli at yahoo.com
Sun Feb 3 07:36:45 PST 2013
On 02/03/2013 03:57 AM, Steven Schveighoffer wrote:
> On Sat, 02 Feb 2013 23:37:10 -0500, Ali Çehreli <acehreli at yahoo.com> wrote:
>
>> Rather, a better growth factor is 150%. It has been shown that 150%
>> growth factor works much better with memory allocators.
>
> In fact, D's appender uses something more than doubling. I did not write
> that part (though I did massage it for optimizations), so I don't know
> the rationale behind it or how it is designed to work, but the code is
> here:
>
>
> https://github.com/D-Programming-Language/druntime/blob/master/src/rt/lifetime.d#L1651
>
>
> -Steve
If the comments are current, the implementer is Dave Fladebo. The
algorithm treats arrays differently depending on how large they are:
/*
* Better version by Dave Fladebo:
* This uses an inverse logarithmic algorithm to pre-allocate a bit more
* space for larger arrays.
* - Arrays smaller than PAGESIZE bytes are left as-is, so for the most
* common cases, memory allocation is 1 to 1. The small overhead added
* doesn't affect small array perf. (it's virtually the same as
* current).
* - Larger arrays have some space pre-allocated.
* - As the arrays grow, the relative pre-allocated space shrinks.
* - The logarithmic algorithm allocates relatively more space for
* mid-size arrays, making it very fast for medium arrays (for
* mid-to-large arrays, this turns out to be quite a bit faster than the
* equivalent realloc() code in C, on Linux at least. Small arrays are
* just as fast as GCC).
* - Perhaps most importantly, overall memory usage and stress on the GC
* is decreased significantly for demanding environments.
*/
The comments also include some test results and note where the point of
diminishing returns lies.
Ali
More information about the Digitalmars-d-learn mailing list