dynamic array capacity

Steven Schveighoffer schveiguy at yahoo.com
Wed Dec 29 10:48:48 PST 2010


On Wed, 29 Dec 2010 13:14:29 -0500, spir <denis.spir at gmail.com> wrote:

> I've done some timings using reserve and Appender.  They don't seem to  
> help in my use case (decomposition of a string [actually a sequence of  
> code points] according to NFD). (see sample code below)
> * use reserve (to source string's length) with builtin append (operator  
> '~=') --> 20% slower
> * use Appender w/o reserve --> 3 times slower
> * use Appender + its own reserve --> 1.5 times slower (i.e. divide the  
> above time by 2)

What is the baseline for this?  I.e. what is it 20% slower than? FWIW,  
Appender should be much faster than builtin append, even without reserve.

However, Appender has a recently fixed bug (not fixed in 2.051) where  
appending *arrays* of elements is very slow.  I see you are doing that  
in a couple of spots.
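
As a workaround, something along these lines (just a sketch with a  
made-up input string; adapt it to your decomposition loop) puts one  
code point at a time, which avoids the slow array-append path:

import std.array : appender;
import std.stdio : writeln;

void main()
{
    dstring source = "example input"d;   // stand-in for your code point sequence

    auto app = appender!(dchar[])();
    app.reserve(source.length);          // pre-size to roughly the source length

    foreach (dchar c; source)
        app.put(c);                      // one element at a time; appending whole
                                         // arrays hits the slow path in 2.051

    writeln(app.data.length);
}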

> I'm surprised that reserve does not speed up builtin appending, since it  
> should avoid numerous reallocations.  How should I interpret that?  
> I'm even more surprised by Appender's results in this use case, after  
> having read about its performance several times on the list.  Strange.  
> Can it be due to the fact that I only append sub-sequences? (the  
> decomposition '*p' below is also a mini-array)

It should speed up appending.  If it doesn't, then it's either a bug or  
pilot error.  As I said before, Appender in 2.051 and earlier has a bug  
where appending an array is very slow.

But builtin appending should be faster if you reserve.
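
For instance, a quick sketch like this (the element count is arbitrary)  
reserves up front, and the loop should then append without reallocating:

import std.stdio : writeln;

void main()
{
    enum n = 1_000_000;

    int[] a;
    a.reserve(n);                  // ask the runtime for room for n elements
    writeln("capacity after reserve: ", a.capacity);

    foreach (i; 0 .. n)
        a ~= i;                    // should stay within the reserved block

    writeln(a.length, " elements, capacity ", a.capacity);
}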

Simple tests I have run confirm this.  Recent improvements in how an  
array grows during appending mitigate the cost of not reserving quite a  
bit, but reserving still results in less memory being consumed, and it  
always runs faster.
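
If you want to watch the growth scheme in action, a small sketch like  
this (arbitrary element count) prints the capacity each time the  
runtime grows the block during plain appending:

import std.stdio : writeln;

void main()
{
    int[] a;
    size_t lastCap;

    foreach (i; 0 .. 100_000)
    {
        a ~= i;
        if (a.capacity != lastCap)   // capacity changed, so the block grew
        {
            writeln("length ", a.length, " -> capacity ", a.capacity);
            lastCap = a.capacity;
        }
    }
}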

-Steve

