[D-runtime] Fw: [phobos] newCapacity function in array appending

Fawzi Mohamed fawzi at gmx.ch
Wed Jul 14 05:07:34 PDT 2010


On 13-Jul-10, at 22:59, Steve Schveighoffer wrote:

> Didn't know this list existed until just now.
>
> Moving the related discussion below to the runtime list
>
> -Steve
>
>
>
> ----- Forwarded Message ----
>> From: Steve Schveighoffer <schveiguy at yahoo.com>
>> To: Phobos <phobos at puremagic.com>
>> Sent: Tue, July 13, 2010 3:54:09 PM
>> Subject: [phobos] newCapacity function in array appending
>>
>> There are a couple of issues with the growth rate of arrays in the
>> array append function.
>>
>> First, the newCapacity function is only called when the array must
>> be reallocated.  I believe this to be an error: I think the
>> newCapacity function should always be used, even if pages can be
>> added.  This would use fewer cycles searching for pages to append.

I am not sure I follow: do you mean using the new function to
determine how many pages to really commit?
Well, indeed you would spare some cycles, but at the cost of using
more memory; I am not sure it is really a win (the use of pages
already committed is cheap).
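
For concreteness, here is a rough sketch of how I read the proposal
(everything below is hypothetical: stub names and a made-up growth
rule, not the actual druntime code):

import std.stdio;

// Made-up growth policy: grow to ~1.5x of the requested size.
// (Illustrative only; not the actual druntime newCapacity formula.)
size_t newCapacity(size_t requestedBytes)
{
    return requestedBytes + requestedBytes / 2;
}

// Stand-in for the GC extending the current block by whole pages;
// returns the capacity obtained, rounded up to a page multiple.
size_t extendInPlace(size_t wantedBytes)
{
    enum pageSize = 4096;
    return (wantedBytes + pageSize - 1) / pageSize * pageSize;
}

void main()
{
    size_t needed = 8193;   // append just crossed a page boundary

    // Current behaviour (as I understand the description): extend
    // just enough, so the next page-crossing append searches again.
    writefln("minimal extension:  %s bytes", extendInPlace(needed));

    // Proposed behaviour: consult newCapacity even when extending in
    // place, buying headroom for future appends in one step.
    writefln("generous extension: %s bytes",
             extendInPlace(newCapacity(needed)));
}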
>>
>> Second, the newCapacity function seems to use the size of the
>> element to determine a percentage growth.  I feel this is an error.
>> I think the growth factor should strictly be related to the amount
>> of memory being requested, which already takes into account the
>> size of the element.
>>
>> At present, this means that arrays of ints grow 4x as fast as
>> arrays of bytes.  And that's 4x in percentage terms, not 4x the
>> memory used.  The memory used actually grows 16x faster.
>>
>> Does everyone agree with this?
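
To make the 4x/16x arithmetic concrete, a quick back-of-the-envelope
in D (growthPercent is a made-up policy, purely to show the scaling,
not the actual newCapacity code):

import std.stdio;

void main()
{
    // Made-up policy where the growth percentage scales with element
    // size, as the quoted text describes.
    static double growthPercent(size_t elemSize)
    {
        return 10.0 * elemSize;   // byte: 10%, int: 40% -> 4x
    }

    enum length = 1000;   // same number of elements in both arrays

    double byteExtra = length * byte.sizeof * growthPercent(byte.sizeof) / 100;
    double intExtra  = length * int.sizeof  * growthPercent(int.sizeof)  / 100;

    writefln("byte[1000] slack: %s bytes", byteExtra);  // 100
    writefln("int[1000]  slack: %s bytes", intExtra);   // 1600
    // 4x the percentage on a 4x larger base: 16x the extra memory.
}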

Again I am not too sure; I flip-flopped a couple of times while
implementing http://github.com/fawzi/blip/blob/master/blip/util/Grow.d,
a version of which I had also used in Tango.

Basically, if you append to an array, do you want the growth to be
determined by the number of elements of the array, or by its size in
bytes?
Code will probably behave depending on the number of elements of the
array, not on its memory usage.
In the end it is just a factor, but if you have many small elements I
feel that it is "reasonable" to allocate a smaller amount of extra
space.
On the other hand, as you pointed out, if you think strictly from the
memory point of view, one can argue that the percentage of extra
space should depend only on the space used, not on how it is used
internally.

Finally, I kept the "per element" view, but that was done without any
real testing.
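
If it helps, here is a toy comparison of the two views (the function
names and percentages are invented for illustration; the actual
Grow.d curve is more involved and tapers off for large sizes):

import std.stdio;

// "Per element" view: the growth fraction scales with element size,
// so arrays of many small elements get less slack space.
size_t extraPerElement(size_t curBytes, size_t elemSize)
{
    return cast(size_t)(curBytes * 0.05 * elemSize);  // 5% per byte
}

// "Per byte" view: the growth fraction depends only on the memory
// footprint, however it is sliced into elements.
size_t extraPerByte(size_t curBytes)
{
    return curBytes / 5;   // flat 20%
}

void main()
{
    enum curBytes = 4000;  // same footprint for both arrays
    writefln("byte[]: per-element %s, per-byte %s bytes extra",
             extraPerElement(curBytes, byte.sizeof),
             extraPerByte(curBytes));
    writefln("int[]:  per-element %s, per-byte %s bytes extra",
             extraPerElement(curBytes, int.sizeof),
             extraPerByte(curBytes));
}

With the per-element rule the byte array gets only 200 bytes of slack
on a 4000-byte footprint, while the per-byte rule hands both arrays
the same 800 bytes.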
>>
>> The relevant bug for this is
>> http://d.puremagic.com/issues/show_bug.cgi?id=4412
>>
>> -Steve
>>
>>
>> _______________________________________________
>> phobos  mailing list
>> phobos at puremagic.com
>> http://lists.puremagic.com/mailman/listinfo/phobos
>>
>
> _______________________________________________
> D-runtime mailing list
> D-runtime at puremagic.com
> http://lists.puremagic.com/mailman/listinfo/d-runtime


