log(n) amortized growth for containers
Marco Nembrini via Digitalmars-d
digitalmars-d at puremagic.com
Sat Dec 12 14:21:20 PST 2015
On Saturday, 12 December 2015 at 02:12:13 UTC, Andrei Alexandrescu wrote:
> On 12/11/15 9:02 PM, Timon Gehr wrote:
>> On 12/11/2015 04:55 AM, Andrei Alexandrescu wrote:
>>>
>>> Here's a quick thought on growth schedule for arrays. The classic
>>> approach is to grow by a geometric schedule. For example: 1, 2, 4,
>>> 8, 16, etc. That way, n memory moves are necessary for achieving a
>>> capacity equal to n, so on average we spend a constant amount of
>>> time per appended element.
>>>
>>> The main disadvantage with this growth schedule is there are on
>>> average n/2 slack memory slots. Also, a growth factor of 2 is
>>> unfriendly to the memory allocator (long discussion), so a smaller
>>> one (such as the golden cut or less) would be better.
>>>
>>> Anyhow, I was thinking why O(1) amortized growth and not O(log n)?
>>> The latter is considered scalable all right, and the hope is to get
>>> a corresponding reduction in the slack memory space. ...
>>
>> E.g. the following sequence of capacities leads to amortized
>> Θ(log n).
>>
>> c(0) = 0,  c(k+1) = c(k) + c(k)/⌊log₂(c(k)+2)⌋ + 1.
>
> How do you prove that? -- Andrei
Forget about going to continuous extensions, and instead substitute
the derivative s'(n) with the unit-step difference (s(n+1) - s(n)) / 1.
Each reallocation moves all s(n) elements currently stored and buys
s'(n) new slots, so s(n) / s'(n) is the amortized number of moves per
appended element. Setting s(n) / s'(n) = log(n) and solving for s(n+1)
gives you a recursion for s:

s(n+1) = s(n) + s(n) / log(n).

Then you decide what you want for s(0) and fix small things like
log(0) not being defined or s(n) / log(n) not always being a whole
number (hence, I think, the ⌊·⌋ and the trailing +1 in Timon's
version).
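
If you'd rather check the numbers than the calculus, here's a quick
throwaway D program (my own code, names and all; nothing from
druntime) that iterates Timon's recursion and compares the
accumulated moves per slot against log₂ of the capacity:

    import std.math : log2;
    import std.stdio : writefln;

    void main()
    {
        ulong c = 0;      // capacity c(k), starting from c(0) = 0
        ulong moves = 0;  // total elements moved by all reallocations so far
        foreach (k; 1 .. 1001)
        {
            moves += c;   // growing past c(k-1) moves all c(k-1) live elements
            // Timon's recursion: c(k) = c(k-1) + c(k-1)/⌊log₂(c(k-1)+2)⌋ + 1
            c += c / cast(ulong) log2(cast(double) c + 2) + 1;
            if (k % 100 == 0)
                writefln("k=%4s  c(k)=%14s  moves/c(k)=%6.1f  log2(c(k))=%6.1f",
                         k, c, cast(double) moves / c, log2(cast(double) c));
        }
    }

The last two columns track each other up to an additive constant,
which is what amortized Θ(log n) should look like.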
You'll notice that Timon has log(c(k)) and not log(k) in his
expression, which I think comes from some confusion in your original
post between the number of elements n (which is c(k)?) and the number
of times you have to move elements (the k used by Timon).
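
Back-of-envelope, with the same hand-waving as above: treating k as
continuous, dc/dk = c / log₂(c) separates into (ln c) d(ln c) = (ln 2) dk,
which integrates to log₂(c(k)) ≈ sqrt(2k / ln 2), i.e. c(k) grows like
2^Θ(√k). So log(c(k)) and log(k) really are very different functions,
and it matters which quantity the log is taken of.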