D array expansion and non-deterministic re-allocation
Steven Schveighoffer
schveiguy at yahoo.com
Mon Nov 23 05:16:54 PST 2009
On Sat, 21 Nov 2009 17:19:08 -0500, Bartosz Milewski
<bartosz-nospam at relisoft.com> wrote:
>> int[] a = [0];
>> auto b = a;
>> a ~= 1;
>> b ~= 2;
>> What is a[1]?
>>
>> Is this considered "stomping", requiring a re-allocation?
>
> Can this question be answered without recourse to the implementation,
> and the MRU cache in particular?
>
> I thought about an abstract definition of "stomping". Something like
> this: the expanded part of the array is never implicitly shared with any
> other array. But I'm not sure about this case:
>
> int[] a = [1, 2, 3];
> int[] b = a[0..1];
> b ~= 4;
> What is a[1]?
>
> By my definition, this would be considered stomping, but I couldn't find
> this example in TDPL.
I believe that's because Andrei is hoping this problem will be fixed
before TDPL comes out...
>
> I'm also not sure what to make of this statement: "client code is
> guaranteed to benefit of good average performance over a large number of
> appends to the same array". Since "good" is open to interpretation, such
> a guarantee is meaningless. If it is meant as "amortized O(1)", it's
> impossible to fulfill, unless the MRU cache is infinite and a lookup is
> O(1).
If a reasonably small size limit is put on the MRU cache (defined by
Andrei as 8), then an MRU cache lookup is O(1). You still need a lock, but
that's no different from today's requirement, and it certainly wouldn't be
any worse than re-allocating on every append. I agree there needs to be a
better solution if you want the highest performance, but the simplest and
most convenient solution should also perform reasonably well, and it
shouldn't corrupt data.
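To illustrate why the lookup is constant-time, here is a minimal sketch
of such a fixed-size cache (the names and layout are hypothetical, not
druntime's actual code):

struct MRUCache
{
    enum size_t N = 8;        // the size limit Andrei defined
    void*[N]  blocks;         // start address of each cached block
    size_t[N] capacities;     // allocated capacity of each block

    // Scans at most N entries, so the cost is bounded by a constant
    // no matter how many arrays the program appends to.  Returns the
    // cached capacity, or 0 on a miss.
    size_t lookup(void* p)
    {
        foreach (i; 0 .. N)
            if (blocks[i] is p)
                return capacities[i];
        return 0;
    }
}

On an append, the runtime would consult lookup() under the lock to decide
whether the block can expand in place or must be re-allocated.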
> Then there is the problem of multithreading. A global MRU cache would
> require potentially expensive synchronization. A thread-local cache, on
> the other hand, makes thread creation even more heavyweight than it is
> right now. My prediction is that D will evolve towards user-level
> threads and fine-grained concurrency, which requires a very small
> per-thread footprint.
You mean more heavyweight than allocating a new heap for each thread? The
only construct it makes sense to tie individual MRU caches to is a heap,
and I think adding the 64 bytes required for an MRU cache to a heap is not
that big a deal.
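(For what it's worth, the 64-byte figure matches 8 entries of a pointer
plus a length, as in the sketch above: on a 32-bit target that is
8 * (4 + 4) = 64 bytes.)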
-Steve