std.container.BinaryHeap + refCounted = WTF???

Steven Schveighoffer schveiguy at yahoo.com
Wed Nov 17 11:32:16 PST 2010


On Wed, 17 Nov 2010 13:58:55 -0500, dsimcha <dsimcha at yahoo.com> wrote:

> == Quote from Steven Schveighoffer (schveiguy at yahoo.com)'s article
>> On Wed, 17 Nov 2010 12:09:11 -0500, dsimcha <dsimcha at yahoo.com> wrote:
>> > == Quote from Steven Schveighoffer (schveiguy at yahoo.com)'s article
>> >> The issue is that if you append to such an array and it adds more
>> >> pages in place, the block length location will move.  Since each
>> >> thread caches its own copy of the block info, one will be wrong and
>> >> look at array data thinking it's a length field.
>> >> Even if you surround the appends with a lock, it will still cause
>> >> problems because of the cache.  I'm not sure there's any way to
>> >> reliably append to such data from multiple threads.
>> >> -Steve
>> >
>> > Would assumeSafeAppend() do the trick?
>> >
>> No, that does not affect your cache.  I probably should add a function
>> to append without using the cache.
>> -Steve
>
> I thought the whole point of assumeSafeAppend is that it puts the
> current ptr and length into the cache as-is.

All the cache does is store the block info -- block start, block size, and
block flags.  The "used" length itself is stored in the block directly.
The cache lets me skip a call into the GC (and the lock of its global
mutex) by reading the block info from a small per-thread cache instead.
That block info is then used to determine where and how the "used" length
is stored.

Since that length is stored at the end of the block, a block that grows in
place creates a hazard: one thread's cache records the new size while
another thread's cache still holds the old one, so the second thread
computes the wrong location for the length and reads array data there
instead.
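A rough sketch of the failure in D (the struct fields here mirror the kind
of block info that gets cached, but the exact layout and the end-of-block
offset are illustrative assumptions, not the actual druntime code):

// Illustrative only: layout and offsets are assumptions, not the real
// runtime array-append implementation.
struct CachedBlock
{
    void*  base;  // block start
    size_t size;  // block size -- this is what goes stale
    uint   attr;  // block flags
}

// Assume a large block keeps its "used" length in the last size_t.
size_t* usedLengthSlot(ref CachedBlock b)
{
    return cast(size_t*)(cast(ubyte*) b.base + b.size) - 1;
}

void main()
{
    // Thread A appends and the block is extended in place:
    //   A's cache:  size = 8192 -> length slot near base + 8192
    //   B's cache:  size = 4096 -> length slot near base + 4096 (stale)
    // B now treats ordinary array data at its stale offset as the "used"
    // length -- the corruption described above.
    auto a = CachedBlock(null, 8192, 0);
    auto b = CachedBlock(null, 4096, 0);
    assert(usedLengthSlot(a) !is usedLengthSlot(b));
}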

assumeSafeAppend sets the "used" block length to the given array's length
so the block can be used again for appending.  It does not affect the
cache.
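For completeness, here's what that looks like in practice -- a minimal
example using the real assumeSafeAppend from object.d (the in-place
append at the end is the expected outcome, assuming the block has spare
capacity):

import std.stdio;

void main()
{
    int[] a = [1, 2, 3, 4, 5];
    auto oldPtr = a.ptr;

    a = a[0 .. 3];         // shrink the slice; the block still says 5 are "used"
    a.assumeSafeAppend();  // reset the block's "used" length to a.length (3)

    a ~= 42;               // expected to append in place, not reallocate
    writeln(a);            // [1, 2, 3, 42]
    writeln(a.ptr is oldPtr);  // expected: true -- same block reused
}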

Another option is to go back to the mode where the "used" length is stored  
at the beginning of large blocks (this caused alignment problems for some  
people).
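To see why putting the length up front is awkward, a quick sketch (the
16-byte figure is just an example of the kind of alignment requirement
people ran into, not a specific report):

// Hypothetical layout if the "used" length lived at the start of a block:
//
//   block base (16-byte aligned from the allocator)
//   +0              size_t usedLength
//   +size_t.sizeof  user data starts here -- no longer 16-byte aligned
//
// Anything needing stronger alignment would be misaligned unless the
// runtime padded the front of every large block.
void main()
{
    size_t blockBase = 0x10_0000;    // assume a 16-byte aligned block
    size_t dataStart = blockBase + size_t.sizeof;
    assert(blockBase % 16 == 0);
    assert(dataStart % 16 != 0);     // user data loses the alignment
}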

-Steve
