New linked list

Sean Kelly sean at f4.ca
Thu May 11 16:04:54 PDT 2006


Walter Bright wrote:
>>
>> Thanks.  Assuming I did this, would it be technically legal to destroy 
>> this list after the iteration ends but before opApply returns? 
>> Otherwise, I'm not entirely certain how to go about processing the 
>> removals.
> 
> I'm a bit reluctant to say yes on that, worrying that I'm forgetting 
> something. What you can do is not use foreach, but just a for loop.

I think I could for Thread, though doing so may require an iterator 
implementation (threads are now stored in a linked list).  I have fewer 
options for ThreadGroup as it's implemented using an AA, but perhaps I 
can come up with something.  However, using foreach offers a tremendous 
advantage that other options don't: I can use a 'synchronized' block 
inside opApply and get thread safety without any user involvement. 
Better yet, on platforms with reentrant locks (i.e. all platforms D 
currently runs on) I can synchronize on the same object in container 
method calls and pay for only a single lock/unlock for the entire loop, 
regardless of what the user does to the container.  The alternative with 
traditional loops would be to use the iterator as an RAII mechanism for 
this lock, but that seems a disaster waiting to happen.  Alternatively, 
I could follow the traditional approach of not locking within the 
container at all, but that seems a tad odd for code specifically 
intended for multithreaded programming.
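The lock-reuse idea above can be sketched in C++, since a recursive 
mutex is the same mechanism D's 'synchronized' relies on.  The 
ThreadList name and its methods below are purely illustrative, not the 
actual Thread/ThreadGroup code:

```cpp
#include <algorithm>
#include <functional>
#include <list>
#include <mutex>

// Illustrative container, not the real Thread/ThreadGroup code.  The
// point: for_each (standing in for opApply) and every other method
// take the SAME recursive mutex, so a container call made from inside
// the loop body re-enters the already-held lock instead of
// deadlocking, and the whole traversal pays one lock/unlock pair.
class ThreadList {
public:
    void add(int v) {
        std::lock_guard<std::recursive_mutex> guard(lock_);
        items_.push_back(v);
    }

    bool contains(int v) const {
        std::lock_guard<std::recursive_mutex> guard(lock_);
        return std::find(items_.begin(), items_.end(), v) != items_.end();
    }

    // Analogous to D's opApply: the user's loop body runs under the lock.
    void for_each(const std::function<void(int)>& body) const {
        std::lock_guard<std::recursive_mutex> guard(lock_);
        for (int v : items_)
            body(v);
    }

private:
    mutable std::recursive_mutex lock_;
    std::list<int> items_;
};
```

Calling contains() from inside the for_each body exercises exactly the 
re-entrancy being described: the lock is already held, so the nested 
acquire is essentially free.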

To redirect discussion a bit, what about something akin to iterator 
categories in C++?  For example, it seems likely that a compiler might 
try to optimize loops involving a random_access_iterator, but far less 
likely that this would occur for other iterator types 
(forward_iterator, input_iterator, etc.).  Perhaps there's a way to 
indicate to the compiler what sort of sequence is being represented, 
as a way to suggest (or restrict) possible optimizations that may be 
performed?  Thus I may be 
overjoyed to have the compiler figure out something clever to do with my 
BitArray iteration and may even be willing to provide hints so that it 
can do so, but I wouldn't necessarily want or expect this to occur for 
my SqlCursor object.  This would also allow me to restrict optimizations 
in instances when dynamic manipulation of the sequence is more important 
to me than iteration performance.
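For reference, the C++ category mechanism alluded to above works by tag 
dispatch: the iterator advertises its own category, and the 
implementation selects a fast or generic path accordingly.  The skip() 
function below is a made-up name, but the dispatch through 
std::iterator_traits is the standard technique:

```cpp
#include <forward_list>
#include <iterator>
#include <vector>

// Fast path: a random access iterator can jump n places in O(1).
template <typename It>
void skip(It& it, int n, std::random_access_iterator_tag) {
    it += n;
}

// Generic fallback: anything forward-or-better steps n times, O(n).
template <typename It>
void skip(It& it, int n, std::forward_iterator_tag) {
    while (n-- > 0)
        ++it;
}

// Dispatch on the category the iterator declares about itself.
template <typename It>
void skip(It& it, int n) {
    skip(it, n, typename std::iterator_traits<It>::iterator_category{});
}
```

A vector iterator selects the O(1) overload; a forward_list iterator 
falls back to the O(n) one, without the caller writing anything 
different, which is much the kind of per-category treatment being 
suggested for the compiler here.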

This idea just came to me, so I don't have any brilliant suggestions 
for how best to implement it, but one obvious method would be to have 
separate opApply signatures for each optimization category.  However, 
it would be nice if there were some way to represent in code how the 
built-in types should be handled as well.  Herb Sutter's Concur 
attempts to solve this by including an additional qualifier in the loops 
themselves:

     for_each( c.depth_first(), f ); // sequential
     for_each( c.depth_first(), f, parallel ); // fully parallel
     for_each( c.depth_first(), f, ordered ); // ordered parallel

And while I don't entirely like that the user must specify the 
optimization/parallelization level to use, I do like that the choice 
isn't completely taken out of the hands of the programmer.  Does the 
general idea seem reasonable?
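A loose C++ sketch of what such a qualifier could look like (the 
sequential/parallel tag names are mine, not Concur's, and std::vector 
stands in for a generic container): overload resolution on the policy 
argument picks the implementation, so the choice stays in the 
programmer's hands while the default remains sequential:

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Policy tags; passing one selects the matching overload below.
struct sequential_t {};
struct parallel_t {};
constexpr sequential_t sequential{};
constexpr parallel_t parallel{};

template <typename T, typename F>
void for_each(std::vector<T>& c, F f, sequential_t) {
    for (auto& v : c)
        f(v);
}

template <typename T, typename F>
void for_each(std::vector<T>& c, F f, parallel_t) {
    // Naive split into two halves, one on a second thread; f must be
    // safe to call on distinct elements concurrently.
    auto mid = c.begin() + c.size() / 2;
    std::thread t([&] { std::for_each(c.begin(), mid, f); });
    std::for_each(mid, c.end(), f);
    t.join();
}

// No qualifier given: default to the sequential policy.
template <typename T, typename F>
void for_each(std::vector<T>& c, F f) {
    for_each(c, f, sequential);
}
```

The same shape later appeared in standard C++ as the C++17 execution 
policies (std::execution::seq / par), which suggests the general idea 
has legs.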


Sean



More information about the Digitalmars-d-announce mailing list