parallelFuture
dsimcha
dsimcha at yahoo.com
Fri Oct 23 17:39:14 PDT 2009
== Quote from Tim Matthews (tim.matthews7 at gmail.com)'s article
> dsimcha wrote:
> > For now, parallelFuture was designed with a single producer, multiple worker
> > model. Absolutely no attempt was made to allow for tasks running in the task pool
> > to themselves submit jobs to the same task pool, because it would have made things
> > more complicated and I couldn't think of any use cases.
> Like recursion, where a function needs to call another function: a
> thread needs to be able to spawn threads, and a task should be able to
> create new tasks. Keeping newly spawned tasks on the same thread is a
> good optimization. This shouldn't need a specific use case.
> > I designed the lib with
> > the types of use cases I encounter in my work in mind. (mathy, pure throughput
> > oriented computing on large, embarrassingly parallel problems.) If someone comes
> > up with a compelling use case, though, I'd certainly consider adding such
> > abilities provided they don't interfere with performance or API simplicity for the
> > more common cases.
> >
> > To make this discussion simple, let's define F1 as a future/task submitted by the
> > main producer thread, and F2 as a task/future submitted by F1. The queue is (for
> > now) strictly FIFO, except that if you have a pointer to the Task/Future object
> > you can steal a job. When F1 submits F2 to the queue, F2 goes to the back of the
> > queue like anything else. This means when F1 waits on F2, it is possible to have
> > a cyclical dependency (F1 waiting on F2, F2 waiting for a worker thread populated
> > by F1). This is mitigated by work stealing (F1 may just steal F2 and do it in its
> > own thread).
> I don't like that approach of simplifying the discussion with F1 and F2
> all over the place. I hope this video can help visualize some of the ideas:
> http://channel9.msdn.com/pdc2008/TL26/
> >
> > In parallel map and foreach, I should probably document this, but for now it's
> > undefined behavior for the mapping function or parallel foreach loop body to
> > submit jobs to the task pool and wait on them, and in practice will likely result
> > in deadlocks.
> You want to document that as undefined behavior? It can be made to work.
OK, I've added the ability for tasks to submit more tasks to the pool by
implementing what I'd call a "selfish thread" model (I'm not sure whether
there's a more formal name for it). Basically, a thread will steal any job
it's waiting on, provided that job hasn't been started yet, no matter where
the job sits in the queue.
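As a rough sketch of the steal-on-wait idea (hypothetical Python pseudocode to illustrate the model, not parallelFuture's actual D code; all names here are made up): a thread that blocks on an unstarted task pulls it out of the shared FIFO queue and runs it inline, so a task waiting on its own subtask can't deadlock against the pool.

```python
# Sketch of the "selfish thread" model: a waiter steals any unstarted
# job it depends on and runs it in its own thread.
import threading
from collections import deque

class Task:
    def __init__(self, fn):
        self.fn = fn
        self.started = False
        self.done = threading.Event()
        self.result = None

    def run(self):
        self.result = self.fn()
        self.done.set()

class Pool:
    def __init__(self):
        self.lock = threading.Lock()
        self.queue = deque()

    def submit(self, fn):
        t = Task(fn)
        with self.lock:
            self.queue.append(t)   # plain FIFO: new tasks go to the back
        return t

    def try_steal(self, task):
        # Remove the task from the queue only if no worker has started it.
        with self.lock:
            if not task.started and task in self.queue:
                self.queue.remove(task)
                task.started = True
                return True
        return False

    def wait(self, task):
        # "Selfish" wait: steal and run the task ourselves if possible,
        # otherwise block until some worker finishes it.
        if self.try_steal(task):
            task.run()
        task.done.wait()
        return task.result

pool = Pool()
# A task that itself submits and waits on a subtask:
outer = pool.submit(lambda: pool.wait(pool.submit(lambda: 21)) * 2)
# Even with no worker threads at all, stealing drives this to completion:
print(pool.wait(outer))  # -> 42
```

The point of the sketch is the `wait` path: because the waiter runs unstarted dependencies itself, the cyclic F1-waits-on-F2 scenario described earlier resolves instead of deadlocking.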
This was somewhat difficult to implement, because parallel foreach and map
use lazy submission and recycle their task objects. This allows parallel
foreach over random-access ranges without generating any heap activity. It
also allows parallel foreach over lazy forward ranges that might not even fit
in memory if converted to arrays up front, without having to synchronize on
every call to front(): before each submission, a small portion of the range
is converted to an array, and the memory for that array is recycled along
with the task object. The effort was worth it, though, as I thought of a
killer use case: nested parallel foreach over matrices.
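The lazy-submission scheme can be sketched roughly like this (hypothetical Python pseudocode to show the chunking idea, not the actual D implementation; the function name and chunk size are invented): peel off a small portion of the lazy range into a reusable buffer per submission, so front() never needs a lock and the whole range never has to be in memory at once.

```python
# Sketch of lazy submission with a recycled buffer for parallel foreach.
from itertools import islice

def chunked_parallel_foreach(lazy_range, body, chunk_size=4):
    it = iter(lazy_range)
    buffer = []                            # recycled between submissions
    while True:
        buffer.clear()
        buffer.extend(islice(it, chunk_size))  # small portion -> array
        if not buffer:
            break
        # In the real library each chunk would be handed to a worker
        # thread; here the body runs serially just to show the chunking.
        for element in buffer:
            body(element)

results = []
chunked_parallel_foreach((n * n for n in range(10)), results.append, chunk_size=3)
print(results)  # -> [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Combined with the steal-on-wait rule above, a loop body could itself start another such parallel foreach, which is exactly the nested-matrix case.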
Again, code:
http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/parallelFuture.d
Docs:
http://cis.jhu.edu/~dsimcha/parallelFuture.html
More information about the Digitalmars-d-announce
mailing list