[dmd-concurrency] blip parallelization

Fawzi Mohamed fawzi at gmx.ch
Tue Jan 12 14:56:02 PST 2010


On 12-Jan-10, at 23:43, Andrei Alexandrescu wrote:

> Thanks for sharing. I read the document but got left with the  
> impression that the foreplay is not followed by something more  
> fleshed out.

Well, you know, it is just working code ;)

> How do you compare and contrast your work with the existing work  
> stealing frameworks e.g. Cilk?

I have to say that I don't really know the details of Cilk (I was  
aware of it and had briefly looked at it), so what I am about to say  
might be a bit off.
I think that the main difference is that I handle recursive tasks  
explicitly (taking into account the nesting level of a task to decide  
how to schedule it).
About the stealing algorithm itself, I suppose they do something  
similar (using the NUMA hierarchy), but probably the details are  
different. If they don't handle recursive calls explicitly, they  
cannot guarantee to steal the root tasks (though as they steal from  
the other side of the queue, it is likely that they do).
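The "steal from the other side of the queue" idea can be sketched with a toy work-stealing deque in Python (this is purely illustrative, not code from blip or Cilk): the owner works LIFO at one end for locality, while thieves take FIFO from the other end, so the oldest tasks, the ones closest to the root of the recursion, are the ones that get stolen.

```python
from collections import deque

class WorkStealingQueue:
    """Toy sketch of a work-stealing deque (illustrative only).

    The owner thread pushes and pops at the right end (LIFO, good
    cache locality); thieves steal from the left end (FIFO), so they
    grab the oldest tasks, which tend to represent the largest
    remaining subtrees of a recursive computation.
    """

    def __init__(self):
        self._tasks = deque()

    def push(self, task):
        # Owner: newest work goes on the right.
        self._tasks.append(task)

    def pop(self):
        # Owner: take the newest task first.
        return self._tasks.pop() if self._tasks else None

    def steal(self):
        # Thief: take the oldest task first.
        return self._tasks.popleft() if self._tasks else None

q = WorkStealingQueue()
for t in ["root", "child", "grandchild"]:
    q.push(t)

stolen = q.steal()   # thief gets "root", the task nearest the root
owned = q.pop()      # owner gets "grandchild", the newest task
```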
A big difference is that I expose NUMA and the schedulers, so that  
one can easily create a task that is pinned, or submit work with a  
given distribution.
Cilk hides these things away.
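To make the contrast concrete, here is a minimal Python sketch of what exposing per-node schedulers could look like. The names (NumaNodeScheduler, submit, node_id) are hypothetical, invented for illustration; they are not blip's real API, and real pinning would need an OS call such as os.sched_setaffinity on Linux.

```python
import queue
import threading

class NumaNodeScheduler:
    """Hypothetical per-NUMA-node scheduler (illustrative names only).

    Each instance owns a worker thread and a task queue; submitting
    to a specific instance is how a caller would express "run this
    work on that node". A real implementation would pin the worker
    to the CPUs of the node (e.g. os.sched_setaffinity on Linux).
    """

    def __init__(self, node_id):
        self.node_id = node_id
        self._q = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, task):
        # Enqueue a zero-argument callable for this node's worker.
        self._q.put(task)

    def _run(self):
        while True:
            self._q.get()()

# Usage: submit a task explicitly to node 0 and wait for it.
results = []
done = threading.Event()
sched = NumaNodeScheduler(node_id=0)
sched.submit(lambda: (results.append("ran on node 0"), done.set()))
done.wait(timeout=1)
```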

Fawzi

>
>
> Andrei
>
> Fawzi Mohamed wrote:
>> I tried to stay out of the discussion, because I am too busy, but  
>> concurrency is a topic that really interests me, so I decided to  
>> write a document explaining the parallelization concepts behind blip.
>> Blip is a library that I am developing. I still haven't really  
>> announced it, because I don't think it is ready for prime time  
>> (inching toward v0.5: useful for other programmers who want to  
>> tinker with it), but I have already mentioned it a couple of times  
>> when some relevant topic came up.
>> For the parallelization I think that it is important to have good,  
>> stable primitives; new language features are not really necessary,  
>> so I worked mainly with D 1.0 and Tango. I did try to write  
>> portable code: the whole output is mostly independent of Tango, as  
>> it is based on a sink (void delegate(char[])); in D 2.0 that  
>> should become delegate(const char[]), and probably several other  
>> things would change, but I tried to be portable. The thing that I  
>> missed most in D 1.0 is real closures.
>> I just (finally!) switched to a scheduler that takes advantage of  
>> NUMA information (after some not-so-good experiments with a more  
>> hierarchical approach); it seems to work, but I am pretty sure  
>> that there are still bugs around.
>> Anyway, I have quite clear ideas about what I want as a  
>> parallelization framework, and it is rather different from what  
>> was presented, so I think that it could be interesting to present  
>> the ideas behind it.
>> http://github.com/fawzi/blip/blob/master/ParallelizationConcepts.txt
>> ciao
>> Fawzi
>> _______________________________________________
>> dmd-concurrency mailing list
>> dmd-concurrency at puremagic.com
>> http://lists.puremagic.com/mailman/listinfo/dmd-concurrency


