Thoughts on parallel programming?

Tobias Pfaff nospam at spam.no
Fri Nov 12 02:17:10 PST 2010


On 11/12/2010 12:44 AM, dsimcha wrote:
> == Quote from Tobias Pfaff (nospam at spam.no)'s article
>> On 11/11/2010 08:10 PM, Russel Winder wrote:
>>> On Thu, 2010-11-11 at 18:24 +0100, Tobias Pfaff wrote:
>>> [ . . . ]
>>>> Unfortunately I only know about the standard stuff, OpenMP/OpenCL...
>>>> Speaking of which: Are there any attempts to support lightweight
>>>> multithreading in D, that is, something like OpenMP ?
>>>
>>> I'd hardly call OpenMP lightweight.  I agree that as a meta-notation for
>>> directing the compiler how to insert appropriate code to force
>>> multithreading of certain classes of code, using OpenMP generally beats
>>> manual coding of the threads.  But OpenMP is very Fortran oriented even
>>> though it can be useful for C, and indeed C++ as well.
>>>
>>> However, given things like Threading Building Blocks (TBB) and the
>>> functional programming inspired techniques used by Chapel, OpenMP
>>> increasingly looks like a "hack" rather than a solution.
>>>
>>> Using parallel versions of for, map, filter, reduce in the language is
>>> probably a better way forward.
>>>
>>> Having a D binding to OpenCL (and OpenGL, MPI, etc.) is probably going
>>> to be a good thing.
>>>
>> Well, I am looking for an easy & efficient way to perform parallel
>> numerical calculations on our 4-8 core machines. With C++, that's OpenMP
>> (or GPGPU stuff using CUDA/OpenCL) for us now. Maybe lightweight was the
>> wrong word; what I meant is that OpenMP is easy to use, and efficient
>> for the problems we are solving. There actually might be better tools
>> for that; honestly we didn't look into that many options -- we are not
>> HPC guys, 1000-CPU clusters are not a relevant scenario and we are happy
>> that we even started parallelizing our code at all :)
>> Anyway, I was thinking about the logical thing to use in D for this
>> scenario. It's nothing super-fancy: in most cases just a parallel_for
>> will do, and sometimes a map/reduce operation...
>> Cheers,
>> Tobias
>
> I think you'll be very pleased with std.parallelism when/if it gets into Phobos.
> The design philosophy is exactly what you're looking for:  Simple shared memory
> parallelism on multicore computers, assuming no fancy/unusual OS-, compiler- or
> hardware-level infrastructure.  Basically, it's got parallel foreach, parallel
> map, parallel reduce and parallel tasks.  All you need to fully utilize it is DMD
> and a multicore PC.
>
> As a reminder, the docs are at
> http://cis.jhu.edu/~dsimcha/d/phobos/std_parallelism.html and the code is at
> http://dsource.org/projects/scrapple/browser/trunk/parallelFuture/std_parallelism.d .
>   If this doesn't meet your needs in its current form, I'd like as much
> constructive criticism as possible, as long as it's within the scope of simple,
> everyday parallelism without fancy infrastructure.

I did a quick test of the module and it looks really good so far. Thanks 
for providing this! (Is this module scheduled for inclusion in phobos2?)
If I find issues with it I'll let you know.
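
For reference, my quick test was roughly along the lines of the sketch 
below. It is only a minimal sketch of the kind of numerical loop we care 
about, and it assumes the taskPool/parallel/amap/reduce names as shown in 
the linked docs, so the details may differ from whatever the module looks 
like once it lands in Phobos:

import std.parallelism;
import std.math : sqrt;
import std.stdio : writeln;

void main()
{
    // Some dummy numerical data.
    auto data = new double[](1_000_000);
    foreach (i, ref x; data)
        x = i + 1;

    // Parallel foreach: worker threads each process a chunk of the
    // array and update the elements in place.
    foreach (ref x; parallel(data))
        x = sqrt(x);

    // Parallel map: apply a function to every element eagerly,
    // collecting the results into a new array.
    auto roots = taskPool.amap!sqrt(data);

    // Parallel reduce: combine values across threads, here a sum.
    auto total = taskPool.reduce!"a + b"(roots);
    writeln(total);
}

That covers the parallel_for plus map/reduce pattern I mentioned above, 
which is really all we need for our 4-8 core machines.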

