What's the "right" way to do openmp-style parallelism?

Meta via Digitalmars-d-learn digitalmars-d-learn at puremagic.com
Sun Sep 6 20:41:21 PDT 2015


On Monday, 7 September 2015 at 02:56:04 UTC, Charles wrote:
> Friends,
>
> I have a program that would be pretty easy to parallelize with 
> an OpenMP pragma in C. I'd like to avoid the performance cost 
> of using message passing, and the shared qualifier seems like 
> it's enforcing guarantees I don't need. Essentially, I have
>
> auto x = new float[][](jmax, imax); // x is about 8 GB of floats
> for(j = 0; j < jmax; j++){
> //create some local variables.
>     for(i = 0; i < imax; i++){
>         x[j][i] = complicatedFunction(i, x[j-1], other, local, 
> variables);
>     }
> }
>
> In C, I'd just stick a #pragma omp parallel for around the 
> inner loop (since the outer loop obviously can't be 
> parallelized).
>
> How should I go about this in D? I want to avoid copying data 
> around if it's possible since these arrays are huge.
>
> Cheers,
> Charles.

I believe this is what you want: 
http://dlang.org/phobos/std_parallelism.html#.parallel.

I believe that all you would need to change is to have your inner 
loop become:

foreach(i, ref f; x[j].parallel)
{
    f = complicatedFunction(i, x[j-1], etc...);
}

Don't quote me on that, though, as I'm not very experienced with 
std.parallelism.
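In case it helps, here's a minimal self-contained sketch of how the 
whole loop nest might look with std.parallelism. The name and 
signature of complicatedFunction are just placeholders taken from 
your pseudocode, and I've started the outer loop at j = 1 since 
each row reads the previous one:

import std.parallelism;

void main()
{
    enum size_t imax = 1000, jmax = 1000; // much smaller than your 8 GB case
    auto x = new float[][](jmax, imax);
    x[0][] = 0.0f; // fill in the first row somehow

    foreach (j; 1 .. jmax)                // outer loop stays sequential
    {
        // create some local variables here, as in your original loop
        foreach (i, ref f; x[j].parallel) // inner loop runs on the task pool
        {
            f = complicatedFunction(i, x[j - 1] /*, other, local, variables */);
        }
    }
}

// stand-in for your real function
float complicatedFunction(size_t i, const(float)[] prevRow)
{
    return prevRow[i] + i;
}

Nothing gets copied here: the worker threads each write their chunk 
of x[j] in place through the ref variable. If complicatedFunction is 
very cheap per element, you might also pass a work unit size, e.g. 
x[j].parallel(1000), so the threads aren't fighting over tiny chunks.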

