challenge #3 - Parallel for loop
Bill Baxter
dnewsgroup at billbaxter.com
Sat Jan 27 04:37:43 PST 2007
janderson wrote:
> I would like to be able to run a for loop in parallel, using syntax like:
>
> //example 1
> int sum = 0;
> foreach_parallel(int a; array)
> {
>     sum += array[a]; // This could be anything
> }
>
> //Example 2
> int sum = 0;
> for_parallel(int a; a < array.length; ++a) // These could be anything
> {
>     sum += array[a]; // This could be anything
> }
>
> 1) The call syntax should be a single, simple, generic statement.
> 2) It needs to represent a foreach/for loop as closely as possible,
> although it doesn't need to look like a D foreach/for loop.
> 3) It needs to handle things like summing all the elements of an array
> (this is the difficult part).
> 4) Since foreach_parallel always works on an array, you may make some
> concessions for this loop, taking the array being operated on into
> account (i.e., you may split the array).
> 5) If we can't do it, what syntax would you recommend to close this gap?
>
> -Joel
That's the kind of thing that OpenMP does (but only for C/C++/Fortran).
http://www.openmp.org. The syntax is more like
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        // operation
    }
But from the FAQ:
"""
Q8: What if I just want loop-level parallelism?
A8: OpenMP fully supports loop-level parallelism. Loop-level parallelism
is useful for applications which have lots of coarse loop-level
parallelism, especially those that will never be run on large numbers of
processors or for which restructuring the source code is either
impractical or disallowed. Typically, though, the amount of loop-level
parallelism in an application is limited, and this in turn limits the
scalability of the application.
OpenMP allows you to use loop-level parallelism as a way to start
scaling your application for multiple processors, but then move into
coarser grain parallelism, while maintaining the value of your earlier
investment. This incremental development strategy avoids the all-or-none
risks involved in moving to message-passing or other parallel
programming models.
"""
--bb
More information about the Digitalmars-d
mailing list