std.parallelism and multidimensional arrays
Stefan Frijters via Digitalmars-d-learn
digitalmars-d-learn at puremagic.com
Fri May 22 03:54:35 PDT 2015
I have code that does a lot of work on 2D/3D arrays, for which
I use the 2.066 multidimensional slicing syntax through a fork of
the Unstandard package [1].
Often the order of operations doesn't matter, so I thought I
would give the std.parallelism module a try to get some easy
speedups (I also use MPI, but that comes with some additional overhead).
The way my foreach loops are currently set up, p is a size_t[2],
v (the payload of the array) is a double[9], and the array is
indexed directly with a size_t[2]. This all works fine:

    foreach (immutable p, ref v, arr)
    {
        double[9] stuff;
        arr[p] = stuff;
    }
If I naively try

    foreach (immutable p, ref v, parallel(arr)) { ... }
I first get errors of the type "Error: foreach: cannot make v
ref". I do not understand where that particular problem comes
from, but I can possibly live without the ref, so I went for

    foreach (immutable p, v, parallel(arr)) { ... }
That gets me "Error: no [] operator overload for type
(a complicated templated type of some wrapper struct I have for
arr)". I'm guessing the problem is that there is no such thing
as a simple one-dimensional slicing operation for a
multidimensional array?
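For comparison, with a plain 1D array the index-and-element form
from the std.parallelism documentation works along these lines, so
even if the slicing worked, I suppose the index would be a flat
size_t rather than my size_t[2]:

    import std.math : log;
    import std.parallelism : parallel;

    void fillLogs(double[] logs)
    {
        // index + ref element over a plain 1D array; i is a flat size_t
        foreach (i, ref elem; parallel(logs))
            elem = log(i + 1.0);
    }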
Should I define an opSlice function that takes the usual two
size_t arguments for the lower and upper bounds (without requiring
a dimension template argument) and somehow map this to my
underlying two-dimensional array? Will it also need an opIndex
function that takes only a single size_t?
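To make that concrete, here is a rough sketch of the kind of flat
adapter I have in mind (all names are made up, plain row-major
double[9][] storage stands in for my actual wrapper, and I haven't
checked whether parallel() would also want the usual
empty/front/popFront range primitives on top of this):

    struct FlatView
    {
        double[9][] data;   // row-major storage, nx * ny payloads
        size_t ny;          // number of columns, to recover 2D coordinates
        size_t lo, hi;      // flat index bounds of this view

        size_t length() const { return hi - lo; }

        // opIndex taking a single size_t: translate the flat index into
        // a 2D coordinate and look up the payload there
        ref double[9] opIndex(size_t i)
        {
            immutable flat = lo + i;
            immutable size_t[2] p = [flat / ny, flat % ny];
            return data[p[0] * ny + p[1]];
        }

        // opSlice with the usual two bounds and no dimension argument
        FlatView opSlice(size_t a, size_t b)
        {
            return FlatView(data, ny, lo + a, lo + b);
        }
    }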
Or is this just taking the simple parallel(...) approach too far,
and should I try to put something together myself using
lower-level constructs?
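Or maybe just iterating over a flat index range in parallel and
doing the coordinate math by hand would already be enough?
Something like this (again just a sketch; nx, ny and the flat
double[9][] storage stand in for my actual grid sizes and wrapper):

    import std.parallelism : parallel;
    import std.range : iota;

    void update(double[9][] data, size_t nx, size_t ny)
    {
        // parallelize over a flat site index and recover the 2D
        // coordinate by hand inside the loop body
        foreach (flat; parallel(iota(nx * ny)))
        {
            immutable size_t[2] p = [flat / ny, flat % ny];
            double[9] stuff;
            // ... compute stuff for site p ...
            data[p[0] * ny + p[1]] = stuff;
        }
    }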
Any hints would be appreciated!
[1] http://code.dlang.org/packages/unstandard