Simplest multithreading example
ag0aep6g via Digitalmars-d-learn
digitalmars-d-learn at puremagic.com
Tue Sep 5 02:44:08 PDT 2017
On 09/05/2017 03:15 AM, Brian wrote:
> Thanks very much for your help, I finally had time to try your
> suggestions. The initial example you showed does indeed have the same
> problem of not iterating over all values :
>
>
> double[] hugeCalc(int i){
>     // Code that takes a long time
>     import core.thread: Thread;
>     import std.datetime: seconds;
>     Thread.sleep(1.seconds);
>     return [i];
> }
>
> static import std.range;
> import std.parallelism: parallel;
> auto I = std.range.iota(0, 100);
> double[][int] _hugeCalcCache;
> foreach(i ; parallel(I))
>     _hugeCalcCache[i] = hugeCalc(i);
>
>
> writeln( _hugeCalcCache.keys ); // this is some random subset of (0,100)
Yeah. As expected, associative array accesses are apparently not
thread-safe.
A simple writeln is a terrible way to figure that out, though. I'd
suggest sorting the keys and comparing that to `I`:
----
import std.algorithm: equal, sort;
auto sortedKeys = _hugeCalcCache.keys.sort;
assert(sortedKeys.equal(I));
----
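If that assert fails and you want to see which indices got lost, here's a
quick sketch (untested) using std.algorithm's setDifference; both ranges
have to be sorted, which `I` (an iota) already is:
----
import std.algorithm: setDifference, sort;
import std.stdio: writeln;
/* Elements of `I` that never made it into the AA: */
auto missing = setDifference(I, _hugeCalcCache.keys.sort());
writeln("missing keys: ", missing);
----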
> but this does seem to work using your other method of initialization :
>
>
> auto _hugeCalcCache = new double[][](100);
> foreach(i ; parallel(I))
>     _hugeCalcCache[i] = hugeCalc(i);
>
> foreach( double[] x ; _hugeCalcCache)
>     writeln( x ); // this now contains all values
>
>
> so I guess initializing the whole array at compile time makes it thread
> safe ?
There's nothing compile-timey about the code. The initialization is done
at run-time, but before the parallel stuff starts.
Note that the type of `_hugeCalcCache` here is different from above.
Here it's `double[][]`, i.e. a dynamic array. Above it's
`double[][int]`, i.e. an associative array. Those types are quite
different, despite their similar names.
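To make the difference concrete, a little sketch (untested; `dyn` and `aa`
are just made-up names):
----
double[][]    dyn = new double[][](100); /* dynamic array: all 100 slots exist right away */
double[][int] aa;                        /* associative array: starts out empty */
assert(dyn.length == 100);
assert(aa.length == 0);
aa[42] = [3.14]; /* inserting a key may grow/rehash the AA's internals */
assert(aa.length == 1);
----
That growing/rehashing on insertion is presumably what goes wrong when
multiple threads insert keys at the same time.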
You can prepare an associative array in a similar way, before doing the
parallel stuff. Then it might be thread-safe (not sure):
----
double[][int] _hugeCalcCache; /* associative array */
/* First initialize the elements serially: */
foreach(i; I) _hugeCalcCache[i] = [];
/* Then do the huge calculations in parallel: */
foreach(i; parallel(I)) _hugeCalcCache[i] = hugeCalc(i);
----
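If you don't want to rely on that, you can make the writes definitely safe
by locking around them. The expensive call still runs in parallel; only the
cheap AA assignment is serialized (untested sketch):
----
foreach(i; parallel(I))
{
    auto result = hugeCalc(i); /* expensive part, still runs in parallel */
    synchronized /* only one thread at a time in here */
    {
        _hugeCalcCache[i] = result; /* cheap, serialized AA write */
    }
}
----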
But if your keys are consecutive numbers, I see no point in using an
associative array.
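For consecutive keys starting at 0, std.parallelism's taskPool.amap does the
whole thing in one go and hands you back a plain array (sketch, assuming the
`hugeCalc` from your example above):
----
import std.parallelism: taskPool;
import std.range: iota;
/* Runs hugeCalc over 0 .. 100 in parallel; results[i] == hugeCalc(i). */
double[][] results = taskPool.amap!hugeCalc(iota(0, 100));
----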