No of threads
Temtaime
temtaime at gmail.com
Wed Dec 20 16:43:18 UTC 2017
On Wednesday, 20 December 2017 at 13:41:06 UTC, Vino wrote:
> On Tuesday, 19 December 2017 at 18:42:01 UTC, Ali Çehreli wrote:
>> On 12/19/2017 02:24 AM, Vino wrote:
>> > Hi All,
>> >
>> > Request your help in clarifying the below. As per the
>> > documentation:
>> >
>> > foreach (d; taskPool.parallel(xxx)) : the total number of
>> > threads that will be created is total CPUs - 1 (2 processors
>> > with 6 cores: 11 threads)
>> >
>> > foreach (d; taskPool.parallel(xxx,1)) : the total number of
>> > threads that will be created is total CPUs - 1 (2 processors
>> > with 6 cores: 12 threads)
>>
>> That parameter is workUnitSize: the number of elements each
>> thread processes per work unit. So, when you set it to 100,
>> each thread works on 100 elements before going back to pick
>> up more. Experiment with different values to find out which
>> is fastest for your workload. If each element takes a very
>> short amount of time to process, you want larger values,
>> because you don't want to stop a happy thread that's chugging
>> along on elements. It really depends on the program, so try
>> different values.
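To make the workUnitSize parameter concrete, here is a minimal sketch (the squaring workload, the function name, and the values are made up for illustration):

```d
import std.parallelism : taskPool;

// Square every index in parallel; workUnitSize controls how many
// elements each worker thread claims at a time.
long[] squares(size_t n, size_t workUnitSize)
{
    auto result = new long[](n);
    // With a body this cheap, a larger workUnitSize (e.g. 100) keeps
    // threads busy instead of constantly fetching new work.
    foreach (i, ref s; taskPool.parallel(result, workUnitSize))
        s = cast(long)(i * i);
    return result;
}
```

Both calls produce identical results; only the scheduling granularity differs.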
>>
>> > foreach (d; taskPool.parallel(xxx,20)) : on Windows 2008,
>> > whatever value is passed to parallel, the total number of
>> > threads does not increase beyond 12.
>>
>> taskPool is just for convenience. You need to create your own
>> TaskPool if you want more threads:
>>
>> import std.parallelism;
>> import core.thread;
>> import std.range;
>>
>> void main() {
>>     auto t = new TaskPool(20);
>>     foreach (d; t.parallel(100.iota)) {
>>         // ...
>>     }
>>     Thread.sleep(5.seconds);
>>     t.finish();
>> }
>>
>> Now there are 20 + 1 (main) threads.
>>
>> Ali
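Alternatively, if you want the convenience global taskPool itself to have more workers, std.parallelism lets you set defaultPoolThreads before the pool is first used (a sketch; the value 20 is arbitrary):

```d
import std.parallelism;
import std.range : iota;

void main()
{
    // Must run before the first use of taskPool: the global pool is
    // created lazily, and this value is read only at that point.
    defaultPoolThreads = 20;

    foreach (d; taskPool.parallel(100.iota))
    {
        // ...
    }

    // The global pool now has 20 worker threads (plus main).
    assert(taskPool.size == 20);
}
```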
>
> Hi Ali,
>
> Thank you very much. Below are the observations. Our program
> calculates the sizes of folders, and we don't see any
> improvement in execution speed between the two tests below;
> are we missing something? Basically, we expected the total
> execution time for test 2 to be the time taken to calculate
> the size of the biggest folder (604 GB) plus a few additional
> minutes. Memory usage is just 12 MB, whereas the server has
> 65 GB, of which hardly 30% - 40% is used at any given point
> in time, so there is no memory constraint.
>
>
> Test 1:
> foreach (d; taskPool.parallel(dFiles[],1))
>     auto SdFiles = Array!ulong(dirEntries(d, SpanMode.depth)
>         .map!(a => a.size).fold!((a,b) => a + b)(x))[]
>         .filter!(a => a > Size);
>
> Execution time is 26 mins with 11+1 (main) threads and 1
>
> Test 2:
> auto TL = dFiles.length;
> auto TP = new TaskPool(TL);
> foreach (d; TP.parallel(dFiles[],1))
>     auto SdFiles = Array!ulong(dirEntries(d, SpanMode.depth)
>         .map!(a => a.size).fold!((a,b) => a + b)(x))[]
>         .filter!(a => a > Size);
> Thread.sleep(5.seconds);
> TP.finish();
>
> Execution time is 27 mins with 153+1 (main) threads and 1
> element per thread
>
>
> From,
> Vino.B
GC collection stops the world, so there's no gain.
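If GC pauses really are the bottleneck, one option is to avoid per-iteration allocation entirely. A minimal sketch (the function name folderSizes and the seed 0UL are mine; it drops the Array!ulong construction and the size filter, which can run serially afterwards):

```d
import std.algorithm : fold, map;
import std.file : dirEntries, SpanMode;
import std.parallelism : taskPool;

// Sum file sizes under each directory in parallel, writing each
// result into a preallocated slot so the workers allocate almost
// nothing and trigger fewer stop-the-world collections.
ulong[] folderSizes(string[] dFiles)
{
    auto sizes = new ulong[](dFiles.length);
    foreach (i, d; taskPool.parallel(dFiles, 1))
    {
        sizes[i] = dirEntries(d, SpanMode.depth)
            .map!(a => a.size)
            .fold!((a, b) => a + b)(0UL);
    }
    return sizes;
}
```

Note that a workload like this is dominated by filesystem I/O, so extra threads beyond what the disk can serve concurrently may not help either way.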