DMD 1.034 and 2.018 releases

Walter Bright newshound1 at digitalmars.com
Sun Aug 10 00:26:44 PDT 2008


bearophile wrote:
> D code with +:

I found the results to be heavily dependent on the data set size:

C:\mars>test5 1000 10000
array len= 8000  nloops= 10000
     vec time= 0.0926506 s
non-vec time= 0.626356 s

C:\mars>test5 2000 10000
array len= 16000  nloops= 10000
     vec time= 0.279727 s
non-vec time= 1.70048 s

C:\mars>test5 3000 10000
array len= 24000  nloops= 10000
     vec time= 0.795482 s
non-vec time= 2.47597 s

C:\mars>test5 4000 10000
array len= 32000  nloops= 10000
     vec time= 2.36905 s
non-vec time= 3.90906 s

C:\mars>test5 5000 10000
array len= 40000  nloops= 10000
     vec time= 3.12636 s
non-vec time= 3.70741 s
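
For reference, here is a minimal sketch of the kind of comparison being timed. The actual test5 source isn't quoted in this message, so the element type, the initialization values, the command-line handling, and the use of std.datetime.stopwatch are my assumptions, not bearophile's code:

import std.conv : to;
import std.datetime.stopwatch : StopWatch, AutoStart;
import std.stdio : writefln;

void main(string[] args)
{
    // Array length and loop count from the command line, as in "test5 1000 10000".
    immutable n = args.length > 1 ? to!size_t(args[1]) : 1000;
    immutable nloops = args.length > 2 ? to!int(args[2]) : 10_000;

    auto a = new double[n], b = new double[n], c = new double[n];
    b[] = 1.5;
    c[] = 2.5;

    auto sw = StopWatch(AutoStart.yes);
    foreach (_; 0 .. nloops)
        a[] = b[] + c[];              // array (vector) operation
    writefln("    vec time= %s s", sw.peek.total!"usecs" / 1e6);

    sw.reset();
    foreach (_; 0 .. nloops)
        foreach (i; 0 .. n)
            a[i] = b[i] + c[i];       // explicit element-by-element loop
    writefln("non-vec time= %s s", sw.peek.total!"usecs" / 1e6);

    writefln("check= %s", a[0]);      // use the result so the loops aren't optimized away
}

The array-op line dispatches to the runtime's array-operation routines, which can use SIMD where available, while the explicit loop relies on whatever code the compiler generates for it; that difference is presumably where the speedup at small sizes comes from.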

For the smaller data sets the vectorized version is several times faster (nearly 7x at the smallest size shown); for the largest set it is less than 20% faster.

What we're seeing here is most likely the effect of the data set size 
exceeding the cache: once the arrays no longer fit, both versions spend 
most of their time waiting on memory rather than doing arithmetic, so the 
vector ops lose most of their advantage. It would be a fun project for 
someone to see whether the performance for such large data sets could be 
improved, perhaps by "warming" up the cache?
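
In case anyone wants to try: by "warming" I mean reading through the arrays once before the timed region, so the timed iterations start with as much of the data in cache as will fit. A minimal sketch (the warm() helper is hypothetical, and it's exactly the open question whether it buys anything once the working set exceeds the cache):

// Hypothetical warming pass: touch every element once before timing.
double warm(double[] arr)
{
    double sink = 0;
    foreach (x; arr)
        sink += x;          // accumulate so the reads can't be optimized away
    return sink;
}

// before starting the stopwatch:
//     auto sink = warm(a) + warm(b) + warm(c);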
