Standard D, Mir D benchmarks against Numpy (BLAS)

p.shkadzko p.shkadzko at gmail.com
Thu Mar 12 20:39:59 UTC 2020


On Thursday, 12 March 2020 at 15:34:58 UTC, 9il wrote:
> On Thursday, 12 March 2020 at 14:37:13 UTC, Pavel Shkadzko 
> wrote:
>> On Thursday, 12 March 2020 at 14:00:48 UTC, 9il wrote:
>>> On Thursday, 12 March 2020 at 12:59:41 UTC, Pavel Shkadzko 
>>> wrote:
>>>>[...]
>>>
>>> Generally speaking, the D/Mir code in the benchmark is slow 
>>> because of how it has been written.
>>> I am not urging you to use D/Mir. In fact, I sometimes advise 
>>> my clients not to use it if they can avoid it. On commercial 
>>> request, I could write the benchmark or an applied algorithm 
>>> so that D/Mir beats NumPy in all the tests, including gemm. 
>>> --Ilya
>>
>> I didn't understand. You argue against using D/Mir when 
>> talking to your clients?
>
> It depends on the problem they wanted me to solve.
>
>> Actually, I feel it is also useful to benchmark unoptimized D 
>> code, because this is how most people will write their code 
>> when they first write it. Although I can hardly call these 
>> benchmarks unoptimized, since I use LDC optimization flags as 
>> well as some tips from you.
>
> Agreed. I just misunderstood the table on the forum; it was 
> misaligned for me. The numbers look cool, thank you for the 
> benchmark. Mir sorting looks slower than Phobos, which is 
> interesting and needs a fix. You can use Phobos sorting with 
> ndslice the same way, with `each`.
>
> Minor updates
> https://github.com/tastyminerals/mir_benchmarks/pull/1
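
For anyone trying the Phobos-sort-over-ndslice tip above, here 
is a minimal sketch of how I read it (the data and file name are 
made up, the flags are just typical LDC optimization flags 
rather than the exact ones used in the benchmark, and I'm 
assuming mir-algorithm's `byDim` is available):

// sort_rows.d -- build with e.g.: ldc2 -O3 -release -mcpu=native sort_rows.d
import std.algorithm.iteration : each;
import std.algorithm.sorting : sort;
import std.stdio : writeln;
import mir.ndslice.slice : sliced;
import mir.ndslice.topology : byDim;

void main()
{
    // made-up 2x3 matrix
    auto m = [3.0, 1.0, 2.0,
              9.0, 4.0, 7.0].sliced(2, 3);

    // apply Phobos sort to every row via each
    m.byDim!0.each!sort;

    writeln(m); // [[1, 2, 3], [4, 7, 9]]
}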

I am actually intrigued by the timings for huge matrices. Why 
are Mir D and Standard D so much better than NumPy? Once we get 
to the 500x600 and 1000x1000 sizes, there is a huge drop in 
performance for NumPy and not so much for D. You mentioned the 
L3 cache, but the CPU architecture is the same across all the 
benchmarks, so what's going on?
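
Back-of-the-envelope, in case it helps frame my own question: a 
1000x1000 matrix of doubles is 1000 * 1000 * 8 bytes ~= 7.6 MiB, 
so two operands plus a result already exceed a typical 8 MiB L3, 
while a 500x600 matrix is only about 2.3 MiB. So maybe the 
interesting threshold is where the working set stops fitting in 
cache.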

