Standard D, Mir D benchmarks against Numpy (BLAS)
jmh530
john.michael.hall at gmail.com
Thu Mar 12 14:12:14 UTC 2020
On Thursday, 12 March 2020 at 12:59:41 UTC, Pavel Shkadzko wrote:
> [snip]
Looked into some of those that aren't faster than numpy:
For the dot product (what I would just call matrix
multiplication), both functions use gemm. There might be some
quirks that cause a difference in performance, but otherwise I
would expect them to be pretty close, and they are. It looks
like you are allocating the output matrix with the GC, which
could be a driver of the difference.
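To illustrate the allocation point on the numpy side: numpy lets you pass a
preallocated output buffer to `matmul`, so the result matrix need not be
allocated on every call. A minimal sketch (the shapes here are just for
illustration, not from the benchmark):

```python
import numpy as np

# Hypothetical operands, sized only for demonstration.
a = np.random.rand(500, 500)
b = np.random.rand(500, 500)

# Preallocate the output once; np.matmul writes into `out` instead of
# allocating a fresh result matrix each time it is called.
out = np.empty((500, 500))
np.matmul(a, b, out=out)

# Same result as the allocating form.
assert np.allclose(out, a @ b)
```

Reusing a buffer like this (or avoiding GC allocation on the D side) takes the
allocator out of the timed loop, which matters when the matrices are small
relative to the number of calls.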
For the L2 norm, you are calculating it entry-wise as a
Frobenius norm. That should match numpy's default. The only
difference I can tell between your version and numpy's is that
numpy re-uses its dot product function; otherwise it looks the
same.
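That equivalence is easy to check in numpy: the default matrix norm is the
Frobenius norm, which equals the square root of the sum of squared entries and
can also be computed, as the post notes numpy does, by feeding the flattened
matrix to the dot product. A sketch with made-up test data:

```python
import numpy as np

a = np.random.rand(200, 300)

# numpy's default norm for a 2-D array is the Frobenius norm.
fro = np.linalg.norm(a)

# Entry-wise definition: sqrt of the sum of squared entries.
entrywise = np.sqrt((a * a).sum())

# The same value via the dot product of the flattened matrix with itself.
via_dot = np.sqrt(np.dot(a.ravel(), a.ravel()))

assert np.allclose(fro, entrywise)
assert np.allclose(fro, via_dot)
```

So any timing gap between the two implementations comes down to how the
sum of squares is computed, not a difference in the quantity being measured.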