A look at Chapel, D, and Julia using kernel matrix calculations

data pulverizer data.pulverizer at gmail.com
Fri May 22 14:12:27 UTC 2020


On Friday, 22 May 2020 at 13:46:21 UTC, bachmeier wrote:
> On Friday, 22 May 2020 at 01:58:07 UTC, data pulverizer wrote:
>> https://github.com/dataPulverizer/KernelMatrixBenchmark
>
> Nice post. You said "adding SIMD support could easily put D 
> ahead or on par with Julia at the larger data size". It's not 
> clear precisely what you mean. Does this package help?
>
> https://code.dlang.org/packages/intel-intrinsics

Sorry it wasn't clear; I have amended the statement. I meant that 
adding SIMD support to my matrix object could put D's performance 
on the largest data set on par with, or ahead of, Julia's. Julia 
edges D out on that data set and has SIMD support, whereas my 
matrix object does not, so I'm betting that SIMD is the "x-factor" 
in Julia's performance at that scale. I've removed "easily" 
because it's too strong a word; this is more of an educated 
speculation, and probably something to look at next. I need to do 
some reading on SIMD. Thanks for the link, it's code that will 
get me started.
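To give a rough idea of what I mean, here is a sketch (not code 
from the benchmark; dotSimd and the surrounding details are just 
illustrative) of how the inner loop of a dot-product style kernel 
might be vectorised with the intel-intrinsics package:

import inteli.emmintrin;  // SSE2 intrinsics from intel-intrinsics

// Hypothetical example: a SIMD dot product over two double slices.
// The benchmark's kernel functions would vectorise their inner
// loops in much the same way.
double dotSimd(const(double)[] x, const(double)[] y)
{
    assert(x.length == y.length);
    __m128d acc = _mm_setzero_pd();          // two 64-bit accumulator lanes
    size_t i = 0;
    for (; i + 2 <= x.length; i += 2)
    {
        __m128d a = _mm_loadu_pd(x.ptr + i); // unaligned load of 2 doubles
        __m128d b = _mm_loadu_pd(y.ptr + i);
        acc = _mm_add_pd(acc, _mm_mul_pd(a, b));
    }
    double[2] lanes;
    _mm_storeu_pd(lanes.ptr, acc);           // spill the two lanes
    double sum = lanes[0] + lanes[1];
    for (; i < x.length; ++i)                // scalar tail for odd lengths
        sum += x[i] * y[i];
    return sum;
}

The same idea would apply to whichever loops in the matrix object 
end up dominating the kernel computations.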
