On the D Blog: A Look at Chapel, D, and Julia Using Kernel Matrix Calculations
data pulverizer
data.pulverizer at gmail.com
Wed Jun 3 17:37:59 UTC 2020
On Wednesday, 3 June 2020 at 16:15:41 UTC, jmh530 wrote:
> Also, I'm curious if you know how the Julia functions (like
> pow/log) are implemented, i.e. are they also calling C/Fortran
> functions or are they natively implemented in Julia?
It's not 100% clear, but Julia does appear to implement a fair few
mathematical functions in Julia itself, apart from things like abs, sqrt,
and pow, which come from C/LLVM:
* https://github.com/JuliaLang/julia/blob/c3f6542aa3f90485f4b5fbac0486c390df7284d5/src/runtime_intrinsics.c#L902
* https://github.com/JuliaLang/julia/blob/be9ab4873d42f52bc776aa29d6e301d55b314033/src/julia_internal.h#L1006
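As a quick sanity check of the sqrt claim, the generated LLVM IR can be
inspected from the Julia REPL. A minimal sketch (InteractiveUtils ships
with Julia and is auto-loaded in the REPL; the explicit import is only
needed in a script):

    using InteractiveUtils   # provides @code_llvm; auto-loaded in the REPL

    # sqrt on Float64 lowers straight to the LLVM intrinsic rather than to
    # Julia source, so the printed IR should contain a line along the lines of
    #     %1 = call double @llvm.sqrt.f64(double %0)
    @code_llvm sqrt(2.0)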
However, I can't see definitions for basic functions like log and exp in
those C files; they seem to live in the base/special directory, which
contains more than just special math functions:
* https://github.com/JuliaLang/julia/tree/v1.4.2/base/special
But in the math.jl file:
* https://github.com/JuliaLang/julia/blob/v1.4.2/base/math.jl
all the basic math functions are imported from .Base. That might mean some
basic definition/declaration is imported from elsewhere and then overridden
by the functions declared in special, so it is entirely possible that
things like sin, cos, and tan defined in the special/trig.jl file are being
used as the de facto Julia trig functions, but I'm not 100% sure on that
one. The long and short of it is that at least *some* basic math functions
come from C/LLVM.
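One way to check that speculation from the REPL is to ask which method a
call actually dispatches to; on the Julia versions I've tried, @which
points straight at files under base/special. A small sketch, not a
definitive answer:

    using InteractiveUtils   # provides @which; auto-loaded in the REPL

    # Both report methods defined in Base.Math, with source locations under
    # base/special/ (e.g. special/trig.jl and special/log.jl), suggesting
    # those files do supply the de-facto implementations:
    @which sin(1.0)
    @which log(2.0)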
In addition, Julia has fast math options (an LLVM fast-math
implementation), which I only just remembered when looking at their code:
* https://github.com/JuliaLang/julia/blob/479097cf8c5a7675689cb069568d6b1077df8ba7/base/fastmath.jl
These are obviously Clang/LLVM based.
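For anyone curious what @fastmath actually does, expanding the macro shows
it rewriting calls to the _fast variants defined in that fastmath.jl file.
A rough sketch (the exact expansion differs between Julia versions):

    # @fastmath rewrites operators and math calls to Base.FastMath
    # equivalents (e.g. log -> Base.FastMath.log_fast), which in turn use
    # LLVM's fast-math flags where available:
    @macroexpand @fastmath log(x) + x * x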
I think it's a good idea to get the std.math implementations more
competitive performance-wise, because people will naturally gravitate
towards the standard library for basic math functions.
> Typo (other than Mike's headline):
> "In our exercsie"
> "Chapel’s arrays are more difficult to get started with than
> Julia’s but are designed to be run on single-core, multicore,
> and computer clusters using the same or very similar code,
> which is a good unique selling point." (should have comma
> between Julia's and but)
>
> This is unclear:
> The chart below shows matrix implementation times minus ndslice
> times; negative means that ndslice is slower, indicating that
> the implementation used here does not negatively represent D’s
> performance.
Fair points on the typos and grammar.
Thanks