Library for Linear Algebra?
Fawzi Mohamed
fmohamed at mac.com
Sun Mar 22 05:13:49 PDT 2009
On 2009-03-22 09:45:32 +0100, Don <nospam at nospam.com> said:
> Trass3r wrote:
>> Don schrieb:
>>> I abandoned it largely because array operations got into the language;
>>> since then I've been working on getting the low-level math language
>>> stuff working.
>>> Don't worry, I haven't gone away!
>>
>> I see.
>>
>>>>
>>>>> http://www.dsource.org/projects/lyla
>>
>> Though array operations still only give us SIMD and no multithreading (?!).
>
> There's absolutely no way you'd want multithreading on a BLAS1
> operation. It's not until BLAS3 that you become computation-limited.
Not true: if your vector is large, you can still use several threads.
But you are right that using multiple threads at a low level is a
dangerous thing, because it might be better to use just one thread and
parallelize another operation at a higher level.
Thus you need to somehow know how many threads are really available for
that operation.
I am trying to tackle that problem in blip by having a global
scheduler, which I am currently rewriting.
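Roughly what I mean, as a purely illustrative sketch (not blip code;
the size threshold is an arbitrary guess and would have to be tuned):

import std.parallelism : parallel, totalCPUs;
import std.range : iota;

// BLAS1-style y[] += x[] * alpha that only goes parallel when the
// vectors are large enough to amortize the threading overhead.
void axpy(double alpha, const(double)[] x, double[] y)
{
    assert(x.length == y.length);
    enum size_t threshold = 100_000;   // arbitrary cut-off, tune per machine
    if (x.length < threshold)
    {
        y[] += x[] * alpha;            // array operation: one thread + SIMD
        return;
    }
    immutable chunk = (x.length + totalCPUs - 1) / totalCPUs;
    foreach (start; parallel(iota(size_t(0), x.length, chunk)))
    {
        immutable end = start + chunk < x.length ? start + chunk : x.length;
        y[start .. end] += x[start .. end] * alpha;
    }
}

The hard part is not the loop itself, but deciding globally whether
those worker threads are actually free to use.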
>> I think the best approach is lyla's, taking an existing, optimized C
>> BLAS library and writing some kind of wrapper using operator
>> overloading etc. to make programming easier and more intuitive.
blip.narray.NArray does that if compiled with -version=blas, but I
think that for large vectors/matrices you can do better (precisely by
using multithreading).
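Something like this (a minimal sketch in current D syntax; cblas_daxpy
is the standard CBLAS routine, while the Vector type and its operator
are hypothetical, not the actual blip or lyla API):

// binding to one routine of an existing optimized C BLAS
extern(C) void cblas_daxpy(int n, double alpha,
                           const(double)* x, int incx,
                           double* y, int incy);

// hypothetical thin wrapper type
struct Vector
{
    double[] data;

    // y += alpha * x, delegated to the optimized BLAS routine
    void axpy(double alpha, Vector x)
    {
        assert(data.length == x.data.length);
        cblas_daxpy(cast(int) data.length, alpha, x.data.ptr, 1, data.ptr, 1);
    }

    // operator overloading for natural syntax: y += x
    void opOpAssign(string op : "+")(Vector x)
    {
        axpy(1.0, x);
    }
}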
> In my opinion, we actually need matrices in the standard library, with
> a very small number of primitive operations built-in (much like Fortran
> does). Outside those, I agree, wrappers to an existing library should
> be used.
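To illustrate the split Don describes with a purely hypothetical sketch
(no such type exists in Phobos): a matrix type whose element-wise
primitives map onto the built-in array operations, with everything
heavier forwarded to a wrapped external library.

struct Matrix
{
    double[] data;      // row-major storage
    size_t rows, cols;

    // primitive: element-wise addition via a built-in array operation
    Matrix opBinary(string op : "+")(Matrix rhs)
    {
        assert(rows == rhs.rows && cols == rhs.cols);
        auto result = Matrix(data.dup, rows, cols);
        result.data[] += rhs.data[];
        return result;
    }

    // primitive: scaling by a scalar
    Matrix opBinary(string op : "*")(double alpha)
    {
        auto result = Matrix(data.dup, rows, cols);
        result.data[] *= alpha;
        return result;
    }

    // matrix multiplication, factorizations, etc. would not live here:
    // they would be thin wrappers over an existing optimized library.
}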