GSoC 2012 Proposal: Continued Work on a D Linear Algebra library (SciD - std.linalg)

Michael Chen sth4nth at gmail.com
Mon Apr 9 17:36:09 PDT 2012


Hi, Cristi,
From the change log of D 2.059, I saw that uniform function call syntax
(UFCS) has been implemented. I hope you can leverage this feature to make
non-member function calls look nicer. Another suggestion: please use
shorter function names, for example M.t() instead of M.transpose(), so
that long expressions are easier to read.
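
For example, with UFCS a free function can be called with member syntax
(a hypothetical sketch; transpose here is a non-member function, not part
of any existing SciD API):

    import std.stdio;

    // A free (non-member) function taking the matrix as its first argument.
    double[2][2] transpose( double[2][2] m )
    {
        double[2][2] r;
        foreach( i; 0 .. 2 )
            foreach( j; 0 .. 2 )
                r[ j ][ i ] = m[ i ][ j ];
        return r;
    }

    void main()
    {
        double[2][2] m = [ [ 1.0, 2.0 ], [ 3.0, 4.0 ] ];
        auto t = m.transpose();   // UFCS: rewritten as transpose( m )
        writeln( t );
    }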

Best,
Mo

On Mon, Apr 9, 2012 at 6:52 AM, Cristi Cobzarenco
<cristi.cobzarenco at gmail.com> wrote:
> On 8 April 2012 18:59, Caligo <iteronvexor at gmail.com> wrote:
>>
>> On Tue, Apr 3, 2012 at 6:20 AM, Cristi Cobzarenco
>> <cristi.cobzarenco at gmail.com> wrote:
>> >
>> > The point of these is to have light-weight element wise operation
>> > support.
>> > It's true that in theory the built-in arrays do this. However, this
>> > library
>> > is built on top of BLAS/LAPACK, which means operations on large matrices
>> > will
>> > be faster than on D arrays.
>> >
>>
>> I can't agree with building it on top of LAPACK or any other BLAS
>> implementation, but perhaps I shouldn't complain because I'm not the
>> one who's going to be implementing it.  I like the approach Eigen has
>> taken where it offers its own BLAS implementation and, iirc, other
>> BLAS libraries can be used as optional back-ends.
>
>
> Yes, I agree with you. I already built naive implementations of the BLAS &
> LAPACK functions as part of last year's project, so using external libraries
> is optional (building with version=nodeps ensures absolutely no dependencies
> are needed). The argument still stands, though: if you _do_ use external
> BLAS libraries, the library will _still_ be faster than built-in arrays.
>
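> A hypothetical sketch of how the version=nodeps switch could look inside
> the library (names are illustrative, not SciD's actual API):
>
>   version( nodeps ) {
>       // naive pure-D scal: x[] *= a, no external libraries needed
>       void scal( double a, double[] x ) {
>           foreach( ref e; x ) e *= a;
>       }
>   } else {
>       // forward to the external BLAS routine instead
>       extern( C ) void dscal_( const int* n, const double* a,
>                                double* x, const int* incx );
>       void scal( double a, double[] x ) {
>           int n = cast(int) x.length, inc = 1;
>           dscal_( &n, &a, x.ptr, &inc );
>       }
>   }
>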
>>
>>
>> > Also, as far as I know, D doesn't support
>> > allocating dynamic 2-D arrays (as in not arrays of arrays), not to
>> > mention
>> > 2-D slicing (keeping track of leading dimension).
>> >
>>
>> I fail to see why there is any need for 2D arrays.  We need to make
>> sure multidimensional arrays (matrices) have data in very good
>> arrangements.  This is called tiling and it requires 1D arrays:  2D
>> arrays are stored as 1D arrays together with an indexing mechanism.
>
>
> That is precisely what I meant. We need wrappers around D arrays, because
> they, by themselves, do not support 2-D indexing. By providing a wrapper we
> allow the nice syntax of matrix[ a, b ]. This kind of wrapping is already
> done in SciD by CowMatrix. I meant we shouldn't use D built-in arrays
> directly, not that we shouldn't use them at all. Also, as mentioned before,
> we can't use new for allocation, because we want the library to be
> GC-independent.
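> A minimal sketch of such a wrapper (hypothetical; not CowMatrix's actual
> layout) - a 1-D array plus a leading dimension, stored column-major as
> BLAS expects:
>
>   struct MatrixView {
>       double[] data;           // 1-D storage
>       size_t   rows;           // number of rows
>       size_t   leading;        // leading dimension, for 2-D slicing
>
>       // 2-D indexing: m[ i, j ]
>       ref double opIndex( size_t i, size_t j ) {
>           return data[ j * leading + i ];  // column-major layout
>       }
>   }
>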
>>
>>
>> > Also I'm not sure how a case like this will be compiled, it may or may
>> > not
>> > allocate a temporary:
>> >
>> > a[] = b[] * c[] + d[] * 2.0;
>> >
>> > The expression templates in SciD mean there will be no temporary
>> > allocation
>> > in this call.
>> >
>>
>> Why are expression templates used?
>
>
> As H. S. Teoh rightfully pointed out, it is important not to allocate
> temporaries in matrix operations. You want to evaluate A = B * 3.0 + D * 2.0
> (where * is element-wise multiplication) as (in BLAS terms):
>   copy( B, A );
>   scal( 3.0, A );
>   axpy( D, 2.0, A );
>
> Or with naive expression template evaluation:
>   for( i = 0 ; i < n ; ++ i ) {
>      A[ i ] = B[ i ] * 3.0 + D[ i ] * 2.0;
>   }
>
> The D compiler would instead evaluate this as:
>   // allocate temporaries
>   allocate( tmp1, A.length );
>   allocate( tmp2, A.length );
>   allocate( tmp3, A.length );
>
>   // compute tmp1 = B * 3.0
>   copy( B, tmp1 );
>   scal( 3.0, tmp1 );
>
>   // compute tmp2 = D * 2.0;
>   copy( D, tmp2 );
>   scal( 2.0, tmp2 );
>
>   // compute tmp3 = tmp1 + tmp2;
>   copy( tmp1, tmp3 );
>   axpy( tmp2, 1.0, tmp3 );
>
>   // copy tmp3 into A
>   copy( tmp3, A );
>
> Plenty of useless computation. Note that this is not a fault of the
> compiler: it has no way of knowing which temporaries can be optimised away
> for user-defined types (i.e. our matrices).
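>
> A hypothetical minimal expression template in D, showing how
> B * 3.0 + D * 2.0 can be evaluated in a single loop with no temporaries
> (a sketch, not SciD's actual implementation):
>
>   struct Scaled {
>       double[] v; double s;
>       double opIndex( size_t i ) const { return v[ i ] * s; }
>   }
>
>   struct Sum( L, R ) {
>       L l; R r;
>       double opIndex( size_t i ) const { return l[ i ] + r[ i ]; }
>   }
>
>   // Operations build lazy nodes; nothing is computed yet.
>   auto expr = Sum!( Scaled, Scaled )( Scaled( B, 3.0 ), Scaled( D, 2.0 ) );
>
>   // Evaluation happens element-wise on assignment: one loop, no copies.
>   foreach( i; 0 .. A.length )
>       A[ i ] = expr[ i ];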
>
>> Are you pretty much rewriting Eigen in D?
>
> No. It is just the interface - and even that only to a certain extent - that
> will mimic Eigen. The core of the library will be very similar to what I
> implemented for SciD last year, which, you will find, is very D-like and not
> at all like Eigen.
>
>


More information about the Digitalmars-d mailing list