Matrix class
Bill Baxter
dnewsgroup at billbaxter.com
Wed May 9 13:13:11 PDT 2007
Silverling wrote:
>> Is it an arbitrary MxN sized matrix?
> Yes. It is resizeable after creation for purposes of multiplication and transpose.
>> Will it use vendor-provided accelerated BLAS libraries where available?
> Never heard of BLAS, but I don't see why one couldn't change the code to use such libraries. Anyway, I'm keeping my module's dependencies down (currently it depends only on std.string).
BLAS is good if you're going to be working with big matrices. But for a
4x4 view mat, there's not much point.
>> Is it parameterized on element type?
> I'm unsure of what you mean, but as I said before it is a template class. The matrix's values could be of any type, even other classes.
Yeh, that's what I mean. "parameterized on type" == The type of the
elements is a template parameter.
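I.e. roughly this (just a sketch, not your actual class):

    class Matrix(T)    // T is the element type: float, double, even another class
    {
        T[] data;
        size_t rows, cols;
    }

    alias Matrix!(float) Matrixf;   // instantiated with float elements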
>> Storage format?
> Currently Type[row][col].
Got it. You may be better off with Type[row*col]. Type[row][col] is an
array of col pointers to 1D arrays of rows, rather than densely packed
memory.
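Something along these lines keeps the elements in one contiguous block
(rough sketch, not your actual code; assume rows/cols are fixed at
construction):

    class Matrix(T)
    {
        T[] data;              // rows*cols elements, densely packed
        size_t rows, cols;

        this(size_t r, size_t c) { rows = r; cols = c; data.length = r * c; }

        // map (row, col) onto the flat array
        T opIndex(size_t r, size_t c)               { return data[r * cols + c]; }
        void opIndexAssign(T v, size_t r, size_t c) { data[r * cols + c] = v; }
    }

Changing the storage order later is then just a matter of changing the
indexing formula.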
>
>> Can it handle different storage schemes (e.g. sparse formats like CSC, CSR, banded, symmetric).
> Not yet, but it can be altered to support them, probably not by me. I'll use it primarily to calculate view frustums (which usually don't have a lot of '0's, justifying the current storage scheme)
For 3D linear algebra (or 4D homogeneous linalg) there's helix.
http://www.dsource.org/projects/helix It's pretty decent -- even
includes a polar decomposition routine. It didn't have 2D classes,
which I missed, so I added those to my own copy.
There've been a few other folks who've implemented 3D linalg stuff in D as
well. I don't have any links for those, though. Anyway, for 3D stuff I
think using a generic MxN matrix class is overkill.
> and vectorial calculus.
> Implementing such schemes would only need to override the opIndex and opIndexAssign.
> The current implementation still accesses the matrix's data directly,
> but that can be easily changed (which I will, due to your idea).
But if you have different storage schemes, you want to be able to do
things like sparse_mat * dense_vec, or dense_mat +
upper_triangular_mat. It's a multiple dispatch problem, so it requires
some thinking. The approach I've seen is to
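To make the combinatorial part concrete (a hand-wavy sketch; Vector,
SparseMatrix and DenseMatrix are made-up names and the bodies are stubs):

    class Vector(T)       { T[] data; }
    class SparseMatrix(T) { size_t[] idx; T[] vals; /* CSR-ish storage */ }

    class DenseMatrix(T)
    {
        T[] data; size_t rows, cols;

        // one overload per right-hand-side storage scheme you support;
        // each new scheme adds another batch of these, hence the dispatch headache
        DenseMatrix!(T) opMul(DenseMatrix!(T) rhs)  { /* dense * dense */   return null; }
        DenseMatrix!(T) opMul(SparseMatrix!(T) rhs) { /* dense * sparse */  return null; }
        Vector!(T)      opMul(Vector!(T) rhs)       { /* matrix * vector */ return null; }
    }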
> I've started this module because I wanted to learn operator overloading in D. I'm having a small issue overloading opMul. I templatized it, but I need a specialization to multiply by a matrix. DMD is not accepting
> void opMul(T:Matrix)(T mult)
My understanding is that if you use specialization in D it disables IFTI
(the thing that allows you to call a template function without specifying
parameters). So to use that you'd need to call it explicitly as
A.opMul!(typeof(B))(B). Not so nice. The workaround is generally to
use some static ifs or static asserts inside the template instead.
    void opMul(T)(T mult) {
        static assert(is(T : Matrix),
                      "Mult only works for things derived from Matrix");
    }
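With that version IFTI still kicks in, so the call site stays clean
(assuming A and B are instances of your Matrix class):

    A.opMul(B);   // T deduced automatically, no explicit !(...) needed
    A.opMul(5);   // trips the static assert at compile time

and I'd expect A * B to go through the same opMul.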
--bb