The Matrix to end all Matrix classes (Let's dream!)

TomD t_demmer at nospam.web.de
Tue Nov 20 23:51:05 PST 2007


Bill Baxter Wrote:
[...]
> > The third case could probably in most cases be left out, 
> 
> I implemented the third case in Murray (nee MultiArray)
>     www.dsource.org/projects/multiarray
> 
> Mainly because I originally wanted an easy way port my NumPy code to D.
> But I have to agree that after spending a lot of time on it, I would go 
> with #2 if I were starting over.  Fix the number of dimensions at 
> compile time.
> 
> > but the first 
> > two make an interesting separation. Knowing the size at compile time is 
> > mostly an advantage for smaller matrices (eg 4x4, as commonly used for 
> > geometric translations). Larger matrices (eg 1000x1000) probably gain 
> > much less from compile time known sizes. There is also the matter of 
> > template bloat that means that compile time quantities are best left to 
> > the smaller numbers.
> 
> For #1, I've never had a need for anything besides 2x2,3x3,4x4.  Maybe 
> occasionally things like 2x3 to represent affine transformations, but 
> then interpretation starts getting in the way.  You can't handle it like 
> a generic 2x3 matrix.  So to me for the compile time size case, you can 
> pretty much get by with just a fixed set of classes.  Of course if you 
> can generate those efficiently from a big template, then that's great. 
> But there are usually a lot of special cases.  If you look at a 
> geometry-oriented linalg lib like Helix you can find many little 
> differences in the API between Matrix22 and Matrix33.  They can be taken 
> care of with static ifs, but the question is whether it's worth the
> loss of readability in the end.
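
Just to illustrate Bill's point, here is a rough sketch (my own names, not
Helix's actual API) of one fixed-size template where the size-specific
parts are selected with static if:

// Sketch only: a single fixed-size matrix template whose size-specific
// API is chosen with static if.
struct Matrix(T, size_t N)
{
    T[N][N] m;

    T det() const
    {
        static if (N == 2)
            return m[0][0]*m[1][1] - m[0][1]*m[1][0];
        else static if (N == 3)
            return m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
                 - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
                 + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
        else
            static assert(0, "det not implemented for this size");
    }

    // Axis-angle rotation only exists on the 3x3 version.
    static if (N == 3)
    {
        static Matrix rotation(T[3] axis, T angle)
        {
            Matrix r;
            // ... build the rotation from axis and angle (omitted) ...
            return r;
        }
    }
}

alias Matrix!(float, 2) Matrix22;
alias Matrix!(float, 3) Matrix33;

Whether that reads better than a few hand-written classes is exactly the
readability question Bill raises.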

In 2D finite elements other small fixed sizes are common too, e.g. 6x6 or
9x9 depending on the element type, and also some non-square ones.
[...]
> >> 4. For storage we need packed, triangular, symmetric, banded, and sparse
> >> matrix support. (Possibly even 'calculated' matrix support, for
> >> user-defined matrix types which generate entries on-demand).
> > 
> > For generality, at least sparse (and possibly calculated) are useful for 
> > vectors and higher dimensional arrays too.
> >
> >> 5. Can be generalized to tensors.
> 
> Numbers 4 and 5 are a bit conflicting.  There is no "triangular storage" 
> for anything other than 2-D arrays.  Same goes for the commonly used 
> compressed matrix formats: they are 2D-specific.
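
To make that concrete, a rough sketch of compressed column storage (layout
and names are mine, not Murray's). Its two index arrays encode exactly one
row dimension and one column dimension, which is why the format does not
generalize beyond 2-D:

// Sketch of compressed column storage (CCS/CSC). The layout itself is
// tied to exactly two dimensions: rows via rowIndex, columns via colPtr.
struct CCSMatrix(T)
{
    size_t rows, cols;
    T[]      values;    // nonzero values, stored column by column
    size_t[] rowIndex;  // row of each nonzero, same length as values
    size_t[] colPtr;    // colPtr[j] .. colPtr[j+1] delimit column j; length cols+1

    T opIndex(size_t i, size_t j) const
    {
        foreach (k; colPtr[j] .. colPtr[j + 1])
            if (rowIndex[k] == i)
                return values[k];
        return cast(T) 0;   // structural zero
    }
}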

Well, in 3D that becomes something like a "pyramidal" structure. Such
structures are quite common in 3D partial differential equations, e.g.
Laplace solvers, or in fluid mechanics.

> 
> Convenient usage for linear algebra also suggests that opMul should do 
> linear algebraic multiplication, but that doesn't make sense for 
> tensors.  Changing opMul to do different things depending on the 
> dimension seems inconsistent.

For tensors we'd need something like ., :, and x, i.e. the dot product,
double dot product, and cross product. It may seem inconsistent, but
that's math :-)
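
For small fixed-size tensors those could simply be named functions instead
of piling everything onto opMul. A minimal sketch (my own names, nothing
standard):

// Dot product of two 3-vectors.
T dot(T)(T[3] a, T[3] b)
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Double dot product A : B of two rank-2 tensors,
// summing over both indices (one common convention).
T ddot(T)(T[3][3] a, T[3][3] b)
{
    T s = 0;
    foreach (i; 0 .. 3)
        foreach (j; 0 .. 3)
            s += a[i][j] * b[i][j];
    return s;
}

// Cross product of two 3-vectors.
T[3] cross(T)(T[3] a, T[3] b)
{
    T[3] c = [a[1]*b[2] - a[2]*b[1],
              a[2]*b[0] - a[0]*b[2],
              a[0]*b[1] - a[1]*b[0]];
    return c;
}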

Ciao
Tom

> 
> My feeling is that the generic N-D array concept needs to be separated 
> from the linear algebra concepts.
> 
> However, I think all these needs can be accommodated by layering things 
> appropriately.
> 
> first Storage
> then  N-D array using some specific storage
> then  Matrix using 2-D array, and Vector using 1-D array.
> 
> Implementations of the storage concept can be dimension-specific (like 
> CCS sparse), or generic (like strided memory).
> 
> N-D array will have only operators that make sense for N-D arrays 
> without reference to specific interpretations.  So opMul would be 
> element-wise multiplication if anything.
> 
> Matrix and Vector can implement the linear algebraic rules.
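
Something like this rough sketch is how I read that layering (all names
hypothetical, bodies of the products omitted):

// Storage policy: here plain strided memory.
struct StridedStorage(T)
{
    T[] data;
    T    get(size_t offset)      { return data[offset]; }
    void set(size_t offset, T v) { data[offset] = v; }
}

// Generic N-D array on top of a storage policy; no linear algebra here.
struct NDArray(T, size_t N, Storage = StridedStorage!T)
{
    Storage   store;
    size_t[N] shape;
    size_t[N] strides;

    T opIndex(I...)(I idx) if (I.length == N)
    {
        return store.get(offsetOf(idx));
    }

    void opIndexAssign(I...)(T v, I idx) if (I.length == N)
    {
        store.set(offsetOf(idx), v);
    }

    private size_t offsetOf(I...)(I idx)
    {
        size_t off = 0;
        foreach (d, i; idx)      // compile-time foreach over the index tuple
            off += i * strides[d];
        return off;
    }

    // The only product that makes sense here is element-wise.
    NDArray opMul(NDArray rhs) { return this; }
}

// Matrix is a thin 2-D wrapper adding the linear-algebra interpretation.
struct Matrix(T)
{
    NDArray!(T, 2) a;

    // The linear-algebraic product lives here, not on the generic array.
    Matrix opMul(Matrix rhs) { return this; }
}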
> 
> >> Yet I'm not convinced that it's impossible. What are the other
> >> requirements?
> > 
> > Naturally, different element types (as well as UDTs). 
> 
> I think the UDT part in particular is difficult to do well without 
> reference return values.  Even now I think with a complex matrix you 
> have the problem that you can't set the real part of an element using 
> opIndexAssign:
>    mat[3,2].re = 5.  // oops you just set the real part of a copy!
> 
> The same goes for incrementing an element of any type of matrix
> (+= 1.0).  But that's old news.
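
A small sketch of the problem and the usual whole-element workaround,
assuming std.complex for the element type (the names here are mine):

import std.complex;

// opIndex returns the element by value, so writing through the result
// would only change a temporary copy.
struct ComplexMatrix
{
    Complex!double[] data;
    size_t cols;

    Complex!double opIndex(size_t i, size_t j)
    {
        return data[i*cols + j];
    }

    // Workaround without reference returns: assign the whole element back.
    void opIndexAssign(Complex!double v, size_t i, size_t j)
    {
        data[i*cols + j] = v;
    }
}

void example(ref ComplexMatrix mat)
{
    // mat[3, 2].re = 5.0;   // the problem: at best this changes only a copy
    mat[3, 2] = complex(5.0, mat[3, 2].im);   // works, but clumsy
}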
> 
> 
> I think you can achieve something decent using current D, but the syntax 
> will not be ideal.   And doing all the above will be a very large effort.
> 
> I started collecting a list of possible libraries to draw inspiration 
> from over at the Murray forum: 
> http://www.dsource.org/forums/viewtopic.php?t=3307&sid=b05ae6bad8b91d51286b0096cd4ef9d2
> 
> --bb



