Vectors and matrices

Fawzi Mohamed fmohamed at mac.com
Thu Apr 16 02:24:25 PDT 2009


On 2009-04-16 09:12:33 +0200, Lars Kyllingstad 
<public at kyllingen.NOSPAMnet> said:

> Robert Jacques wrote:
>> On Wed, 15 Apr 2009 10:07:38 -0400, Lars Kyllingstad 
>> <public at kyllingen.nospamnet> wrote:
>> 
>>> I am writing a D library based on some of the stuff in SLATEC, and I've 
>>> come to a point where I need to decide on a way to manipulate vectors 
>>> and matrices. To that end, I have some ideas and questions I would like 
>>> comments on from the community.
>>> 
>>> Ideally, I want to restrict the user as little as possible, so I'm 
>>> writing heavily templated code in which one can use both 
>>> library-defined vector/matrix types and built-in arrays (both static 
>>> and dynamic). My reasons for this are:
>>> 
>>>     a) Different problems may benefit from different types. Sparse 
>>> matrices, dense matrices, triangular matrices, etc. can all be 
>>> represented differently based on efficiency and/or memory requirements.

Yes, the problem is that depending on the representation, some operations 
might not be efficient, or might not even be worth implementing, 
especially for sparse matrices.
This field is quite difficult. I think Bill's idea of having your own 
wrapper is probably the easiest; a second possibility would be to have 
template arguments for the operations you use.
You could even have a template parameter that acts just as a way to 
"select" the correct function to call at compile time. Doable, but I 
think it would need some thought to get right.
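
For example, something like this minimal sketch (hypothetical helper 
name, assuming D2): one accessor that picks the right syntax at compile 
time, so the same algorithm code works for built-in arrays and for 
library types that define opIndex(i, j):

auto at(M)(ref M m, size_t i, size_t j)
{
    static if (is(typeof(m[i][j])))   // built-in array of arrays: m[i][j]
        return m[i][j];
    else                              // library type with opIndex(i, j): m[i, j]
        return m[i, j];
}

That keeps the static ifs in one place instead of scattered through the 
algorithms.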

>>>     b) I hope that, at some point, my library will be of such a quality 
>>> that it may be useful to others, and in that event I will release it. 
>>> Interoperability with other libraries is therefore a goal for me, and a 
>>> part of this is to let the user choose other vector/matrix types than 
>>> the ones provided by me.

Good. Apart from Bill's sparse matrix lib and Don's Glade lib (which it 
seems you already found), maybe you can also take a look at my blip lib:
	http://dsource.org/projects/blip
It gives dense multidimensional arrays that can call LAPACK through 
nice-to-use wrappers: eigv for eigenvalues, dot for dot products, easy 
subslicing, and it basically never copies unless really needed.
It needs the svn version of Tango, though.

>>>     c) Often, for reasons of both efficiency and simplicity, it is 
>>> desirable to use arrays directly.
>>> 
>>> My first question goes to those among you who do a lot of linear 
>>> algebra in D: Do you think supporting both library  types and arrays is 
>>> worth the trouble? Or should I just go with one and be done with it?
>> 
>> I'd say it's worth the trouble.

If, for example, you need just matrix-vector multiplication, I would try 
to abstract that away.
If your needs are more complex and you manage to abstract them away 
nicely, then go for it, but beware of trying to abstract everything away 
before you begin: the problem is common and well known, and there are 
several approaches that are not compatible with each other... Start with 
what you actually use, and abstract that away.
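
To illustrate what I mean by abstracting just that: a minimal sketch 
(hypothetical names, Phobos sqrt assumed) of an algorithm step that asks 
nothing of the matrix representation except a matrix-vector product, 
passed in as an alias template parameter:

import std.math : sqrt;

// One normalized power-iteration step; matVec is the only operation
// the matrix representation has to provide.
double[] powerStep(alias matVec, M)(M a, double[] x)
{
    double[] y = matVec(a, x);    // the single abstracted operation
    double norm = 0;
    foreach (v; y)
        norm += v * v;
    norm = sqrt(norm);
    foreach (ref v; y)
        v /= norm;                // normalize the new iterate
    return y;
}

A dense double[][], a sparse type, or a LAPACK-backed array then each 
just supply their own matVec.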

>> 
>>> A user-defined matrix type would have opIndex(i,j) defined, and to 
>>> retrieve elements one would write m[i,j]. However, the syntax for 
>>> two-dimensional arrays is m[i][j], and this means I have to put a lot 
>>> of static ifs around my code, in order to check the type every time I 
>>> access a matrix. This leads me to my second question, which is a 
>>> suggestion for a language change, so I expect a lot of resistance. :)
>> 
>> I consider m[i][j] to be a jagged array, which is logically different 
>> from matrix types. (i.e. it's not square, etc.)
>> 
>>> Would it be problematic to define m[i,j,...] to be equivalent to 
>>> m[i][j][...] for built-in arrays, so that arrays and user-defined types 
>>> could be used interchangeably?
>> 
>> Actually, I'd prefer actual dense arrays over syntactic sugar for 
>> jagged arrays.
> 
> Me too, but that's a bigger language change. At least there ought to be 
> a matrix type in the standard library.

This is a prime example of something that sparse matrices might not 
implement efficiently. Looping over the indexes, maybe row-wise or 
column-wise, is more likely to always be efficient; random indexing... 
mhh, not so sure.
So think about your needs and start from there, but yes, try to keep the 
algorithms that need just matrix-vector multiplication abstract.
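
To make that concrete, a minimal sketch (hypothetical type, assuming a 
compressed sparse row layout) of why matrix-vector multiplication and 
row-wise iteration are cheap for a sparse matrix while a random m[i, j] 
lookup would need a search inside each row:

struct SparseMatrix
{
    size_t rows, cols;
    size_t[] rowStart;   // rows + 1 entries: where each row begins in col/val
    size_t[] col;        // column index of each stored nonzero
    double[] val;        // value of each stored nonzero

    // Cheap: visits each stored nonzero exactly once.
    double[] mulVec(double[] x)
    {
        auto y = new double[](rows);
        y[] = 0;
        foreach (i; 0 .. rows)
            foreach (k; rowStart[i] .. rowStart[i + 1])
                y[i] += val[k] * x[col[k]];
        return y;
    }
}

A single element lookup, on the other hand, has to search col within the 
row's range, so I would keep indexing out of the abstracted interface.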

>>> (And, importantly, is there anyone but me who thinks they would benefit 
>>> from this?)

I would appreciate it :)

ciao
Fawzi
>>> 
>>> 
>>> -Lars
>> 
>> Good numerics and linear algebra is always appreciated. To that end 
>> there's a nice performance speedup in storing machine/byte strides 
>> instead of logical/element strides. (See: 
>> http://dobbscodetalk.com/index.php?option=com_content&task=view&id=502&Itemid=52 
>> Also, my lab maintains a vector/numerics/robotics package that might be 
>> of interest https://trac.lcsr.jhu.edu/cisst)
> 
> Thanks for the tip!
> 
> -Lars




