3D Math Data structures/SIMD

Bill Baxter dnewsgroup at billbaxter.com
Fri Dec 21 22:34:51 PST 2007


Janice Caron wrote:
> On 12/21/07, Lukas Pinkowski <Lukas.Pinkowski at web.de> wrote:
>>> What's wrong with Vector!(3,float), Matrix!(4,4,real),
>>> Matrix!(3,4,cdouble), etc.?
>> They are not builtin types.
> 
> And this is a problem because...?
> 
> 
>> Multiplication should be component-wise
>> multiplication, exactly like addition is component-wise addition
> 
> Now that's just nonsense! Matrix multiplication should be matrix
> multiplication, and nothing else. For example, multiplying a (square)
> matrix by the identity matrix (of the same size) should leave it
> unchanged, not zero every element not on the main diagonal!
> 
> Likewise, vector multiplication must mean vector multiplication, and
> nothing else. (Arguably, there are two forms of vector multiplication
> - dot product and cross product - however, cross product only has
> meaning in three-dimensions, whereas dot product has meaning in any
> number of dimensions, so dot product is more general).

As pointed out, there is also the outer product that creates an NxN 
matrix.  Also defined for any N.  And analogues of the cross product 
exist beyond 3D -- strictly, a binary cross product only in dimensions 
3 and 7 -- something I heard at a Geometric Algebra talk too long ago.
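For concreteness, the outer product mentioned above takes two length-N vectors to an NxN matrix.  A pure-Python sketch (illustrative only -- not any proposed D library API):

```python
# Outer product of two length-N vectors, yielding an N x N matrix
# with m[i][j] == a[i] * b[j].  Defined for any N.
def outer(a, b):
    return [[ai * bj for bj in b] for ai in a]

m = outer([1, 2, 3], [4, 5, 6])
# m == [[4, 5, 6], [8, 10, 12], [12, 15, 18]]
```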

> Componentwise multiplication... Pah! That's just not mathematical.
> (Imagine doing that for complex numbers instead of proper complex
> multiplication!) No thanks! I'd want my multiplications to actually
> give the right answer!

The analogy is bad for a number of reasons.

1) There's little practical value in component-wise multiplication of 
complex numbers, whereas component-wise multiplication of vectors is 
very often useful in practice.
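A typical practical use is in graphics: modulating an RGB surface colour by an RGB light colour, channel by channel.  Sketched in Python (names here are made up for illustration):

```python
# Componentwise ("Hadamard") product of two vectors -- common in
# graphics, where colours are multiplied channel by channel.
def cmul(a, b):
    return [x * y for x, y in zip(a, b)]

surface = [1.0, 0.5, 0.2]      # RGB albedo
light   = [0.8, 0.8, 1.0]      # RGB light colour
shaded  = cmul(surface, light) # each channel scaled independently
```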

2) In math the product of two complex numbers a and b is written just 
like the product of two scalars: ab.  Writing two vectors next to each 
other is a linear algebra "syntax error".  It's an invalid operation 
unless you transpose one of the vectors.  So if anything, in a 
programming language * on vectors should just not be allowed.  But 
making it do nothing is not very useful.
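To spell out the transpose point: a' * b (row times column) is the only vector-times-vector product yielding a scalar, i.e. the dot product.  Two column vectors side by side have no defined product at all.  A sketch:

```python
# a' * b -- row vector times column vector -- is the dot product,
# the one vector*vector combination that yields a scalar.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Note: "a * b" with both as column vectors is a dimension mismatch
# in linear algebra; there is nothing sensible for it to compute.
dot([1, 2, 3], [4, 5, 6])  # 1*4 + 2*5 + 3*6 == 32
```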

3) In numerical applications it's useful to define all kinds of 
non-linear algebra operators too.  For instance shading languages 
usually define a < b to be a componentwise comparison yielding a vector 
of booleans.  In terms of linear algebra this is meaningless but it's 
darn useful, and kind of goes along with the idea that + and - work 
component-wise.  And if you allow that, then why not just be consistent 
all the way and say that every binary operator defined on scalars works 
componentwise on vectors?  Then use things like dot(), cross(), and 
outer() for the various specialized vector products.
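The shading-language comparison mentioned above, sketched in Python (shading languages like HLSL/GLSL build this in; here it's just an illustrative function):

```python
# Componentwise comparison, shading-language style: "less than"
# yields a vector of booleans, not a single truth value.
def less(a, b):
    return [x < y for x, y in zip(a, b)]

less([1.0, 5.0, 3.0], [2.0, 4.0, 3.0])  # [True, False, False]
```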

4) Componentwise multiplication of vectors is not really "nonsense" even 
in terms of linear algebra.  You just have to think of a*b as being 
defined to mean diag(a)*b.  That is, one of the operands is first 
implicitly converted to a diagonal matrix.

--bb
