SIMD benchmark

Timon Gehr timon.gehr at gmx.ch
Tue Jan 17 17:50:00 PST 2012


On 01/18/2012 02:32 AM, F i L wrote:
> Timon Gehr wrote:
>> Are they really a general solution? How do you use vector ops to
>> implement an efficient matrix multiply, for instance?
>
> struct Matrix4
> {
>     float4 x, y, z, w;
>
>     auto transform(Matrix4 mat)
>     {
>         Matrix4 rmat;
>
>         float4 cx = {mat.x.x, mat.y.x, mat.z.x, mat.w.x};
>         float4 cy = {mat.x.y, mat.y.y, mat.z.y, mat.w.y};
>         float4 cz = {mat.x.z, mat.y.z, mat.z.z, mat.w.z};
>         float4 cw = {mat.x.w, mat.y.w, mat.z.w, mat.w.w};
>
>         float4 rx = {mat.x.x, mat.x.y, mat.x.z, mat.x.w};
>         float4 ry = {mat.y.x, mat.y.y, mat.y.z, mat.y.w};
>         float4 rz = {mat.z.x, mat.z.y, mat.z.z, mat.z.w};
>         float4 rw = {mat.w.x, mat.w.y, mat.w.z, mat.w.w};
>
>         rmat.x = cx * rx; // simd
>         rmat.y = cy * ry; // simd
>         rmat.z = cz * rz; // simd
>         rmat.w = cw * rw; // simd
>
>         return rmat;
>     }
> }

The parameter is just squared and returned?

Anyway, I was after a general matrix*matrix multiplication, where the 
operands can get arbitrarily large and where any potential use of 
__restrict is rendered unnecessary by array vector ops.
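To make that concrete, a rough sketch of the kind of thing I mean (just an
illustration, not benchmarked; the function name, the flat row-major storage
and the explicit dimension parameters are arbitrary choices of mine):

float[] matmul(const(float)[] a, const(float)[] b, size_t n, size_t m, size_t p)
{
    // a is n x m, b is m x p, result c is n x p (row-major, flat storage)
    auto c = new float[](n * p);
    c[] = 0;
    foreach (i; 0 .. n)
    {
        auto crow = c[i*p .. (i+1)*p];
        foreach (k; 0 .. m)
        {
            // One array vector op per (i,k): scale row k of b by the scalar
            // a[i,k] and accumulate into row i of c. Array ops forbid
            // overlapping operands, so no __restrict-style hint is needed
            // for the compiler to vectorize this.
            crow[] += a[i*m + k] * b[k*p .. (k+1)*p];
        }
    }
    return c;
}

The inner statement is a single array operation over a whole row, which is
the point: the aliasing question is settled by the language semantics rather
than by annotations.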

