Memoize and other optimizations

David d at dav1d.de
Thu Aug 16 15:40:43 PDT 2012


I have this code:


struct CubeSideData {
     float[3][4] positions; // 4 vertices per side, 3 coordinates each
     float[3] normal;
}

immutable CubeSideData[6] CUBE_VERTICES = [...];


Vertex[] simple_block(Side side, byte[2][4] texture_slice) pure {
     return simple_block(side, texture_slice, nslice);
}

Vertex[] simple_block(Side side, byte[2][4] texture_slice, byte[2][4] mask_slice) pure {
     CubeSideData cbsd = CUBE_VERTICES[side];

     float[3][6] positions = to_triangles(cbsd.positions);
     byte[2][6] texcoords = to_triangles(texture_slice);
     byte[2][6] mask;
     if(mask_slice == nslice) {
         mask = texcoords;
     } else {
         mask = to_triangles(mask_slice);
     }

     Vertex[] data;

     foreach(i; 0..6) {
         data ~= Vertex(positions[i][0], positions[i][1], positions[i][2],
                        cbsd.normal[0], cbsd.normal[1], cbsd.normal[2],
                        texcoords[i][0], texcoords[i][1],
                        mask[i][0], mask[i][1],
                        0, 0);
     }

     return data;
}


Is using std.functional.memoize useful for that function, or is the 
cache lookup slower than recomputing? The function isn't 
calculation-intensive, but it does quite a few array lookups.
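For reference, a minimal self-contained sketch of how memoize would be wired up (using a toy function, since the real simple_block depends on types not shown here). One caveat worth knowing: memoize stores results in a built-in AA keyed on the argument tuple, and it hands back the *same* cached slice on every hit, so mutating the returned array would corrupt the cache.

```d
import std.functional : memoize;
import std.stdio;

// Stand-in for an array-building function like simple_block.
int[] expensive(int n) pure {
    int[] r;
    foreach (i; 0 .. n)
        r ~= i * i;
    return r;
}

// Each call now costs one AA lookup (hashing the arguments);
// the body only runs on a cache miss.
alias cachedExpensive = memoize!expensive;

void main() {
    auto a = cachedExpensive(4); // miss: computes and stores
    auto b = cachedExpensive(4); // hit: AA lookup, same slice as `a`
    writeln(a); // [0, 1, 4, 9]
}
```

Whether this wins depends on how expensive hashing the byte[2][4] arguments is compared to the ~18 array reads in the function body; measuring is the only reliable answer.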

Is there an approximate figure for how expensive an AA lookup is 
(something I can compare against)?
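Rather than relying on a quoted number, the cost is easy to measure on the target machine. A rough benchmarking sketch (the key and iteration count are arbitrary placeholders):

```d
import std.datetime.stopwatch : benchmark;
import std.stdio;

void main() {
    int[string] aa = ["key": 42];
    int sink; // accumulate so the lookup isn't optimized away

    // benchmark runs the delegate the given number of times
    // and returns the total elapsed Duration for each callable.
    auto results = benchmark!(() { sink += aa["key"]; })(1_000_000);
    writeln("1M AA lookups took: ", results[0]);
}
```

Dividing the total by the iteration count gives a per-lookup cost to weigh against the function body.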

PS: I just noticed that I also want to optimize "arr ~= ...". Is it 
better to use a static array, or std.array.appender with reserve?


More information about the Digitalmars-d-learn mailing list