<div><br></div><br><div class="gmail_quote">On 3 April 2012 02:37, Caligo <span dir="ltr"><<a href="mailto:iteronvexor@gmail.com">iteronvexor@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I've read **Proposed Changes and Additions**, and I would like to<br>
comment and ask a few questions if that's okay. BTW, I've used Eigen<br>
a lot and I see some similarities here, but a direct rewrite may not<br>
be the best thing because D > C++.<br>
<br>
2. Change the matrix & vector types, adding fixed-sized matrix<br>
support in the process.<br>
<br>
This is a step in the right direction, I think, and by that I mean the<br>
decision to remove the distinction between a Vector and a Matrix.<br>
Fixed-size matrices are also a must: there are compile-time<br>
optimisations that you won't be able to do for dynamic-size matrices.<br>
<br>
<br>
3. Add value arrays (or numeric arrays, we can come up with a good name).<br>
<br>
I really don't see the point for these. We have the built-in arrays<br>
and the one in Phobos (which will get even better soon).<br></blockquote><div><br></div><div>The point of these is to have light-weight element-wise operation support. It's true that in theory the built-in arrays do this. However, this library is built on top of BLAS/LAPACK, which means operations on large matrices will be faster than on D arrays. Also, as far as I know, D doesn't support allocating dynamic 2-D arrays (as opposed to arrays of arrays), let alone 2-D slicing (which requires keeping track of the leading dimension).</div>
<div>Also, I'm not sure how a case like this will be compiled; it may or may not allocate a temporary:</div><div><br></div><div>a[] = b[] * c[] + d[] * 2.0;</div><div><br></div><div>The expression templates in SciD mean there will be no temporary allocation in this expression.</div>
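<div><br></div><div>To make that concrete, here is a minimal, self-contained sketch of the expression-template idea. All names here (Node, Lazy, Vec) are hypothetical illustrations, not SciD's actual API: the operators build lazy nodes instead of evaluating eagerly, and the whole expression tree is evaluated in a single element-wise pass on assignment.</div><div><br></div>

```d
// Hypothetical sketch of expression templates, not SciD's actual API.
// Operators build lazy nodes; the whole tree is evaluated in one
// element-wise pass on assignment, so no temporary arrays are allocated.
mixin template Lazy() {
    auto opBinary(string op, R)(R rhs) {
        return Node!(op, typeof(this), R)(this, rhs);
    }
}

struct Node(string op, L, R) {
    L lhs; R rhs;
    mixin Lazy;
    double opIndex(size_t i) {
        static if (is(R : double))          // scalar operand, e.g. d * 2.0
            return mixin("lhs[i] " ~ op ~ " rhs");
        else
            return mixin("lhs[i] " ~ op ~ " rhs[i]");
    }
}

struct Vec {
    double[] data;
    mixin Lazy;
    double opIndex(size_t i) { return data[i]; }

    // a[] = expr evaluates the whole tree in one pass, no temporaries.
    void opSliceAssign(E)(E e) {
        foreach (i; 0 .. data.length) data[i] = e[i];
    }
}

// Usage: a[] = b * c + d * 2.0; builds a Node tree and evaluates lazily.
```

<div>Note the usage syntax differs slightly from the built-in array syntax above (no [] on the operands), because the operators here are defined on a custom type rather than on slices.</div>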
<div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
4. Add reductions, partial reductions, and broadcasting for matrices and arrays.<br>
<br>
This one is similar to what we have in Eigen, but I don't understand<br>
why the operations are member functions (even in Eigen). I would much<br>
rather have something like this:<br>
<br>
rowwise!sum(mat);<br>
<br>
Also, that way users can use their own custom functions with ease.<br></blockquote><div><br></div><div>There is a problem with this design. You want each matrix type (be it general, triangular, sparse or even an expression node) to be able to define its own implementation of sum: calling the right BLAS function and making whatever type-specific optimisations it can. Since D doesn't have argument-dependent look-up (ADL), users can't provide specialisations for their own types. The same argument applies to rowwise() and columnwise(), which will return proxies specific to the matrix type. You could do something like this, in principle:</div>
<div><br></div><div>auto sum( T )( T mat ) {</div><div>    return mat.sum();</div><div>}</div><div><br></div><div>If we want that, we can add it, but it provides no additional extensibility. By the way, you can use std.algorithm with matrices since they offer range functionality, but reduce!mySumFunction(mat) will be much slower than mat.sum(), which uses a BLAS backend.</div>
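<div><br></div><div>As an illustration of why the member-function design matters, here is a sketch (type names and storage details are hypothetical, not SciD's current code) of two matrix types each defining their own sum, with the free function only able to forward:</div><div><br></div>

```d
// Hypothetical sketch: each matrix type implements its own sum(),
// dispatching to the most efficient strategy for its storage format.
struct GeneralMatrix {
    double[] data;
    double sum() {
        // a real implementation would call a BLAS routine here
        double s = 0.0;
        foreach (x; data) s += x;
        return s;
    }
}

struct TriangularMatrix {
    double[] packed;   // packed storage: only the non-zero triangle
    double sum() {
        // iterates only the stored triangle, skipping structural zeros
        double s = 0.0;
        foreach (x; packed) s += x;
        return s;
    }
}

// A free-function wrapper is possible, but adds no extensibility:
// without ADL it can only forward to the member the type already defines.
auto sum(T)(T mat) { return mat.sum(); }
```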
<div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
<br>
6. Add support for interoperation with D built-in arrays (or pointers).<br>
<br>
So I take it that Matrix is not a sub-type? Why? If we have something like this:<br>
<br>
struct Matrix(Real, size_t row, size_t col) {<br>
<br>
Real[row*col] data;<br>
alias data this;<br>
}<br>
<br>
then we wouldn't need any kind of interoperation with built-in arrays,<br>
would we? I think this would save us a lot of headache.<br>
<br>
That's just me and I could be wrong.<br></blockquote><div><br></div><div>Inter-operation referred more to having a matrix object wrap a pointer to an already available piece of memory, perhaps allocated through a region allocator or obtained from some other library. This means we need to take care of different strides and different storage orders, which cannot be handled by built-in arrays. Right now, matrices wrap ref-counted copy-on-write array types (ArrayData in the current code); we decided last year that we don't want to use the garbage collector, because of its current issues. Also, I would prefer not to use the same Matrix type for pointer-wrappers and normal matrices, because the former must have reference semantics while the latter have value semantics. I think it would be confusing if some matrices copied their data and some shared memory.</div>
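<div><br></div><div>As a sketch of what such a pointer-wrapping view might look like (the name ExternalMatrixView and its layout are hypothetical, not the current SciD design):</div><div><br></div>

```d
// Hypothetical sketch of a matrix view over externally allocated memory.
// It does not own or free the data; copies share the same buffer
// (reference semantics), unlike a value-semantic Matrix, which would
// duplicate its storage on copy.
struct ExternalMatrixView {
    double* ptr;       // memory owned elsewhere (region allocator, C library, ...)
    size_t rows, cols;
    size_t leadingDim; // stride between consecutive columns (column-major)

    // Element access must account for the leading dimension,
    // which built-in D arrays cannot express.
    ref double opIndex(size_t i, size_t j) {
        return ptr[j * leadingDim + i];
    }
}
```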
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
I've got to tell you though, I'm very excited about this project and<br>
I'll be watching it closely.<br>
<br>
cheers.<br>
</blockquote></div><div>---</div>Cristi Cobzarenco<div>BSc in Artificial Intelligence and Computer Science</div><div>University of Edinburgh<br>Profile: <a href="http://www.google.com/profiles/cristi.cobzarenco" target="_blank">http://www.google.com/profiles/cristi.cobzarenco</a></div>
<br>