Generalized Linear Models and Stochastic Gradient Descent in D

Nicholas Wilson via Digitalmars-d-announce digitalmars-d-announce at puremagic.com
Sat Jun 10 17:40:23 PDT 2017


On Saturday, 10 June 2017 at 20:03:16 UTC, data pulverizer wrote:
> Hi all,
>
> I have written a draft article on Generalized Linear Models and 
> Stochastic Gradient Descent in D 
> (https://github.com/dataPulverizer/glm-stochastic-gradient-descent-d/blob/master/article.ipynb) and would greatly appreciate your suggestions.
>
> Apologies for the spartan nature of the article formatting. 
> That will change for the final draft.
>
> Thank you in advance
>
> DP

Maybe it's the default rendering, but the open math font is hard to 
read as the subscripts get vertically compressed.

My suggestions:

Distinguish between the likelihood functions for gamma and normal 
rather than calling them both L(x). Maybe L subscript uppercase 
gamma and L subscript N?
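
Something along these lines, say (a sketch only — substitute the 
article's actual arguments and density notation):

    L_\Gamma(\beta)      = \prod_{i=1}^{n} f_\Gamma(y_i | x_i, \beta)
    L_\mathcal{N}(\beta) = \prod_{i=1}^{n} f_\mathcal{N}(y_i | x_i, \beta)

so the reader can tell at a glance which model a given derivation 
belongs to.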

Links to Wikipedia for the technical terms (e.g. dispersion, 
chi-squared, curvature); again, the vertical compression of the math 
font does not help here (subscripts of fractions). It will 
expand your audience if they don't get lost in the introduction!

Speaking of not losing your audience: give a link to the NRA 
(Newton-Raphson algorithm) and/or a brief explanation of how it 
generalises to higher dimensions (a graph or animation for the 2D 
case would be good; perhaps take something from Wikipedia).
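
For reference, the generalisation is the same update with the 
gradient and Hessian standing in for the first and second derivatives 
(a sketch, writing \ell for the log-likelihood and \beta for the 
parameter vector):

    1D:           x_{t+1} = x_t - f'(x_t) / f''(x_t)
    multivariate: \beta_{t+1} = \beta_t - H(\beta_t)^{-1} \nabla\ell(\beta_t)

A sentence to that effect plus the 2D picture would probably be 
enough.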

I don't think it is necessary to show the signatures of the BLAS 
and LAPACKE functions; a short description and link should 
suffice. Also, any reason you don't use GLAS?

I would just have gParamCalcs as its own function (unless you are 
trying to show off that particular feature of D).
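
That is, just hoist the nested declaration to module scope. A sketch 
only — the signature and body below are placeholders, not the 
article's actual code:

    // Hypothetical signature; the real one takes whatever the
    // nested version currently captures from its enclosing scope.
    double[] gParamCalcs(in double[] x, in double[] y)
    {
        double[] result;
        // ... the gamma parameter calculations go here ...
        return result;
    }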

Omit the empty parentheses on .array() and .reduce() — D lets you 
drop them on argument-less calls.

You use .array a lot: how much of that is necessary? I don't think 
it is in zip(k.repeat().take(n).array(), x, y, mu)
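
For example (a sketch, assuming k is a scalar and x, y, mu are 
slices): zip consumes lazy ranges directly, so the eager copy and 
the empty parentheses can both go:

    import std.range : repeat, take, zip;
    import std.stdio : writeln;

    void main()
    {
        double k = 2.0;
        auto x = [1.0, 2.0, 3.0], y = [4.0, 5.0, 6.0], mu = [0.1, 0.2, 0.3];
        auto n = x.length;

        // Before: zip(k.repeat().take(n).array(), x, y, mu)
        // After: lazy all the way through, no intermediate allocation.
        auto rows = zip(k.repeat.take(n), x, y, mu);
        writeln(rows);
    }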

`return(curv);` should be `return curv;` — return is a statement, 
not a function call.

Any reason you don't square the tolerance rather than take the 
square root of parsDiff?
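
That is, compare against the squared tolerance instead (a sketch, 
with parsDiff standing for the sum of squared parameter changes):

    bool converged(double parsDiff, double tol)
    {
        // Instead of: sqrt(parsDiff) < tol
        // — avoids a sqrt call on every iteration:
        return parsDiff < tol * tol;
    }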

for(int i = 0; i < nepochs; ++i) => foreach(i; iota(nepochs))?
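
i.e.

    import std.range : iota;

    void main()
    {
        int nepochs = 10;
        foreach (i; iota(nepochs)) { /* epoch body */ }
        // equivalently, without the import:
        foreach (i; 0 .. nepochs) { /* epoch body */ }
    }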

zip(pars, x).map!(a => a[0]*a[1]).reduce!((a, b) => a + b); 
=> dot(pars, x)?
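
Phobos already has this as std.numeric.dotProduct, for what it's 
worth (assuming pars and x are plain double[] slices):

    import std.numeric : dotProduct;
    import std.stdio : writeln;

    void main()
    {
        auto pars = [0.5, 1.5, 2.5];
        auto x = [1.0, 2.0, 3.0];

        // Same result as zip(pars, x).map!(a => a[0]*a[1])
        //                             .reduce!((a, b) => a + b)
        writeln(dotProduct(pars, x)); // 11
    }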

There's a lot of code and text; some images and graphs would be 
nice, particularly in combination with a more real-world example 
use case.

Factor out code like a[2].repeat().take(a[1].length) into a 
function, and perhaps use some more BLAS routines, for things like
.map!( a =>
                         zip(a[0].repeat().take(a[1].length),
                             a[1],
                             a[2].repeat().take(a[1].length),
                             a[3].repeat().take(a[1].length))
                         .map!(a => -a[2]*(a[0]/a[3])*a[1])
                         .array())
                     .array();

to make it more obvious what the calculation is doing.
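
Something like this hypothetical helper (broadcast is my name, not 
the article's) would make the scalar-to-vector pairing explicit:

    import std.algorithm : map;
    import std.array : array;
    import std.range : repeat, take, zip;
    import std.stdio : writeln;

    // Stretch a scalar to the length of a companion slice.
    auto broadcast(T)(T value, size_t n)
    {
        return value.repeat.take(n);
    }

    void main()
    {
        // Stand-ins for the article's a[0]..a[3].
        double w = 2.0, k = 3.0, phi = 4.0;
        auto xi = [1.0, 2.0, 3.0];

        auto row = zip(w.broadcast(xi.length), xi,
                       k.broadcast(xi.length), phi.broadcast(xi.length))
                   .map!(a => -a[2] * (a[0] / a[3]) * a[1])
                   .array;
        writeln(row); // [-1.5, -3, -4.5]

        // Since w, k and phi are scalars within a row, a plain map is
        // clearer still: xi.map!(x => -k * (w / phi) * x).array
    }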

It might not be the point of the article, but it would be good to 
show some performance figures; I'm sure optimisation tips will be 
forthcoming.

