D performance

John Colvin john.loughran.colvin at gmail.com
Sun Apr 26 11:40:49 UTC 2020


On Saturday, 25 April 2020 at 10:34:44 UTC, Joseph Rushton 
Wakeling wrote:
> In any case, I seriously doubt those kinds of optimization have 
> anything to do with the web framework performance differences.
>
> My experience of writing number-crunching stuff in D and Rust 
> is that Rust seems to have a small but consistent performance 
> edge that could quite possibly be down the kind of 
> optimizations that Arine mentions (that's speculation: I 
> haven't verified).  However, it's small differences, not 
> order-of-magnitude stuff.
>
> I suppose that in a more complicated app there could be some 
> multiplicative impact, but where high-throughput web frameworks 
> are concerned I'm pretty sure that the memory allocation and 
> reuse strategy is going to be what makes 99% of the difference.
>
> There may also be a bit of an impact from the choice of futures 
> vs. fibers for managing asynchronous tasks (there's a context 
> switching cost for fibers), but I would expect that to only 
> make a difference at the extreme upper end of performance, once 
> other design factors have been addressed.
>
> BTW, on the memory allocation front, Mathias Lang has pointed 
> out that there is quite a nasty impact from `assumeSafeAppend`.
>  Imagine that your request processing looks something like this:
>
>     // extract array instance from reusable pool,
>     // and set its length to zero so that you can
>     // write into it from the start
>     x = buffer_pool.get();
>     x.length = 0;
>     assumeSafeAppend(x);   // a cost each time you do this
>
>     // now append stuff into x to
>     // create your response
>
>     // now publish your response
>
>     // with the response published, clean
>     // up by recycling the buffer back into
>     // the pool
>     buffer_pool.recycle(x);
>
> This is the kind of pattern that Sociomantic used a lot.  In D1 
> it was easy because there was no array stomping prevention -- 
> you could just set length == 0 and start appending.  But having 
> to call `assumeSafeAppend` each time does carry a performance 
> cost.
>
> IIRC Mathias has suggested that it should be possible to tag 
> arrays as intended for this kind of re-use, so that stomping 
> prevention will never trigger, and you don't have to 
> `assumeSafeAppend` each time you reduce the length.

I understand that it was an annoying breaking change, but, 
aside from the difficulty of migrating, I don't understand why 
a custom type isn't the appropriate solution for this problem. 
I think I heard "We want to use the built-in slices", but I 
never understood the technical argument behind that, or how it 
stacked up against not getting the desired behaviour.

My sense was that the irritation at the breakage was influencing 
the technical debate.

