Was: Re: Vote for std.process

Vladimir Panteleev vladimir at thecybershadow.net
Fri Apr 12 06:08:11 PDT 2013


On Friday, 12 April 2013 at 11:37:14 UTC, Regan Heath wrote:
> The initial point was a vague one, not a specific one.  Manu 
> wasn't attempting to block std.process, he had a general 
> concern which I share.

OK, but so far my interpretation and replies have mostly been in 
the context of std.process - this module being an example where 
performance improvements would have a very small real-life 
benefit. I agree that (generally speaking) improving the 
performance of the code in std.algorithm/array/range would be 
worth the effort and complexity.

> It very much matters *who* that 1 user is.  And, the count may 
> be higher, and we might never "hear" from these people as they 
> find other solutions.  We're lucky that some people who try D 
> and have issues tell us about them, they may be 5% of the total 
> for all we know.

The same applies to the other side of the argument. A buggy 
standard library probably leaves a worse impression than a slow 
standard library...

> In reality the suggested improvements would add only very minor 
> complexity and prevent none of the current crop of contributors 
> from working with/on std.process.

Well, how do you decide how much optimization is appropriate?

For example, the code in std.process would be even faster if it 
were written entirely in assembler. I hope we can agree that, in 
practice, this would be absurd. So what set of well-defined 
arguments leads to the conclusion that rewriting it in assembler 
is pointless, but optimizing its memory allocations is not? All 
three versions of std.process - the current one, an 
allocation-optimized one, and a hand-written assembler one - 
would perform equally well as far as the end user can perceive.
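
To make the comparison concrete, here is a rough sketch (mine, 
not taken from the actual std.process code) of the kind of 
allocation micro-optimization being asked for: building the 
null-terminated C string for an exec*()-style call in a 
caller-supplied buffer when it fits, and only falling back to a 
GC allocation otherwise. The helper name toStringzFast is made up 
for illustration.

// Hypothetical helper, not part of std.process: avoid a GC
// allocation when the string fits in a caller-supplied buffer.
const(char)* toStringzFast(const(char)[] s, char[] buf)
{
    import std.string : toStringz;

    if (s.length < buf.length)
    {
        buf[0 .. s.length] = s[];
        buf[s.length] = '\0';
        return buf.ptr;      // no allocation
    }
    return toStringz(s);     // allocating fallback
}

// Usage: char[256] buf; auto p = toStringzFast(path, buf[]);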

> Yes, as well as the users of their applications.  True, none of 
> them will even realise they could have been less happy, so none 
> of them will realise the effort that went into it, but all of 
> them will be better off.

Absolutely - if you ignore the costs. 100%-correct faster code is 
always better than 100%-correct slower code, but the cost of 
writing and maintaining that faster code is the counter-argument.

> Add the missing items, without a doubt - which is why no-one is 
> suggesting blocking std.process over this issue.

Blocking is one thing; asking for faster code where it doesn't 
really matter - when there are areas where D could be improved at 
a much higher gain per unit of effort - is another.

>>> D is a systems programming language, there is hope that it
> Why?
>
> There exist platforms and environments where memory and 
> performance are concerns, if the D standard library code is not 
> "careful" in it's use of both then it will be less suitable 
> than C (for example) and so D will not penetrate those 
> platforms.

OK, but once again - how does that line up with the purpose of 
std.process? I can see how std.algorithm can be useful in 
low-spec embedded/gaming systems, but std.process?

> Manu is using D for games development on modern high-end gaming 
> PCs and he is still concerned with memory and performance.

In Manu's case, every bit of performance counts in the code that 
runs in tight loops, e.g. for every game frame. However, does 
that include std.process?

> All true, but performance is one of D's top draw cards:
>
> <quote>The D programming language. Modern convenience. Modeling 
> power. Native **efficiency**.</quote> (**emphasis mine**)
>
> So, it behoves us to make sure the standard library keeps that 
> in mind.

Again, I don't disagree for the general case; however, I think it 
pays to mind the context and perspective. When the context is 
std.process and the perspective is the relative cost of process 
creation itself, it seems like quite a pointless argument.
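
To put rough numbers behind that, a quick measurement along these 
lines (my sketch, written against current Phobos; it assumes a 
POSIX system where "true" exists as a cheap executable) should 
show process creation dwarfing the throwaway allocations by 
orders of magnitude:

import std.datetime.stopwatch : StopWatch, AutoStart;
import std.process : execute;
import std.stdio : writefln;

void main()
{
    enum N = 100;

    // Throwaway GC allocations, standing in for the argv-style
    // arrays that a process-spawning call builds internally.
    auto sw = StopWatch(AutoStart.yes);
    foreach (i; 0 .. N)
    {
        auto args = new string[](8);
        args[0] = "placeholder";
    }
    immutable allocTime = sw.peek();

    // Actually creating N processes.
    sw.reset();
    foreach (i; 0 .. N)
        execute(["true"]);
    immutable spawnTime = sw.peek();

    writefln("%s allocations: %s, %s process creations: %s",
             N, allocTime, N, spawnTime);
}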

