foreach - premature optimization vs cultivating good habits

BBaz via Digitalmars-d-learn digitalmars-d-learn at puremagic.com
Fri Jan 30 04:08:11 PST 2015


On Friday, 30 January 2015 at 11:55:16 UTC, Laeeth Isharc wrote:
> Hi.
>
> The standard advice is not to worry about memory usage and 
> execution speed until profiling shows you where the problem is, 
> and I respect Knuth greatly as a thinker.
>
> Still, one may learn from others' experience and cultivate good 
> habits early.  To say that one should not prematurely optimize 
> is not to say that one should not try to avoid cases that tend 
> to be really bad, and I would rather learn from others what 
> these are than learn only the hard way.
>
> For the time being I am still at early development stage and 
> have not yet tested things with the larger data sets I 
> anticipate eventually using.  It's cheap to make slightly 
> different design decisions early, but much more painful further 
> down the line, particularly given my context.
>
> As I understand it, foreach allocates when a simple C-style for 
> using an array index would not.  I would like to learn more 
> about when this turns particularly expensive, and perhaps I 
> could put this up on the wiki if people think it is a good idea.
>
> What exactly does it allocate, and how often, and how large is 
> this in relation to the size of the underlying data 
> (structs/classes/ranges)?  Are there any cache effects to 
> consider?  Happy to go to the source code if you can give me 
> some pointers.
>
> Thanks in advance for any thoughts.
>
>
>
> Laeeth.

foreach() is good for linked lists because it goes through a 
delegate (opApply), so it avoids walking the nodes from 0 to i 
again on every iteration.
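
Roughly like this (Node/List are just made-up names for 
illustration): opApply hands each element to the foreach body 
through the delegate, so one foreach is a single pass over the 
nodes, instead of re-walking from the head for every index.

struct Node { int value; Node* next; }

struct List
{
    Node* head;

    // foreach (v; someList) calls this; dg is the loop body.
    int opApply(scope int delegate(ref int) dg)
    {
        for (Node* n = head; n !is null; n = n.next)
        {
            if (auto r = dg(n.value))
                return r;   // non-zero: body hit break/return
        }
        return 0;
    }
}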

for() is good for arrays because indexing is just pointer 
arithmetic.
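
For example, with a slice the index form is about as cheap as it 
gets, and as far as I know foreach over a slice is lowered to the 
same kind of indexed loop, with no allocation in either case:

long sumFor(int[] data)
{
    long s = 0;
    for (size_t i = 0; i < data.length; ++i)
        s += data[i];
    return s;
}

long sumForeach(int[] data)
{
    long s = 0;
    foreach (x; data)   // compiled down to index/pointer iteration
        s += x;
    return s;
}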

But the D way for foreach'es is to use ranges (empty, front, 
popFront and friends), although I often feel too lazy to use 
them...
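
A minimal input range over the same hypothetical Node list would 
look something like this; foreach accepts anything with 
empty/front/popFront, and so do the std.range/std.algorithm 
functions:

struct ListRange
{
    Node* current;

    @property bool empty() const { return current is null; }
    @property ref int front() { return current.value; }
    void popFront() { current = current.next; }
}

// foreach (v; ListRange(someList.head)) { ... }  // same single-pass walk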

You'll probably get more technical answers...


