range behaviour

via Digitalmars-d digitalmars-d at puremagic.com
Wed May 14 04:23:14 PDT 2014


On Wednesday, 14 May 2014 at 09:47:56 UTC, Marc Schütz wrote:
> I wouldn't say never, because optimizers have become damn good.

Well, "never" without a proof is of course a too strong claim. 
:-) I agree with that, but I don't think optimizers are that good 
yet even though some of my troubles have been solved.

> I believe the additional overhead of lazy initialization is 
> mostly optimized away where it matters, i.e. inner loops, 
> because the compiler can see that the initialization has 
> already been performed in the first iteration. All it requires 
> is unrolling the first iteration of the loop and treating it 
> specially.

You can do this for some generators/iterators, but there are many 
different ways to construct loops, generators and consumers, and 
they affect what is most efficient, how many registers you use, 
etc.
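
To make that concrete, here is a minimal D sketch of the pattern 
under discussion (the generator and all names are my own invention, 
purely for illustration): a lazily initialized generator whose 
initialization test sits on the hot path until the optimizer peels 
the first iteration out of the consumer's loop.

import std.stdio : writeln;

// Hypothetical lazily initialized generator. The test on
// `initialized` runs on every call to front() unless the optimizer
// peels the first iteration of the consuming loop.
struct Lazy
{
    bool initialized = false;
    int state;

    @property bool empty() const { return state > 10; }

    @property int front()
    {
        if (!initialized)    // dead weight after iteration one
        {
            state = 1;       // stand-in for an expensive setup step
            initialized = true;
        }
        return state;
    }

    void popFront() { ++state; }
}

void main()
{
    // Naive consumer: re-evaluates the initialization branch on
    // every pass unless the optimizer proves it can't fire again.
    for (auto r = Lazy(); !r.empty; r.popFront())
        writeln(r.front);    // prints 1 through 10
}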

Abstractions hide what is going on, so the consumer might miss a 
more efficient way of structuring the algorithm.

Example: if you know that the stream can only end after a 0, then 
you only need to test empty() after you have read a 0. And you 
don't need a separate test for 0 either: generate_next() already 
leaves the zero-flag set, so you branch on it directly. Quasi ASM:

loop:
     write(x)                 ; emit the value read last pass
start:                        ; execution enters here
     x = generate_next()      ; also sets the zero-flag from x
     branch_if_not_zero loop  ; hot path: x != 0, no empty() test
     test( empty() )          ; only reached after reading a 0
     branch_if_zero loop      ; not ended: interior 0, write it
     write(0)                 ; the terminating 0; fall out

It is easy to construct examples where naive ranges will be 
suboptimal.
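
For concreteness, the same loop sketched in D (the generator, its 
API and the data are my own invention, not anything from Phobos): 
the empty() test is only reached after a 0 has been read.

import std.stdio : write, writeln;

// Hypothetical generator whose stream is guaranteed to end right
// after producing a 0; interior zeros are allowed.
struct Gen
{
    int[] data;
    size_t i;

    int generateNext() { return data[i++]; }
    @property bool empty() const { return i >= data.length; }
}

void main()
{
    auto g = Gen([3, 1, 0, 4, 2, 0]);  // interior 0, then final 0
    for (;;)
    {
        int x = g.generateNext();
        if (x != 0)        // hot path: no empty() test at all
        {
            write(x, " ");
            continue;
        }
        if (g.empty)       // only tested after reading a 0
        {
            writeln(0);    // the terminating zero; fall out
            break;
        }
        write(0, " ");     // interior zero: keep looping
    }
}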

For more complex iterators you also need to deal with sparse 
data structures, skip lists, run-length encoded data, SIMD 
alignment, guard-protected stream ends… all meta information you 
can exploit when you need faster algorithms.
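
As one concrete case of such meta information (again my own sketch 
with invented names): a consumer that sees run-length encoded data 
as runs needs one multiply-add per run, while the same data pushed 
through a flat element-by-element range costs one addition per 
element.

import std.stdio : writeln;

// Run-length encoded data: (value, count) pairs.
struct Run { int value; long count; }

// Exploits the encoding: one multiply-add per run.
long sumRuns(const Run[] runs)
{
    long total = 0;
    foreach (r; runs)
        total += r.value * r.count;
    return total;
}

// What a flat, element-by-element range forces on the consumer.
long sumElements(const Run[] runs)
{
    long total = 0;
    foreach (r; runs)
        foreach (_; 0 .. r.count)
            total += r.value;
    return total;
}

void main()
{
    auto runs = [Run(5, 1000), Run(0, 500), Run(-2, 250)];
    writeln(sumRuns(runs));     // 4500, three multiply-adds
    writeln(sumElements(runs)); // 4500, but 1750 additions
}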

