Microsoft working on new systems language

"Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang at gmail.com>
Tue Dec 31 11:53:27 PST 2013


On Tuesday, 31 December 2013 at 17:52:56 UTC, Chris Cain wrote:
> Well, that's certainly a good point. There's _probably_ some 
> extra optimizations that could be done with a compiler 
> supported new. Maybe it could make some significantly faster 
> code, but this assumes many things:
>
> 1. The compiler writer will actually do this analysis and write 
> the optimization (my bets are that DMD will likely not do many 
> of the things you suggest).

I think many optimizations become more valuable once you start 
doing whole-program analysis.

> 2. The person writing the code is writing code that is 
> allocating several times in a deeply nested loop.

The premise of efficient high-level/generic programming is that 
the optimizer will undo naive code. Pseudocode example:

inline process(inarray, allocator){
    a = allocator.alloc(Array)
    a.init()
    for e in inarray { a.append(foo(e)) }
    return a
}

b = process(emptyarray,myallocator)
dosomething(b)
myallocator.free(b)

The optimizer should get rid of all of this. But since alloc() 
followed by free() most likely has side effects the optimizer 
cannot see through, it can't, and you end up with:

b = myallocator.alloc(1000)
myallocator.free(b)
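For the record, the pseudocode above maps onto real D. A minimal sketch using std.experimental.allocator (which landed after this thread was written); `foo` and `process` are the hypothetical names from the pseudocode, with `foo` standing in for arbitrary per-element work:

```d
import std.experimental.allocator : makeArray, dispose;
import std.experimental.allocator.mallocator : Mallocator;

int foo(int e) { return e * 2; } // stand-in for the real per-element work

// Same shape as the pseudocode: allocate, fill from the input, return.
int[] process(int[] inarray)
{
    auto a = Mallocator.instance.makeArray!int(inarray.length);
    foreach (i, e; inarray)
        a[i] = foo(e);
    return a;
}

void main()
{
    auto b = process([1, 2, 3]);
    assert(b == [2, 4, 6]); // dosomething(b) would go here
    Mallocator.instance.dispose(b);
}
```

With an empty input, the whole allocate/fill/free sequence is exactly the dead pattern the optimizer would ideally eliminate, but can't once `malloc`/`free` are opaque calls.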

> 3. Despite the person making the obvious critical error of 
> allocating several times in a deeply nested loop, he must not 
> have made any other significant errors or those other errors 
> must also be covered by optimizations

I disagree that inefficiencies due to high-level programming 
are a mistake if the compiler has the opportunity to get rid of 
them. I wish D would target high-level programming in the global 
scope and low-level programming in limited local scopes. I think 
few applications need hand optimization globally, except perhaps 
raytracers and compilers.

> Manual optimization in this case isn't too unreasonable.

I think manual optimization should, in most cases, be provided 
by the programmer as compiler hints and constraints.
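D already has an embryonic form of this: attributes that constrain what a function may do, and pragmas that hint at the optimizer. A minimal sketch (`twice` is a made-up example function):

```d
// pure/nothrow tell the optimizer there are no hidden side effects,
// @nogc forbids GC allocation in the marked scope, and the pragma is
// an inlining hint, not a command.
pragma(inline, true)
int twice(int e) pure nothrow @nogc
{
    return e * 2;
}

void main() nothrow @nogc
{
    static immutable int[3] data = [1, 2, 3];
    int sum = 0;
    foreach (e; data)
        sum += twice(e);
    assert(sum == 12);
}
```

The constraints are checked (violating @nogc is a compile error), while the hints are advisory, which is roughly the division of labor argued for above.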

> Think of replacing library calls when it's noticed that it's an 
> allocate function. It's pretty dirty and won't actually happen 
> nor do I suggest it should happen, but it's actually still also 
> _possible_.

Yes, why not? As long as the programmer has the means to control 
it. Why not let the compiler choose allocation strategies based 
on profiling, for instance?
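One way the programmer keeps control today is to make the allocation strategy a swappable policy; a profile-guided build could then, speculatively, pick the instantiation. A sketch with a made-up `squares` function, using std.experimental.allocator:

```d
import std.experimental.allocator : makeArray, dispose;
import std.experimental.allocator.mallocator : Mallocator;
import std.experimental.allocator.gc_allocator : GCAllocator;

// Allocation strategy as a template parameter: the code is written
// once, the strategy is chosen per call site.
int[] squares(Allocator)(size_t n)
{
    auto a = Allocator.instance.makeArray!int(n);
    foreach (i, ref e; a)
        e = cast(int)(i * i);
    return a;
}

void main()
{
    auto hot  = squares!Mallocator(4);  // manual strategy for a hot path
    auto cold = squares!GCAllocator(4); // GC everywhere else
    assert(hot == [0, 1, 4, 9] && cold == hot);
    Mallocator.instance.dispose(hot);   // the GC-allocated one needs no free
}
```

The profiling-driven selection itself does not exist; the point is only that once the strategy is a parameter, something other than the programmer could plausibly choose it.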


More information about the Digitalmars-d mailing list