Compilation strategy

Dmitry Olshansky dmitry.olsh at gmail.com
Tue Dec 18 11:58:36 PST 2012


On 12/18/2012 9:30 PM, Walter Bright wrote:
> On 12/18/2012 8:57 AM, Dmitry Olshansky wrote:
>> But a bytecode adequately designed for interpreters (see e.g. Lua) is
>> built for faster execution. The way CTFE is done now* is a polymorphic
>> call per AST node that does a lot of analysis that could be decided
>> once and stored in ... *ehm* ... IR. Currently it's also somewhat
>> mixed with semantic analysis (thus raising the complexity).
>
> The architectural failings of CTFE are primarily my fault, from taking
> an implementation shortcut and building it out of enhancing the
> constant folding code.
>
> They are not a demonstration of the inherent superiority of one scheme
> or another. Nor do CTFE's problems indicate that modules should be
> represented as bytecode externally.
>
Agreed. It seemed to me that since CTFE implements an interpreter for D,
it would be useful to define a flattened representation of the
semantically analyzed AST that is tailored for execution. The same
bytecode could then also be used as the external representation.

There is, however, the problem that templates can only be analyzed on
instantiation. So we can't fully "precompile" the semantic step into
bytecode, meaning it wouldn't be much beyond a flattened result of the
parse step. On second thought it may not be that useful after all.
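
To make the "flattened and tailored for execution" part concrete, here
is a toy sketch (hypothetical Node/Op/Instr names, nothing taken from
dmd) contrasting evaluation by walking a class-based AST with evaluation
of the same expression as a flat postfix (reverse-polish) instruction
stream:

// Toy sketch - hypothetical Node/Op/Instr names, nothing from dmd.
import std.stdio;

// Tree form: a virtual call and a pointer chase per node.
abstract class Node { abstract long eval(); }
class Num : Node {
    long v;
    this(long v) { this.v = v; }
    override long eval() { return v; }
}
class Add : Node {
    Node lhs, rhs;
    this(Node l, Node r) { lhs = l; rhs = r; }
    override long eval() { return lhs.eval() + rhs.eval(); }
}

// Flat form: postfix (reverse-polish) stream run with a small stack.
enum Op : ubyte { push, add }
struct Instr { Op op; long imm; }

long run(const(Instr)[] code)
{
    long[32] stack;
    size_t sp = 0;
    foreach (ins; code) final switch (ins.op)
    {
        case Op.push: stack[sp++] = ins.imm; break;
        case Op.add:  stack[sp - 2] += stack[sp - 1]; --sp; break;
    }
    return stack[0];
}

void main()
{
    // (1 + 2) + 3 as a tree...
    Node tree = new Add(new Add(new Num(1), new Num(2)), new Num(3));
    // ...and as flat postfix: 1 2 + 3 +
    auto code = [Instr(Op.push, 1), Instr(Op.push, 2), Instr(Op.add),
                 Instr(Op.push, 3), Instr(Op.add)];
    writeln(tree.eval(), " ", run(code)); // 6 6
}

Both print 6; the flat version walks one contiguous array instead of
doing a virtual call and a pointer chase per node, which is the whole
point of the exercise.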

>> Another point is that pointer-chasing data structures are not a
>> recipe for fast repeated execution.
>>
>> To provide an analogy: executing a calculation recursively on the AST
>> of an expression is bound to be slower than running the same
>> calculation straight on a sanely encoded, flat reverse-polish notation.
>>
>> A hit below the belt: also peek at your own DMDScript - why bother
>> with a plain IR (_bytecode_!) for JavaScript if it could be
>> interpreted just fine as is, on the ASTs?
>
> Give me some credit for learning something over the last 12 years! I'm
> not at all convinced I'd use the same design if I were doing it now.
>
OK ;)

> If I were doing it, and speed was paramount, I'd probably fix it to
> generate native code instead of bytecode and so execute code directly.
> Even simple JITs dramatically sped up the early Java VMs.

Granted, a JIT is faster, but I'm personally more interested in portable
interpreters. I've been digging around gathering techniques, and so far
it looks rather promising.
Though I need more field testing... and computed gotos in D! Or, more
specifically, a way to _force_ a tail call.
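
For the curious, the kind of dispatch I have in mind looks roughly like
the sketch below (hypothetical VM/Handler/opcode layout, nothing more
than an illustration): each opcode handler finishes by calling the
handler of the next opcode, so the final call in dispatch() has to be
turned into a jump - a forced tail call or a computed goto - or the
stack grows by one frame per executed instruction.

// Sketch only - hypothetical VM/Handler/opcode layout.
struct VM { immutable(ubyte)[] code; size_t pc; long acc; }

alias Handler = long function(ref VM);
__gshared Handler[2] table;

long dispatch(ref VM vm)
{
    // This call must become a jump (forced tail call / computed goto)
    // for the interpreter to run in constant stack space.
    return table[vm.code[vm.pc++]](vm);
}

long opInc(ref VM vm)  { ++vm.acc; return dispatch(vm); }
long opHalt(ref VM vm) { return vm.acc; }

shared static this()
{
    table[0] = &opInc;   // opcode 0: increment the accumulator
    table[1] = &opHalt;  // opcode 1: stop and return the accumulator
}

void main()
{
    import std.stdio : writeln;
    immutable(ubyte)[] prog = [0, 0, 0, 1]; // inc, inc, inc, halt
    auto vm = VM(prog, 0, 0);
    writeln(dispatch(vm)); // 3
}

This prints 3, but only because the toy program is four instructions
long; a real interpreter loop can't afford a stack frame per
instruction, hence the wish for a forced tail call.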

-- 
Dmitry Olshansky

