Compilation strategy

Walter Bright newshound2 at digitalmars.com
Tue Dec 18 09:30:40 PST 2012


On 12/18/2012 8:57 AM, Dmitry Olshansky wrote:
> But an adequate bytecode designed for interpreters (see e.g. Lua) is designed
> for faster execution. The way CTFE is done now* is a polymorphic call per AST
> node that does a lot of analysis that could be decided once and stored in ...
> *ehm* ... IR. Currently it's also somewhat mixed with semantic analysis (thus
> raising the complexity).

The architectural failings of CTFE are primarily my fault: I took an 
implementation shortcut and built it by extending the constant folding code.

They are not a demonstration of the inherent superiority of one scheme over another. 
Nor do CTFE's problems indicate that modules should be represented externally 
as bytecode.

> Another point is that chasing pointers through data structures is not a
> recipe for fast repeated execution.
>
> To provide an analogy: executing a calculation recursively over the AST of an
> expression is bound to be slower than running the same calculation straight
> over a sanely encoded, flat reverse-polish notation.
>
> A hit below the belt: also peek at your own DMDScript - why bother with a
> plain IR (_bytecode_!) for JavaScript if it could just as well be interpreted
> as-is on ASTs?
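
(Purely for illustration, here is a minimal D sketch of the contrast being 
described - a toy expression language, not anything taken from DMD or 
DMDScript; the names Expr, Instr and evalRPN are made up for the example. The 
tree walker pays a virtual call and a pointer chase per node, while the flat 
reverse-polish encoding is one linear pass over a contiguous array.)

import std.stdio;

// Tree walking: a virtual call and a pointer chase per node.
abstract class Expr { abstract long eval(); }

class Num : Expr
{
    long v;
    this(long v) { this.v = v; }
    override long eval() { return v; }
}

class Bin : Expr
{
    char op;
    Expr l, r;
    this(char op, Expr l, Expr r) { this.op = op; this.l = l; this.r = r; }
    override long eval() { return op == '+' ? l.eval() + r.eval() : l.eval() * r.eval(); }
}

// Flat reverse-polish encoding: one linear pass over a contiguous array.
enum Op : ubyte { push, add, mul }
struct Instr { Op op; long operand; }

long evalRPN(const(Instr)[] code)
{
    long[16] stack;
    size_t sp;
    foreach (ins; code)
        final switch (ins.op)
        {
            case Op.push: stack[sp++] = ins.operand; break;
            case Op.add:  stack[sp - 2] += stack[sp - 1]; --sp; break;
            case Op.mul:  stack[sp - 2] *= stack[sp - 1]; --sp; break;
        }
    return stack[0];
}

void main()
{
    // (1 + 2) * 3 both ways.
    Expr tree = new Bin('*', new Bin('+', new Num(1), new Num(2)), new Num(3));
    auto flat = [Instr(Op.push, 1), Instr(Op.push, 2), Instr(Op.add),
                 Instr(Op.push, 3), Instr(Op.mul)];
    writeln(tree.eval(), " ", evalRPN(flat));  // 9 9
}

(Real interpreters add operand decoding, dispatch tricks and so on, but the 
difference in memory layout and per-node overhead is the point being made.)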

Give me some credit for learning something over the last 12 years! I'm not at 
all convinced I'd use the same design if I were doing it now.

If I were doing it, and speed were paramount, I'd probably fix it to generate 
native code instead of bytecode and execute that code directly. Even simple 
JITs dramatically sped up the early Java VMs.
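
(Again only as an illustration of "generate native code and execute it 
directly", not a sketch of how DMD would do it: on an x86-64 POSIX system the 
whole idea boils down to emitting machine-code bytes into an executable 
mapping and calling them through a function pointer. This assumes 
core.sys.posix.sys.mman exposes MAP_ANON and that the OS allows a 
writable+executable mapping; hardened W^X systems may refuse it.)

import core.sys.posix.sys.mman;
import std.stdio;

void main()
{
    // x86-64 machine code for:  int f() { return 42; }
    //   b8 2a 00 00 00   mov eax, 42
    //   c3               ret
    ubyte[] code = [0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3];

    // Map a page we can both write to and execute from (POSIX only).
    void* mem = mmap(null, code.length,
                     PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANON, -1, 0);
    assert(mem != MAP_FAILED);
    (cast(ubyte*) mem)[0 .. code.length] = code[];

    alias Fn = extern (C) int function();
    auto f = cast(Fn) mem;
    writeln(f());          // prints 42 -- no interpreter in the loop

    munmap(mem, code.length);
}

A real JIT would of course do register allocation, patch call targets and so 
on, but even a naive translation removes the per-node dispatch cost entirely.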



