What are AST Macros?
a at a.a
Tue Jul 13 10:46:51 PDT 2010
"Nick Sabalausky" <a at a.a> wrote in message
news:i1i8o8$1kb1$1 at digitalmars.com...
> "Don" <nospam at nospam.com> wrote in message
> news:i1h6br$2ngl$1 at digitalmars.com...
>> Rainer Deyke wrote:
>>> On 7/13/2010 01:03, Nick Sabalausky wrote:
>>>> "Rainer Deyke" <rainerd at eldwood.com> wrote in message
>>>> news:i1gs16$1oj3$1 at digitalmars.com...
>>>>> The great strength of string mixins is that you can use them to add
>>>>> AST macros to D. The great weakness of string mixins is that doing so
>>>>> requires a full (and extendable) CTFE D parser, and that no such
>>>>> parser is provided by Phobos.
>>>> Seems to me that would lead to unnecessary decreases in compilation
>>>> performance. Such as superfluous re-parsing. And depending how exactly
>>>> DMD does CTFE, a CTFE D parser could be slower than just simply having
>>>> DMD do the parsing directly.
>>> True. I tend to ignore compile-time costs. The performance of computers
>>> is increasing exponentially. The length of the average computer program
>>> is more or less stable. Therefore this particular problem will
>>> eventually solve itself.
>>> Of course, this is just my perspective as a developer who uses a
>>> compiler maybe twenty times a day. If I was writing my own compiler
>>> which was going to be used thousands of times a day by thousands of
>>> different developers, I'd have a different attitude.
>> CTFE isn't intrinsically slow. The reason it's so slow in DMD right now
>> is that the treatment of CTFE variables is done in an absurdly
>> inefficient way.
> Doesn't it at least have some intrinsic overhead? The best way I can think
> of to do it (in fact, the way Nemerle does it, and I really wish D did
> too) is to actually fully compile the CTFE then dynamically link it in and
> run it natively (erm, well, Nemerle is .NET, not native, but you get the
> idea). Even that would seem to have a little bit of overhead compared to
> just running code that's already built-in to the compiler (it has to
> compile a chunk of code, and then link it, both of which take more time
> than just calling an internal function). And if you're not doing it that
> way then you're interpreting, which inherently adds some fetch/dispatch
> overhead.
> Yea, certainly it could be made to be fairly fast, and the overhead
> doesn't *have* to be big enough to be noticeable. But why not just do it
> the *really* fast way right from the start? Especially in a language like
> D that's supposed to have an emphasis on short compile times.
Plus, there's the whole duplication of code: a D parser in C++ inside the 
compiler, plus a D parser written in D for use under CTFE.
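For context, the pattern under discussion looks something like this in today's D (a hypothetical sketch; `genProps` is an invented helper): a CTFE function builds a string of D code, and `mixin` splices it in. The compiler must run the function at compile time and then re-lex and re-parse its result -- the "superfluous re-parsing" mentioned above -- and any mixin library that wants to *transform* D code, rather than just generate it, needs its own D parser written in D.

```d
// Hypothetical sketch: code generation via a string mixin.
// genProps() runs under CTFE; its result is re-parsed by the compiler.
string genProps(string[] names)
{
    string code;
    foreach (n; names)
        code ~= "int " ~ n ~ ";\n";
    return code;
}

struct Point
{
    mixin(genProps(["x", "y"]));  // declares: int x; int y;
}
```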