Article: Increasing the D Compiler Speed by Over 75%
qznc
qznc at web.de
Thu Jul 25 12:26:30 PDT 2013
On Thursday, 25 July 2013 at 19:07:02 UTC, Walter Bright wrote:
> On 7/25/2013 11:30 AM, Adam D. Ruppe wrote:
>> The biggest compile time killer in my experience is actually running out of memory and hitting the swap.
>>
>> My work app used to compile in about 8 seconds (on Linux btw). Then we added more and more stuff and it went up to about 20 seconds. It uses a fair amount of CTFE and template stuff, looping over almost every function in the program to generate code.
>>
>> Annoying... but then we added a little bit more and it skyrocketed to about 90 seconds to compile! That's unbearable.
>>
>> The cause was the build machine had run out of physical memory at the peak of the compile process, and started furiously swapping to disk.
>>
>> I "fixed" it by convincing them to buy more RAM, and now we're back to ~15 second compiles, but at some point the compiler will have to address this. I know donc has a dmd fork where he's doing a lot of work, completely re-engineering CTFE, so it is coming, but that will probably be the next speed increase, and we could be looking at as much as 5x in cases like mine!
>
> I know the memory consumption is a problem, but it's much harder to fix.
Obstacks are a popular approach in compilers. Allocation is a simple
pointer bump, so it should preserve the new speed, and deallocation can
be done blockwise. It works great if you know the lifetimes of the
objects.
More information about the Digitalmars-d-announce mailing list