On the performance of building D programs

Vladimir Panteleev vladimir at thecybershadow.net
Thu Apr 4 15:49:08 PDT 2013


Recently I studied the performance of building a vibe.d example:
https://github.com/rejectedsoftware/vibe.d/issues/208

I wrote a few tools in the process, perhaps someone might find 
them useful as well.

However, I'd also like to discuss a related matter:

I noticed that compiling D programs in the usual manner (rdmd) is 
as much as 40% slower than it can be.

This is because, before rdmd knows the full list of modules to be 
built, it must run dmd with -v -o- and read its verbose output. It 
then extracts the module list from that output and passes all the 
modules on the command line of a second compiler run.
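The first pass boils down to scraping module/file pairs out of the compiler's verbose output. Here is a minimal Python sketch of that step; the exact shape of the `-v` output lines ("import <module> (<file>)") and the sample paths are assumptions for illustration, not a verbatim dmd transcript:

```python
import re

# Hypothetical sample of `dmd -v -o-` verbose output; the exact line
# format and file paths are illustrative assumptions.
VERBOSE_OUTPUT = """\
parse     app
import    object\t(/usr/include/dmd/druntime/import/object.di)
import    vibe.d\t(../vibe.d/source/vibe/d.d)
import    vibe.core.core\t(../vibe.d/source/vibe/core/core.d)
semantic  app
"""

# Assumed pattern: "import", whitespace, module name, whitespace,
# then the resolved source file in parentheses.
IMPORT_LINE = re.compile(r"^import\s+(\S+)\s+\((.+)\)$")

def modules_from_verbose(output):
    """Extract (module name, source file) pairs from verbose output."""
    pairs = []
    for line in output.splitlines():
        m = IMPORT_LINE.match(line)
        if m:
            pairs.append((m.group(1), m.group(2)))
    return pairs

pairs = modules_from_verbose(VERBOSE_OUTPUT)
# These files would then be passed to dmd on the second, full build:
files = [f for _, f in pairs]
```

The point of the sketch is the cost structure: everything the compiler did to produce those import lines is thrown away and redone when the extracted file list is compiled for real.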

The problem with this approach is that DMD needs to lex, parse, 
run CTFE, instantiate templates, etc. etc. - everything except 
actual code generation / optimization / linking - twice. And code 
generation can actually be a small part of the total compilation 
time.

D code already compiles pretty quickly, but here's an opportunity 
to nearly halve that time (for some cases) - by moving some of 
rdmd's basic functionality into the compiler. DMD already knows 
which modules are used in the program, so it just needs two new 
options: one to enable this behavior (say, -r for recursive 
compilation), and one to specify an exclusion list indicating 
which modules are already compiled and will be found in a library 
(e.g. -rx). The default -rx settings could be placed in 
sc.ini/dmd.conf. I think we should seriously consider it.
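Since -r and -rx are only proposals, here is a small Python sketch of how an exclusion list might work: package prefixes mark modules that are already in a library and should not be recompiled. The prefix names are illustrative, loosely mirroring what sc.ini/dmd.conf defaults might contain:

```python
# Hypothetical defaults for the proposed -rx option; the names are
# illustrative (standard library and druntime packages, plus a
# separately built vibe library).
EXCLUDED_PREFIXES = ["std", "core", "object", "vibe"]

def is_excluded(module_name, prefixes=EXCLUDED_PREFIXES):
    """True if the module matches an exclusion prefix exactly or as
    a package parent (so "std" covers "std.stdio" but not "stdx")."""
    return any(module_name == p or module_name.startswith(p + ".")
               for p in prefixes)

discovered = ["app", "app.handlers", "std.stdio", "vibe.core.core"]
to_compile = [m for m in discovered if not is_excluded(m)]
```

With the defaults above, only the program's own modules ("app" and "app.handlers") would be compiled; the rest would be resolved from libraries at link time.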

Another appealing thing about the idea is that the compiler has 
access to information that would allow it to recompile programs 
more efficiently in the future. For example, it would be possible 
to get a hash of a module's public interface, so that a change in 
one function's code would not trigger a recompile of all modules 
that import it (assuming no CTFE).
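The interface-hash idea can be sketched in a few lines; this is a minimal illustration of the principle, not dmd's actual mechanism, and it assumes the public declarations are available as signature strings:

```python
import hashlib

def interface_hash(public_signatures):
    """Hash only a module's public interface (declaration signatures),
    deliberately ignoring function bodies, so a body-only edit leaves
    the hash - and therefore dependent modules - untouched."""
    h = hashlib.sha256()
    for sig in sorted(public_signatures):  # order-independent
        h.update(sig.encode("utf-8"))
        h.update(b"\0")  # separator so signatures can't run together
    return h.hexdigest()

# Two revisions of a module: a function body changed between them,
# but the public signatures did not, so importers need no recompile.
before = interface_hash(["int add(int a, int b)", "void log(string msg)"])
after  = interface_hash(["int add(int a, int b)", "void log(string msg)"])

# A signature change, by contrast, must produce a different hash.
changed = interface_hash(["long add(long a, long b)", "void log(string msg)"])
```

The "assuming no CTFE" caveat in the text is exactly why the real thing is harder: CTFE can execute a function body at compile time, so a body change can leak into importers even when the signature is stable.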
