Practical parallelization of D compilation

Guillaume Lathoud gsub at glat.info
Wed Jan 8 04:40:02 UTC 2020


Hello,

One of my D applications grew from a simple main and a few source
files to more than 200 files. Although I minimized usage of
templating and CTFE, the compiling time is now about a minute.

I did not find any way to take advantage of multiple cores during
compilation, short of writing a makefile, or splitting the code into
multiple packages and using a package manager.

(If I missed such a possibility, feel free to write it here.)

For now I came up with a solution that compiles each D source file
it finds into an object file (in parallel), then links them into an
executable. In subsequent runs, only touched files are recompiled:

https://github.com/glathoud/d_glat/blob/master/dpaco.sh
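
In case it helps to see the idea in a few lines, here is a minimal
sketch of the approach - not the actual dpaco.sh, which does more
bookkeeping. The object directory, the output name "app" and the
plain mtime check are only illustrative, and the mtime check does
not notice changed imports:

  #!/usr/bin/env bash
  # Sketch: compile each .d file to its own object file, in parallel,
  # skipping files whose object is newer than the source, then link.
  # Assumes ldc2 is on the PATH.
  set -eu

  OBJDIR=.objcache
  mkdir -p "$OBJDIR"

  compile_one() {
      src=$1
      obj=$OBJDIR/$(echo "$src" | tr '/' '_').o
      # Re-use the object file unless the source was touched since.
      if [ ! -f "$obj" ] || [ "$src" -nt "$obj" ]; then
          ldc2 -c "$src" -of="$obj"
      fi
  }
  export -f compile_one
  export OBJDIR

  # One compile job per core.
  find . -name '*.d' -print0 \
      | xargs -0 -P "$(nproc)" -n 1 bash -c 'compile_one "$0"'

  # Link all object files into the executable.
  ldc2 "$OBJDIR"/*.o -of=app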

Practical results (real time) using LDC2 (1.10.0):

  * first run (compiling everything): 50% to 100% slower than
    classical compilation, depending on the hardware (an old 4-core
    vs. a more recent 8-core).

  * subsequent runs (only a few files touched): 5 to 10 seconds, way
    below the original time of about a minute.

Now (1) I hope this is of interest to readers, and (2) not knowing
anything about the internals of D compilers, I wonder whether some
heuristic roughly along these lines - when there are enough source
files and enough cores, compile in parallel and/or re-use previous
object files - could be integrated into the compilers, at least in
the form of an option.

Best regards,
Guillaume

Bonus: dpaco.sh also outputs a short list of the files with the
worst individual compile times (user time).
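
For illustration only (again, not what dpaco.sh actually does), such
per-file user times could be collected roughly like this, assuming
GNU time is available at /usr/bin/time; it recompiles every file
just to measure it:

  tmp=$(mktemp -d)
  find . -name '*.d' | while read -r src; do
      # GNU time prints the user time (%U) on stderr, after any
      # compiler output, so the last captured line is the timing.
      u=$( { /usr/bin/time -f '%U' ldc2 -c "$src" -of="$tmp/obj.o" \
             >/dev/null; } 2>&1 | tail -n 1 )
      printf '%s %s\n' "$u" "$src"
  done | sort -rn | head    # worst offenders first
  rm -rf "$tmp"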
