Linking is the slowest part of D's compilation process – can we use mold to speed it up?
deadalnix at gmail.com
Fri Feb 26 02:24:58 UTC 2021
On Thursday, 25 February 2021 at 15:42:22 UTC, H. S. Teoh wrote:
> This is very interesting. I wonder if there's a way to
> incrementally update the executable, instead of starting from
> scratch each time?
> E.g., hypothetically, if the linker emitted not only the
> executable but also some kind of map file describing the
> various parts that compose the executable, together with some
> extra information about offsets/addresses that depend on each
> other between parts, then in theory, if we change n object
> files (where n is significantly less than the total number N of
> all object files), we ought to be able to regenerate the
> executable by copying most of its current data, move a few
> sections around, and patch up some references.
> If the executable format is flexible enough (I think ELF is,
> don't know about PE), we could also pad the executable with
> some extra unused space between sections to allow for growth of
> individual sections up to some limit. Then we might be able to
> patch in updated object files in-place, along with updating
> some references as needed, as long as said object files don't
> grow beyond the size of the extra space.
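The padding idea reduces to a fixed-slot layout. Here is a minimal sketch (the `SLACK` constant and slot shape are invented for illustration): every object gets a slot with spare room, and an update succeeds in place as long as the new bytes still fit in the slot.

```python
# Sketch of the padded-layout idea: each object is given extra slack so
# an updated version can be patched in place as long as it still fits.

SLACK = 16                              # spare bytes reserved per section

def layout(objects):
    """Assign each object a fixed slot with room to grow."""
    slots, offset = {}, 0               # name -> (start, capacity)
    for name, payload in objects.items():
        slots[name] = (offset, len(payload) + SLACK)
        offset += len(payload) + SLACK
    image = bytearray(offset)
    for name, payload in objects.items():
        start, _ = slots[name]
        image[start:start + len(payload)] = payload
    return image, slots

def patch_in_place(image, slots, name, new_payload):
    """Overwrite one slot; fail only if the object outgrew its slack."""
    start, capacity = slots[name]
    if len(new_payload) > capacity:
        raise ValueError("section outgrew its slack; full relink needed")
    image[start:start + capacity] = b"\x00" * capacity  # clear old bytes
    image[start:start + len(new_payload)] = new_payload
    return image
```

Release builds would drop the slack and compact the layout, exactly as the paragraph above suggests.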
> This could significantly speed up the code-compile-run cycle
> during development. For releases, of course, you'd want to
> compact the executable, but generally it's expected that
> release builds are OK to take longer.
It is still quite experimental. The author has written about the
techniques they use. They do very interesting things both for speed
(like preloading .o files in a daemon as soon as they finish
compiling) and for incremental linking (which requires maintaining
extra metadata about where things are).
See https://kristoff.it/blog/zig-new-relationship-llvm/#designing-machine-code-for-incremental-compilation for more details on how this works.
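The daemon-preloading part is easy to picture: instead of parsing every object in one batch at the end, a long-lived linker process consumes .o files the moment the compiler emits them, so parsing overlaps with compilation. A rough Python sketch of that producer/consumer shape (the function names and the stand-in "parse" step are hypothetical):

```python
# Rough sketch of the "preload objects in a daemon" idea: a linker daemon
# thread consumes .o files as the compiler produces them, so object
# parsing overlaps with compilation instead of happening at the end.
import queue
import threading

def linker_daemon(incoming, parsed):
    """Process each object the moment it arrives."""
    while True:
        name = incoming.get()
        if name is None:                # sentinel: compiler is done
            break
        parsed.append(name)             # stand-in for real symbol parsing

incoming, parsed = queue.Queue(), []
daemon = threading.Thread(target=linker_daemon, args=(incoming, parsed))
daemon.start()
for obj in ["a.o", "b.o", "c.o"]:       # compiler emitting objects over time
    incoming.put(obj)
incoming.put(None)
daemon.join()
```

By the time the last object lands, the daemon has already done most of the per-object work, and only the final layout and patch-up remain.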
More information about the Digitalmars-d mailing list