Lib change leads to larger executables

Kristian Kilpi kjkilpi at gmail.com
Thu Feb 22 05:03:07 PST 2007


On Thu, 22 Feb 2007 09:49:23 +0200, kris <foo at bar.com> wrote:
[snip]
> 9) The fake dependencies cause the linker to pick up and bind whatever  
> module happens to satisfy its need for the typeinfo resolution. In this  
> case, the linker sees Core.obj with a char[][] decl exposed, so it says  
> "hey, this is the place!" and binds it. Along with everything else that  
> Core.obj actually requires.
>
> 10) The linker is entirely wrong, but you can't really blame it, since  
> the char[][] decl is scattered throughout the library modules. It thinks  
> it gets the /right/ one, but in fact it could have chosen *any* of  
> them. This is now getting to the heart of the problem.
>
> 11) If there were only one exposed decl for char[][], e.g. like int[],  
> there would be no problem. In fact you can see all the prepackaged  
> typeinfo bound to any D executable. There's lots of it. However,  
> because the compiler injects this typeinfo into a variety of objects  
> (apparently wherever char[][] is used), the linker is boondoggled.
>
> 12) If the linker were smart, and could link segments instead of entire  
> object modules, this would still be OK (a segment is an isolated part of  
> the object module). But the linker is not smart. It was written to be  
> fast, in pure assembler, decades ago.
[snip]

As long as the linker operates at the .obj-file level, it will in practice  
pull some bloat into the executable. The only question is how much.

And if the compiler generates false dependencies, the bloat only grows.

So, a solution would be a new linker operating at the section level. (Not  
necessarily *the* solution, but *a* solution.)
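For what it's worth, today's GNU toolchain already shows what section-level linking looks like: the compiler can put every function in its own section, and the linker can then discard the sections nothing references. A rough sketch (the flags are GCC/GNU ld, not anything the D toolchain discussed here provides; file and symbol names are again invented):

```shell
cat > mod.c <<'EOF'
int unused_g(void) { return 2; }   /* never referenced */
int f(void) { return 1; }
EOF
cat > main.c <<'EOF'
int f(void);
int main(void) { return f(); }
EOF
# -ffunction-sections: one section per function instead of one per module
cc -ffunction-sections -c mod.c main.c
# --gc-sections: the linker drops sections nothing refers to
cc -Wl,--gc-sections main.o mod.o -o app2
# This time unused_g does not survive into the executable:
nm app2 | grep unused_g || echo "unused_g stripped"
```

The linker still consumes whole .o files, but because each function sits in its own section, the garbage collector can throw the unreferenced ones away, which is exactly the granularity the quoted post is asking for.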

Oh, the linker was written in assembly, how hardcore. :) I don't think  
there's much of a point in writing a new linker (that is, if someone will  
do that) in assembly... not that anyone was considering using assembly...  
<g> If linking times were a bit (or two) slower (because of more complex  
algorithms), I think that would be okay for a lot of people (all of  
them?). (If linking times became an issue, for example when building debug  
executables, the old linker could still be used for that.) Hmm, I wonder  
how much slower the current linker would be if it had been written in  
C/C++/D instead of assembly. I mean, today's processors are so much faster  
than a decade ago, while hard disks haven't gotten much faster (well, not  
significantly). Usually the hard disk is the bottleneck, not the processor.


