Make DMD emit C++ .h files same as .di files

H. S. Teoh hsteoh at quickfur.ath.cx
Tue Feb 26 07:07:34 UTC 2019


On Mon, Feb 25, 2019 at 05:24:00PM -0800, Manu via Digitalmars-d wrote:
> On Mon, Feb 25, 2019 at 2:55 PM H. S. Teoh via Digitalmars-d
[...]
> > It's very simple. The build description is essentially a DAG whose
> > nodes represent files (well, any product, really, but let's say
> > files for a concrete example), and whose edges represent commands
> > that transform input files into output files. All the build system
> > has to do is to do a topological walk of this DAG, and execute the
> > commands associated with each edge to derive the output from the
> > input.
> 
> Problem #1:
> You don't know the edges of the DAG until AFTER you run the compiler
> (ie, discovering imports/#includes, etc from the source code)

Yes, that's what scanners are for.  There can be standard scanners for
common languages like C, C++, Java, C#, etc.  I didn't say that you have
to write DAG nodes and edges by hand.  My point is that prebaked
automatic scanning rules of this sort should not *exclude* you from
directly adding your own DAG nodes and edges.  Build systems like SCons
offer an interface for writing your own scanners, for example.
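The scanning step itself is nothing exotic.  Here's a minimal sketch (in
Python, with a made-up regex and function name -- not any real tool's
API) of a scanner that pulls import dependencies out of D source text,
which is all a build system needs to discover the DAG edges up front:

```python
import re

# Illustrative only: matches top-level `import foo.bar;` declarations.
# A production scanner would also handle selective/renamed imports,
# comments, and string literals.
IMPORT_RE = re.compile(r'^\s*import\s+([\w.]+)\s*;', re.MULTILINE)

def scan_d_imports(source_text):
    """Return the modules a D source file imports, in order of appearance."""
    return IMPORT_RE.findall(source_text)

deps = scan_d_imports("import std.stdio;\nimport foo.bar;\nvoid main() {}")
# deps now lists the edges to add to the DAG for this file.
```

The build tool runs this over each source file before (or while) walking
the DAG, and the discovered imports become that file's incoming edges.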


> You also want to run the build with all 64 cores in your machine.

Build systems like SCons offer parallel building out-of-the-box, with no
additional user intervention required.  That's proper design.
Makefiles, by contrast, require special care when writing rules so that
parallel builds don't break, and (last time I checked) you have to
explicitly mark which rules are parallelizable.  That's bad design.


> File B's build depends on file A's build output, but it can't know
> that until after it attempts (and fails) to build B...
> 
> How do you resolve this tension?

There is no tension.  You just do a topological walk on the DAG and run
the steps in order.  If a step fails, every downstream step that depends
on that target is aborted; unrelated targets can continue building.
Parallelization works by identifying DAG nodes that don't depend on each
other and running them in parallel.  A proper build system handles this
automatically, without user intervention.
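Concretely, the walk is just Kahn's algorithm plus failure propagation.
A sketch in plain Python (hypothetical node names and step callables; a
real build system would dispatch the whole `ready` set to worker threads
instead of popping one node at a time):

```python
from collections import defaultdict

def build(dag, steps):
    """dag maps node -> list of prerequisite nodes; steps maps node -> a
    callable returning True on success.  Returns (built, failed) sets,
    where `failed` includes nodes skipped because an input failed."""
    dependents = defaultdict(list)
    remaining = {n: len(deps) for n, deps in dag.items()}
    for n, deps in dag.items():
        for d in deps:
            dependents[d].append(n)
    ready = [n for n, c in remaining.items() if c == 0]
    built, failed = set(), set()
    while ready:
        # Every node in `ready` has all inputs satisfied and none depends
        # on another, so a parallel builder could run them all at once.
        n = ready.pop()
        ok = n not in failed and steps[n]()
        if ok:
            built.add(n)
        else:
            failed.add(n)
        for m in dependents[n]:
            if not ok:
                failed.add(m)   # abort everything downstream of a failure
            remaining[m] -= 1
            if remaining[m] == 0:
                ready.append(m)
    return built, failed
```

Note that a failure in one subtree never stops unrelated targets: they
stay in the `ready` queue and build normally.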

Unless you're talking about altering the DAG as you go -- SCons *does*
in fact handle this case.  You just have to sequence your build steps so
that any newly introduced products/targets don't invalidate prior steps.
A topological walk usually already solves this, as long as you don't ask
for impossible things like having the build of target A add a new
dependency to an unrelated target B that may already have been built.
In the normal case, building A adds a dependency to a downstream target
C (which depends on A), and that's no problem: the topological walk
guarantees A is built before C, so by the time C's turn comes, the new
dependency is already known and can be handled correctly.
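A minimal sketch of that normal case, with illustrative target names:
building A reports a newly discovered edge for downstream C, and because
the topological order already put A first, the edge is simply folded in
before C runs.

```python
def build_with_discovery(order, steps, dag):
    """order: topologically sorted targets.  steps[n]() returns a list of
    (dep, target) edges discovered while building n.  dag maps each
    target -> set of its known dependencies, updated in place."""
    built = []
    for n in order:
        # Sanity check: everything n depends on -- including edges
        # discovered earlier in the walk -- has already been built.
        assert dag[n] <= set(built), f"{n} scheduled before its inputs"
        for dep, target in steps[n]():
            dag[target].add(dep)   # fold the discovered edge into the DAG
        built.append(n)
    return built
```

For example, if building 'A' discovers that 'C' also needs the generated
file 'gen' (which the walk produces between A and C anyway), the check
still passes and nothing needs to be redone.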

I'm starting to sound like I'm promoting SCons as the best thing since
sliced bread, but SCons actually has its own share of problems.  I'm
just using it as an example of a design that got *some* things right --
a good number of things, in fact, in spite of the warts that still
exist.  It's a lot saner than, say, make, and that's my point.  Such a
design is possible, and has been done (the multi-stage website build I
described in my previous post, by the way, is an SCons-based system --
it's not perfect, but already miles ahead of ancient junk like
makefiles).


> There's no 'simple' solution to this problem that I'm aware of. You
> start to address this with higher-level structure, and that is not a
> 'simple DAG' anymore.

It's still a DAG.  You just have some fancy automatic scanning and rule
generation at the higher levels, but it all turns into a DAG in the end.
And here is my point: the build system should ALLOW the user to enter
custom DAG nodes/edges as needed, rather than force the user to use only
the available prebaked rules -- because there will always be a situation
where you need to do something the build tool's authors haven't thought
of.  You should always have the option of going under the hood when you
need to; you should never be limited to what the authors had in mind.  I
have nothing against prebaked automatic scanners -- but they should not
preclude writing your *own* custom scanners if you want to, and they
should not prevent you from adding rules to the DAG directly.
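From memory, an SConstruct showing both escape hatches -- a user-written
scanner and a raw DAG edge added via Command() -- looks roughly like
this.  Treat the API details as a sketch to be double-checked against
the SCons documentation, not a drop-in build file (the `.k` file format
and its `include` syntax are made up for the example):

```python
# SConstruct fragment (SCons build descriptions are Python).
import re

def kfile_scan(node, env, path):
    # Custom scanner: extract 'include foo' lines from a made-up
    # .k source format and report them as dependencies.
    includes = re.findall(r'^include\s+(\S+)', node.get_text_contents(), re.M)
    return env.File(includes)

env = Environment()
env.Append(SCANNERS=Scanner(function=kfile_scan, skeys=['.k']))

# A hand-written DAG edge: no prebaked builder involved, just
# "this command turns these sources into this target".
env.Command('site.tar.gz', ['index.html', 'style.css'],
            'tar czf $TARGET $SOURCES')
```

The point is that both mechanisms live side by side: the scanner hooks
into the automatic dependency discovery, while Command() injects a node
and its edges into the DAG directly.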

The correct design is always the one that empowers the user, not the one
that spoonfeeds the user yet comes in a straitjacket.


> Now... whatever solution you concluded; express that in make, ninja,
> MSBuild, .xcodeproj...

The fact that doing all of this in make (or whatever else) is such a
challenge is exactly proof of what I'm saying: these build systems are
fundamentally b0rken, and for no good reason. All the technology
necessary to make sane builds possible already exists.  It's just that
too many build systems are still living in the 80's and refusing to move
on.

And in the meantime, even better build systems are already being
implemented, like Tup, where the build time is proportional to the size
of change rather than the size of the workspace (an SCons wart).  Yet
people still use make like it's still 1985, and people still invent
build systems with antiquated designs like it's still 1985.


T

-- 
Give a man a fish, and he eats once. Teach a man to fish, and he will sit forever.

