Make DMD emit C++ .h files same as .di files

Rubn where at is.this
Tue Feb 26 01:33:50 UTC 2019


On Monday, 25 February 2019 at 22:55:18 UTC, H. S. Teoh wrote:
> On Mon, Feb 25, 2019 at 10:14:18PM +0000, Rubn via 
> Digitalmars-d wrote:
>> On Monday, 25 February 2019 at 19:28:54 UTC, H. S. Teoh wrote:
> [...]
>> > <off-topic rant>
>> > This is a perfect example of what has gone completely wrong 
>> > in the world
>> > of build systems. Too many assumptions and poor designs over 
>> > an
>> > extremely simple and straightforward dependency graph walk 
>> > algorithm,
>> > that turn something that ought to be trivial to implement 
>> > into a
>> > gargantuan task that requires a dedicated job title like 
>> > "build
>> > engineer".  It's completely insane, yet people accept it as 
>> > a fact of
>> > life. It boggles the mind.
>> > </off-topic rant>
> [...]
>> I don't think it is as simple as you make it seem. Especially 
>> when you need to start adding components that need to be built 
>> but aren't source code.
>
> It's very simple. The build description is essentially a DAG 
> whose nodes represent files (well, any product, really, but 
> let's say files for a concrete example), and whose edges 
> represent commands that transform input files into output 
> files. All the build system has to do is to do a topological 
> walk of this DAG, and execute the commands associated with each 
> edge to derive the output from the input.
>
> This is all that's needed. The rest are all fluff.
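>
> To make that concrete, here is a minimal sketch in D of the walk 
> being described. The names (Rule, build) and the sweep-until-done 
> loop are illustrative, not taken from any real build tool; a real 
> implementation would index the graph instead of re-scanning it.
>
>     import std.algorithm : all, canFind;
>     import std.exception : enforce;
>     import std.process : executeShell;
>     import std.stdio : writeln;
>
>     struct Rule
>     {
>         string[] inputs;   // files this edge consumes
>         string output;     // file this edge produces
>         string command;    // command deriving output from inputs
>     }
>
>     void build(Rule[] rules)
>     {
>         bool[string] done;  // outputs produced so far
>         bool progressed = true;
>         while (progressed)  // crude topological walk
>         {
>             progressed = false;
>             foreach (r; rules)
>             {
>                 if (r.output in done) continue;
>                 // ready once each input is a plain source file
>                 // (no rule produces it) or was already built
>                 bool ready = r.inputs.all!(i =>
>                     (i in done) !is null
>                     || !rules.canFind!(x => x.output == i));
>                 if (!ready) continue;
>                 writeln("deriving ", r.output);
>                 enforce(executeShell(r.command).status == 0,
>                         "failed: " ~ r.command);
>                 done[r.output] = true;
>                 progressed = true;
>             }
>         }
>     }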
>
> The basic problem with today's build systems is that they 
> impose arbitrary assumptions on top of this simple DAG. For 
> example, all input nodes are arbitrarily restricted to source 
> code files, or in some bad cases, source code of some specific 
> language or set of languages. Then they arbitrarily limit edges 
> to be only compiler invocations and/or linker invocations.  So 
> the result is that if you have an input file that isn't source 
> code, or if the output file requires invoking something other 
> than a compiler/linker, then the build system doesn't support 
> it and you're left out in the cold.
>
> Worse yet, many "modern" build systems assume a fixed depth of 
> paths in the graph, i.e., you can only compile source files 
> into binaries, you cannot compile a subset of source files into 
> an auxiliary utility that in turn generates new source files 
> that are then compiled into an executable.  So automatic code 
> generation is ruled out, preprocessing is ruled out, etc., 
> unless you shoehorn all of that into the compiler invocation, 
> which is a ridiculous idea.
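>
> In DAG terms, such a generator chain is nothing special: it is 
> just three ordinary edges. Continuing the sketch above (the file 
> names here are made up for illustration):
>
>     Rule[] rules = [
>         // compile a subset of sources into a generator utility
>         Rule(["gen.d"], "gen", "dmd -of=gen gen.d"),
>         // run the utility to turn a data file into source code
>         Rule(["gen", "tables.dat"], "tables.d",
>              "./gen tables.dat > tables.d"),
>         // compile the generated source into the final program
>         Rule(["main.d", "tables.d"], "app",
>              "dmd -of=app main.d tables.d"),
>     ];
>     build(rules);  // runs the three steps in dependency order
>
> The walk never asks whether an edge is "a compiler" or "a code 
> generator"; it only asks whether the inputs exist yet.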

What build systems are you talking about here? If I search for 
programs that do a certain thing, I'll most definitely find more 
subpar ones than spectacular ones, especially among the free 
ones. I ask so that we're on the same page about which build 
systems you are referring to.

> None of these restrictions are necessary, and they only 
> needlessly limit what you can do with your build system.
>
> I understand that these assumptions are primarily to simplify 
> the build description, e.g., by inferring dependencies so that 
> you don't have to specify edges and nodes yourself (which is 
> obviously impractical for large projects).  But these 
> additional niceties ought to be implemented as a SEPARATE layer 
> on top of the topological walk, and the user should not be 
> arbitrarily prevented from directly accessing the DAG 
> description.  The way so many build systems are designed is 
> that either you have to do everything manually, like makefiles, 
> which everybody hates, or the hood is welded shut and you can 
> only do what the authors decide that you should be able to do 
> and nothing else.
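>
> Such a layer could be as thin as a scanner that synthesizes 
> edges and hands them to the same DAG walk. A deliberately naive 
> sketch for D imports (the regex, the module-to-file mapping, and 
> the helper name are all simplifications of my own):
>
>     import std.array : replace;
>     import std.file : readText;
>     import std.regex : matchAll, regex;
>
>     // infer a compile rule's inputs from import declarations
>     Rule inferCompileRule(string src, string outFile)
>     {
>         string[] deps = [src];
>         auto re = regex(`import\s+([\w.]+)\s*;`);
>         foreach (m; readText(src).matchAll(re))
>             deps ~= m[1].replace(".", "/") ~ ".d";
>         return Rule(deps, outFile,
>                     "dmd -of=" ~ outFile ~ " " ~ src);
>     }
>
> The point is that the user can still write raw Rules by hand 
> next to the inferred ones; inference is a convenience layered on 
> top of the DAG, not a cage around it.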
>
>
> [...]
>> It's easy to say build systems are overly complicated until 
>> you actually work on a big project.
>
> You seem to think that I'm talking out of an ivory tower.  I 
> assure you I know what I'm talking about.  I have written 
> actual build systems that do things like this:
>
> - Compile a subset of source files into a utility;
>
> - Run said utility to transform certain input data files into 
> source
>   code;
>
> - Compile the generated source code into executables;
>
> - Run said executables on other data files to transform the 
> data into
>   PovRay scene files;
>
> - Run PovRay to produce images;
>
> - Run post-processing utilities on said images to crop / 
> reborder them;
>
> - Run another utility to convert these images into animations;
>
> - Install these animations into a target directory.
>
> - Compile another set of source files into a different utility;
>
> - Run said utility on input files to transform them to PHP 
> input files;
>
> - Run php-cli to generate HTML from said input files;
>
> - Install said HTML files into a target directory.
>
> - Run a network utility to retrieve the history of a specific 
> log file
>   and pipe it through a filter to extract a list of dates.
>
> - Run a utility to transform said dates into a gnuplot input 
> file for
>   generating a graph;
>
> - Run gnuplot to create the graph;
>
> - Run postprocessing image utilities to touch up the image;
>
> - Install the result into the target directory.

Yes, doing all those things isn't all that difficult; it really 
is just a matter of calling a different program to generate each 
file. The difficulty with build systems comes in when you have an 
extremely large project that takes a long time to build.

> None of the above are baked-in rules. The user is fully capable 
> of specifying whatever transformation he wants on whatever 
> inputs he wants to produce whatever output he wants.  No 
> straitjackets, no stupid hacks to work around stupid build 
> system limitations. Tell it how you want your inputs to be 
> transformed into outputs, and it handles the rest for you.
>
> Furthermore, the build system is incremental: if I modify any 
> of the above input files, it automatically runs the necessary 
> commands to derive the updated output files AND NOTHING ELSE 
> (i.e., it does not needlessly re-derive stuff that hasn't 
> changed).  Better yet, if any of the intermediate output files 
> are identical to the previous outputs, the build stops right 
> there and does not needlessly recreate other outputs down the 
> line.
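>
> Both properties fall out of comparing content rather than 
> timestamps. One way to express the cutoff test, as a sketch (the 
> hashing scheme and the helper name are illustrative, not how any 
> particular tool spells it):
>
>     import std.digest.sha : sha256Of;
>     import std.file : exists, read;
>
>     // true if `file` has the same content hash as last run;
>     // an unchanged output means downstream edges can be skipped
>     bool unchanged(string file, ref ubyte[32][string] lastHash)
>     {
>         if (!file.exists) return false;
>         auto h = sha256Of(cast(const(ubyte)[]) read(file));
>         bool same = (file in lastHash) && lastHash[file] == h;
>         lastHash[file] = h;  // remember for the next run
>         return same;
>     }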
>
> The build system is also reliable: running the build in a dirty 
> workspace produces identical products as running the build in a 
> fresh checkout.  I never have to worry about doing the 
> equivalent of 'make clean; make', which is a stupid thing to 
> have to do in 2019. I have a workspace that hasn't been 
> "cleaned" for months, and running the build on it produces 
> exactly the same outputs as a fresh checkout.

It really depends on what you are building. Working on DMD I 
don't have to do a clean, but when doing a bisect I effectively 
have to do a clean at every new commit.

> There's more I can say, but basically, this is the power that 
> having direct access to the DAG can give you.  In this day and 
> age, it's inexcusable not to be able to do this.
>
> Any build system that cannot do all of the above is a crippled 
> build system that I will not use, because life is far too short 
> to waste fighting with your build system rather than getting 
> things done.
>
>
> T

The build systems I've used can do all that; the problem isn't 
functionality so much as the ease of achieving that 
functionality. I just use a script and don't need a build system, 
but a full build of my project only takes 10 seconds, so I have 
that luxury.

