What are the worst parts of D?
Cliff via Digitalmars-d
digitalmars-d at puremagic.com
Wed Sep 24 13:23:57 PDT 2014
On Wednesday, 24 September 2014 at 20:12:40 UTC, H. S. Teoh via
Digitalmars-d wrote:
> On Wed, Sep 24, 2014 at 07:36:05PM +0000, Cliff via
> Digitalmars-d wrote:
>> On Wednesday, 24 September 2014 at 19:26:46 UTC, Jacob Carlborg
>> wrote:
>> >On 2014-09-24 12:16, Walter Bright wrote:
>> >
>> >>I've never heard of a non-trivial project that didn't have
>> >>constant breakage of its build system. All kinds of reasons - add a
>> >>file, forget to add it to the manifest. Change the file contents,
>> >>neglect to update dependencies. Add new dependencies on some
>> >>script, script fails to run on one configuration. And on and on.
>> >
>> >Again, if changing the file contents breaks the build system you're
>> >doing it very, very wrong.
>>
>> People do it very, very wrong all the time - that's the problem :)
>> Build systems are felt by most developers to be a tax they have to
>> pay to do what they want to do, which is write code and solve
>> non-build-related problems.
>
> That's unfortunate indeed. I wish I could inspire them as to how cool
> a properly-done build system can be. Automatic parallel building, for
> example. Fully-reproducible, incremental builds (never ever do `make
> clean` again). Automatic build + packaging in a single command.
> Incrementally *updating* packaging in a single command. Automatic
> dependency discovery. And lots more. A lot of this technology actually
> already exists. The problem is that still too many people think "make"
> whenever they hear "build system". Make is but a poor, antiquated
> caricature of what modern build systems can do. Worse is that most
> people are resistant to replacing make because of inertia. (Not
> realizing that by not throwing out make, they're subjecting themselves
> to a lifetime of unending, unnecessary suffering.)
>
>
>> Unfortunately, build engineering is effectively a specialty of its
>> own when you step outside the most trivial of systems. It's really no
>> surprise how few people can get it right - most people can't even
>> agree on what a build system is supposed to do...
>
> It's that bad, huh?
>
> At its most fundamental level, a build system is really nothing but a
> dependency management system. You have a directed, acyclic graph of
> objects that are built from other objects, and a command which takes
> said other objects as input and produces the target object(s) as
> output. The build system takes this dependency graph as input and runs
> the associated commands in topological order to produce the
> product(s). A modern build system can parallelize independent steps
> automatically. None of this is specific to compiling programs; in
> fact, it works for any process that takes a set of inputs and
> incrementally derives intermediate products until the final set of
> products is produced.
>
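
To make the topological-walk idea concrete, here is a minimal sketch in
D. The Task type and the node names are made up for illustration; a real
build system would also dispatch independent tasks in parallel (e.g. via
std.parallelism) rather than run them one by one.

import std.algorithm : all;
import std.stdio : writeln;

struct Task
{
    string   output;        // node this task produces
    string[] inputs;        // nodes it depends on
    void delegate() run;    // command deriving output from inputs
}

// Executes tasks in topological order.  Assumes the graph is acyclic.
void build(Task[] tasks)
{
    bool[string] done;

    // Leaf inputs (files no task produces) count as already "built".
    bool[string] produced;
    foreach (t; tasks) produced[t.output] = true;
    foreach (t; tasks)
        foreach (i; t.inputs)
            if (i !in produced) done[i] = true;

    size_t remaining = tasks.length;
    while (remaining > 0)
    {
        size_t progressed = 0;
        foreach (ref t; tasks)
        {
            if (t.output in done) continue;
            if (t.inputs.all!(i => (i in done) !is null))
            {
                t.run();    // tasks that become ready in the same pass
                            // are independent of each other
                done[t.output] = true;
                --remaining;
                ++progressed;
            }
        }
        assert(progressed > 0, "cycle in dependency graph");
    }
}

void main()
{
    auto tasks = [
        Task("foo.o", ["foo.d"], delegate() { writeln("compile foo.d"); }),
        Task("bar.o", ["bar.d"], delegate() { writeln("compile bar.d"); }),
        Task("app", ["foo.o", "bar.o"], delegate() { writeln("link app"); }),
    ];
    build(tasks);   // foo.o and bar.o come in either order; app is last
}
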
> Although the input is the (entire) dependency graph, it's not
> desirable to specify this graph explicitly (it's far too big in
> non-trivial projects); so most build systems offer ways of
> automatically deducing dependencies. Usually this is done by scanning
> the inputs, and modern build systems would offer ways for the user to
> define new scanning methods for new input types. One particularly
> clever system, Tup (http://gittup.org/tup/), uses OS call proxying to
> discover the *exact* set of inputs and outputs for a given command,
> including hidden dependencies (like reading a compiler configuration
> file that may change compiler behaviour) that most people don't even
> know about.
>
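
For D code, "scanning the inputs" mostly means pulling module names out
of import declarations. A deliberately naive sketch follows; a real tool
would ask the compiler itself (e.g. dmd's -deps output) or use Tup-style
call tracing, since a regex like this is fooled by comments and strings.

import std.file : readText;
import std.regex : matchAll, regex;
import std.stdio : writeln;

// Collect module names from import declarations in one D source file.
string[] scanImports(string sourceFile)
{
    string[] deps;
    auto importDecl = regex(`(?:^|\s)import\s+([\w.]+)`, "m");
    foreach (m; readText(sourceFile).matchAll(importDecl))
        deps ~= m[1];
    return deps;
}

void main(string[] args)
{
    foreach (dep; scanImports(args[1]))
        writeln(dep);   // e.g. std.stdio, mylib.util
}
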
> It's also not desirable to have to derive all products from their
> original inputs all the time; what hasn't changed shouldn't need to be
> re-processed (we want incremental builds). So modern build systems
> implement some way of detecting when a node in the dependency graph
> has changed, thereby requiring all derived products downstream to be
> rebuilt. The most unreliable method is to scan for file change
> timestamps (make). A reliable (but slow) method is to compare file
> hash checksums. Tup uses OS filesystem change notifications to detect
> changes, thereby cutting out the scanning overhead, which can be quite
> large in complex projects (but it may be unreliable if the monitoring
> daemon isn't running / after rebooting).
>
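
The checksum approach is easy to sketch in D. In this sketch the cache
is just an in-memory associative array and persisting it between runs is
left out; the names are made up for the example.

import std.digest : toHexString;
import std.digest.sha : sha1Of;
import std.file : read;

// Hex SHA-1 of a file's contents.
string contentHash(string path)
{
    auto digest = sha1Of(cast(const(ubyte)[]) read(path));
    return toHexString(digest).idup;
}

// True if the file's content differs from what was recorded last time;
// also records the new hash for the next run.
bool hasChanged(string path, ref string[string] hashCache)
{
    immutable h = contentHash(path);
    auto previous = path in hashCache;
    hashCache[path] = h;
    return previous is null || *previous != h;
}
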
> These are all just icing on the cake; the fundamental core of a build
> system is basically dependency graph management.
>
>
> T
Yes, Google in fact implemented much of this for their internal build
systems, I am led to believe. I have myself written such a system
before. In fact, the first project I have been working on in D is
exactly this, using OS call interception for validating/discovering
dependencies, building execution graphs, etc.

I haven't seen Tup before, thanks for pointing it out.