D build and SCons

Russel Winder russel at winder.org.uk
Thu Feb 1 12:56:28 UTC 2018


On Thu, 2017-12-28 at 10:21 -0800, H. S. Teoh via Digitalmars-d wrote:
> 
> […]

Apologies for taking so long to get to this.

> OK, I may have worded things poorly here.  What I meant was that with
> "traditional" build systems like make or SCons, whenever you needed to
> rebuild the source tree, the tool has to scan the *entire* source tree
> in order to discover what needs to be rebuilt. I.e., it's O(N) where N
> is the size of the source tree.  Whereas with tup, it uses the Linux
> kernel's inotify mechanism to learn about which file(s) being
> monitored have been changed since the last invocation, so that it can
> scan the changed files in O(n) time where n is the number of changed
> files, and in the usual case, n is much smaller than N. It's still
> linear in terms of the size of the change, but sublinear in terms of
> the size of the entire source tree.

This I can agree with. SCons definitely has to check hashes to
determine which files have changed in a "not just a whitespace change"
way at the leaves of the build ADG. I am not sure what Ninja does, but
yes, Tup uses inotify to filter the list of touched, but not
necessarily changed, files. For my projects build time generally
dominates check time, so I don't see much difference, except that Ninja
is way faster than Make as a backend to CMake.
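The difference between the two strategies can be sketched in a few
lines of Python. This is not SCons' or Tup's actual code, just an
illustration under stated assumptions: the "full scan" approach hashes
every file on every invocation (O(N)), while a notification-based tool
only checks the files the kernel reported as touched (O(n)).

```python
# Sketch only: in-memory stand-ins for files on disk and for the
# signatures a build tool recorded after the last successful build.
import hashlib

def signature(content: bytes) -> str:
    return hashlib.md5(content).hexdigest()

tree = {"a.d": b"module a;",
        "b.d": b"module b; // edited since last build",
        "c.d": b"module c;"}
stored = {"a.d": signature(b"module a;"),
          "b.d": signature(b"module b;"),
          "c.d": signature(b"module c;")}

# Full-scan approach: hash every file in the tree, every time.
dirty_full_scan = [f for f, data in tree.items()
                   if signature(data) != stored[f]]

# Notification approach: only hash the files reported as touched
# (here a hard-coded list standing in for inotify events). A touched
# file may still hash equal, e.g. after a save with no real change.
touched = ["b.d"]
dirty_notified = [f for f in touched if signature(tree[f]) != stored[f]]

print(dirty_full_scan, dirty_notified)  # both find only b.d
```

Both approaches agree on what is dirty; they differ only in how many
files must be hashed to find out.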

> I think it should be obvious that an approach whose complexity is
> proportional to the size of the changeset is preferable to an
> approach whose complexity is proportional to the size of the entire
> source tree, esp. given the large sizes of today's typical software
> projects.  If I modify 1 file in a project of 10,000 source files,
> rebuilding should not be orders of magnitude slower than if I modify
> 1 file in a project of 100 files.

It is obvious, but complexity is not everything; wall-clock time is
arguably more important, as is actual build time versus preparation
time. SCons does indeed have a large up-front ADG check time for large
projects. I believe there is the Parts overlay on SCons for dealing
with big projects, and I believe the plan for later in the year is for
the most useful parts of Parts to become part of the main SCons system.

> In this sense, while SCons is far superior to make in terms of
> usability
> and reliability, its core algorithm is still inferior to tools like
> tup.

However Tup is not getting traction compared to CMake (with either a
Make or, preferably, a Ninja backend; I wonder if there is a Tup
backend).

> Now, I've not actually used tup myself other than a cursory glance at
> how it works, so there may be other areas in which it's inferior to
> SCons.  But the important thing is that it gets us away from the O(N)
> of traditional build systems that requires scanning the entire source
> tree, to the O(n) that's proportional to the size of the changeset.
> The former approach is clearly not scalable. We ought to be able to
> update the dependency graph in proportion to how many nodes have
> changed; it should not require rebuilding the entire graph every time
> you invoke the build.

I am not using Tup much simply because I have never really got started
with it; I just use SCons, Meson, and, when I have to, CMake/Ninja. In
the end my projects are just not big enough for me to investigate the
faster build times Tup reputedly brings.

> 
> […]

> Preferably, checking dependencies ought not to be done at all unless
> the developer calls for it. Network access is slow, and I find it
> intolerable when it's not even necessary in the first place.  Why
> should it need to access the network just because I changed 1 line of
> code and need to rebuild?

This was the reason for Waf: split the SCons system into a
configuration step and a build step à la Autotools. CMake also does
this, as does Meson. I have a preference for this way, and yet I still
use SCons quite a lot!

> 
> […]
> The documentation does not help in this respect. The only thing I
> could find was a scanty description of how to invoke dub in its most
> basic forms, with little or no information (or hard-to-find
> information) on how to configure it more precisely.  Also, why should
> I need to hardcode a specific version of a dependent library just to
> suppress network access when rebuilding?! Sometimes I *do* want to
> have the latest libraries pulled in -- *when* I ask for it -- just
> not every single time I build.

If Dub really is to become the system for D that Cargo is for Rust, it
clearly needs more people to work on it and evolve the code and the
documentation. As long as no-one does the work, the result will be
rhetorical ranting on the mailing lists.

> […]
> 
> AFAIK, the only standard that Dub is, is a packaging system for D.  I
> find it quite weak as a build tool.  That's the problem, it tries to
> do too much.  It would have been nice if it stuck to just dealing
> with packaging, rather than trying to do builds too, and doing it IMO
> rather poorly.

No argument from me there, except Cargo. Cargo does a surprisingly good
job of being both a package manager and a build system. Even the go
command is quite good at it for Go. So I am re-assessing my old dislike
of this approach. I used to be a "separate package management from
build, and leave build to build systems" person, and I guess I still am
really; however, Cargo is challenging my view, where Dub currently does
not.

Given the thought above, unless I and others actually get on and evolve
Dub, nothing will change.

> 
> […]
> Honestly, I don't care to have a "standard" build system for D. A
> library should be able to produce a .so or .a, and have an import
> path, and I couldn't care less how that happens; the library could be
> built by a hardcoded shell script for all I care. All I should need
> to do in my code is to link to that .so or .a and specify -I with the
> right import path(s). Why should upstream libraries dictate how my
> code is built?!

This last point is one of the biggest problems with the current Dub
system, and a reason many people have no intention of using Dub for
building.

Your earlier points in this paragraph should be turned into issues on
the Dub source repository, and indeed the last one as well. And then we
should create pull requests.

I actually think a standard way is a good thing, but that there should
be other options as well. SCons, CMake, Meson, etc. all need ways of
building D for those who do not want to use the standard way; that
seems reasonable to me. However, SCons and Meson support for D is not
yet as good as it could be, and last time I tried, CMake-D didn't work
for me.

> To this end, a standard way of exporting import paths in a D library
> (it can be as simple as a text file in the code repo, or some script
> or tool akin to llvm-config or sdl-config that spits out a list of
> paths / libraries / etc) would go much further than trying to
> shoehorn everything into a single build system.

So let's do it rather than just talk about it?
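To make the idea concrete, here is a minimal sketch of what such an
sdl-config-style script could look like. Everything here is
hypothetical: the name "dlib-config", the flag names, and the paths are
all made up for illustration; the point is only that any build tool can
shell out to such a script to get import paths and link flags.

```shell
# Create a hypothetical "dlib-config" script such as a D library
# might ship alongside its sources. Paths are illustrative only.
cat > dlib-config <<'EOF'
#!/bin/sh
# Hypothetical example script; flags and paths are made up.
case "$1" in
  --import-paths) echo "-I/usr/local/include/d/mylib" ;;
  --libs)         echo "-L-L/usr/local/lib -L-lmylib" ;;
  *)  echo "usage: dlib-config --import-paths | --libs" >&2; exit 1 ;;
esac
EOF
chmod +x dlib-config

# Any build system, or a plain Makefile, can now query the flags.
FLAGS=$(sh ./dlib-config --import-paths)
echo "$FLAGS"
```

A one-line text file in the repository listing the import paths would
serve the same purpose for tools that prefer not to execute anything.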

-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk