Dub, Cargo, Go, Gradle, Maven
Russel Winder
russel at winder.org.uk
Wed Feb 21 10:46:22 UTC 2018
On Fri, 2018-02-16 at 10:16 -0800, H. S. Teoh via Digitalmars-d wrote:
>
[…]
> If a dependent node requires network access, it forces network access
> every time the DAG is updated. This is slow, and also unreliable: the
> shape of the DAG could, in theory, change arbitrarily at any time
> outside the control of the user. If I'm debugging a program, the very
> last thing I want to happen is that the act of building the software
> also pulls in new library versions that cause the location of the bug
> to shift, thereby ruining any progress I may have made on narrowing
> down its locus. It would be nice to locally cache such
> network-dependent nodes so that they are only refreshed on demand.
As with all build systems that involve a network-accessed dependency
provider, there has to be a local cache *and* a mechanism for not
doing network lookups on every build. Some people pin fixed versions
to stop this; others also employ a "no lookups" flag, or separate the
checking of dependencies from the build itself. Obviously there needs
to be network access for some change events, but it should be well
controlled.
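For concreteness, every tool in the subject line has some such switch
(spellings from memory, so check each tool's documentation; Go
sidesteps the question by vendoring sources into the repository):

    dub build --skip-registry=all   # never consult the package registry
    cargo build --frozen            # require Cargo.lock, allow no network
    gradle build --offline          # resolve from the local cache only
    mvn --offline package           # likewise, fails if an artefact is
                                    # missing from ~/.m2/repository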
> Furthermore, a malicious external entity can introduce arbitrary
> changes into the DAG, e.g., hijack an intermediate DNS server so that
> network lookups get redirected to a malicious server which then adds
> dependencies on malware to your DAG. The next time you update: boom,
> your software now contains a trojan horse. (Even better if you have
> integrated package dependencies with builds all the way to
> deployment: now all your customers have a copy of the trojan deployed
> on their machines, too.) To mitigate this, some kind of security
> model would be required (e.g., verifiable server certificates,
> cryptographically signed package payloads). Which adds to the cost of
> refreshing network nodes, and hence is another big reason why this
> should be done on-demand, NOT automatically every time you ask for a
> new software build.
This is a problem for all extant systems, and until a general solution
is found there is little any one tool can do beyond the certificate
and signature checking already mentioned.
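The cheaper half of that checking is digest pinning: record a checksum
when a dependency is first resolved, and refuse any later download
that does not match. A minimal sketch of the idea in Go (the helper
name and file name are hypothetical, and the pinned digest is just a
placeholder):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyArtifact compares the SHA-256 digest of a downloaded file
    // against a checksum pinned in a committed lock file. Hypothetical
    // helper, illustrating pin-and-verify only.
    func verifyArtifact(path, pinnedHex string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != pinnedHex {
            return fmt.Errorf("checksum mismatch for %s: got %s, want %s",
                path, got, pinnedHex)
        }
        return nil
    }

    func main() {
        // The pinned digest would normally come from a lock file kept
        // under source control alongside the dependency versions.
        pinned := "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
        if err := verifyArtifact("libfoo-1.0.zip", pinned); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("artifact matches pinned checksum")
    }

This protects against a tampered download, though not against a
malicious version being pinned in the first place.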
> Also, if the machine I'm working on happens to be offline, it would
> totally suck to be unable to build my project just because of that.
> The whole point of having a DAG is reliable builds, and having the
> graph depend on remote resources over an inherently unreliable
> network defeats the purpose. That is why caching is basically
> mandatory, as is control over when the network is accessed.
You have answered your own question: all good build/dependency
management systems already cache downloaded dependencies locally and
can resolve from that cache when offline.
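For reference, the default cache locations are ~/.m2/repository for
Maven, ~/.gradle/caches for Gradle, ~/.dub/packages for Dub, and
~/.cargo/registry for Cargo; with the relevant offline switch set, the
build resolves entirely from there.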
> And furthermore, one always has to be mindful of the occasional need
> to roll back. Generally, source code control is used for the local
> source code component -- if you need to revert a change, just check
> out an earlier revision from your repo. But if a network resource
> that used to provide library X v1.0 now has moved on to X v2.0, and
> has dropped all support for v1.0 so that it is no longer downloadable
> from the server, then rollback is no longer possible. You are now
> unable to reproduce a build you made 2 years ago. (Which you might
> need to, if a customer environment is still running the old version
> and you need to debug it.) IOW, the network is inherently unreliable.
> Some form of local caching / cache revision control is required.
I don't see this as a big issue: reproducing an old build is already
possible in all good systems, given pinned versions and a retained
cache.
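The standard mechanism is a lock file committed alongside the sources,
so that checking out a two-year-old revision also pins the exact
dependency versions it was built with. Dub's dub.selections.json, for
example (the version numbers here are illustrative):

    {
        "fileVersion": 1,
        "versions": {
            "vibe-d": "0.8.3"
        }
    }

That still assumes the pinned versions remain fetchable, hence the
need to retain the cache, or to run a private mirror of the registry.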
[…]
--
Russel.
===========================================
Dr Russel Winder t: +44 20 7585 2200
41 Buckmaster Road m: +44 7770 465 077
London SW11 1EN, UK w: www.russel.org.uk