DIP11: Automatic downloading of libraries

Steven Schveighoffer schveiguy at yahoo.com
Wed Jun 15 06:08:03 PDT 2011


On Tue, 14 Jun 2011 22:24:04 -0400, Nick Sabalausky <a at a.a> wrote:

> "Andrei Alexandrescu" <SeeWebsiteForEmail at erdani.org> wrote in message
> news:4DF7D92A.8050606 at erdani.org...
>> On 6/14/11 4:38 PM, Nick Sabalausky wrote:
>>> - Putting it in the compiler forces it all to be written in C++. As an
>>> external tool, we could use D.
>>
>> Having the compiler communicate with a download tool supplied with the
>> distribution seems to be a very promising approach that would address
>> this concern.
>>
>
> A two way "compiler <-> build tool" channel is messier than "build tool
> invoked compiler", and I don't really see much benefit.

It's neither.  It's not a build tool, it's a fetch tool.  The build tool  
has nothing to do with getting the modules.

The drawback here is that the build tool has to interface with said fetch  
tool in order to do incremental builds.

However, we could make an assumption that files that are downloaded are  
rather static, and therefore, the target doesn't "depend" on them.  To  
override this, just do a rebuild-from-scratch on the rare occasion you  
have to update the files.
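That assumption could be encoded in an incremental-build check along these lines. This is a minimal sketch, not part of DIP11: the function name, signature, and cache-root convention are all hypothetical.

```python
# Sketch of the assumption above: dependencies that live in the download
# cache are treated as static, so they never make a target out of date.
# Updating them means rebuilding from scratch with an empty cache.
import os

def needs_rebuild(target_mtime, dependencies, cache_root):
    """Return True if any *local* dependency is newer than the target.

    Files under cache_root are downloaded modules, assumed static."""
    cache_root = os.path.abspath(cache_root)
    for dep in dependencies:
        if os.path.abspath(dep).startswith(cache_root):
            continue  # downloaded file: assumed not to change
        if os.path.getmtime(dep) > target_mtime:
            return True
    return False
```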

>>> - By default, it ends up downloading an entire library one inferred
>>> source file at a time. Why? Libraries are a packaged whole. Standard
>>> behavior should be for libraries to be treated as such.
>>
>> Fair point, though in fact the effect is that one ends up downloading
>> exactly the used modules from that library and potentially others.
>>
>
> I really don't see a problem with that. And you'll typically end up
> needing most, if not all, anyway. It's very difficult to see this as an
> actual drawback.

When a given module is requested, it is quite likely (I would say almost  
certain) to be part of a package.  The fetch tool could know to get the  
entire package and extract it into the cache.
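A rough sketch of that behaviour: the first request for any module in a package fetches the whole package archive and extracts it into the cache, so later requests for sibling modules are served locally. All the names here (`fetch_module`, `download_package`, the zip layout, the cache location) are illustrative assumptions, not part of DIP11.

```python
# Sketch: fetch a whole package on the first request for any of its
# modules, then serve sibling modules from the cache.
import io
import os
import zipfile

CACHE_ROOT = os.path.expanduser("~/.dfetch-cache")  # assumed location

def fetch_module(module, download_package):
    """Return the cached source path for `module` (e.g. 'gtk.window'),
    fetching its entire package archive on the first request."""
    pkg = module.split(".")[0]
    pkg_dir = os.path.join(CACHE_ROOT, pkg)
    if not os.path.isdir(pkg_dir):          # first module from this package
        archive = download_package(pkg)     # bytes of a .zip of the package
        with zipfile.ZipFile(io.BytesIO(archive)) as zf:
            zf.extractall(pkg_dir)          # extract the whole package once
    # files inside the archive are relative to the package root
    return os.path.join(pkg_dir, *module.split(".")[1:]) + ".d"
```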

>>> - Does every project that uses libX have to download it separately? If
>>> not (or really even if so), how does the compiler handle different
>>> versions of the lib and prevent "dll hell"? Versioning seems to be an
>>> afterthought in this DIP - and that's a guaranteed way to eventually
>>> find yourself in dll hell.
>>
>> Versioning is a policy matter that can, I think, be addressed within the
>> URL structure. This proposal tries to support versioning without
>> explicitly imposing it or standing in its way.
>>
>
> That's exactly my point. If you leave it open like that, everyone will
> come up with their own way to do it, many will not even give it any
> attention at all, and most of those approaches will end up being wrong
> WRT avoiding dll hell. Hence, dll hell will get in and library users
> will end up having to deal with it. The only way to avoid it is to
> design it out of the system up front *with explicitly imposing it*.

If the proposal becomes one where the include path specifies base URLs,  
then the build tool can specify exact versions.

The cache should be responsible for making sure that identically named  
files from different URLs do not conflict.
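One way the cache could keep those files apart is to key each entry on a hash of the base URL, so the same relative path cached from v1.2.3 and v1.2.4 lands in distinct directories. This layout is an assumption for illustration, not something DIP11 specifies.

```python
# Sketch: derive a collision-free cache path from the base URL, so
# identically named files from different URLs never overwrite each other.
import hashlib
import os

def cache_path(cache_root, base_url, relative_file):
    """Map (base URL, relative file path) to a unique local path."""
    key = hashlib.sha1(base_url.encode("utf-8")).hexdigest()[:12]
    return os.path.join(cache_root, key, relative_file)
```

With this scheme, `gtk/window.d` fetched from `http://url.to.project/v1.2.3` and from `http://url.to.project/v1.2.4` occupy different cache directories even though the file name is identical.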

For example:

-Ihttp://url.to.project/v1.2.3

in one project and

-Ihttp://url.to.project/v1.2.4

in another.

I still feel that specifying the URL in the source is the wrong approach  
-- it puts too much information into the source, and any small change  
requires modifying the source code.  We don't specify full paths for  
local imports, so why should we specify full paths for remote ones?

-Steve

