@api: One attribute to rule them All

Zach the Mystic via Digitalmars-d digitalmars-d at puremagic.com
Mon Jan 5 13:14:58 PST 2015


Hello everybody. My name is Zach, and I have a suggestion for the 
improvement of D. I've been looking at the following stalled pull 
request for a while now:

https://github.com/D-Programming-Language/dmd/pull/1877

...in which Walter Bright wants to introduce built-in attribute 
inference for a relatively small set of functions. It seems like 
the most obvious thing in the world to me to desire this, and not 
even just for 'auto' and templated functions, but for *every* 
function. And there's no reason it can't be done. So long as the 
compiler has everything it needs to determine which attributes 
can be applied, there's no reason to demand anything from the 
programmer. Look how simple this function is:

int plusOne(int a) { return a + 1; }

Let's say I later want to call it, however, from a fully 
attributed function:

int plusTwo(int a) pure nothrow @safe @nogc {
   return plusOne(plusOne(a));
}

I get a compiler error. The only way to stop it is to add 
unnecessary visual noise to the first function. All of these 
attributes should be something that you *want* to add, not 
something that you *need*. The compiler can obviously figure out 
if the function throws or not. Just keep an additional internal 
flag for each of the attributes. When any attribute is violated, 
flip the bit and boom, you have your implicit function signature.
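To make that concrete, here is a sketch of what the compiler would effectively infer for the first function (the attributes written out here are exactly the ones inference would supply, not something the programmer would have to type):

```d
// What inference would conclude about plusOne: the body never throws,
// never allocates, never escapes a pointer, and touches no mutable
// global state, so all four covariant attributes hold.
int plusOne(int a) pure nothrow @safe @nogc { return a + 1; }

// The fully attributed caller now compiles without any edits at all:
int plusTwo(int a) pure nothrow @safe @nogc {
    return plusOne(plusOne(a));
}
```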

I think this is how it always should have been. It's important to 
remember that the above attributes have the 'covariant' property, 
which means they can always be called by any function without 
that property. Therefore no existing code will start failing to 
compile. Only certain things which would have *errored* before 
will stop. Plus new optimizations can be done.
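The covariance property is easy to see in a small example. The call only ever goes in one direction, which is why turning inference on cannot break existing callers:

```d
int strictFn(int a) pure nothrow @safe @nogc { return a + 1; }

// Covariance in action: a function with *none* of the attributes may
// freely call one that has *all* of them.
int looseFn(int a) { return strictFn(a); }

// The reverse direction is the one that needs inference (or explicit
// attributes): a pure function cannot call looseFn as long as the
// compiler must assume looseFn may be impure, may throw, may allocate,
// and may be unsafe.
```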

So what's the problem? As you can read in the vehement opposition 
to pull 1877 above, the big fear is that function signatures will 
start changing willy-nilly, destabilizing the exposed interface 
of the function, which will cause linker errors or force code 
intended to be kept separate in large projects to be recompiled 
at every little change.

I find this depressing! That something so good should be ruined 
by something so remote as the need for separate compilation in 
very large projects? I mean, most projects aren't even very 
large. Also, because D compiles so much faster than its 
predecessors, is it even such a big deal to have to recompile 
everything?

But let's admit the point may be valid. Yes, under attribute 
inference, the function signatures in the exposed API will indeed 
find themselves changing every time one so much as adds a 
'printf' or calls something that throws.

But they don't *have* to change. The compiler doesn't need to 
include the inferred attributes when it generates the mangled 
name and the .di signature, only the explicit ones. From within 
the program, all the opportunities for inference and optimization 
could be left intact, while outside programs accessing the code 
in precompiled form could only access the functions as explicitly 
indicated.
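As a sketch of what that split would look like (file names here are hypothetical, and the generated .di contents are my illustration of the idea, not current compiler behavior):

```d
// mylib.d -- the implementation. Internally the compiler infers
// pure nothrow @safe @nogc and may optimize call sites accordingly.
int plusOne(int a) { return a + 1; }

// mylib.di -- the generated interface. Only explicit attributes appear,
// so the exposed signature (and hence the mangled name) stays stable
// no matter what the body does:
int plusOne(int a);
```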

This makes no change to the language, except that it allows new 
things to compile. The only hitch is this: What if you want the 
full advantages of optimization and inference from across 
compilation boundaries? You'd have to add each of the covariant 
function attributes manually to every function you exposed. From 
my perspective, this is still a chore.

I suggest a new attribute, @api, which does nothing more than to 
tell the compiler to generate the function signature and mangle 
the name only with its explicit attributes, and not with its 
inferred ones. Inside the program, there's no reason the compiler 
can't continue to use inference, but with @api, the exposed 
interface will be stabilized, should the programmer want that. 
Simple.
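In code, the proposal would look something like this (@api is of course hypothetical, and the comments describe the behavior this post proposes, not anything implemented today):

```d
// Hypothetical @api: the exposed signature is pinned to the explicit
// attributes, so recompiling the body never changes the mangled name.
@api int stableFn(int a) { return a + 1; }    // exposed as plain int(int)

// Without @api, inference flows all the way out: the exposed signature
// would gain pure nothrow @safe @nogc automatically.
int internalFn(int a) { return a + 1; }
```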

I anticipate a couple of objections to my proposal:

The first is that we would now demand that the programmer decide 
whether he wants his exposed functions stabilized or not. For a 
large library used by different people, this choice might pose 
some difficulty. But it's not that bad. You just choose: do you 
want to improve compilation times and/or closed-source 
consistency by ensuring a stable interface, or do you want to 
speed up runtime performance without having to clutter your code? 
Most projects would choose the latter. @api is made available for 
those who don't. The opposition to attribute inference put forth 
in pull 1877 is thereby appeased.

A second objection to this proposal: Another attribute? Really? 
Well, yeah.

But it's not a problem, I say, for these reasons:

1. This one little attribute allows you to excise gajillions of 
unnecessary little attributes which are currently forced on the 
programmer by the lack of inference, simply by appeasing the 
opponents of inference and allowing it to be implemented.

2. It seems like most people will be okay just recompiling 
projects instead of preferring to stabilize their APIs. Thus, 
@api will only be used rarely.

3. @api forces you to add all the attributes you want exposed to 
the world manually. It's a candid admission that you are okay 
with littering your code with attributes, thereby lessening the 
pain of having to add one more.

4. Most @api functions will come in clusters. After all, it *is* 
an API you are exposing, so I think it's highly likely that a 
single "@api:" will work in most cases.
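That colon form would work just like the attribute-colon syntax D already has for 'private:', 'nothrow:', or '@safe:' (again, @api itself is hypothetical):

```d
// One declaration covers every function that follows in the module,
// just as 'nothrow:' or '@safe:' do today.
@api:

int first(int a)  { return a + 1; }   // exposed with explicit attributes only
int second(int a) { return a * 2; }   // likewise
```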


Now, "Bombard with your gunships."

Thank you.

