Mitigating the attribute proliferation - attribute inference for functions
Martin Nowak via Digitalmars-d
digitalmars-d at puremagic.com
Fri Jul 17 05:43:02 PDT 2015
I have to bring this up again, because I consider the heavy investment in
attributes one of the worst decisions lately.
It is very hard and time-consuming to write attribute-correct code.
Consider dup
(https://github.com/D-Programming-Language/druntime/pull/760), a
seemingly simple piece of code, blown up 2-3x in complexity to deal with
all the attribute/inference craze.
This is how move would have to look to support @safe inference.
import core.stdc.string : memcpy, memset;
import std.traits : hasElaborateAssign, hasElaborateCopyConstructor,
    hasElaborateDestructor, isAssignable;

void move(T)(ref T source, ref T target)
{
    if (() @trusted { return &source == &target; }()) return;

    // Most complicated case. Destroy whatever target had in it
    // and bitblast source over it
    static if (hasElaborateDestructor!T)
        () @trusted { typeid(T).destroy(&target); }();

    static if (hasElaborateAssign!T || !isAssignable!T)
        () @trusted { memcpy(&target, &source, T.sizeof); }();
    else
        target = source;

    // If the source defines a destructor or a postblit hook, we must
    // obliterate the object in order to avoid double freeing and undue
    // aliasing
    static if (hasElaborateDestructor!T ||
               hasElaborateCopyConstructor!T)
    {
        // If T is a nested struct, keep the original context pointer
        static if (__traits(isNested, T))
            enum sz = T.sizeof - (void*).sizeof;
        else
            enum sz = T.sizeof;

        auto init = typeid(T).init();
        () @trusted {
            if (init.ptr is null) // null ptr means initialize to 0s
                memset(&source, 0, sz);
            else
                memcpy(&source, init.ptr, sz);
        }();
    }
}
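For reference, a minimal check of the source-reset semantics, using the real std.algorithm.mutation.move from Phobos (the struct S is just for illustration):

```d
import std.algorithm.mutation : move;

struct S
{
    int* p;
    ~this() {} // an elaborate destructor forces the memcpy + source-reset path
}

unittest
{
    auto a = S(new int(1));
    S b;
    move(a, b);
    assert(*b.p == 1);   // target received the payload
    assert(a.p is null); // source was reset to S.init, no double destruction
}
```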
There is a 3x overhead to writing correct smart pointers/refs when taking
attributes into account. I don't think this complexity and ugliness is
justified.
=================== Attributes are hardly useful ======================
The assumption that code with attributes is better than code without
attributes is flawed.
- nothrow
Nice, the compiler need not emit exception-handling code, but the real
problem is how bad dmd's EH code is.
https://issues.dlang.org/show_bug.cgi?id=12442
State-of-the-art EH is almost "zero cost", particularly when compared
to other error-handling schemes.
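To make the trade-off concrete, here is a hedged sketch (parse is a made-up name): all nothrow promises is that no exception escapes, so errors end up funneled through return values anyway.

```d
// Hypothetical sketch; `parse` is a made-up name.
// nothrow only promises that no exception escapes this function,
// so std.conv.to!int would not compile here -- it can throw.
int parse(string s) nothrow
{
    int result = 0;
    foreach (c; s)
    {
        if (c < '0' || c > '9')
            return -1; // signal failure without throwing
        result = result * 10 + (c - '0');
    }
    return result;
}

unittest
{
    assert(parse("123") == 123);
    assert(parse("12a") == -1);
}
```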
- @nogc
Why is this needed on a per-function level? If one doesn't want to use
the GC, it could be disabled on a per-thread or per-process level.
We now have a GC profiler, which is a good tool to find unwanted
allocations.
Of course we need to change many phobos functions to avoid
allocations, but @nogc doesn't help us design better APIs.
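For what it's worth, this is all @nogc buys at the function level: a compile-time error on GC allocations in the body. A sketch (sum is a made-up name):

```d
// Sketch; `sum` is a made-up name. @nogc turns GC allocations in the
// body into compile errors, nothing more.
@nogc int sum(scope const(int)[] a)
{
    int s = 0;
    foreach (x; a)
        s += x;
    return s;
}

// @nogc int[] bad() { return new int[4]; } // Error: `new` allocates with the GC

unittest
{
    int[3] buf = [1, 2, 3]; // stack storage, no GC allocation
    assert(sum(buf[]) == 6);
}
```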
- pure
The compiler can reuse the result of a strongly pure function call,
but compilers have been doing CSE [¹] for ages. CSE requires inlining to
determine whether a function has a side effect, but reusing results is
almost exclusively useful for functions that are small enough to be
inlined anyhow.
The result of a strongly pure function has unique ownership and can
be implicitly converted to immutable. Nice insight, but is that any good?
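For completeness, the unique-ownership property in practice (makeSeq is a made-up name): the freshly allocated result of a strongly pure function converts to immutable without a cast.

```d
// Sketch; `makeSeq` is a made-up name. With only value parameters the
// function is strongly pure, so its freshly allocated result is
// uniquely owned and converts to immutable implicitly.
pure int[] makeSeq(int n)
{
    auto a = new int[n];
    foreach (i, ref x; a)
        x = cast(int) i;
    return a;
}

unittest
{
    immutable int[] r = makeSeq(3); // implicit conversion, no cast needed
    assert(r == [0, 1, 2]);
}
```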
- @safe
Nice idea in theory, but why not do this as a compiler switch for the
modules being compiled (with an @unsafe {} guard)?
The way we currently do it with @trusted/@safe doesn't guarantee
anything for foreign @trusted/@safe code anyhow.
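For context, the per-function mechanics being argued against (deref and unchecked are made-up names): @safe rejects pointer arithmetic but allows dereference, while @trusted is an unchecked promise by the programmer, which is exactly why it guarantees nothing about foreign code.

```d
// Sketch; `deref` and `unchecked` are made-up names.
@safe int deref(int* p)
{
    // auto q = p + 1; // Error: pointer arithmetic not allowed in @safe code
    return *p;
}

@trusted int unchecked(int* p)
{
    return *(p + 0); // compiles: @trusted bodies are not checked
}

unittest
{
    auto p = new int(42); // GC allocation is @safe
    assert(deref(p) == 42);
    assert(unchecked(p) == 42);
}
```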
-Martin
[¹]: https://en.wikipedia.org/wiki/Common_subexpression_elimination