@trusted attribute should be replaced with @trusted blocks
Timon Gehr
timon.gehr at gmx.ch
Thu Jan 16 23:31:15 UTC 2020
On 16.01.20 12:50, Joseph Rushton Wakeling wrote:
> On Thursday, 16 January 2020 at 03:34:26 UTC, Timon Gehr wrote:
>> ...
>
>> @safe does not fully eliminate risk of memory corruption in practice,
>> but that does not mean there is anything non-absolute about the
>> specifications of the attributes.
>
> Would we be able to agree that the absolute part of the spec of both
> amounts to, "The emergence of a memory safety problem inside this
> function points to a bug either in the function itself or in the
> initialization of the data that is passed to it" ... ?
> ...
More or less. Two points:
- The _only_ precondition a @trusted/@safe function can assume for
guaranteeing no memory corruption is that there is no preexisting memory
corruption.
- For callers that treat the library as a black box, this definition is
essentially sufficient. (This is why there is not really a reason to
treat the signatures differently, to the point where changing from one
to the other is a breaking API change.) White-box callers get the
additional language guarantee that if the function corrupts memory, it
does so while executing some bad @trusted code; this is the motivation
behind having both @safe and @trusted. @system exists because
in low-level code, sometimes you want to write or use functions that
have highly non-trivial preconditions for ensuring no memory corruption
happens.
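The three levels can be sketched in D roughly as follows (hypothetical
function names, not from the thread):

```d
// @safe: every operation is compiler-checked for memory safety.
@safe int sum(int[] a)
{
    int s = 0;
    foreach (x; a) s += x;
    return s;
}

// @trusted: the author vouches this is memory-safe for *any* caller,
// assuming only that no memory corruption already exists.
@trusted int firstOrZero(int[] a)
{
    return a.length ? *a.ptr : 0; // raw pointer dereference is @system
}

// @system: memory-safe only under a non-trivial precondition (p points
// to a valid int) that the type system cannot check, so only
// @system/@trusted code may call it.
@system int deref(int* p)
{
    return *p;
}

void main() @safe
{
    auto a = [1, 2, 3];
    assert(sum(a) == 6);
    assert(firstOrZero(a) == 1); // @trusted is callable from @safe
    // deref(a.ptr);             // error: cannot call @system from @safe
}
```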
> (In the latter case I'm thinking that e.g. one can have a perfectly,
> provably correct @safe function taking a slice as input, and its
> behaviour can still get messed up because the user initializes a slice
> in some crazy unsafe way and passes that in.)
> ...
That is preexisting memory corruption. If you use @trusted/@system code
to destroy an invariant that the @safe part of the language assumes to
hold for a given type, you have corrupted memory.
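For instance, the crazy slice initialization in the quote can be
sketched like this (a hypothetical example; the slice invariant broken
here is that length matches the underlying allocation):

```d
// @system code fabricates a slice whose length lies about its storage.
void main() @system
{
    int x;
    int[] bogus = (&x)[0 .. 100]; // only the first element is valid memory
    assert(bogus.length == 100);  // the @safe world now believes a false invariant

    // Any provably correct @safe function receiving `bogus` may read out
    // of bounds through no fault of its own: the corruption preexists
    // the call.
}
```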
>> As I am sure you understand, if you see a @safe function signature,
>> you don't know that its implementation is not a single @trusted
>> function call
>
> Yes, on this we agree. (I even mentioned this case in one of my posts
> above.)
>
>> so the difference in signature is meaningless unless you adhere to
>> specific conventions
>
> Here's where I think we start having a disagreement. I think it is
> meaningful to be able to distinguish between "The compiler will attempt
> to validate the memory safety of this function to the extent possible
> given the @trusted assumptions injected by the developer" (which _might_
> be the entirety of the function), versus "The safety of this function
> will definitely not be validated in any way by the compiler".
>
> Obviously that's _more_ helpful to the library authors than users, but
> it's still informative to the user: it's saying that while the _worst
> case_ assumptions are the same (100% unvalidated), the best case are not.
> ...
It is possible to write a @trusted function that consists of a single
call to a @safe function, so you are assuming a convention where people
do not call @safe code from @trusted code in certain ways. Anyway, my
central point was that it is an implementation detail. That does not
mean it is necessarily useless to a user in all circumstances, but that
someone who writes a library will likely choose to hide it.
>> (which the library you will be considering to use as a dependency most
>> likely will not do).
>
> Obviously in general one should not assume virtue on the part of library
> developers. But OTOH in a day-to-day practical working scenario, where
> one has to prioritize how often one wants to deep-dive into
> implementation details -- versus just taking a function's signature and
> docs at face value and only enquiring more deeply if something breaks --
> it's always useful to have a small hint about the best vs. worst case
> scenarios.
> ...
Right now, the library developer has a valid incentive to actively avoid
@trusted functions in their API. This is because avoiding them is
always possible: @trusted is an implementation detail, and changing this
detail can in principle break dependent code. (E.g., a template
instantiated with a
@safe delegate will give you a different instantiation from the same
template instantiated with a @trusted delegate, and if e.g., you have
some static cache in your template function, a change from @safe to
@trusted in some API can silently slow down the downstream application
by a factor of two, change iteration orders through hash tables, etc.)
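The instantiation effect described in the parenthesis can be sketched as
follows (hypothetical template; the static variable stands in for the
"static cache"):

```d
// One instantiation -- and hence one static `count` -- per delegate type.
int callCount(DG)(DG dg)
{
    static int count; // separate copy for each distinct DG
    ++count;
    dg();
    return count;
}

void main()
{
    void delegate() @safe    s = delegate () @safe {};
    void delegate() @trusted t = delegate () @trusted {};

    // @safe and @trusted delegates have distinct types, so these are two
    // distinct instantiations with independent caches.
    assert(callCount(s) == 1);
    assert(callCount(t) == 1); // not 2: a separate static `count`
    assert(callCount(s) == 2);
}
```

So if a library changes a parameter's annotation from @safe to @trusted,
downstream instantiations silently become different symbols with fresh
state.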
> It's not that @safe provides a stronger guarantee than @trusted, it's
> that @trusted makes clear that you are definitely in worst-case
> territory. It's not a magic bullet, it's just another data point that
> helps inform the question of whether one might want to deep-dive up
> front or not (a decision which might be influenced by plenty of other
> factors besides memory safety concerns).
>
> The distinction only becomes meaningless if one is unable to deep-dive
> and explore the library code.
> ...
I just think that if you are willing to do that, you should use e.g.
grep rather than the function signature, since a competent library
author will likely choose to hide @trusted as an implementation detail.