@trust is an encapsulation method, not an escape

Steven Schveighoffer via Digitalmars-d digitalmars-d at puremagic.com
Fri Feb 6 10:51:34 PST 2015


On 2/6/15 10:36 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang at gmail.com> wrote:
> On Friday, 6 February 2015 at 15:10:18 UTC, Steven Schveighoffer wrote:
>> into suspect the whole function. So marking a function @safe, and
>> having it mean "this function has NO TRUSTED OR SYSTEM CODE in it
>> whatsoever", is probably the right move, regardless of any other changes.
>
> But that would break if you want to call a @safe function with a
> @trusted function reference as a parameter? Or did I misunderstand what
> you meant here?

The whole point of marking a function @trusted instead of a block is that 
you have to follow the rules of function calling to get into that code, 
and the separate function only has access to the variables you pass it.

My point was that if you have @trusted escapes inside a function, 
whether it's marked @safe or not, you still have to review the whole 
function. If the compiler disallowed this outright, then you wouldn't 
have that issue.
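
To make that concrete, here is a rough sketch (function and names are 
invented) of the kind of in-function @trusted escape I mean, using the 
usual @trusted-lambda idiom:

import core.stdc.stdlib : free, malloc;

// Each escape below is tiny, but the slice in the second one is only
// valid if `len` has not changed since the malloc call. The mechanical
// @safe checks don't enforce that, so a reviewer still has to read the
// entire function body.
@safe int sumBytes(size_t len)
{
    ubyte* p = () @trusted { return cast(ubyte*) malloc(len); }();
    if (p is null)
        return 0;
    scope(exit) () @trusted { free(p); }();

    // if a later edit to this @safe code modifies `len`, the slice below
    // silently becomes a buffer overrun, and the compiler stays quiet
    ubyte[] buf = () @trusted { return p[0 .. len]; }();

    int total = 0;
    foreach (b; buf)
        total += b;
    return total;
}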

Separating the trusted code from the safe code via an API barrier has 
merits when it comes to code review.
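
Compare that with the @trusted code sitting behind its own small API. A 
sketch, with strlen picked purely as an example: the function below is 
memory safe no matter what a @safe caller hands it, so it can be reviewed 
once, in isolation.

import core.stdc.string : strlen;

// Everything the unsafe call relies on is checked here, against the
// function's own parameters. No call site needs to be inspected.
@trusted size_t cstringLength(const(char)[] s)
{
    if (s.length == 0 || s[$ - 1] != '\0')
        return s.length;    // not NUL-terminated; don't touch strlen
    return strlen(s.ptr);
}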

Now, @trusted static nested functions that stand on their own are fine; 
they are no different from public ones, just not public.

@trusted static nested functions that are ONLY OK when called in certain 
ways are where we run into issues. At that point, you have to make a 
choice -- add (somewhat unnecessary) machinery to make sure the function 
is always called in a @safe way, or expand the scope of the @trusted 
portion, possibly even to the whole @safe function.
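
A sketch of what I mean (invented names): the nested function below skips 
the bounds check, so it is only memory safe when the enclosing function 
calls it with an index it has already validated.

@safe int firstPlusLast(int[] arr)
{
    // only OK when i < a.length, which this function cannot verify without
    // reintroducing the very bounds check it exists to skip -- so trusting
    // it means reviewing every call site in the enclosing function, or
    // widening the @trusted region
    static int at(int[] a, size_t i) @trusted
    {
        return a.ptr[i];    // unchecked access
    }

    if (arr.length == 0)
        return 0;
    return at(arr, 0) + at(arr, arr.length - 1);
}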

I see the point now that making sure @safe functions don't have escapes 
has the advantage of not requiring *as much* review as a @system or 
@trusted function. I am now leaning strongly towards H.S. Teoh's solution 
of making @trusted bodies checked like @safe by default, while allowing 
explicit escapes into @system code. That seems like the right abstraction.
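
Roughly like this sketch. To be clear, this is the *proposed* meaning, not 
how the compiler treats @trusted today (today the whole @trusted body goes 
unchecked); the file-reading example and names are mine.

import core.stdc.stdio : FILE, fread;

// Under the proposal, the body of this @trusted function would be checked
// exactly like @safe code, except for the explicit @system escape, so the
// review effort concentrates on that one call.
@trusted ubyte[] readBlock(FILE* f, size_t n)
{
    auto buf = new ubyte[](n);

    // the escape: the only part exempt from mechanical checking
    size_t got = () @system { return fread(buf.ptr, 1, n, f); }();

    return buf[0 .. got];
}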

> And... what happens if you bring in a new architecture that requires a
> @trusted implementation of a library function that is @safe on other
> architectures?

Then you create a @trusted wrapper around that API, ensuring that, when 
called from @safe code, it can't corrupt memory.
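
Something along these lines, with memcpy standing in for the 
architecture-specific @system API (names invented): the wrapper checks 
everything a @safe caller could get wrong before handing off.

import core.stdc.string : memcpy;

// @safe code can call this freely; the wrapper enforces the precondition
// that makes the underlying @system call memory safe.
@trusted void copyInto(ubyte[] dst, const(ubyte)[] src)
{
    if (dst.length < src.length)
        assert(0, "destination too small");   // never compiled out
    if (src.length)
        memcpy(dst.ptr, src.ptr, src.length);
}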

>
>> 1. A way to say "this function needs extra scrutiny"
>> 2. Mechanical verification as MUCH AS POSSIBLE, and especially for
>> changes to said function.
>>
>> Yes, we can do 2 manually if necessary. But having a compiler that
>> never misses on pointing out certain bad things is so much better than
>> not having it.
>
> I am not sure if it is worth the trouble. If you are gonna conduct a
> semi formal proof, then you should not have a mechanical sleeping pillow
> that makes you sloppy. ;-)

I see what you mean, but there are also really dumb things that people 
miss that a compiler won't. Having a mechanical set of eyes in addition 
to human eyes is still more eyes ;)

> Also if you do safety reviews they should be separate from the
> functional review and only focus on safety.
>
> Maybe it would be interesting to have an annotation for @notprovenyet,
> so that you could have regular reviews during development and then scan
> the source code for @trusted functions that need a safety review before
> a release is permitted? That way you don't have to do the safety
> review for every single mutation of the @trusted function.

The way reviews are done isn't anything the language can require. 
Certainly we can provide guidelines, and we can require such review 
processes for phobos and druntime.
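
For what it's worth, a project that wanted the @notprovenyet convention 
could express it today with a plain user-defined attribute and have its 
release script grep for it (or walk __traits(getAttributes, ...)). The 
marker name below is just an illustration.

// hypothetical marker type, invented for illustration
struct NotProvenYet {}

@NotProvenYet @trusted void frobBuffer(ubyte[] buf)
{
    // ... unsafe internals still awaiting a dedicated safety review ...
}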

-Steve


