@safe leak fix?

Robert Jacques sandford at jhu.edu
Fri Nov 13 08:24:11 PST 2009


On Fri, 13 Nov 2009 06:42:24 -0500, Steven Schveighoffer  
<schveiguy at yahoo.com> wrote:

> On Thu, 12 Nov 2009 19:29:28 -0500, Robert Jacques <sandford at jhu.edu>  
> wrote:
>
>> On Thu, 12 Nov 2009 08:56:25 -0500, Steven Schveighoffer  
>> <schveiguy at yahoo.com> wrote:
>>
>>> On Thu, 12 Nov 2009 08:45:36 -0500, Jason House  
>>> <jason.james.house at gmail.com> wrote:
>>>
>>>> Walter Bright Wrote:
>>>>
>>>>> Jason House wrote:
>>>>> > At a fundamental level, safety isn't about pointers or references  
>>>>> to
>>>>> > stack variables, but rather preventing their escape beyond function
>>>>> > scope. Scope parameters could be very useful. Scope delegates were
>>>>> > introduced for a similar reason.
>>>>>
>>>>> The problem is, they aren't so easy to prove correct.
>>>>
>>>> I understand the general problem with escape analysis, but I've  
>>>> always thought of scope input as meaning @noescape. That should lead  
>>>> to easy proofs. If my @noescape input (or slice of an array on the  
>>>> stack) is passed to a function without @noescape, it's a compile  
>>>> error. That reduces escape analysis to local verification.
>>>
>>> The problem is cases like this:
>>>
>>> char[] foo()
>>> {
>>>    char buf[100];
>>>    // fill buf
>>>    return strstr(buf, "hi").dup;
>>> }
>>>
>>> This function is completely safe, but without full escape analysis the  
>>> compiler can't tell.  The problem is, you don't know how the outputs  
>>> of a function are connected to its inputs.  strstr cannot have its  
>>> parameters marked as scope because it returns them.
>>>
>>> Scope parameters draw a rather conservative line in the sand, and  
>>> while I think it's a good optimization we can get right now, it's not  
>>> going to help in every case.  I'm perfectly fine with @safe being  
>>> conservative and @trusted not, at least the power is still there if  
>>> you need it.
>>>
>>> -Steve
>>
>> Well something like this should work (note that I'm making the  
>> conversion  from T[N] to T[] explicit)
>>
>> auto strstr(T,U)(T src, U substring)
>>      if(isRandomAccessRange!T &&
>>         isRandomAccessRange!U &&
>>         is(ElementType!U == ElementType!T))
>> { /* Do strstr */ }
>>
>> char[] foo() {                     // Returns type char[]
>>     char buf[100];                  // Of type scope char[100]
>>     // fill buf                     // "hi" is type immutable(char)[]
>>     return strstr(buf[], "hi").dup; // returns a lent char[], which is  
>> dup-ed into a char[], which is okay to return
>> }
>>
>> char[] foo2() {                    // Returns type char[]
>>     char buf[100];                  // Of type scope char[100]
>>     // fill buf                     // "hi" is type immutable(char)[]
>>     return strstr(buf[], "hi");     // Error, strstr returns a lent  
>> char[], not char[].
>> }
>
> Your proposal depends on scope being a type modifier, which it currently  
> is not.  I think that's a separate issue to tackle.
>
> -Steve

Actually, scope is currently a somewhat limited type modifier (e.g. scope  
classes and scope class allocation). My use of it here was mainly to  
illustrate the compiler's internal representation. Also, the use of the  
scope keyword in the proposal was based on a blog post by Walter in which  
'scope' became a more universal type modifier.
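
For reference, here is a minimal sketch of those existing, limited uses of  
scope; the class name and body are mine, purely for illustration:

scope class FileHandle              // scope class: instances may not live on the GC heap
{
    ~this() { /* release the resource */ }
}

void useHandle()
{
    scope fh = new FileHandle();    // scope allocation: stack-allocated and
                                    // destroyed deterministically at end of scope
    // fh is destroyed here, so references to it must not escape useHandle()
}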

The point was that you can handle a large number of escape-analysis cases  
correctly using only the type system (and more, of course, with the type  
system plus local analysis).
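
As a hedged illustration of that point (this is not current compiler  
behavior; the "lent" typing is the hypothetical rule under discussion, and  
Phobos' indexOf merely stands in for strstr):

import std.string : indexOf;

char[] findGreeting()
{
    char[100] buf = ' ';            // stack buffer; under the proposal its slices are lent
    buf[10 .. 12] = "hi";
    auto i = indexOf(buf[], "hi");  // search the stack buffer
    assert(i >= 0);
    auto hit = buf[i .. i + 2];     // under the proposal: typed as a lent char[]
    return hit.dup;                 // .dup yields an owned char[]; fine to return
    // return hit;                  // under the proposal: error, a lent char[] escapes
}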


