New @safe rule: defensive closures
Steven Schveighoffer
schveiguy at gmail.com
Fri May 27 22:16:30 UTC 2022
We have this cool feature in D where, if you take the address of a
nested function and the compiler can't prove the result doesn't
outlive the enclosing stack frame, it allocates that frame on the GC
heap, and the function becomes a "closure".
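For example, this already happens today (the names here are just
illustrative):
```d
// `counter` returns a delegate that captures `n`, so the compiler
// allocates counter's frame on the GC heap instead of the stack.
int delegate() counter() @safe
{
    int n;
    return () => ++n; // `n` must outlive counter's stack frame
}

void main() @safe
{
    auto next = counter();
    assert(next() == 1);
    assert(next() == 2); // state lives on in the GC-allocated frame
}
```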
Why can't we just do this in `@safe` code whenever the same problem
arises? Consider this code snippet:
```d
void bar(int[]) @safe;

void foo() @safe
{
    int[5] arr;
    bar(arr[]);
}
```
Currently, without DIP1000 enabled, this compiles, and if `bar`
squirrels away the array, you have a memory safety problem.
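For instance, a sketch of what such a `bar` could do (the `stash`
global is hypothetical, purely for illustration):
```d
int[] stash; // hypothetical module-level variable

void bar(int[] a) @safe
{
    stash = a; // compiles without DIP1000, but the slice points into
               // foo's stack frame and dangles once foo returns
}
```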
With DIP1000, this becomes an error: the compiler yells at you to put
`scope` on the `bar` parameter (and then `bar` can't squirrel it away).
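That is, the DIP1000-blessed version looks like:
```d
void bar(scope int[]) @safe; // promises the slice won't escape bar

void foo() @safe
{
    int[5] arr;
    bar(arr[]); // OK under -preview=dip1000: arr provably can't escape
}
```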
But what if instead, when DIP1000 sees that, it just says we now have
a closure situation, and allocates `foo`'s frame on the heap?
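Conceptually, a rough sketch of the effect (not necessarily the exact
lowering the compiler would emit):
```d
void foo() @safe
{
    int[] arr = new int[5]; // the frame variable moves to the GC heap
    bar(arr);               // now bar may retain the slice safely
}
```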
A sufficiently smart optimizer might be able to detect that `bar`
doesn't actually squirrel it away, and still allocate on the stack.
If you want to ensure this allocation doesn't happen, you annotate
with `@nogc`, just like with other closures. The error message could
then suggest putting `scope` on `bar`'s parameter.
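A sketch of how that might play out under this proposal (not current
compiler behavior):
```d
void bar(int[]) @safe @nogc;

void foo() @safe @nogc
{
    int[5] arr;
    bar(arr[]); // proposed: error, a closure is needed but @nogc
                // forbids it; suggested fix: `scope` on bar's parameter
}
```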
This gives us safe code that may not perform as expected, but at least
it *is safe*. And it doesn't spew endless errors at the user. Consider
that there are already so many cases where closures are allocated,
with std.algorithm and lambdas, and mostly nobody bats an eye.
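For example, something like this allocates a closure today, because
the lambda captures a local and the lazy range escapes:
```d
import std.algorithm : filter;

auto above(int[] xs, int threshold) @safe
{
    // the lambda captures `threshold`, and the returned lazy range
    // outlives this frame, so the frame goes on the GC heap
    return xs.filter!(x => x > threshold);
}
```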
Just another possible idea for debate.
-Steve