Temporally safe by default
Richard (Rikki) Andrew Cattermole
richard at cattermole.co.nz
Mon Apr 8 22:38:15 UTC 2024
On 09/04/2024 10:09 AM, Sebastiaan Koppe wrote:
> On Monday, 8 April 2024 at 19:59:55 UTC, Richard (Rikki) Andrew
> Cattermole wrote:
>>
>> On 09/04/2024 7:43 AM, Dukc wrote:
>>> On Friday, 5 April 2024 at 10:34:24 UTC, Richard (Rikki) Andrew
>>> Cattermole wrote:
>>>> ``shared`` does not offer any guarantees about references: how many
>>>> there are, or what threads they are on. None of it. It's fully up to the
>>>> programmer to void it if they choose to do so in normal ``@safe`` code.
>>>
>>> My impression is you can't do that, unless the data structure you're
>>> using is flawed (well, `dataStruct.tupleof` does work to bypass
>>> `@safe`ty, but I don't think it's relevant since it probably needs to
>>> be fixed anyway). Pseudocode example?
>>
>> ```d
>> // `Type` stands in for any class; `sendToThread2` stands in for however
>> // the reference actually crosses over to the other thread.
>> void thread1() {
>>     shared(Type) var = new shared Type;
>>     sendToThread2(var);
>>
>>     for(;;) {
>>         // I know about var!
>>     }
>> }
>>
>> void sendToThread2(shared(Type) var) {
>>     thread2(var);
>> }
>>
>> void thread2(shared(Type) var) {
>>     for(;;) {
>>         // I know about var!
>>     }
>> }
>> ```
>>
>> The data structure is entirely irrelevant.
>>
>> Shared provides no guarantees to stop this.
>>
>> No library features can stop it either, unless you want to check ref
>> counts (which won't work because, you know, graphs have cycles).
>>
>> You could make it temporally safe with the help of locking, yes. But
>> shared didn't contribute towards that in any way, shape, or form.
>
> I don't think this has much to do with shared. You can get in similar
> situations on a single thread, just not in parallel.
This is exactly my point.
Shared doesn't offer any guarantees here.
It doesn't help us achieve temporal safety.
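To make that gap concrete, here is a minimal sketch (the names and the
mutex setup are illustrative, not taken from the example above): a lock
serialises access to the shared object, which takes care of data races,
but nothing in the types bounds how long either thread keeps the
reference around.

```d
import core.sync.mutex : Mutex;
import core.thread : Thread;

class Counter { int value; }

__gshared Mutex guard;
__gshared Counter counter;

shared static this()
{
    guard = new Mutex;
    counter = new Counter;
}

void worker()
{
    synchronized (guard)
    {
        counter.value += 1; // serialised, so there is no data race here...
    }

    // ...but nothing stops this thread from stashing `counter` away and
    // using it long after the rest of the program considers it dead.
    // That is the temporal gap; neither `shared` nor the lock has any
    // say in it.
}

void main()
{
    auto t = new Thread(&worker);
    t.start();
    worker();
    t.join();
}
```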
> Solving it requires tracking lifetimes, independent of whether that is
> across threads.
Yes. From allocation all the way until it is no longer known about in
the program graph.
> D has very little in the way of an answer here. It has the GC for auto
> lifetime, smart pointers and then shoot-in-the-foot manual memory
> management.
>
> Well, and there is scope with dip1000 of course, which works
> surprisingly well but requires phrasing things in a more structured manner.
If you think of DIP1000 as a tool that helps lifetime tracking take
place, it works quite well.
If you think of it as lifetime tracking itself, you're going to have a
very bad time of it.
I really want us to move away from thinking of it as lifetime tracking,
because that isn't what escape analysis is.
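As a concrete illustration of what the escape analysis does check (a
minimal sketch, compiled with ``-preview=dip1000``; the function names
are purely illustrative):

```d
@safe:

// `scope` promises that `p` does not escape this function, and the
// escape analysis enforces exactly that promise.
int* leak(scope int* p)
{
    return p; // Error: scope variable `p` may not be returned
}

// `return scope` names the one permitted escape route: the return
// value, whose lifetime the caller then ties back to the argument.
int* passThrough(return scope int* p)
{
    return p; // OK
}
```

The question being answered is "does this reference escape this scope?",
not "when does this object die?", which is exactly the distinction above.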
I super duper want reference counting in the language. An RC object
shouldn't be beholden to a ``scope`` placed onto it. That is hell.
If we don't get that, I'd drop DIP1000 rather than RC.
> Curious as to what you have been cooking.
We'll need a multi-pronged approach. The big feature I want is
``isolated`` from Midori (or something like it). Coupled with DIP1000,
that would be a little like a borrow checker, except a bit more
flexible, while still maintaining the guarantees across all of SafeD,
and even in ``@system`` code, since a type qualifier keeps applying
outside of ``@safe``.
https://joeduffyblog.com/2016/11/30/15-years-of-concurrency/
However, I've found that you need the data flow analysis behind type
state analysis to enforce those guarantees. So you might as well go all
in, have full type state analysis, and get a whole lot of other goodies
while you're at it.
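As a rough idea of one such goodie (an illustrative example, not tied to
any particular proposal): the classic bug type state analysis rules out
is touching an object while it is still in its null, not-yet-initialised
state. Today this compiles and only fails at run time:

```d
class Resource
{
    void use() {}
}

void main()
{
    Resource r;  // type state: null, i.e. not yet initialised
    r.use();     // compiles today and crashes at run time; type state
                 // analysis would reject this line at compile time
}
```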
Of course, you end up needing something that converges atomics,
locking, immutable, and isolated all together, which I haven't done any
design work on. Not there yet. If I can't get type state analysis in,
there isn't much point in continuing.