DIP 1024--Shared Atomics--Community Review Round 1

Manu turkeyman at gmail.com
Wed Oct 2 06:30:47 UTC 2019


On Tue, Oct 1, 2019 at 11:01 PM Manu <turkeyman at gmail.com> wrote:
>
> On Tue, Oct 1, 2019 at 10:57 PM Manu <turkeyman at gmail.com> wrote:
> >
> > On Tue, Oct 1, 2019 at 3:45 AM Mike Parker via Digitalmars-d
> > <digitalmars-d at puremagic.com> wrote:
> > >
> > > [...]
> >
> > At best, this DIP is premature; we need to make shared correctly
> > restrict read/write access before trying to do fancy stuff.
> > The flag for that is now available, but the implementation has bugs,
> > and it needs to be enabled by default.
> >
> > Assuming the supporting material I mention above is available and
> > works properly, it might be reasonable to open a discussion of this
> > kind. On that note... I think this is a really bad idea, and almost
> > the opposite of what you want in virtually every case.
> > I've written a whole lot of lock-free concurrent code, which involves
> > a lot of atomics, and the semantics in this DIP are not anything I
> > have ever seen or wanted (as I've argued here in the past). I think
> > the semantics described are misleading; they would only teach an
> > amateur user to write lock-free algorithms incorrectly, while
> > increasing their confidence that they're doing it right (they're
> > probably not).
> > 1. Most atomic operations are NOT seq-cst. Seq-cst operations are *super rare*.
> >   a. I just searched our entire engine codebase; there are 971 calls
> > to atomic functions, and only *four* instances of seq-cst. I think at
> > least some of those are actually wrong (over-synchronised), possibly
> > all of them; I would need to look closer.
> > 2. Many atomic operations appear in a series, in which case it's
> > typical that exactly one operation in the series is the
> > synchronisation point.
> > 3. In reality, probably at least half of all atomic operations are 'relaxed'.
> > 4. xchg and cas operations are fundamental to any atomic code, and
> > there is no syntax or operator for those operations; so you're
> > undoubtedly using core.atomic aggressively anyway, and I think it's
> > just weird to litter naked arithmetic in between calls to core.atomic
> > functions. I feel that's misleading: by contrast with a bunch of
> > atomic library calls, the naked arithmetic looks like it is NOT
> > atomic.
> > 5. In almost all cases when working with atomics, you tend to do
> > quirky stuff not captured by regular usage of data types, and not
> > conveniently facilitated by this DIP.
> >
> > For instance, this is a common implementation for releasing a ref count:
> >   if (atomicLoad(rc, Relaxed) == 1 || atomicSub(rc, 1, Relaxed))
> >   {
> >     // maybe acquire if necessary
> >     // do release logic...
> >   }
> >
> > How do I write that code with this DIP?
> >
> > We need to make shared correct before we do any stuff like this, and
> > then when it's correct, we can talk... but I will probably continue to
> > resist this, because the only result I see is that it will train
> > amateurs to write bad atomic algorithms with a false sense of
> > confidence, gained from being sanctioned by the language.
> >
> > This is a bad idea.
>
> Sorry, that should have read:
>
>   if (atomicLoad(rc, Relaxed) == 1 || atomicSub(rc, 1, Relaxed) == 1)
>   {
>     // maybe acquire if necessary
>     // do release logic...
>   }
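
For reference, written against core.atomic as it exists today, that pattern
looks roughly like this (a sketch only; it assumes druntime's atomicFetchSub
and MemoryOrder.raw, D's relaxed ordering, and the releaseRef name is purely
illustrative):
```
import core.atomic;

shared int rc = 1;

// Sketch of the ref-count release pattern above using real core.atomic calls.
bool releaseRef()
{
    // Fast path: if we are the sole owner there is no contention, so a
    // relaxed load is enough and we can skip the atomic RMW entirely.
    // Otherwise, atomicFetchSub returns the value *before* the subtraction;
    // if that was 1, this thread just dropped the last reference.
    if (atomicLoad!(MemoryOrder.raw)(rc) == 1
        || atomicFetchSub!(MemoryOrder.raw)(rc, 1) == 1)
    {
        // maybe acquire here if the release logic reads data written by
        // other threads before they dropped their references
        // ... do release logic ...
        return true;
    }
    return false;
}
```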

Wait up... it's possible that I've misunderstood this DIP...

There are 2 paragraphs that seem to contradict each other.

First there is A:
"""
By prohibiting direct access to shared data, the user will be required
to use core.atomic and to consider the correctness of their code.

# Using core.atomic instead

This change will require using core.atomic or equivalent functions to
read and write to shared memory objects. It will prevent unintended,
inadvertent non-use of atomic access.
"""

I absolutely agree with this text.
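
Under that reading, direct access to shared data is simply rejected, and you
write the atomic calls out explicitly; something like this (my understanding
of reading A, sketched with today's core.atomic names):
```
import core.atomic;

shared int x = 0;

void thread1()
{
    // x = 1;            // reading A: rejected, direct access to shared data
    atomicStore(x, 1);   // explicit, deliberate atomic store
}

void thread2()
{
    // if (x == 1) ...                       // reading A: rejected again
    if (atomicLoad(x) == 1) { /* ... */ }    // explicit, deliberate atomic load
}
```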

Then there's B:
"""
Atomic reads perform an acquire operation, writes perform a release
operation, and read-modify-write performs an acquire, then a
modification, and then a release. The result is sequentially
consistent ordering, and each thread observes the same order of
operations (total order).

The code generated would be equivalent to that from the core.atomic
library for the supported operations.
"""

And the DIP then goes on with examples like this, where `x` is clearly
accessed without a call to core.atomic:
```
__gshared int g=0;
shared int x=0;

void thread1(){
    g=1; // non-atomic store
    x=1; // atomic store, release, happens after everything above
}

void thread2(){
    if(x==1){ // atomic load, acquire, happens before everything below
        assert(g==1); // non-atomic load
    }
}
```
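
Taking B's acquire/release wording at face value, that example presumably
lowers to explicit core.atomic calls along these lines (my sketch of the
lowering, not anything spelled out in the DIP; MemoryOrder.acq/rel are
druntime's names):
```
import core.atomic;

__gshared int g = 0;
shared int x = 0;

void thread1() {
    g = 1;                                   // plain, non-atomic store
    atomicStore!(MemoryOrder.rel)(x, 1);     // what `x = 1` would become under B
}

void thread2() {
    if (atomicLoad!(MemoryOrder.acq)(x) == 1) { // what `x == 1` would become under B
        assert(g == 1);                         // plain, non-atomic load
    }
}
```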

I'm confused: which is it, A or B? I presume B, in which case my prior commentary applies.

