[dmd-concurrency] synchronized, shared, and regular methods inside the same class
Jason House
jason.james.house at gmail.com
Mon Jan 4 16:46:19 PST 2010
On Jan 4, 2010, at 6:45 PM, Andrei Alexandrescu <andrei at erdani.com>
wrote:
> Jason House wrote:
>> On Jan 4, 2010, at 12:00 PM, Andrei Alexandrescu <andrei at erdani.com>
>> wrote:
>>> ... subject to the "tail-shared" exemption that I'll discuss at a
>>> later point.
>> I wish you'd stop giving teasers like that. It feels like we can't
>> have a discussion because a) you haven't tried to share your
>> perspective b) you're too busy to have the conversation anyway
>> I'm probably way off with my impression...
>
> In the words of Wolf in Pulp Fiction: "If I'm curt it's because time
> is of the essence".
>
> The tail-shared exemption is very simple and deducible, so I didn't
> think it was necessary to give that detail at this point: inside a
> synchronized method, we know the current object is locked but the
> indirectly-accessed memory is not. So although the object's type is
> still shared, accesses to its direct fields will have their memory
> barriers lifted. This is just a compiler optimization that doesn't
> affect semantics.
You're right that it is easy. I think I assumed too much from the name.
Even so, this style of optimization doesn't necessarily align well
with user intention. My best example of this is D's garbage
collector: it uses a single lock for far more data than just head
access to "this".
Actually, when I think about it, the optimization you mention is
sometimes incorrect. See below.
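To make the optimization concrete, here is a hedged sketch in D of the
tail-shared exemption as I understand it (the class and field names
are mine, not from the compiler):

```d
// Hypothetical sketch of the tail-shared exemption (names invented).
// Inside a synchronized method the object itself is known to be
// locked, so its *direct* fields could be accessed without memory
// barriers; anything reached through indirection stays fully shared.
synchronized class Counter
{
    private int hits;            // direct field: barriers liftable
    private shared(int)* remote; // indirection: still shared semantics

    void bump()
    {
        ++hits;   // plain access under the object's own monitor
        // *remote would still require shared (fenced) access
    }
}
```

The trouble is that nothing stops other code from touching `hits`
without holding this object's monitor, and then lifting the barriers
is unsound.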
>
> and:
>
>> Sadly, this is a side effect of a simplistic handling of shared.
>> Shared is more like "here be dragons; here's an ice cube in case
>> one breathes fire on you". Nearly all protection / correctness
>> verification is missing, left for D3 or beyond. Message passing
>> lets shared-aware code remain in the trusted code base...
>
> This is speculation. Please stop it. We plan to define clean semantics
> for shared.
I don't think it's speculation. I'll try to list a few things I
consider to be supporting facts:
1. Shared does not encode which lock should be held when accessing
data. There are 3 big categories here: lock-free, locked by monitor
for "this", and locked by something else.
2. Shared means that there might be simultaneous attempts to use the
data. The compiler can't infer which objects are intended to use the
same lock, and can't optimize away fences. Similarly, the compiler
can't detect deviations from programmer intent.
3. All shared data is a candidate for lock-free access. The compiler
can't detect which objects the programmer intended to be lock-free,
nor can it detect when a lock-free variable is used as if a lock were
held.
4. The compiler can't infer which variables the programmer intended
to be accessed only while a specific lock is held. Because of this,
it can't detect any failure-to-lock scenarios.
5. Certain lock-free variables always require sequential consistency
(and the compiler can't and doesn't infer this). This is what I
referred to above. It matters when a class has shared but not
synchronized methods, or when some other code uses a member variable
in a lock-free (or incorrectly locked) manner. Again, the compiler
doesn't detect such distributed accesses, so the optimization you
mentioned is invalid.
6. There are no known ways to validate lock-free logic, and the
current shared design implicitly allows it everywhere.
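As a concrete illustration of points 1, 3, and 5, consider this
hypothetical sketch (the names and the lock/variable pairing are mine,
and it assumes the permissive 2010-era shared semantics under
discussion). The type system sees both fields as plain shared and
cannot tell which lock, if any, is supposed to guard them:

```d
import core.sync.mutex : Mutex;

shared int flag;     // intent: lock-free, needs sequential consistency
shared int balance;  // intent: touch only while holding bankLock

__gshared Mutex bankLock; // the association is pure convention

void deposit(int amount)
{
    synchronized (bankLock)
    {
        balance += amount; // nothing verifies this is the right lock
    }
}

void fastPath()
{
    balance = 0; // compiles fine: the failure to lock goes undetected
    flag = 1;    // and nothing marks this as requiring a fence
}
```

Since `balance` can also be reached from `fastPath` without any lock,
a compiler that lifted barriers on it inside a synchronized context
would be making exactly the unsound assumption I described.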
Does that help? Currently, shared only facilitates segregating
race-free thread-local data/logic from the (here be dragons) world of
shared access, which lacks any verification of proper locking. Even
Bartosz's more complex scheme has a here-be-dragons subset that he
labeled lock-free... I can back that last part up if it also feels
like speculation.