[dmd-concurrency] synchronized, shared, and regular methods inside the same class
Sean Kelly
sean at invisibleduck.org
Mon Jan 4 16:08:27 PST 2010
On Jan 4, 2010, at 3:58 PM, Andrei Alexandrescu wrote:
> Sean Kelly wrote:
>> On Jan 4, 2010, at 9:00 AM, Andrei Alexandrescu wrote:
>>> The only things that work on shared objects are:
>>>
>>> * calls to synchronized or shared methods, if any;
>>>
>>> * reading if the object is word-size or less;
>>>
>>> * writing if the object is word-size or less.
>> Cool! It's perhaps a minor issue right now, but it would be nice if RMW operations could be performed via library functions. Hopefully all that's required is to accept a "ref shared T" and then write ASM for the machinery from there? I.e., is there any need for compiler changes to support this?
>
> Yes, that's the plan. In fact I have proposed an even more Draconian plan: disallow even direct reads and writes to shared objects. To perform them, user code would have to invoke the intrinsics sharedRead and sharedWrite. Then it's very clear and easy to identify where barriers are inserted, and the semantics of the program are easily definable: the program preserves the sequence of calls to sharedRead and sharedWrite.
You've mentioned this before, and I really like the idea. This makes the atomic ops readily apparent, which seems like a good thing. I guess this could mess with template functions a bit, but since you really need custom algorithms to do nearly anything safely with shared variables, this is probably a good thing as well.
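For what it's worth, here's roughly how I picture those two intrinsics if someone prototyped them as library templates today. The names come from your proposal, but the signatures, the value-then-location argument order, and the use of core.atomic's atomicLoad/atomicStore are just my guess, and the sketch only pretends to handle word-size scalars:

    import core.atomic : atomicLoad, atomicStore, MemoryOrder;

    // Sketch only: sequentially consistent load/store wrappers, so the one
    // guarantee is that the sequence of these calls is preserved.
    T sharedRead(T)(ref shared T x)
    {
        return atomicLoad!(MemoryOrder.seq)(x);
    }

    void sharedWrite(T)(T value, ref shared T x)
    {
        atomicStore!(MemoryOrder.seq)(x, value);
    }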
> Consider your example:
>
>     shared int x;
>     ...
>     ++x;
>
> The putative user notices that that doesn't work, so she's like, meh, I'll do this then:
>
>     int y = x;
>     ++y;
>     x = y;
>
> And the user is left with the impression that the D compiler is a bit dumb.
Ack! I hadn't thought of that.
> Of course that doesn't avoid the race condition though. If the user would have to call atomicIncrement(x) that would be clearly an improvement, but even this would be an improvement:
>
>     int y = sharedRead(x);
>     ++y;
>     sharedWrite(y, x);
>
> When writing such code the user inevitably hits on the documentation for the two intrinsics, which clearly define their guarantees: only the sequence of sharedRead and sharedWrite is preserved. At that point, inspecting the code and understanding how it works is improved.
Exactly. And once they're looking at that module's documentation, one would hope they'll notice the sharedIncrement() routine as well.
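Just to make sure we mean the same thing, here's the shape I have in mind for sharedIncrement(). It's only a sketch on top of core.atomic's cas, not a claim about the real signature:

    import core.atomic : atomicLoad, cas;

    // Read-modify-write done atomically: re-read and retry until no other
    // thread has changed x between our read and our write.
    int sharedIncrement(ref shared int x)
    {
        int old;
        do
        {
            old = atomicLoad(x);
        } while (!cas(&x, old, old + 1));
        return old + 1;
    }

(core.atomic's atomicOp!"+=" already does essentially this; the loop just spells out the mechanism.)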
>> So if I have:
>>     class A
>>     {
>>         void fn() shared { x = 5; }
>>         int x;
>>     }
>> Is this legal? If the type of the object doesn't change then I'd guess that I won't be allowed to access non-shared fields inside a shared function?
>
> Shared automatically propagates to fields, so typeof((new shared(A)).x) is shared int. Of course that's not the case right now; the typeof expression doesn't even compile :o).
Hm... but what if fn() were synchronized instead of shared? Making x shared in that case seems wasteful. I had thought that a shared function would simply be restricted to accessing shared variables, and possibly to calling synchronized functions:
class A {
    void fnA() shared { x = 5; }        // ok, x is shared
    void fnB() shared { y = 5; }        // not ok, y is not shared
    void fnC() synchronized { y = 5; }  // ok, non-shared ops are ok if synchronized
    shared int x;
    int y;
}
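Caller-side, I'm picturing something like this (purely hypothetical, and assuming the rules above, i.e. that a synchronized method takes the object's monitor and may therefore touch non-shared fields directly):

    // Hypothetical caller under the semantics sketched above.
    void worker(shared A a)
    {
        a.fnA();    // ok: shared method, x is shared
        // a.fnB(); // would be rejected: y is not shared
        a.fnC();    // ok: a's monitor is held, so the plain store to y is safe
    }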