valid uses of shared

Steven Schveighoffer schveiguy at yahoo.com
Mon Jun 11 10:27:58 PDT 2012


On Mon, 11 Jun 2012 09:41:37 -0400, Artur Skawina <art.08.09 at gmail.com>  
wrote:

> On 06/11/12 14:11, Steven Schveighoffer wrote:
>> On Mon, 11 Jun 2012 07:56:12 -0400, Artur Skawina <art.08.09 at gmail.com>  
>> wrote:
>>
>>> On 06/11/12 12:35, Steven Schveighoffer wrote:
>>
>>>> I wholly disagree.  In fact, keeping the full qualifier intact  
>>>> *enforces* incorrect code, because you are forcing shared semantics  
>>>> on literally unshared data.
>>>>
>>>> Never would this start ignoring shared on data that is truly shared.   
>>>> This is why I don't really get your argument.
>>>>
>>>> If you could perhaps explain with an example, it might be helpful.
>>>
>>> *The programmer* can then treat shared data just like unshared.  
>>> Because every
>>> load and every store will "magically" work. I'm afraid that after more  
>>> than
>>> two or three people touch the code, the chances of it being correct  
>>> would be
>>> less than 50%...
>>> The fact that you can not (or shouldn't be able to) mix shared and  
>>> unshared
>>> freely is one of the main advantages of shared-annotation.
>>
>> If shared variables aren't doing the right thing with loads and stores,  
>> then we should fix that.
>
> Where do you draw the line?
>
> shared struct S {
>    int i;
>    void* p;
>    SomeStruct s;
>    ubyte[256] a;
> }
>
> shared(S)* p = ... ;
>
> auto v1 = p.i;
> auto v2 = p.p;
> auto v3 = p.s;
> auto v4 = p.a;
> auto v5 = p.i++;
>
> Are these operations on shared data all safe? Note that if these
> accesses would be protected by some lock, then the 'shared' qualifier
> wouldn't really be needed - compiler barriers, that make sure it all
> happens while this thread holds the lock, would be enough. (even the
> order of operations doesn't usually matter in that case and enforcing
> one would in fact add overhead)

No, they should not all be safe; I never suggested that.  It's impossible  
to engineer a one-size-fits-all mechanism for accessing shared variables,  
because the compiler doesn't know what mechanism you are going to use to  
protect them.  As you say, once the data is protected by a lock, memory  
barriers aren't needed.  But requiring a lock is too heavy-handed for all  
cases.  That is a good point about the current memory-barrier attempts:  
they aren't comprehensive enough, and they guarantee little beyond simple  
loads and stores.

Perhaps the correct way to implement shared semantics is to not allow  
access *whatsoever* (except taking the address of a shared piece of data),  
unless you:

a) lock the block that contains it
b) use some library feature that casts away shared internally and does the  
correct thing.  For example, atomicOp (see the sketch below).
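
For instance, option b is more or less what core.atomic provides already;  
a rough sketch (the names `counter` and `bump` are just made up for  
illustration):

import core.atomic : atomicOp, atomicLoad;

shared int counter;

void bump()
{
    // the library casts away shared internally and emits the right
    // atomic instruction, so the caller never touches the raw variable
    atomicOp!"+="(counter, 1);

    auto v = atomicLoad(counter);   // a plain int comes back out
}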

None of this can prevent deadlocks, but it does create a way to prevent  
unsynchronized access to shared data.

If this were the case, stack data could still be marked shared, but you'd  
have to use option b (there is no enclosing block to lock).  Perhaps for  
simple data types, when memory barriers truly are enough and a shared(int)  
is on the stack (and not part of a container), straight loads and stores  
would still be allowed.
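
A shared local would then look something like this (again, just a sketch  
with invented names):

import core.atomic : atomicLoad, atomicStore;

void main()
{
    shared int flag;           // shared data on the stack: there is no
                               // enclosing block to lock, so option b applies
    atomicStore(flag, 1);      // a straight `flag = 1` would be disallowed
    assert(atomicLoad(flag) == 1);
}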

Now, would you agree that:

auto v1 = synchronized p.i;

might be a valid mechanism?  In other words, assuming p is lockable,  
synchronized p.i locks p, then reads i, then unlocks p, and the result  
type is unshared?

Also, inside synchronized(p), p becomes tail-shared, meaning all data  
directly contained in p is treated as unshared, while all data referred to  
through p remains shared.

In this case, we'd need a new type constructor (e.g. locked) to formalize  
the type.
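
Done by hand today, the whole thing might look roughly like this (just a  
sketch; the mutex, the names, and the casts are only stand-ins for what  
locked would formalize):

import core.sync.mutex : Mutex;

struct SomeStruct { int x; }

struct S
{
    int i;
    int* next;
    SomeStruct s;
}

shared S gs;              // the shared instance that p would point at
__gshared Mutex gsLock;   // the lock assumed to guard gs

shared static this() { gsLock = new Mutex; }

void demo()
{
    synchronized (gsLock)
    {
        // the proposed `auto v1 = synchronized p.i;` boils down to this:
        // lock, view the head as unshared, read, unlock on scope exit
        S* head = cast(S*) &gs;        // safe only while the lock is held
        int v1 = head.i;               // value members come out unshared
        SomeStruct v3 = head.s;        // copied out while locked

        // tail-shared: anything reached through an indirection keeps its
        // shared type, since other threads can still see it
        shared(int)* v2 = cast(shared(int)*) head.next;
    }
}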

Make sense?

-Steve

