D 2.0 FAQ on `shared`
Marco Leise via Digitalmars-d
digitalmars-d at puremagic.com
Tue Oct 21 12:41:58 PDT 2014
On Tue, 21 Oct 2014 16:05:57 +0000,
"Sean Kelly" <sean at invisibleduck.org> wrote:
> Good point about a shared class not having any unshared methods.
> I guess that almost entirely eliminates the cases where I might
> define a class as shared. For example, the MessageBox class in
> std.concurrency has one or two ostensibly shared methods and the
> rest are unshared. And it's expected for there to be both shared
> and unshared references to the object held simultaneously. This
> is by design, and the implementation would either be horribly
> slow or straight-up broken if done another way.
>
> Also, of the shared methods that exist, there are synchronized
> blocks but they occur at a fine grain within the shared methods
> rather than the entire method being shared. I think that
> labeling entire methods as synchronized is an inherently flawed
> concept, as it contradicts the way mutexes are supposed to be
> used (which is to hold the lock for as short a time as possible).
> I hate to say it, but if I were to apply shared/synchronized
> labels to class methods it would simply be to service user
> requests rather than because I think it would actually make the
> code better or safer.
I have nothing to add.
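For anyone following along, here is Sean's fine-grained locking point
in code form (a minimal sketch with made-up names, not the actual
std.concurrency MessageBox):

  import core.sync.mutex;

  class MailBox
  {
      private Mutex m_lock;
      private int[] m_items;

      this()
      {
          m_lock = new Mutex;
      }

      // Callable on a shared object, but the mutex is held only
      // around the list update, not for the whole call.
      void put(int item) shared
      {
          // sketch: cast shared away where we know access is guarded
          auto self = cast(MailBox) this;
          // ... preparation work that needs no lock ...
          synchronized (self.m_lock)
          {
              self.m_items ~= item;   // critical section kept short
          }
          // ... more work outside the lock, e.g. signalling ...
      }
  }

Usage would be something like `auto box = cast(shared) new MailBox;
box.put(1);` from any thread.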
> > […] I.e. in this
> > case the programmer must decide between mutex synchronization
> > and atomic read-modify-write. That's not too much to ask.
>
> I agree. I was being pedantic for the sake of informing anyone
> who wasn't aware. There are times where I have some fields be
> lock-free and others protected by a mutex though. See
> Thread.isRunning, for example.
Yep, and that's the reason I carefully formulated it as
"read-modify-write", hehe.
> There are times where a write
> delay is acceptable and the possibility of tearing is irrelevant.
> But I think this falls pretty squarely into the "expert"
> category--I don't care if the language makes it easy.
> > Imagine you have a shared root object that contains a deeply
> > nested private data structure that is technically unshared.
> > Then it becomes not only one more method of the root object
> > that needs to be `synchronized` but it cascades all the way
> > down its private fields as well. One ends up requiring data
> > structures designed for single-threaded execution to
> > grow synchronized methods overnight even though they aren't
> > _really_ used concurrently by multiple threads.
>
> I need to give it some more thought, but I think the way this
> should work is for shared to not be transitive, but for the
> compiler to require that non-local variables accessed within a
> shared method must either be declared as shared or the access
> must occur within a synchronized block. This does trust the
> programmer a bit more than the current design, but in exchange it
> encourages a programming model that actually makes sense. It
> doesn't account for the case where I'm calling pthread_mutex_lock
> on an unshared variable though. Still not sure about that one.
Do you think it would be bad if a pthread_mutex_t* was
declared as shared, or only usable when shared?
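To make the question concrete, the pattern I have in mind is roughly
this (POSIX-only sketch, made-up names, not a proposal for druntime):

  import core.sys.posix.pthread;

  shared pthread_mutex_t guard;

  shared static this()
  {
      pthread_mutex_init(cast(pthread_mutex_t*) &guard, null);
  }

  void touch(ref int counter)   // the guarded data itself is unshared
  {
      // today both calls need a cast, because the C prototypes
      // take a plain pthread_mutex_t*
      pthread_mutex_lock(cast(pthread_mutex_t*) &guard);
      scope (exit) pthread_mutex_unlock(cast(pthread_mutex_t*) &guard);
      counter += 1;
  }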
> > The work items? They stay referenced by the shared Thread
> > until it is done with them. In this particular implementation
> > an item is moved from the list to a separate field that
> > denotes the current item and then the Mutex is released.
> > This current item is technically unshared now, because only
> > this thread can really see it, but as far as the language is
> > concerned there is a shared reference to it because shared
> > applies transitively.
>
> Oh I see what you're getting at. This sort of thing is why
> Thread can be initialized with an unshared delegate. Since
> join() is an implicit synchronization point, it's completely
> normal to launch a thread that modifies local data, then call
> join and expect the local data to be in a coherent state. Work
> queues are much the same.
I have to think about that.
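For later readers, the pattern Sean describes is roughly this
(a minimal sketch):

  import core.thread;

  void work()
  {
      int[] results = new int[](100);   // local, unshared data

      // the delegate captures 'results' even though it is unshared
      auto t = new Thread({
          foreach (i, ref r; results)
              r = cast(int) i * 2;
      });
      t.start();

      // join() is the synchronization point: once it returns,
      // reading the local data from this thread is coherent again
      t.join();
      assert(results[10] == 20);
  }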
[…]
> > Mostly what I use is load-acquire and store-release, but
> > sometimes raw atomic read access is sufficient as well.
> >
> > So ideally I would like to see:
> >
> > volatile -> compiler doesn't reorder stuff
>
> Me too. For example, GCC can optimize around inline assembler.
> I used to have the inline asm code in core.atomic labeled as
> volatile for this reason, but was forced to remove it because
> it's deprecated in D2.
>
>
> > and on top of that:
> >
> > atomicLoad/Store -> CPU doesn't reorder stuff in the pipeline
> > in the way I specify with MemoryOrder.xxx
> >
> > A shared variable need not be volatile, but a volatile
> > variable is implicitly shared.
>
> I'm not quite following you here. Above, I thought you meant the
> volatile statement. Are you saying we would have both shared and
> volatile as attributes for variables?
I wasn't around in the D1 days. There was a volatile
statement? Anyway, what I don't want is the compiler
emitting memory barriers everywhere shared variables are accessed.
When I use mutex synchronization I don't need them, and when I use
atomics I want control over the barriers.
I thought that might end up as two attributes for variables,
but it need not be the case.
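Concretely, with today's core.atomic that control looks like this
(a sketch with made-up names):

  import core.atomic;

  shared bool ready;
  shared int  payload;

  void producer()
  {
      atomicStore!(MemoryOrder.raw)(payload, 42); // plain atomic write
      atomicStore!(MemoryOrder.rel)(ready, true); // store-release publishes it
  }

  void consumer()
  {
      // the load-acquire pairs with the release above; a raw load of
      // 'ready' would be cheaper but would not order the 'payload' read
      while (!atomicLoad!(MemoryOrder.acq)(ready)) {}
      assert(atomicLoad!(MemoryOrder.raw)(payload) == 42);
  }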
--
Marco