Mallocator and 'shared'

Johannes Pfau via Digitalmars-d-learn digitalmars-d-learn at puremagic.com
Tue Feb 14 02:52:37 PST 2017


On Mon, 13 Feb 2017 17:44:10 +0000,
Moritz Maxeiner <moritz at ucworks.org> wrote:

> > Thread unsafe methods shouldn't be marked shared, it doesn't 
> > make sense. If you don't want to provide thread-safe interface, 
> > don't mark methods as shared, so they will not be callable on a 
> > shared instance and thus the user will be unable to use the 
> > shared object instance and hence will know the object is thread 
> > unsafe and needs manual synchronization.  
> 
> To be clear: While I might, in general, agree that using shared 
> methods only for thread safe methods seems to be a sensible 
> restriction, neither language nor compiler require it to be so; 
> and absence of evidence of a useful application is not evidence 
> of absence.

The compiler of course can't require shared methods to be thread-safe,
as it simply can't prove thread safety in all cases. This is like
@trusted: you are supposed to make sure that the function behaves as
promised. The compiler will catch some easy-to-detect mistakes (such as
calling a non-shared method from a shared method, analogous to calling
a @system function from a @safe one), but you can always use casts,
pointers, ... to fool the compiler.
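A minimal sketch (hypothetical struct S, not from the original post) of
what the compiler does and doesn't catch here:

```d
import core.atomic : atomicLoad, atomicOp;

struct S
{
    int x;

    void bump() { ++x; }           // non-shared: assumes exclusive access

    void bumpShared() shared
    {
        // bump();                 // error: not callable using a shared object
        // ++x;                    // direct shared mutation: unchecked for safety
        atomicOp!"+="(x, 1);       // the sanctioned way to mutate shared data
    }
}

void main()
{
    shared S s;
    s.bumpShared();
    assert(atomicLoad(s.x) == 1);  // single-threaded read for the demo
}
```

Note that nothing forces bumpShared() to actually use atomics; marking
it shared is a promise the programmer makes, just as with @trusted.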

You could use the same argument to mark any method as @trusted: yes,
it's possible, but it's a very bad idea.

Though I do agree that there might be edge cases: in a single-core,
single-threaded environment, should an interrupt handler be marked as
shared? Probably not, as no synchronization is required when calling
the function.

But if the interrupt handler accesses a variable that a normal function
accesses as well, the access needs to be 'volatile' (not cached in a
register by the compiler; not closely related to this discussion) and
atomic, as the interrupt might fire in between multiple partial
writes. So the variable should be shared, although there's no
multithreading (in the usual sense).
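As a sketch (the interrupt handler and names are hypothetical; on real
hardware onTimerInterrupt would be wired up as an ISR):

```d
import core.atomic : atomicLoad, atomicOp;

shared uint tickCount;   // written by the "interrupt", read by normal code

// Hypothetical interrupt handler: the increment must be one atomic
// read-modify-write, or the main code could observe a partial update.
void onTimerInterrupt()
{
    atomicOp!"+="(tickCount, 1);
}

uint readTicks()
{
    // atomicLoad both prevents the compiler from caching the value in a
    // register and guarantees a consistent read even if the interrupt
    // fires mid-access on hardware where uint access isn't atomic.
    return atomicLoad(tickCount);
}

void main()
{
    foreach (_; 0 .. 5) onTimerInterrupt(); // simulate five interrupts
    assert(readTicks() == 5);
}
```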

> you'd still need those memory barriers. Also note that the 
> synchronization in the above is not needed in terms of semantics.

However, if you move your synchronized blocks to cover the complete
sub-code blocks, explicit barriers are not necessary. Traditional mutex
locking is basically a superset and is usually implemented using
barriers AFAIK. I guess your point is that we need to define whether
shared methods guarantee some sort of sequential consistency?

struct Foo
{
    string _tmp;

    // `lock` stands for acquiring some mutex protecting _tmp
    shared void doA() { lock { _tmp = "a"; } }
    shared void doB() { lock { _tmp = "b"; } }
    shared string getA() { lock { return _tmp; } }
    shared string getB() { lock { return _tmp; } }
}

thread1:
foo.doB();

thread2:
foo.doA();
auto result = foo.getA(); // could return "b"

I'm not sure how a compiler could prevent such 'logic' bugs. However, I
think it should be considered a best practice to make every shared
function a self-contained entity, so that calling the functions in any
order does not negatively affect the results. Though that might not
always be possible.
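The pseudo-`lock` sketch above can be made compilable with an explicit
Mutex; this version (names and the cast-under-lock pattern are my
assumptions, not from the original) shows that even with every method
locked, nothing ties doA() to the getA() that follows it:

```d
import core.sync.mutex : Mutex;

final class Foo
{
    private Mutex mtx;
    private string _tmp;

    this() shared { mtx = new shared Mutex(); }

    void doA() shared
    {
        mtx.lock(); scope(exit) mtx.unlock();
        *cast(string*)&_tmp = "a";   // shared stripped only while locked
    }

    void doB() shared
    {
        mtx.lock(); scope(exit) mtx.unlock();
        *cast(string*)&_tmp = "b";
    }

    string getA() shared
    {
        mtx.lock(); scope(exit) mtx.unlock();
        return *cast(string*)&_tmp;
    }
}

void main()
{
    auto foo = new shared Foo;
    foo.doA();
    foo.doB();                   // stands in for thread1's doB() interleaving
    assert(foo.getA() == "b");   // the 'logic' bug: getA() observes "b"
}
```

Each individual method is thread-safe, yet the doA()/getA() pair is not
atomic as a unit; that composition problem is what no per-method
locking scheme can solve.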

> My opinion on the matter of `shared` emitting memory barriers is 
> that either the spec and documentation[1] should be updated to 
> reflect that sequential consistency is a non-goal of `shared` 
> (and if that is decided this should be accompanied by an example 
> of how to add memory barriers yourself), or it should be 
> implemented. Though leaving it in the current "not implemented, 
> no comment / plan on whether/when it will be implemented" state 
> seems to have little practical consequence - since no one seems 
> to actually work on this level in D - and I can thus understand 
> why dealing with that is just not a priority.

I remember some discussions about this from some years ago, and IIRC
the final decision was that the compiler will not magically insert any
barriers for shared variables. Instead we have well-defined intrinsics
in core.atomic for dealing with this. Of course most of the surrounding
support isn't implemented (no shared support in core.sync).
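For reference, a sketch of using those intrinsics with explicit memory
ordering (the publish/consume names are illustrative; run here in one
thread, the ordering only matters once a second thread is involved):

```d
import core.atomic : atomicLoad, atomicStore, MemoryOrder;

shared int flag;
shared int data;

// Writer: publish data, then raise the flag with release semantics,
// so no earlier write can be reordered past the flag store.
void publish()
{
    atomicStore!(MemoryOrder.raw)(data, 42);
    atomicStore!(MemoryOrder.rel)(flag, 1);
}

// Reader: spin on the flag with acquire semantics, then read data;
// the acquire load pairs with the writer's release store.
int consume()
{
    while (atomicLoad!(MemoryOrder.acq)(flag) == 0) { }
    return atomicLoad!(MemoryOrder.raw)(data);
}

void main()
{
    publish();
    assert(consume() == 42);
}
```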

-- Johannes
