Sharing in D

Sean Kelly sean at invisibleduck.org
Thu Jul 31 09:09:56 PDT 2008


== Quote from Walter Bright (newshound1 at digitalmars.com)'s article
> downs wrote:
> > Walter Bright wrote:
> >> Steven Schveighoffer wrote:
> >>> I would hazard to guess that adopting this would cause a larger
> >>> rift than const.
> > He's probably right.
> A couple years ago, I was in a room with 30 of the top C++ programmers
> in the country. The occasion was a 2-day conference on how to support
> multithreading in C++. It soon became clear that only two people in the
> room understood the issues (and I wasn't one of them).
> I think I understand the issues now, but it has taken a long time and
> repeatedly reading the papers about it. It is not simple, and it's about
> as intuitive as quantum mechanics.

In my opinion, the easiest way to describe the hardware side of things is
simply to say that the CPU does the exact same thing as the optimizer in
the compiler, only it does this dynamically as the code is executing.  The
rest just involves ideas for how to constrain CPU and compiler optimizations
so the app behaves as expected when concurrency is involved.  In your defense,
"volatile" is a perfectly suitable minimum for D, so you did get the gist of
the issue, even at the outset.  The catch is that, while "volatile" controls
the compiler, you still need to control the CPU, so inline ASM is required
as well.  Fortunately, D has that :-)  So please give yourself a bit more
credit here.  D1 works perfectly well for concurrent programming.  It just
supports this in a sufficiently non-obvious way that only people who
understand the issues involved are likely to realize it.  But as most of
this stuff should really be in a library anyway, I don't see a problem
with things as they are.
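
For example, an unlock-style store might look something like this minimal
sketch (storeFlag and the choice of fence are illustrative, and assume
DMD's inline assembler on an SSE2-capable x86):

    // "volatile" constrains the compiler: it may not reorder or cache
    // memory accesses across this statement.  It says nothing about
    // what the CPU may do at runtime.
    void storeFlag(ref int flag)
    {
        volatile flag = 1;  // compiler barrier around the store
        asm { mfence; }     // CPU barrier: x86 full memory fence
    }

The point is simply that both halves are needed; either one alone leaves
a reordering hole.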

Finding a way for the average user to do safe concurrent programming is an
entirely different issue, and in my opinion it has only really been "solved"
in the functional programming realm.  And that's by eliminating data
sharing--the bane of imperative languages everywhere.

> >> Perhaps. But the alternative is the current wild west approach to
> >> multithreaded programming. With the proliferation of multicore
> >> computers, the era where this is acceptable is coming to an end.
> >
> > Since when is it the language's job to tell us what's acceptable and
> > what's not?
> I meant that programmers will no longer find the language acceptable if
> it doesn't offer better support.

I disagree.  D is a systems language first and foremost.  Much of the
rest can be done in library code.  That isn't to say that I wouldn't
like improved multiprogramming support in the language for the things
that are awkward to do in library code (a thread-local storage class,
for example), but trying to prevent the user from shooting himself in
the foot is unnecessary, particularly if doing so incurs an unavoidable
runtime cost.
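
To illustrate the "awkward in library code" point, thread-local storage
done as a library tends to look like the following sketch (ThreadLocal is
a hypothetical name, not an existing API; Thread.getThis() is real, from
Phobos' std.thread):

    import std.thread;

    // Library-level TLS: every access pays for a lock plus an
    // associative array lookup, and entries for dead threads are
    // never reclaimed.  A storage class could compile down to a
    // cheap per-thread slot instead.
    class ThreadLocal(T)
    {
        private T[Thread] values;

        T get()
        {
            synchronized (this)
            {
                if (auto p = Thread.getThis() in values)
                    return *p;
                return T.init;
            }
        }

        void set(T value)
        {
            synchronized (this)
                values[Thread.getThis()] = value;
        }
    }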

> >> But still, you're far better off than the current wild west
> >> approach where everything is implicitly shared with no protection
> >> whatsoever. The popular double checked locking bug will be
> >> impossible to code in D without stuffing in explicit casts. The
> >> casts will be a red flag that the programmer is making a mistake.
> >
> > The double checked locking "bug" is only a bug on certain
> > architectures, none of which D is supported on.
> Not only is D meant to be more than an x86-only language, but the x86
> family is steadily moving towards the relaxed sequential consistency
> model. The threading models supported by Java and by the upcoming C++0x
> release are both based on the relaxed model; that is clearly where the
> industry expects things to go, and I see no reason to believe otherwise.
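
For reference, the double checked locking pattern in question looks
roughly like this (the names here are illustrative):

    class Singleton {}

    Singleton instance;
    Object    lock;  // assume this is initialized at startup

    Singleton getInstance()
    {
        if (instance is null)          // unsynchronized fast path
        {
            synchronized (lock)
            {
                if (instance is null)  // re-check under the lock
                    instance = new Singleton;
            }
        }
        return instance;
    }

On a sufficiently relaxed architecture, the unsynchronized read can see a
non-null 'instance' before the writes that construct the object become
visible, which is exactly the bug at issue.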

There was some talk in the C++0x memory model discussion that Intel
was actually planning on providing equivalent or stronger guarantees
for future processors rather than weaker ones.  I'll admit to having
been very surprised at the time, but apparently they feel that they
can do so and still provide the performance people expect.  I guess
the resounding failure of the Itanic forced them to rethink what was
required for a successful new processor design.  If I remember
correctly, it was Paul McKenney who brought this up, in case you feel
like searching the archives.


Sean


