DIP 1024--Shared Atomics--Community Review Round 1

Manu turkeyman at gmail.com
Sun Oct 13 19:23:45 UTC 2019

On Sun, Oct 13, 2019 at 1:10 AM Ola Fosheim Grøstad via Digitalmars-d
<digitalmars-d at puremagic.com> wrote:
> On Saturday, 12 October 2019 at 23:36:55 UTC, Manu wrote:
> > Here's a rough draft of one such sort of tool I use all the
> > time in shared-intensive code:
> > https://gist.github.com/TurkeyMan/c16db7a0be312e9a0a2083f5f4a6efec
> Thanks! That looks quite low level, but now I understand more
> what you are looking for.

Well, the implementation is low-level, obviously. But the tool is not.
It's quite approachable at user level; the only risk is deadlock. If
you want to systematically avoid deadlocks, then higher-level tools
appear.

> What I had in mind was writing APIs that allows ordinary
> programmers to do parallel programming safely.

That doesn't exist. I would call that a 'framework', and that's a
very high-level library suite.
What we're doing here is making it possible to write such a thing safely.

'Ordinary' programmers don't do concurrency on a low level. Period.

> Like taking the single threaded code they have written for
> processing a single array or merging two arrays with each other
> and then use a library for speeding it up.

I'm working towards a safe parallel-for... this needs to land first
before I can attempt to argue for the next steps to make that work.
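For context, today's std.parallelism already offers a parallel foreach; it is roughly the kind of algorithm that a safe parallel-for would make statically checkable. This is just a sketch of the existing library, not the DIP's proposed API:

```d
import std.parallelism : parallel;

void main()
{
    auto data = new int[](1000);

    // Each iteration runs on a worker thread from the default task pool;
    // distinct elements mean no two iterations touch the same memory,
    // but nothing in the type system currently enforces that.
    foreach (i, ref x; parallel(data))
        x = cast(int) i * 2;

    assert(data[10] == 20);
}
```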

> Anyway, my core argument is to put more meta-programming power in
> hands of library authors like you so that people who have the
> shoes on can define the semantics they are after. I really don't
> think this has to be done at the compiler level, given the right
> meta programming tools, based on the proposed semantics. And I
> don't think providing those meta programming tools are more work
> than hardwiring more stuff into the compiler. Clearly that is
> just my opinion. Others might feel differently.

I've said this a few times now, and I'll say it again.
Your insistence that shared should be a library thing applies equally
to const. It behaves in an identical manner to const, and if you can't
make a compelling case for moving const to a library, you can't
possibly sell me on that idea with respect to shared.

1. `shared` defines D's thread-local-by-default semantics... without
shared, there is no thread-local by default. That decision is so deep,
there is absolutely no way to move away from it.
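To make point 1 concrete, here is a minimal sketch of what thread-local by default means in practice (the variable and function names are illustrative, not from the DIP):

```d
import core.atomic : atomicLoad, atomicOp;

int tlsCounter;           // thread-local by default: one copy per thread
shared int sharedCounter; // explicitly shared: one copy visible to all threads

void bump()
{
    tlsCounter += 1;                 // plain access is fine; no other thread can see it
    atomicOp!"+="(sharedCounter, 1); // shared data requires atomic access
}

void main()
{
    bump();
    assert(tlsCounter == 1);
    assert(atomicLoad(sharedCounter) == 1);
}
```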
2. We must be able to overload on `shared`. We can't 'overload' on
template wrappers; we can try to specialise and abuse clever
inference and type deduction, but things break down quickly. I
have tried to simulate type qualifiers with template wrappers before.
It always goes *very* badly, and it's a mess to work with.
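A minimal sketch of what overloading on `shared` buys you as a qualifier, which a template wrapper can't replicate (the `Counter` type is illustrative):

```d
import core.atomic : atomicOp;

struct Counter
{
    int count;

    // Overload selected for thread-local instances: plain increment.
    void increment() { count += 1; }

    // Overload selected for shared instances: atomic increment,
    // with identical call-site syntax.
    void increment() shared { atomicOp!"+="(count, 1); }
}

void main()
{
    Counter local;
    shared Counter global;

    local.increment();  // resolves to the thread-local overload
    global.increment(); // resolves to the shared overload

    assert(local.count == 1);
}
```

Because `shared` is a type qualifier like `const`, overload resolution picks the right implementation automatically; a wrapper template would force every call site to know which flavour it holds.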
3. Most importantly, there is a future (not shown in this discussion)
where shared should safely allow some implicit promotions, for
example:

void fun(scope ref shared T arg);

T x;
fun(x); // OK; promote to shared, strictly bound to the lifetime of this call tree

This will allow some of the simple algorithm deployment you refer to above.
Interaction with scope and other escape analysis tools is very
low-level and quite complex, and we need shared to have a solid
definition before we can experiment with that stuff.
This is the key that will unlock parallel-for and other algorithms.
