DConf '22 Talk: Structured Concurrency

Markk markus.kuehni at triviso.ch
Thu Oct 13 19:54:31 UTC 2022


On Thursday, 13 October 2022 at 08:04:46 UTC, Sebastiaan Koppe 
wrote:

> I haven't looked into OpenMP at all, ...

Perhaps I should clarify that my question was not so much about 
the actual "manifestation" of OpenMP, but rather about the 
underlying concepts. A large part of your talk presents the 
benefits of structured programming over the "goto mess" as an 
analogy for the benefits of "Structured Concurrency" over 
anything else. It is at this conceptual level that I do not see 
any innovation over 1997's OpenMP.

I'm not so much talking about whether the syntax is fashionable, 
or whether a built-in compiler feature is the right design 
choice.

> Personally I don't find the `#pragma` approach that elegant to 
> be honest. It also seems to be limited to just one machine.

Clearly, this could be "modernized" and translated into 
attributes on variables, loops, etc.

> The sender/receiver model is literally just the abstraction of 
> an asynchronous computation. This allows you to use it as a 
> building block to manage work across multiple compute resources.

Firstly, I do think OpenMP covers multiple compute resources, as 
long as there is a networked memory abstraction, _or_ compiler 
support 
([LLVM](https://openmp.llvm.org/design/Runtimes.html#remote-offloading-plugin)); see also 
https://stackoverflow.com/questions/13475838/openmp-program-on-different-hosts

Secondly, I don't see why an OpenMP task couldn't equally 
interact with multiple compute resources in a similar way. It's 
not as if, with your solution, a structured code block is sent to 
other compute resources and magically executed there, right?

It all boils down to presenting the 
[Fork-Join model](https://en.wikipedia.org/wiki/Fork%E2%80%93join_model) nicely and safely; the rest is your code doing whatever it likes.

Or maybe I missed something.

_Mark
