[OT] Senders and Receivers
Derek Fawcus
dfawcus+dlang at employees.org
Tue Jun 3 14:10:07 UTC 2025
On Tuesday, 3 June 2025 at 06:06:24 UTC, Paul Backus wrote:
> On Monday, 2 June 2025 at 19:32:03 UTC, Derek Fawcus wrote:
>>
>> Why should "Structured Concurrency" be viewed as a better
>> approach than CSP (or Actors for that matter)?
>
> Here's the most popular explanation:
>
> https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
>
> The main thesis (quoted from the article's conclusion) is this:
>
>> These [unstructured concurrency] primitives are dangerous even
>> if we don't use them directly, because they undermine our
>> ability to reason about control flow and compose complex
>> systems out of abstract modular parts, and they interfere with
>> useful language features like automatic resource cleanup and
>> error propagation.
Thanks. I'll have a read of that later.
The flaw I've seen to date in the arguments against CSP (and
Actors) in favour of 'Structured Concurrency' (arguments which I
do view as valid against raw threads and async/await style
abstractions) is the scope of application.
The way I approach CSP and Actors is to apply them to 'large'
chunks of natural concurrency, and hence generally avoid shared
data access.
Whereas raw threads plus locks, async/await and friends, and
implicitly 'Structured Concurrency', seem to be targeting 'fine
grained' concurrency decomposition, even down to the level of
individual function calls. This I view as inherently difficult
to reason about, and possibly SC then provides some form of aid
in that reasoning.
Now one could use CSP and Actors in a similar way, and there they
would likewise be equally difficult to reason about, but I
suggest they tend to encourage a different, higher-level form of
decomposition.
There the same reasoning issues don't arise, and moreover there
are tools one can apply to prove the network/graph one has for
CSP. (I'm not sure if the same applies to Actors.)
The obvious trade off which one makes when using CSP is the risk
of deadlock.
However the above tools are supposed to avoid that. I've not yet
tried them, since I've yet to hit/create a sufficiently difficult
graph that challenges manual analysis. However it may still be
worth using such tools if the graph may change under long-term
support of the program.
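The kind of deadlock in question is a cyclic wait in the channel graph. A minimal Go sketch of my own (not from the thread): two processes each trying to send before receiving would deadlock on unbuffered channels, and buffering one channel breaks the cycle. This is exactly the sort of property a CSP model checker (e.g. FDR on a CSP-M model of the graph) can verify mechanically instead of by manual analysis.

```go
package main

import "fmt"

func main() {
	// If both a and b were unbuffered, both sides would block on
	// their sends and the graph would deadlock (a cyclic wait).
	a := make(chan int, 1) // buffer of 1 breaks the cycle
	b := make(chan int)
	done := make(chan struct{})

	go func() {
		a <- 1 // does not block, thanks to the buffer
		fmt.Println("peer got:", <-b)
		close(done)
	}()

	b <- 2 // rendezvous with the peer, which is now receiving
	fmt.Println("main got:", <-a)
	<-done // wait for the peer before exiting
}
```

With the buffer removed from `a`, the Go runtime reports "all goroutines are asleep - deadlock!", which is the run-time analogue of what the static tools prove in advance.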
Even without such tools, I find that with sufficient logging
(which is not really a lot) it is easy to reason through any
remaining deadlocks once they arise during testing. Whereas it is
incredibly difficult to reason through the trigger cause of a
shared-data update (even under locks) when it occurs from
multiple concurrent call graphs, which would happen under the
other methodologies (including, I believe, SC).
So I generally see SC as tackling the wrong problem, however I
shall give your reference a fair crack of the whip.
(I listened to the first podcast above; I've yet to get to the
other two)
More information about the Digitalmars-d
mailing list