concurrency call to arms
Russel Winder
russel at winder.org.uk
Wed Aug 22 16:49:01 UTC 2018
On Thu, 2018-08-16 at 20:30 +0000, John Belmonte via Digitalmars-d
wrote:
> This is actually not about war; rather the peace and prosperity
> of people writing concurrent programs.
>
> (Andrei, I hope you are reading and will check out
>
> https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
>
On skimming this, I get the feeling the author doesn't really
understand goroutines and channels. Actually I am not entirely sure the
person understands concurrency and parallelism.
> and
> https://vorpus.org/blog/timeouts-and-cancellation-for-humans/)
>
> Recently I've been working with Trio, which is a Python async
> concurrency library implementing the concepts described in the
> articles above. A synopsis (Python):
Have you tried asyncio in the Python standard library? Is Trio better?
> with open_task_container() as container:
>     container.start_task(a)
>     container.start_task(b)
>     await sleep(1)
>     container.start_task(c)
> # end of with block
>
> # program continues (tasks a, b, c must be completed)...
Assuming a, b, and c run in parallel and this is just a nice Pythonic
way of ensuring a join, this is fairly standard fork/join thread pool
task management – except that Trio runs all of its tasks on a single
thread, so the above is really time-division multiplexing of tasks.
std.parallelism can already handle this sort of stuff in D as far as I
know.
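For what it is worth, a rough sketch of that fork/join shape in D
using std.parallelism might look something like the following; the
task bodies a, b and c are placeholders, and it makes no attempt at
the cancel-the-siblings-on-failure behaviour Trio gives you:

    import std.parallelism : task, taskPool;
    import core.thread : Thread;
    import core.time : seconds;

    void a() { /* ... */ }
    void b() { /* ... */ }
    void c() { /* ... */ }

    void main()
    {
        auto ta = task!a();
        auto tb = task!b();
        taskPool.put(ta);       // fork: queue onto the shared pool
        taskPool.put(tb);
        Thread.sleep(1.seconds);
        auto tc = task!c();
        taskPool.put(tc);

        ta.yieldForce;          // join: block until each task is done
        tb.yieldForce;
        tc.yieldForce;

        // program continues; a, b and c are complete here
    }

Unlike the Trio version, these tasks can actually run in parallel on
the pool's worker threads rather than being multiplexed onto one.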
> The point is that tasks started in the container's scope will not
> live past the scope. Scope exit will block until all tasks are
> complete (normally or by cancellation). If task b has an
> exception, all other tasks in the container are cancelled.
Use of scope like this is a good thing, and something GPars, Quasar,
and others support. Using a context manager in Python is clearly a
very Pythonic way of doing it.
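The nearest D idiom is presumably a scope guard wrapping the joins, so
that the tasks cannot outlive the block however it is exited; a
minimal sketch, again assuming std.parallelism tasks:

    import std.parallelism : task, taskPool;

    void a() { /* ... */ }
    void b() { /* ... */ }

    void doWork()
    {
        auto ta = task!a();
        auto tb = task!b();
        // Runs on normal exit and on exception, so both tasks are
        // always joined before doWork returns.
        scope (exit)
        {
            ta.yieldForce;
            tb.yieldForce;
        }
        taskPool.put(ta);
        taskPool.put(tb);
        // ... rest of the scoped work ...
    }

It is not the whole story (there is no automatic cancellation of the
other tasks when one of them fails), but the lifetime-tied-to-a-scope
part maps quite naturally.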
> What this means is that task lifetimes can be readily understood
> by looking at the structure of a program. They are tied to
> scoped blocks, honor nesting, etc.
>
> Similar for control of timeouts and cancellation:
>
> with fail_after(10):  # raise exception if scope not completed in 10s
>     reply = await request(a)
>     do_something(reply)
>     reply2 = await request(b)
>     ...
>
> These are novel control structures for managing concurrency.
> Combining this with cooperative multitasking and explicit,
> plainly-visible context switching (i.e. async/await-- sorry
> Olshansky) yields something truly at the forefront of concurrent
> programming. I mean no callbacks, almost no locking, no
> explicitly maintained context and associated state machines, no
> task lifetime obscurity, no manual plumbing of cancellations, no
> errors dropped on the floor, no shutdown hiccups. I'm able to
> write correct, robust, maintainable concurrent programs with
> almost no mental overhead beyond a non-concurrent program.
I'd disagree that these are novel control structures. The concepts
have been around for a couple of decades. They have different
expressions in different languages. Python's context manager just
makes it all very neat.
Clearly, getting rid of the nitty-gritty management detail of
concurrency and parallelism is a good thing. Processes and channels
have been doing all this for decades, but have only recently become
fashionable – one up to Rob Pike and team. I've not followed
async/await in C#, but in Python it is a tool for concurrency, clearly
not for parallelism. Sadly, async/await has become a fashion, which
means it is being forced into programming languages that really do not
need it. Still, there we see the power of fashion-driven programming
language development.
> Some specimens (not written by me):
> #1: the I/O portion of a robust HTTP 1.1 server
> implementation in about 200 lines of code.
>
> https://github.com/python-hyper/h11/blob/33c5282340b61ddea0dc00a16b6582170d822d81/examples/trio-server.py
> #2: an implementation of the notoriously difficult "happy
> eyeballs" networking connection algorithm in about 150 lines of
> code.
>
> https://github.com/python-trio/trio/blob/7d2e2603b972dc0adeaa3ded35cd6590527b5e66/trio/_highlevel_open_tcp_stream.py
>
> I'd like to see a D library supporting these control structures
> (with possibly some async/await syntax for the coroutine case).
> And of course for vibe.d and other I/O libraries to unify around
> this.
Kotlin, Java, etc. are all jumping on the coroutines bandwagon, but
why? There is no actual need for them given that you can already have
blocking tasks in a thread pool with channels.
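For instance, a rough sketch in D of the blocking-worker-plus-channel
style using std.concurrency (one spawned worker rather than a pool,
and mailboxes rather than true CSP channels; the little job protocol
here is made up purely for illustration):

    import std.concurrency : spawn, send, receive, receiveOnly, ownerTid;
    import std.stdio : writeln;

    void worker()
    {
        bool running = true;
        while (running)
        {
            // Block on the mailbox until something arrives.
            receive(
                // do the (blocking) work, send the result back
                (int job)    { ownerTid.send(job * 2); },
                (string msg) { if (msg == "done") running = false; }
            );
        }
    }

    void main()
    {
        auto tid = spawn(&worker);
        tid.send(21);
        // blocks until the worker replies
        writeln("result: ", receiveOnly!int());
        tid.send("done");
    }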
> I'll go out on a limb and say if this could happen in addition to
> D addressing its GC dirty laundry, the language would actually be
> an unstoppable force.
Why?
Are coroutines with language syntax support really needed?
And whilst Go is obsessively improving its GC so as to make it a
non-issue in any performance argument, it seems this is an insoluble
problem in D.
--
Russel.
===========================================
Dr Russel Winder t: +44 20 7585 2200
41 Buckmaster Road m: +44 7770 465 077
London SW11 1EN, UK w: www.russel.org.uk