Opportunities for D

Sean Kelly via Digitalmars-d digitalmars-d at puremagic.com
Thu Jul 10 08:13:27 PDT 2014


On Thursday, 10 July 2014 at 06:32:32 UTC, logicchains wrote:
> On Thursday, 10 July 2014 at 05:58:56 UTC, Andrei Alexandrescu 
> wrote:
>> We already have actor-style via std.concurrency. We also have 
>> fork-join parallelism via std.parallelism. What we need is a 
>> library for CSP.
>
> The actor-style via std.concurrency is only between 
> 'heavyweight' threads though, no? Even if lightweight threads 
> may be overhyped, part of the appeal of Go and Erlang is that 
> one can spawn tens of thousands of threads and it 'just works'. 
> It allows the server model of 'one green thread/actor per 
> client', which has a certain appeal in its simplicity. Akka 
> similarly uses its own lightweight threads, not heavyweight JVM 
> threads.

No.  I've had an outstanding pull request to fix this for quite a 
while now.  I think there's a decent chance it will be in the 
next release.  To be fair, that pull request mostly provides the 
infrastructure for changing how concurrency is handled.  A 
fiber-based scheduler backed by a thread pool doesn't exist yet, 
though it shouldn't be hard to write (the big missing piece is 
having a dynamic thread pool).  I was going to try to knock one 
out while on the airplane in a few days.


> Message passing between lightweight threads can also be much 
> faster than message passing between heavyweight threads; take a 
> look at the following message-passing benchmark and compare 
> Haskell, Go and Erlang to the languages using OS threads: 
> http://benchmarksgame.alioth.debian.org/u64q/performance.php?test=threadring

Thanks for the benchmark.  I didn't have a good reference point 
for what kind of performance to aim for, so there are a few 
possible optimizations I've left out of std.concurrency because 
they didn't buy much in my own testing (like a free list of 
message objects).  I may have to revisit those ideas with this 
benchmark in mind and see what happens.
