On Concurrency
Etienne Cimon via Digitalmars-d-learn
digitalmars-d-learn at puremagic.com
Sun Apr 20 21:03:55 PDT 2014
On 2014-04-18 13:20, "Nordlöw" wrote:
> Could someone please give some references to thorough explanations of
> these latest concurrency mechanisms
>
> - Go: Goroutines
> - Coroutines (Boost):
> - https://en.wikipedia.org/wiki/Coroutine
> -
> http://www.boost.org/doc/libs/1_55_0/libs/coroutine/doc/html/coroutine/intro.html
>
> - D: core.thread.Fiber: http://dlang.org/library/core/thread/Fiber.html
> - D: vibe.d
>
> and how they relate to the following questions:
>
> 1. Is D's Fiber the same as a coroutine? If not, how do they differ?
>
> 2. Typical usecases when Fibers are superior to threads/coroutines?
>
> 3. What mechanism does/should D's builtin Threadpool ideally use to
> package and manage computations?
>
> 4. I've read that vibe.d has a more lightweight mechanism than what
> core.thread.Fiber provides. Could someone explain the difference to me?
> When will this be introduced, and will it be a breaking change?
>
> 5. And finally how does data sharing/immutability relate to the above
> questions?
I'll admit that I'm not the expert you may be expecting, but I can take a
stab at 1, 2, and 5. Coroutines, fibers, threads, multi-threading, and all
of this task-management "stuff" form a complex field, and most kernels rely
on these techniques to do their magic. The core idea is keeping stack
frames and their contexts around between switches. Working with it felt
more complex to me than meta-programming, but I've been reading about it
and getting the hang of it over the last 7 months.
Coroutines give you control over exactly what you'd like to keep around
once the "yield" returns. With Boost, you make a callback with a
"boost::asio::yield_context" or something of the sort, and it contains
exactly what you expect, but you receive it in another function that takes
it as a parameter. That makes it asynchronous, but execution can't simply
resume within the same function, because it relies on a callback function,
much like JavaScript.
D's fibers are much simpler (we can argue whether that makes them more or
less powerful). You launch one like a thread ( Fiber fib = new Fiber(
&delegate ) ) and move from fiber to fiber with fib.call() and
Fiber.yield(). Calling Fiber.yield() inside a fiber's function stops it in
the middle of its work, wherever you want, and it returns as if the
function had ended. But you can rest assured that once the fiber is
call()ed again, it resumes with all its stack info restored. This is made
possible by some very low-level assembly magic; look through the library
sources, it's really impressive. The guy who wrote it must be some kind of
wizard.
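Here's a minimal sketch of that call/yield flow; the log array and the function name are just mine, for illustration. The delegate suspends itself with Fiber.yield(), and each call() resumes it exactly where it stopped, stack intact:

```d
import core.thread : Fiber;

// Collects the interleaving of fiber and caller steps so the
// control flow is visible.
string[] runDemo()
{
    string[] log;
    Fiber fib = new Fiber({
        log ~= "fiber: step 1";
        Fiber.yield();              // suspend; control returns to the caller
        log ~= "fiber: step 2";     // resumes here, stack intact
    });

    fib.call();                     // runs until the yield
    log ~= "caller: between calls";
    fib.call();                     // resumes after the yield; fiber finishes
    assert(fib.state == Fiber.State.TERM);
    return log;
}
```

Running runDemo() shows the caller's line landing between the fiber's two steps, which is the whole point: one function, suspended mid-body and resumed later.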
Vibe.d's tasks are built right on top of this core.thread.Fiber (explained
above), with the difference that they gain more power by sitting on top of
a kernel-powered event loop that spins on epoll (or the Windows message
queue) to resume them; the libevent driver for vibe.d is the
best-developed event loop for this. So when a new "Task" is started (it
has the Fiber class as a private member), you can yield() it until the
kernel wakes it up again with a timer, socket event, signal, etc., and it
resumes right after the yield() call. This is what lets vibe.d do async
I/O while remaining procedural, without shuffling mutexes around: the
fiber is yielded every time it needs to wait on a network socket and woken
again when packets are received, until the expected buffer length is met!
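As a rough sketch of that event-loop-driven flow, assuming vibe.d's vibe.core.core API (runTask, sleep, runEventLoop), a task can suspend on a kernel timer instead of blocking a thread:

```d
import vibe.core.core : runTask, sleep, runEventLoop, exitEventLoop;
import core.time : msecs;

void main()
{
    runTask({
        // sleep() yields this task's fiber; the event loop resumes it
        // when the kernel timer fires, so no thread is blocked.
        sleep(10.msecs);
        exitEventLoop();    // stop the loop once the task is done
    });
    runEventLoop();         // drives epoll/IOCP and wakes yielded tasks
}
```

Waiting on a socket works the same way: the read call yields the task's fiber and the event loop resumes it when data arrives.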
I admit this answer is mediocre, and you could spend months reading about
everything I mentioned; it's a very wide subject. You can have "task
message queues" and "task concurrency" with "task semaphores": it's like
multi-threading in a single thread!