[dmd-concurrency] Sending messages to non-listening threads

Kevin Bealer kevinbealer at gmail.com
Wed Jan 13 00:48:57 PST 2010


On Tue, Jan 12, 2010 at 3:12 PM, Steve Schveighoffer <schveiguy at yahoo.com> wrote:

> The other possibility is to only allow so many messages to be queued before
> the queue is "full".  Unsure what you want to do at that point though,
> either block or return an EAGAIN type error (maybe you can do both depending
> on how you want to send the message).  This solution is at least more likely
> to show logic errors, since with the first solution you may forget to close
> the handle.  But it also could cause deadlocks or force you to write
> handling code for full queue conditions.
>
> How does erlang handle this?
>
> -Steve
>
>
Not sure what erlang does, but...

I've tried to implement request/response on top of a system that happens
to use blocking queues, only to discover that this pattern can deadlock.
A 'server' actor can fill up both its input queue and its output queue.
If any of the actors writing requests into the server are also blocked
waiting for responses, you can get a deadlock: A is trying to write a
response to B, but B never reads it, because as part of processing
another new incoming request, B has blocked on writing yet another
request to A.
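
To make the shape of that failure concrete, here is a rough D sketch,
written against the spawn/send/receive style of API we've been talking
about.  The bounded mailbox (setMaxMailboxSize with an OnCrowding.block
policy) is my assumption of what a "queue that blocks when full"
primitive might look like; everything else is made up for the example:

import std.concurrency;

// Each actor serves a request by issuing two follow-up requests to its
// partner, so the number of in-flight messages keeps growing.  Once
// both bounded mailboxes are full and both actors are parked inside
// send(), nobody is left receiving and neither send() can complete.
void actor()
{
    setMaxMailboxSize(thisTid, 4, OnCrowding.block); // assumed bounded queue
    auto partner = receiveOnly!Tid();                // learn who we talk to
    for (;;)
    {
        receive((string req) {
            send(partner, "request");                // may block here...
            send(partner, "request");                // ...or here, forever
        });
    }
}

void main()
{
    auto a = spawn(&actor);
    auto b = spawn(&actor);
    send(a, b);              // introduce the peers to each other
    send(b, a);
    send(a, "request");      // seed the traffic; the deadlock follows
}

With a blocking overflow policy this stops making progress after a
moment: both threads are parked in send() and neither mailbox will ever
drain again.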

I think blocking is kind of like reference-counted memory -- you don't
want to do it unless there is a way to break the cycles in the graph.
In this case that would mean having a few queues, in the right places,
that are not capable of blocking (they would be the 'ever expanding'
kind).  (I'm not sure this is enough.  Even if it is enough in theory,
it might not be enough in practice, since the system might grind to a
halt if things expand too much.)

An idea that I think is likely to work better is to prioritize actors
reading from large input queues over actors reading from small ones, or
else to prioritize readers of a large queue over writers to it.
Depending on how much circularity there is in the design, it might also
be possible to prioritize "later" processes over "earlier" ones.  None
of this is foolproof, of course.
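
I don't know of a scheduler hook that exposes this today, so purely as
a sketch of the policy (Stage, inbox, and schedulePass are all made up
for illustration), the rule I have in mind is roughly:

// One scheduling pass over a set of pipeline stages: always run the
// stage with the deepest backlog, so readers of large queues get CPU
// time ahead of the writers that are filling them.  A real scheduler
// would also need fairness, blocking, and wake-ups; this shows only
// the priority rule itself.
struct Stage
{
    string name;
    int[] inbox;                 // pending work items, oldest first

    void step()                  // process (here: just discard) one item
    {
        if (inbox.length)
            inbox = inbox[1 .. $];
    }
}

void schedulePass(Stage[] stages)
{
    size_t busiest = 0;
    foreach (i, ref s; stages)
        if (s.inbox.length > stages[busiest].inbox.length)
            busiest = i;
    stages[busiest].step();
}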

Suppose you have three processes, A, B, and C, which handle each piece
of data in that order; C is slower than the others, and you have at
least four CPUs working on the problem.
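
(To put made-up numbers on it: if A and B together can hand C 100 items
a second but C only finishes 60, then roughly 40 items a second pile up
in front of C -- or else A and B have to sit idle much of the time.)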

My hunch is that, no matter what you do at the framework level, you
will end up with either an ever expanding queue between B and C or
wasted CPU time.  If there is circularity as well, you may get an ever
expanding queue, wasted CPU, or deadlock.  If there is enough
circularity and/or general complexity in the overall message-passing
'graph', you may not even be able to reason about what is going to
happen.

You could send messages "backward" along the chain saying "please stop
until I tell you otherwise".  That kind of back-pressure could be used
to resist this problem.  (NOTE: if you have these meta-messages, be
sure to check for them before checking for more incoming elephants that
need to be turned into hot dogs.)

--> This assumes you can tell how full your own mailbox is!
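
Here is roughly what the receiving side of that protocol could look
like, again in the spawn/send/receive style.  Pause, Resume, and
Elephant are made-up types, receiveTimeout with a zero timeout is my
assumption for a non-blocking poll, and the question of *when* the
downstream stage decides to send Pause -- the "how full is my mailbox"
problem above -- is left out entirely:

import std.concurrency;
import core.time : Duration;

struct Pause {}      // "please stop until I tell you otherwise"
struct Resume {}
struct Elephant {}   // stand-in for one unit of real work

// A middle stage that honors "please stop" notes coming back from the
// stage after it.  Control messages are drained with a zero-timeout
// poll *before* we wait for more work, so a Pause can't get stuck
// behind a long backlog of Elephants.
void stage(Tid downstream)
{
    bool paused = false;
    for (;;)
    {
        // 1. Check for back-pressure first, without blocking.
        while (receiveTimeout(Duration.zero,
                   (Pause p)  { paused = true;  },
                   (Resume r) { paused = false; })) {}

        if (paused)
        {
            // 2. While paused, wait only for control traffic; work
            //    piles up behind us, which is the whole point.
            receive((Pause p)  { paused = true;  },
                    (Resume r) { paused = false; });
            continue;
        }

        // 3. Otherwise take one piece of work and pass it along.
        receive((Elephant e) { send(downstream, e); });
    }
}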

And one downside of this application-controlled volume throttling is that
the system can probably no longer reason about deadlocks at all.  Since the
messages going "backward" are always part of a two-way flow, there is no way
for the system to automatically tell which end of the animal is the head and
which is the tail.

This suggests that different policies might be available for each queue...
(does anyone have experience with a system that has such configurable
queues?)
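
For what it's worth, the kind of per-queue knob I'm imagining would
look something like the sketch below.  setMaxMailboxSize and OnCrowding
are stand-in names for a bounded-mailbox primitive with a selectable
overflow policy, not an API that exists today:

import std.concurrency;

// Hypothetical per-queue policies: two bounded mailboxes with
// different overflow behaviour, plus the unbounded default for a
// queue meant to break a cycle.
void configure(Tid frontEnd, Tid logger)
{
    // Front end: apply back-pressure -- block producers once 1000
    // messages are queued.
    setMaxMailboxSize(frontEnd, 1000, OnCrowding.block);

    // Logger: overload should mean dropped log lines, not a stalled
    // application.
    setMaxMailboxSize(logger, 1000, OnCrowding.ignore);

    // A queue chosen to break a cycle is simply left unbounded (the
    // default), so it can never take part in a send/send deadlock --
    // at the cost of being the "ever expanding" kind.
}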

My advice in general is that this kind of message-passing system can't
be completely encapsulated.  There are failure states; the application
designer *needs* to be able to measure the system to gauge whether it
is about to stumble into one; and in general, designing the
message-passing control and recovery behavior is part of designing your
system.

And it is probably one of the next big "it's hard to get right" topics for
concurrent programming.

Kevin