Why does std.concurrency.Mailbox use lists?
Sean Kelly via Digitalmars-d
digitalmars-d at puremagic.com
Tue Sep 9 13:47:22 PDT 2014
On Monday, 8 September 2014 at 17:06:34 UTC, badlink wrote:
> Hello,
> I'm creating a real-time 3D app using std.concurrency for
> exchanging messages between the renderer and a few mesher
> threads.
> The app runs fine, but once in a while I get a consistent FPS
> drop.
> Already aware that the cause is the GC (a call to GC.disable
> eliminates the slowdowns), I timed all functions and found that
> spawn() sometimes requires more than 50 ms to return.
That's strange. As you can see, spawn() just creates a kernel
thread. There's an allocation for the context of a closure, but
this has nothing to do with the message queue.
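
If you want to narrow it down further, timing the call in
isolation should show whether the spike really happens inside
spawn(). A minimal sketch, where worker() is just a placeholder
for your mesher loop:

import core.time : MonoTime;
import std.concurrency : spawn;
import std.stdio : writeln;

void worker()
{
    // placeholder for the mesher thread body
}

void main()
{
    auto t0 = MonoTime.currTime;
    spawn(&worker);
    auto elapsed = MonoTime.currTime - t0;
    writeln("spawn() took ", elapsed.total!"msecs", " ms");
}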
> I did a quick search through Phobos and found out that the
> Mailbox implementation in std.concurrency uses a private List
> struct where every call to .put() needs to allocate a new node
> :(
>
> Why is that?
Incoming messages are appended to the list and removed from the
middle during receive, so a list seemed like a natural
representation. This could be optimized by putting a "next" ptr
inside the Message object itself, making the list literally just
a list of messages instead of a list of nodes referencing
messages. That would eliminate half of the allocations while
avoiding the problems a ring buffer has with removals from the
middle or with growing the list size.
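
Roughly what I mean, heavily simplified; Message here is only a
stand-in for the real struct in std.concurrency, not its actual
layout:

import std.variant : Variant;

struct Message
{
    Variant data;    // payload (stand-in for the real fields)
    Message* next;   // intrusive link: the message is its own list node
}

struct MessageList
{
    Message* head;
    Message* tail;

    // append: the only allocation is the Message itself, no extra node
    void put(Message* m)
    {
        m.next = null;
        if (tail !is null) tail.next = m;
        else head = m;
        tail = m;
    }

    // unlink from anywhere during receive; prev is the node before m,
    // or null when m is the head
    void remove(Message* prev, Message* m)
    {
        if (prev !is null) prev.next = m.next;
        else head = m.next;
        if (m is tail) tail = prev;
    }
}

The only per-message allocation left is the Message itself, and
unlinking during receive stays O(1) once the match is found.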
> I would have used, for example, a ring buffer, which can do
> all the things the Mailbox needs and is faster in every way.
> The growing cost can be compensated for by an appropriate call
> to setMaxMailboxSize(), and any random removal can be a swap of
> the last element with the one being removed.
>
> If I haven't overlooked something, I'd like to file a
> performance bug on Bugzilla.
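(For reference, the swap-removal scheme you describe would look
roughly like the hypothetical sketch below; this isn't anything
that exists in Phobos, and note that the swap reorders the queue,
which matters for receive()'s first-match behavior.)

import std.exception : enforce;

struct RingMailbox(T)
{
    T[] buf;        // capacity fixed up front, cf. setMaxMailboxSize
    size_t head;    // index of the oldest message
    size_t len;

    void put(T m)
    {
        enforce(len < buf.length, "mailbox full");
        buf[(head + len) % buf.length] = m;
        ++len;
    }

    // pop the oldest message (the plain FIFO path)
    T take()
    {
        enforce(len > 0, "mailbox empty");
        auto m = buf[head];
        head = (head + 1) % buf.length;
        --len;
        return m;
    }

    // i is a logical offset from the oldest message; O(1) removal by
    // swapping the newest message into the hole, which reorders the queue
    void removeAt(size_t i)
    {
        auto idx  = (head + i) % buf.length;
        auto last = (head + len - 1) % buf.length;
        buf[idx] = buf[last];
        --len;
    }
}
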
See above. I think the optimization of the list is a good idea
regardless. I've also been considering adding free lists for
discarded messages to avoid GC allocations. I did this when
originally writing std.concurrency, but it didn't actually seem to
speed things up when profiling. This probably deserves a revisit.
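
In rough form, the free list could recycle the Message structs
themselves once their payload has been consumed; again this is
only a sketch of the idea, not what std.concurrency does today:

import std.variant : Variant;

// same shape as the intrusive Message sketched earlier
struct Message { Variant data; Message* next; }

Message* freeList;   // thread-local cache of already-consumed messages

Message* allocMessage()
{
    if (auto m = freeList)      // reuse a discarded message when possible
    {
        freeList = m.next;
        return m;
    }
    return new Message;         // otherwise fall back to a GC allocation
}

void freeMessage(Message* m)
{
    m.data = Variant.init;      // drop the payload so the GC can reclaim it
    m.next = freeList;
    freeList = m;
}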