the Disruptor framework vs The Complexities of Concurrency

Dmitry Olshansky dmitry.olsh at gmail.com
Thu Dec 13 08:07:10 PST 2012


12/13/2012 4:59 AM, David Piepgrass wrote:
>> Maybe, but I'm still not clear what the differences are between a
>> normal ring buffer (not a new concept) and this "disruptor" pattern...
>
> Key differences with a typical lock-free queue:

Nice summary. I wasn't sure where I should start describing why it's not 
"just a ring buffer". For a start, I'd define it as a framework for 
concurrent processing of a stream of tasks/requests/items on a 
well-structured multi-stage pipeline.
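
A minimal sketch of what I mean, using the Disruptor's Java DSL (the 
event and handler names are mine, and the exact constructor overloads 
may differ between library versions):

import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import java.util.concurrent.Executors;

// One preallocated slot of the ring buffer; the producer overwrites its
// fields in place instead of allocating a new object per item.
class ValueEvent {
    long value;
}

public class MinimalPipeline {
    public static void main(String[] args) {
        // Ring size must be a power of two.
        Disruptor<ValueEvent> disruptor = new Disruptor<>(
                ValueEvent::new, 1024, Executors.defaultThreadFactory());

        // A consumer stage: invoked for every published slot, in sequence order.
        EventHandler<ValueEvent> printer = (ev, seq, endOfBatch) ->
                System.out.println("seq=" + seq + " value=" + ev.value);
        disruptor.handleEventsWith(printer);

        RingBuffer<ValueEvent> rb = disruptor.start();

        // Producer side: claim a slot, fill it in place, publish it.
        for (long i = 0; i < 10; i++) {
            long seq = rb.next();
            try {
                rb.get(seq).value = i;
            } finally {
                rb.publish(seq);
            }
        }
        disruptor.shutdown();
    }
}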

> - Lightning fast when used correctly. It observes that not only is
> locking expensive, but even CAS (compare and swap) is not cheap, so it
> avoids CAS in favor of memory barriers (unless multiple writers are
> required). Memory allocation is avoided too, by preallocating everything.
> - Multicast and multisource: multiple readers can view the same entries.
> - Separation of concerns: disruptors are a whole library instead of a
> single class, so disruptors support several configurations of producers
> and consumers, as opposed to a normal queue that is limited to one or
> two arrangements. To me, one particularly interesting feature is that a
> reader can modify an entry and then another reader can flag itself as
> "dependent" on the output of the first reader.

And the producer, in turn, depends on the last consumers in this graph: 
it cannot overwrite a slot until they are done with it.

There is also a highly flexible (policy-based) choice of how consumers 
wait for data:
- busy _spin_ on it, getting the highest responsiveness at the cost of 
wasted CPU cycles;
- lazy spin (that yields), with no outright burning of CPU resources but 
higher latency;
- or even locking with wait-notify, which saves greatly on CPU but kills 
responsiveness and throughput (though it gives freedom to spend CPU 
elsewhere).

Plus there are different strategies for a multi-producer or 
single-producer setup; both choices are made up front when constructing 
the thing (see the sketch below).
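
In the Java library (3.x, if I recall the overloads correctly) both 
knobs are just constructor arguments. A fragment, not a complete 
program, reusing the ValueEvent type from the earlier sketch:

import com.lmax.disruptor.YieldingWaitStrategy;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.dsl.ProducerType;
import java.util.concurrent.Executors;

// Wait strategies, roughly matching the list above:
//   BusySpinWaitStrategy - burn a core spinning, lowest latency
//   YieldingWaitStrategy - spin a bit, then Thread.yield()
//   SleepingWaitStrategy - spin, yield, then park briefly
//   BlockingWaitStrategy - lock + condition variable, cheapest on CPU
//
// ProducerType.SINGLE lets the claim of the next sequence be a plain
// increment (no CAS); ProducerType.MULTI uses CAS so several threads
// may publish concurrently.
Disruptor<ValueEvent> disruptor = new Disruptor<>(
        ValueEvent::new, 1024, Executors.defaultThreadFactory(),
        ProducerType.SINGLE,
        new YieldingWaitStrategy());   // swap in any strategy from above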

> So really it supports not
> just readers and writers but "annotators" that both read and write. And
> the set of readers and writers can be arranged as a graph.
>
Yeah, to me it indicates multi-thread-friendly multi-pass processing.
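
Roughly, continuing the sketch above (handler names are mine, and this 
wiring would replace the single printer stage before start() is called), 
the graph is declared like this:

// "enricher" is an annotator: it writes back into the slot it reads.
EventHandler<ValueEvent> enricher = (ev, seq, endOfBatch) -> ev.value *= 2;
EventHandler<ValueEvent> journaller = (ev, seq, endOfBatch) ->
        System.out.println("journal seq=" + seq);
EventHandler<ValueEvent> consumer = (ev, seq, endOfBatch) ->
        System.out.println("consume value=" + ev.value);  // sees the enriched value

// enricher and journaller each see every entry, running in parallel;
// consumer is only handed a slot once both are done with it, and the
// producer in turn cannot reuse the slot until consumer has passed it.
disruptor.handleEventsWith(enricher, journaller)
         .then(consumer);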

Another important observation, IMHO, is that the order of processed 
items is preserved*, and this is an interesting property if you consider 
doing the same stages as lock-free queues with a pool of consumers at 
each stage. Things will get out-of-order (O-o-O) very quickly.

*That was stressed in the article about LMAX, i.e. a trade depends on 
the order of other trades. So if your trade happens later than expected 
(or earlier), it is going to be a different trade.
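
To illustrate the contrast with a queue drained by a worker pool (plain 
JDK executors here, nothing disruptor-specific):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OutOfOrder {
    public static void main(String[] args) {
        // Submission order is FIFO, but completion order across 4 workers
        // is not guaranteed: "trade 7" may well print before "trade 3".
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10; i++) {
            final int trade = i;
            pool.submit(() -> System.out.println("completed trade " + trade));
        }
        pool.shutdown();
        // In the disruptor each handler is a single thread walking the
        // sequence 0, 1, 2, ... so per-stage ordering holds by construction.
    }
}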

> See also
> http://stackoverflow.com/questions/6559308/how-does-lmaxs-disruptor-pattern-work
>


-- 
Dmitry Olshansky

