the Disruptor framework vs The Complexities of Concurrency

Nick B nick.barbalich at gmail.com
Sun Dec 9 19:58:07 PST 2012


On Saturday, 8 December 2012 at 19:54:29 UTC, Dmitry Olshansky 
wrote:
> 12/8/2012 9:08 PM, Nick Sabalausky wrote:
>> On Fri, 07 Dec 2012 19:55:50 +0400
>> Dmitry Olshansky <dmitry.olsh at gmail.com> wrote:
>>
>>> 12/7/2012 1:43 PM, deadalnix wrote:
>>>> On Friday, 7 December 2012 at 09:03:58 UTC, Dejan Lekic 
>>>> wrote:
>>>>> On Friday, 7 December 2012 at 09:00:48 UTC, Nick B wrote:
>>>>>>
>>>>>>> [Andrei's comment ] Cross-pollination is a good thing 
>>>>>>> indeed.
>>>>>>
>>>>>> I came across this while searching the programme of the 
>>>>>> conference
>>>>>> that Walter is attending in Australia.
>>>>>>
>>>>>>
>>>>>> This gentleman, Martin Thompson
>>>>>>
>>>>>> http://www.yowconference.com.au/general/details.html?speakerId=2962
>>>>>>
>>>>>>
>>>>>> The main idea is in this paper (11 pages, PDF):
>>>>>>
>>>>>> http://disruptor.googlecode.com/files/Disruptor-1.0.pdf
>>>>>>
>>>>>>
>>>>>> and here is a review of the architecture by Martin Fowler:
>>>>>>
>>>>>> http://martinfowler.com/articles/lmax.html
>>>>>>
>>
>> Fascinating.
>>
>>>
>>> So the last problem is that I don't see how it cleanly scales
>>> with the number of messages: there is only one instance of a
>>> specific consumer type at each stage. How do these get scaled
>>> if one core working on each is not enough?
>>>
>>
>> As Fowler's article mentions at one point, you can have multiple
>> consumers of the same type working concurrently on the same ring
>> by simply having each of them skip every N-1 items (for N
>> consumers of the same type). I.e., if you have two consumers of
>> the same type, one operates on the even #'d items, the other on
>> the odd.
>>
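
For what it's worth, here is a rough sketch of what that even/odd
split might look like. This is plain Java with made-up names, not
the actual Disruptor API: each of the N consumers of a type only
touches the sequences where (sequence % N) equals its own index, so
the consumers never contend with each other.

// Rough sketch only: plain Java, illustrative names, not the real
// Disruptor API.  N consumers of the same type share one ring;
// consumer i handles only the sequences where seq % N == i.
final class ShardedConsumer implements Runnable {
    private final long[] ring;           // stand-in for the ring buffer slots
    private final int numConsumers;      // N consumers of this type
    private final int ordinal;           // this consumer's index, 0 .. N-1
    private volatile long lastProcessed = -1; // progress visible downstream

    ShardedConsumer(long[] ring, int numConsumers, int ordinal) {
        this.ring = ring;
        this.numConsumers = numConsumers;
        this.ordinal = ordinal;
    }

    long lastProcessed() { return lastProcessed; }

    public void run() {
        long seq = ordinal;              // first sequence owned by this consumer
        while (!Thread.currentThread().isInterrupted()) {
            // The real thing would wait on the producer's cursor here;
            // this sketch just assumes the slot has been published.
            process(ring[(int) (seq % ring.length)]);
            lastProcessed = seq;         // publish progress for the next stage
            seq += numConsumers;         // skip the N-1 items owned by the others
        }
    }

    private void process(long item) { /* application-specific work */ }
}

Note that each consumer keeps its own progress counter, which is
exactly the "split counters" issue Dmitry raises below.
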
> I thought about that even-odd style but it muddies the waters a
> bit. Now the producers, or whoever comes next on the "circle",
> have to track all of the split counters (since the consumers
> could outpace each other at different times). The other way is to
> have them contend on a single counter with CAS, but again that's
> not as nice.
>
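
The single-counter alternative Dmitry mentions could look roughly
like this. Again just a hedged sketch with illustrative names,
nothing from the Disruptor itself: the N consumers of one type race
to claim the next sequence with a CAS, so downstream only has one
counter to watch, at the cost of contention on that counter (and of
claims finishing out of order, which this sketch glosses over).

import java.util.concurrent.atomic.AtomicLong;

// Sketch of the "contend on a single counter with CAS" variant;
// names are illustrative only.  All consumers of one type share
// the same 'nextToClaim' counter.
final class CasClaimingConsumer implements Runnable {
    private final AtomicLong nextToClaim; // shared claim counter for this type
    private final long[] ring;            // stand-in for the ring buffer slots

    CasClaimingConsumer(AtomicLong nextToClaim, long[] ring) {
        this.nextToClaim = nextToClaim;
        this.ring = ring;
    }

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            long seq = nextToClaim.get();
            // Try to claim 'seq'; if the CAS fails, another consumer won it.
            if (nextToClaim.compareAndSet(seq, seq + 1)) {
                process(ring[(int) (seq % ring.length)]);
            }
        }
    }

    private void process(long item) { /* application-specific work */ }
}
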
> The other point is that the system becomes more dependent on any
> single component failing, and they get around this by running
> multiple copies of the whole system in sync. A wise move to
> ensure stability of a complex system (and keeping in mind a stock
> exchange's reliability requirements).

Would Andrei like to comment on any of the points raised so far?

