Go, D, and the GC

deadalnix via Digitalmars-d digitalmars-d at puremagic.com
Sat Oct 3 11:41:54 PDT 2015


On Saturday, 3 October 2015 at 18:26:32 UTC, Vladimir Panteleev 
wrote:
> On Saturday, 3 October 2015 at 18:21:55 UTC, deadalnix wrote:
>> On Saturday, 3 October 2015 at 13:35:19 UTC, Vladimir 
>> Panteleev wrote:
>>> On Friday, 2 October 2015 at 07:32:02 UTC, Kagamin wrote:
>>>> Low latency (also a synonym for fast) is required by 
>>>> interactive applications like client and server software
>>>
>>> Isn't a typical collection cycle's duration negligible 
>>> compared to typical network latency?
>>
>> Not really, especially if you have to block all threads, 
>> meaning hundreds of requests.
>
> I don't understand how that is relevant.
>
> E.g. how is making 1% of requests take 100ms longer worse than 
> making 100% of requests take 10ms longer?

Let's say your server has the capacity to serve 100 requests 
concurrently and a request takes 100ms to process. Then you 
dimension your infra so that each server absorbs 1 request per 
ms (100 requests / 100ms).

Now you need to stop the world for 100ms to run a GC cycle. In 
the meantime, requests keep arriving. By the end of the cycle 
you have 100 more requests queued on top of the 100 already in 
flight: twice as many.
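
To make the arithmetic concrete, here is a quick 
back-of-the-envelope sketch in D (my own illustrative numbers, 
matching the example above, not a measurement):

import std.stdio : writefln;

void main()
{
    enum requestsPerMs = 1;   // steady arrival rate the server is sized for
    enum inFlight      = 100; // requests already being processed when the pause hits
    enum pauseMs       = 100; // stop-the-world GC pause length

    // Nothing completes while all threads are stopped, but requests keep arriving.
    immutable backlog = inFlight + requestsPerMs * pauseMs;

    writefln("backlog right after the pause: %s requests (%.1fx steady state)",
             backlog, cast(double) backlog / inFlight);
}

This prints a backlog of 200 requests, i.e. 2.0x the steady 
state, which the server now has to drain on top of normal 
traffic.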

The problem is that you are creating a demand peak that you 
need to be able to absorb.

This is a serious problem. In fact, Twitter has a very complex 
system to take machines that are GCing out of the load balancer 
and put them back in at the end of the cycle. That is one way 
to do it, but it is far from ideal, as reconfiguring the load 
balancer is now effectively part of the GC cycle.

TL;DR: it is not that bad for any individual user; it is bad 
because it creates peaks of demand on your servers that you 
have to absorb.


