GC performance: collection frequency

Jonathan M Davis via Digitalmars-d digitalmars-d at puremagic.com
Mon Sep 14 12:17:46 PDT 2015


On Monday, 14 September 2015 at 18:58:45 UTC, Adam D. Ruppe wrote:
> On Monday, 14 September 2015 at 18:51:36 UTC, H. S. Teoh wrote:
>> We could also reduce the default collection frequency, of 
>> course, but lacking sufficient data I wouldn't know what value 
>> to set it to.
>
> Definitely. I think it hits a case where the heap is right at 
> the edge of the collection threshold and you are allocating a 
> small amount.
>
> Say the limit is 1,000 bytes. You are at 980 and ask it to 
> allocate 30. So it runs a collection cycle, frees the 30 bytes 
> from the previous loop iteration, then allocates them again... 
> and so, for the whole loop, the heap stays on that edge and a 
> collection runs on almost every iteration.
>
> Of course, it has to scan everything to ensure it is safe to 
> free those 30 bytes, so the GC's cost runs way out of 
> proportion to the allocation.
>
> Maybe we can make the GC detect this somehow and bump up the 
> heap size instead of collecting. I don't actually know the 
> implementation that well, though.
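
To make the scenario above concrete, here is a minimal D sketch 
of the pathological pattern (the 30-byte buffer and the loop 
bound are illustrative numbers, not taken from any real program):

void main()
{
    foreach (i; 0 .. 1_000_000)
    {
        // Each iteration allocates a small GC-managed array. If
        // the heap happens to sit right at the collection
        // threshold, each of these allocations can trigger a
        // full stop-the-world scan just to reclaim the previous
        // iteration's buffer.
        auto buf = new ubyte[30];
        buf[0] = cast(ubyte) i; // keep buf live within the iteration
    }
}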

My first inclination would be to have it just allocate more 
memory rather than run a collection when the last collection 
was too recent, but there are bound to be papers and studies on 
this sort of thing already. The exact strategy likely depends 
heavily on the type of GC. For example, if our GC were updated 
to be concurrent, as we've talked about doing for a while now, 
then triggering a concurrent collection at 80% heap usage could 
keep the program from ever actually running out of memory while 
barely slowing it down (it would only pause long enough to fork 
for the concurrent collection). With the current, 
non-concurrent GC, though, triggering a collection at 80% would 
just make things worse.
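
A rough sketch of the throttling idea, in D (every name here - 
ensureRoomFor, minCollectInterval, freeSpace, and so on - is 
hypothetical; this is a sketch of the policy, not druntime's 
actual GC code):

import core.time : MonoTime, msecs;

enum minCollectInterval = 10.msecs; // illustrative value
MonoTime lastCollection;

size_t freeSpace() { return 0; } // stub: bytes free without growing
void collect() {}                // stub: full stop-the-world collection
void growHeap(size_t bytes) {}   // stub: request more memory from the OS

void ensureRoomFor(size_t size)
{
    if (freeSpace() >= size)
        return;

    // Only collect if we haven't just collected; this avoids the
    // collect-on-every-iteration behavior described above.
    if (MonoTime.currTime - lastCollection >= minCollectInterval)
    {
        collect();
        lastCollection = MonoTime.currTime;
    }

    // If the collection was skipped or didn't free enough,
    // grow the heap instead of collecting again.
    if (freeSpace() < size)
        growHeap(size);
}

A concurrent variant could additionally kick off a background 
collection once heap usage crossed the 80% mark instead of 
waiting for an allocation to hit the limit.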

- Jonathan M Davis

