General Problems for GC'ed Applications?

Karen Lanrap karen at digitaldaemon.com
Mon Jul 24 07:02:00 PDT 2006


Unknown W. Brackets wrote:

> Yes, a collect will cause swapping - if you have that much
> memory used. Ideally, collects won't happen often (since they
> can't just happen whenever anyway; they happen when you use up
> milestones of memory) and you can disable/enable the GC and run
> collects manually when it makes the most sense for your software.
> 
> Failing that, software which is known to use a large amount of
> memory may need to use manual memory management.  Likely said
> software will perform poorly anyway.
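
For concreteness, that manual control looks roughly like this in 
D. This is a minimal sketch assuming the GC interface of the 
modern D runtime's core.memory module (GC.disable, GC.enable, 
GC.collect); note that disable() only suppresses automatic 
collections, and the runtime may still collect if it runs out of 
memory:

import core.memory : GC;

void allocationHeavyPhase()
{
    // Suppress automatic collections while allocating heavily,
    // so no collection is triggered mid-phase by an allocation.
    GC.disable();
    scope (exit) GC.enable();

    foreach (i; 0 .. 1_000_000)
    {
        auto chunk = new ubyte[](4096);
        // ... fill and use chunk ...
    }
}

void main()
{
    allocationHeavyPhase();

    // Collect at a moment of our choosing, when a pause (and the
    // swapping it may cause) hurts least.
    GC.collect();
}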

I disagree. Assume a non-GC'ed program that allocates 1.5 GB to 
1.7 GB of memory, of which 0.7 GB to 0.9 GB are vital data. If 
you run this program on a machine equipped with 1 GB, the OS will 
swap out the roughly 0.8 GB of data that is accessed 
infrequently. From then on the program causes swapping only when 
it actually touches data in the swapped-out part, and the 
resulting swap traffic is roughly bounded by twice the size of 
the data that has to be brought back in (those pages in, plus an 
equal amount of cold pages out to make room).

This changes dramatically if you GC it, because as soon as an 
allocation exhausts the available main memory a collection runs, 
and to scan the heap the GC requires the OS to swap all 0.8 GB 
back in, doesn't it?
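
The reason the collection drags everything back in: the mark 
phase has to chase every pointer in every live object, so every 
page holding live data gets touched, including the cold 0.8 GB. A 
hypothetical, heavily simplified sketch of such a mark phase (not 
the actual D runtime's collector):

struct Obj
{
    bool marked;
    Obj*[] refs; // outgoing pointers to other heap objects
}

// Mark everything reachable from the roots. Every live object is
// read and written at least once, so the OS must page in every
// page that holds live data, even data the program itself has
// not touched in hours.
void mark(Obj* obj)
{
    if (obj is null || obj.marked)
        return;
    obj.marked = true; // this write alone faults the page back in
    foreach (child; obj.refs)
        mark(child);
}

void collect(Obj*[] roots)
{
    foreach (root; roots)
        mark(root);
    // a sweep over the whole heap would follow, freeing
    // unmarked objects
}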


> I'm afraid I'm not terribly familiar with the dining
> philosopher's problem, but again I think this is a problem only
> somewhat aggravated by garbage collection.
> 
> Most of your post seems to be wholly concerned with applications
> that use at least the exact figure of Too Much Memory (tm). 

It is not only somewhat aggravated. Take the example given above, 
run two instances of that program, and increase the main memory 
not merely to the 2 GB this seems to call for, but to 4 GB or 
even more.

Again both non-GC'ed versions of the program run without any 
performance problems, but the GC'ed versions do not, although the 
memory size was increased by a factor that lets the OS keep every 
byte allocated by the non-GC'ed versions resident.

This is because both programs keep increasing their allocations 
of main memory, at least slowly.

This goes without performance problems until the available main 
memory is exhausted. The first program that hits the limit starts 
GC'ing its allocated memory and thereby forces the OS to swap 
everything back in. This first program now runs the risk that all 
memory freed by its GC is immediately eaten up by the other 
instance, which continues running unaffected, because its thirst 
for main memory is satisfied by the other instance's GC, provided 
that GC releases memory to the OS as it reclaims it.

At the time this GC run ends, at least two cases can be 
distinguished:
a) the main memory at the end of the run is still insufficient, 
because the other application ate it all up. Then this instance 
stops with "out of memory".
b) the main memory at the end of the run is by chance sufficient, 
because the other application was not that hungry. Then this 
instance performs well again, but only for the short time until 
the limit is reached again.

This is a simple example with only one processor and two 
competing applications, and I believe that case a) can happen.

So I see no way to prove that case a) will never happen on 
multi-core machines running several GC'ed applications.

And even if case a) never happens, there may always be at least 
one application currently running its GC. Hence swapping is 
always underway.

 
> A sweeping statement that garbage collection causes
> a dining philosopher's problem just doesn't seem correct to me.

Then prove me wrong.


