either me or GC sux badly (GC don't reuse free memory)

Matthias Bentrup via Digitalmars-d digitalmars-d at puremagic.com
Wed Nov 12 04:42:10 PST 2014


On Wednesday, 12 November 2014 at 12:30:15 UTC, ketmar via 
Digitalmars-d wrote:
> On Wed, 12 Nov 2014 12:05:25 +0000
> thedeemon via Digitalmars-d <digitalmars-d at puremagic.com> wrote:
>
>> On Wednesday, 12 November 2014 at 11:05:11 UTC, ketmar via 
>> Digitalmars-d wrote:
>> >   734003200
>> > address space" (yes, i'm on 32-bit system, GNU/Linux).
>> >
>> > the question is: am i doing something wrong here? how can i 
>> > force GC to stop eating my address space and reuse what it 
>> > already has?
>> 
>> Sure: just make the GC precise, not conservative. ;)
>> With the current GC implementation and an array this big, the 
>> chances of having a word on the stack that looks like a pointer 
>> into it, preventing it from being collected, are almost 100%. 
>> Just don't store big arrays in the GC heap, or switch to 64 
>> bits, where the problem is not as bad: the address space is 
>> much larger, so the chances of false pointers are much smaller.
> but 'mkay, let's change the sample a little:
>
>   import core.memory;
>   import std.stdio;
>
>   void main () {
>     uint size = 1024*1024*300;
>     for (;;) {
>       auto buf = new ubyte[](size);
>       writefln("%s", size);
>       size += 1024*1024*100;
>       GC.free(GC.addrOf(buf.ptr));
>       buf = null;
>       GC.collect();
>       GC.minimize();
>     }
>   }
>
> this shouldn't fail so soon, right? i'm freeing the memory, 
> so... it's still dying at 1,887,436,800. 1.7GB and that's all? 
> this can't be true: i have 3GB of free RAM (with 1.2GB used) 
> and 8GB of unused swap. and yes, it consumed all of the process 
> address space again.

On Linux/x86 you have only 3 GB of virtual address space, and 
that has to hold the program code and all loaded shared libraries 
as well. Check /proc/<pid>/maps to see where the libraries are 
mapped, and look for the largest contiguous chunk of free space. 
That is the theoretical upper limit on a single allocation.
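That check can be automated. A minimal sketch (Python for brevity, 
Linux-only; the function name largest_gap is mine, not from this 
thread) that parses /proc/self/maps and reports the largest 
unmapped gap, i.e. an upper bound on the biggest single allocation 
that could still succeed:

```python
# Hedged sketch, Linux-specific: each line of /proc/<pid>/maps starts
# with "start-end" addresses in hex. Walking the mappings in order and
# measuring the holes between them gives the largest contiguous free
# region of the address space.
def largest_gap(maps_path="/proc/self/maps"):
    prev_end = None
    max_gap = 0
    with open(maps_path) as f:
        for line in f:
            start_s, end_s = line.split()[0].split("-")
            start, end = int(start_s, 16), int(end_s, 16)
            if prev_end is not None and start > prev_end:
                max_gap = max(max_gap, start - prev_end)
            prev_end = max(prev_end or 0, end)
    return max_gap

if __name__ == "__main__":
    print(f"largest free gap: {largest_gap() / 2**20:.0f} MiB")
```

On a fragmented 32-bit process this gap can be far smaller than the 
nominal 3 GB, which is why a growing reallocation like the loop above 
fails well before RAM or swap is exhausted.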
