Debugging a memory leak.

David Brown dlang at davidb.org
Mon Oct 8 09:04:27 PDT 2007


I've been developing an application on x86_64 (gdc) without any problems.
Yesterday I built it on x86 and discovered that it has a memory leak on
that platform.  The leak is severe enough that the program quickly
exhausts memory and is killed.

Any suggestions from the list on how to debug/fix this?

Some background as to what the program is doing:

   - It reads large amounts of data into dynamically allocated buffers
     (currently 256K each).

   - These buffers are passed to std.zlib.compress, which returns a new
     buffer.  (A rough sketch of the pattern follows this list.)
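
Roughly, the allocation pattern looks like this.  This is a simplified
sketch, not the real code; compressChunks and the chunking of a single
source array are made up for illustration, but the std.zlib.compress call
is the one I'm actually using:

import std.zlib;

// Sketch only: a fresh 256K buffer per chunk, each compressed into another
// freshly allocated GC buffer.  Nothing is freed explicitly.
void[][] compressChunks(ubyte[] source)
{
    const size_t CHUNK = 256 * 1024;
    void[][] results;

    for (size_t off = 0; off < source.length; off += CHUNK)
    {
        size_t end = off + CHUNK;
        if (end > source.length)
            end = source.length;

        // Copy into a freshly allocated buffer, one per read.
        ubyte[] raw = source[off .. end].dup;

        // std.zlib.compress allocates and returns a new void[].
        results ~= std.zlib.compress(raw);
    }
    return results;
}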

Some suspicions I have:

   - The GC is conservative, so any word that happens to look like a pointer
     into one of these buffers keeps it alive.  With the much larger address
     space on x86_64, random data rarely falls within the heap, but on x86
     such false pointers are far more common, so more and more buffers get
     pinned and stick around.

   - It's worse with a Gentoo-built compiler (USE=d emerge gcc) than with
     the gdc-0.24 binary distribution.  Both are built with gcc 4.1.2.

Ideas for possibly fixing this:

   - Manually 'delete' these buffers.  In my case this wouldn't be
     particularly difficult, since I know exactly when they go out of use
     (see the sketch after this list).

   - Call std.gc.hasNoPointers(void*) on the block.  I would think the
     runtime can already assume this for a char[], but std.zlib.compress
     returns a void[], which the compiler can't make that assumption about
     (also shown in the sketch below).

   - Try Tango?  Is the GC different there?
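
To make the first two ideas concrete, here is a rough sketch assuming the
D1 Phobos that ships with gdc (std.gc and the delete expression);
compressChunk itself and the point where the raw buffer dies are made up:

import std.gc;
import std.zlib;

// Rough sketch of the first two ideas; compressChunk is a made-up
// function, the std.gc / delete calls are the ones discussed above.
void[] compressChunk(ubyte[] raw)
{
    void[] packed = std.zlib.compress(raw);

    // compress returns a void[], so the GC must scan its contents for
    // pointers; marking the block pointer-free should stop random
    // compressed bytes from pinning other blocks.
    std.gc.hasNoPointers(packed.ptr);

    // The raw 256K buffer is dead at this point, so free it explicitly
    // instead of waiting for the collector (the caller must not touch
    // it again).
    delete raw;

    return packed;
}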

David


