Out of memory error (even when using destroy())
rikki cattermole via Digitalmars-d-learn
digitalmars-d-learn at puremagic.com
Fri May 26 01:20:01 PDT 2017
On 26/05/2017 9:15 AM, realhet wrote:
> Thanks for the answer!
>
> But hey, the GC knows that it should not search for any pointers in
> those large blocks.
> And the buffer is full of zeros at the start, so there can't be any
> 'false pointers' in it. And I think the GC will not scan it either.
>
> The only reference to the buffer is 'st', which will die shortly after
> it has been allocated.
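
For what it's worth, new ubyte[] allocations do get the NO_SCAN
attribute, so the collector should indeed skip their contents. A minimal
check of that, assuming core.memory's GC.getAttr (not from the original
thread):

import core.memory;
import std.stdio;

void main(){
    auto buf = new ubyte[1024];
    // getAttr reports the GC block attributes for a GC-owned pointer.
    bool noScan = (GC.getAttr(buf.ptr) & GC.BlkAttr.NO_SCAN) != 0;
    writeln("NO_SCAN set: ", noScan); // expected: true for ubyte[]
}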
>
> 64-bit is not a solution because I need to produce a 32-bit DLL, and I
> also want to use 32-bit asm objects.
> The total 2 GB of memory is more than enough for the problem.
> My program has to produce 300..500 MB of contiguous data frequently.
> This works in MSVC32, but with D's GC it starts to eat memory and fails
> at the 4th iteration. Actually, it never releases the previous blocks,
> even when I ask it to with destroy().
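
Note that destroy() only runs destructors and resets the object to its
initial state; it does not return the memory to the GC. To release a
large block eagerly you can free it explicitly. A sketch, assuming
core.memory's GC.free and that no other slice still references the
block:

import core.memory;

void alloc_dealloc(size_t siz){
    auto st = new ubyte[siz];
    // ... use st ...
    GC.free(st.ptr); // eagerly hands the block back to the GC
    st = null;       // drop the now-dangling slice
}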
>
> At this point I can only think of:
> a) Working with the D allocator but emulating large blocks by
> virtually stitching small blocks together (unnecessary complexity).
> b) Allocating memory via the Win32 API and not using D goodies anymore
> (also unnecessary complexity; see the sketch below).
>
> But these are ugly workarounds. :S
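
Option b) is less complexity than it sounds. A sketch of going straight
to the OS, assuming the core.sys.windows.windows bindings (the function
names here are mine, not from the thread):

import core.sys.windows.windows;

ubyte[] allocBig(size_t siz){
    auto p = VirtualAlloc(null, siz, MEM_COMMIT | MEM_RESERVE,
                          PAGE_READWRITE);
    if (p is null) assert(0, "VirtualAlloc failed");
    return (cast(ubyte*)p)[0 .. siz];
}

void freeBig(ubyte[] buf){
    // With MEM_RELEASE the size argument must be 0.
    VirtualFree(buf.ptr, 0, MEM_RELEASE);
}

This releases the memory deterministically and never involves the GC, so
nothing can pin the old blocks between iterations.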
>
> I also tried to allocate blocks smaller than the previous one, so each
> would easily fit into the previously released space, and yet it keeps
> eating memory:
>
> void alloc_dealloc(size_t siz){
>     auto st = new ubyte[siz];
> }
>
> void main(){
>     foreach(i; 0..4) alloc_dealloc(500_000_000 - 50_000_000*i);
> }
If you have to use such large amounts of memory frequently, you really
have to go with buffers that you control yourself, not the GC. Memory
allocation is always expensive; if you can avoid it, all the better.
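
A sketch of what I mean, assuming core.stdc.stdlib (the type and method
names are just for illustration): allocate the big block outside the GC
and reuse it across iterations, growing it only when needed.

import core.stdc.stdlib : malloc, free;

struct BigBuffer{
    ubyte* ptr;
    size_t len;

    @disable this(this); // avoid double-free on copy

    // Grow the backing block only if the current one is too small.
    void reserve(size_t siz){
        if (siz <= len) return;
        free(ptr);
        ptr = cast(ubyte*)malloc(siz);
        if (ptr is null) assert(0, "out of memory");
        len = siz;
    }

    ubyte[] opSlice(){ return ptr[0 .. len]; }

    ~this(){ free(ptr); }
}

This keeps you at a handful of allocations over the program's lifetime
instead of one 500 MB allocation per iteration.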