Random points from a D n00b CTO

John Colvin via Digitalmars-d digitalmars-d at puremagic.com
Tue Jul 15 10:24:43 PDT 2014


On Tuesday, 15 July 2014 at 17:13:46 UTC, Kagamin wrote:
> On Monday, 14 July 2014 at 14:19:49 UTC, John Colvin wrote:
>> However, I would say that it is not recommended. Very large 
>> heaps aren't conducive to good GC performance (especially with 
>> D's current GC). I now use a hybrid approach where the body of 
>> my data is on the C heap - managed manually - and all the 
>> scraps and difficult-to-track transient data are managed by 
>> the GC for convenience and correctness.
>
> Even if the big blocks are allocated with NO_SCAN flag?

I'm not sure; I haven't tried it. To be honest, my problem was 
that the allocation itself was slow (even when it was just one 
big chunk), not that the GC was running slow collections. 
core.stdc.malloc was much faster.
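
Roughly the kind of split I mean (just a sketch, names made up; 
note that if the C-heap block held pointers into GC memory you'd 
also want GC.addRange on it):

import core.stdc.stdlib : malloc, free;

struct BigBuffer
{
    double* data;   // lives on the C heap, invisible to the GC
    size_t length;

    static BigBuffer create(size_t n)
    {
        auto p = cast(double*) malloc(n * double.sizeof);
        assert(p !is null, "malloc failed");
        return BigBuffer(p, n);
    }

    void release()
    {
        free(data);
        data = null;
        length = 0;
    }
}

void main()
{
    auto buf = BigBuffer.create(50_000_000); // body of the data: manual
    scope(exit) buf.release();

    string[] scraps;    // transient scraps: GC-managed for convenience
    scraps ~= "some label";
}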

FYI:
Beware of timing malloc on Linux: you're likely not measuring 
the allocation cost at all. The kernel maps pages lazily, so it's 
the first write to each page that triggers the actual allocation 
of that page, independent of when you called malloc.
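
A quick illustration of what I mean (rough sketch, not a careful 
benchmark): the second timing below is usually where most of the 
"allocation" cost shows up.

import core.stdc.stdlib : malloc, free;
import std.datetime.stopwatch : StopWatch, AutoStart;
import std.stdio : writefln;

void main()
{
    enum size_t size = 1 << 30;   // 1 GiB

    auto sw = StopWatch(AutoStart.yes);
    auto p = cast(ubyte*) malloc(size);
    writefln("malloc took:      %s", sw.peek);

    sw.reset();
    foreach (i; 0 .. size / 4096)
        p[i * 4096] = 1;  // first touch of each page faults it in here
    writefln("first touch took: %s", sw.peek);

    free(p);
}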
