What are the worst parts of D?

via Digitalmars-d digitalmars-d at puremagic.com
Sat Oct 11 02:26:26 PDT 2014


On Saturday, 11 October 2014 at 03:39:10 UTC, Dicebot wrote:
> I am not speaking about O(1) internal heap increases, but 
> about O(1) GC.malloc calls.
> The typical pattern is to encapsulate a "temporary" buffer 
> together with the algorithm in a single class object and never 
> release it, reusing it for new incoming requests (wiping the 
> buffer data each time). Such a buffer quickly grows large 
> enough to hold all of the algorithm's temporaries for a single 
> request and never touches the GC from there on.
>
> In a well-written program which follows such pattern there are 
> close to zero temporaries and GC only manages more persistent 
> entities like cache elements.
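The pattern Dicebot describes could be sketched like this (a minimal illustration; the class and member names are invented for the example, and the growth policy is arbitrary):

```d
// Sketch of the reusable-buffer pattern: the scratch buffer lives
// as long as the object, is grown on demand, and is wiped (not
// freed) between requests, so steady-state requests never allocate.
class RequestProcessor
{
    private ubyte[] tmp;  // grows once, then reused; never released

    void process(const(ubyte)[] request)
    {
        // Ensure capacity; after a warm-up period this branch
        // stops triggering reallocation.
        if (tmp.length < request.length * 2)
            tmp.length = request.length * 2;

        tmp[] = 0;  // wipe the previous request's data

        // ... run the algorithm using tmp as scratch space ...
    }
}
```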

I understand that. My argument is that the same should apply to 
the entire heap: after you've allocated and released a certain 
number of objects via GC.malloc() and GC.free(), the heap will 
have grown large enough that any subsequent allocations of 
temporary objects can be satisfied from the existing heap 
without triggering a collection, so only the overhead of the 
actual allocation and freeing should matter.
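The steady state being argued for can be exercised directly with the core.memory API (the pair count and block size here are arbitrary; this is a sketch of the claim, not a benchmark):

```d
import core.memory : GC;

// Warm up the GC heap with n malloc/free pairs of the given size.
// After this, allocations of that size class should be satisfied
// from the heap's existing free blocks rather than by growing the
// heap or running a collection.
void warmUp(size_t n, size_t size)
{
    foreach (i; 0 .. n)
    {
        void* p = GC.malloc(size); // grows the heap at first...
        GC.free(p);                // ...then recycles the same block
    }
}
```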

