Is a moving GC really needed?
Frits van Bommel
fvbommel at REMwOVExCAPSs.nl
Mon Oct 2 07:26:41 PDT 2006
Lionello Lunesu wrote:
> Chris Miller wrote:
>> On Mon, 02 Oct 2006 05:23:11 -0400, Lionello Lunesu
>> <lio at lunesu.remove.com> wrote:
>>> Being able to move memory blocks reduces memory fragmentation, am I
>>> correct? Is this the only reason? (For the remainder of this post,
>>> I'm assuming it is.)
>>>
>>
>> I believe it allows for very fast allocations, almost as fast as stack
>> allocation. This is something I'm very interested in.
>
> Why would it allocate faster than a non-moving GC? (Ignoring for the
> moment the cost of moving.)
"Normal" allocators have to deal with fragmentation.
With a moving GC you can allocate everything at the top of the heap and
rely on the GC to compact the heap when you run out of space.
So the allocator can basically be "increment the top-of-heap pointer by the
size of the allocation and return the old value" (synchronized if in a
multithreaded environment, of course).
That's pretty much the most efficient allocator possible (as long as the
GC doesn't need to run, anyway ;) )
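To make that concrete, here's a minimal sketch of the bump-pointer idea in
D. The names (BumpHeap, alloc) are made up for illustration; this is not
the actual D runtime allocator, just the "bump and return the old value"
scheme described above:

// Hypothetical sketch, not the real D GC allocator.
struct BumpHeap
{
    ubyte* top;   // next free byte
    ubyte* end;   // one past the end of the reserved region

    // Returns null when the region is exhausted; that is the point where
    // a moving (compacting) GC would run and slide live objects together.
    void* alloc(size_t size)
    {
        // keep every allocation pointer-aligned
        size = (size + size_t.sizeof - 1) & ~(size_t.sizeof - 1);
        if (top + size > end)
            return null;
        ubyte* p = top;   // remember the old value...
        top += size;      // ...and bump the top-of-heap pointer
        return p;
    }
}

When alloc returns null the region is full; compaction then slides the live
objects back together and leaves one contiguous free block at the top, so
allocation stays a pointer bump instead of a free-list search.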
>> It may also improve caching as allocated memory is more together.
>
> This I understand, but what's the granularity (if that's the correct
> term) of a CPU cache?
>
> http://en.wikipedia.org/wiki/CPU_cache#Example:_the_K8 : "The data cache
> keeps copies of 64 byte lines of memory."
>
> I guess the chances are pretty low for two blocks to be moved close
> together and be cached together, and be accessed
IIRC CPUs typically have multiple levels of cache, and the higher levels
may have larger cache lines. Not sure how big they get, though.
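As a rough illustration of that granularity, assuming the 64-byte line from
the K8 example above (a made-up helper; real line sizes vary per CPU and
cache level), two addresses share a line exactly when they land in the same
64-byte-aligned block:

// Hypothetical helper, assuming a 64-byte cache line.
bool sameCacheLine(void* a, void* b, size_t lineSize = 64)
{
    // Small objects placed back-to-back by a compacting GC will often
    // satisfy this, which is where the locality benefit comes from.
    return (cast(size_t) a / lineSize) == (cast(size_t) b / lineSize);
}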