[xmlp] the recent garbage collector performance improvements

H. S. Teoh hsteoh at quickfur.ath.cx
Wed Feb 1 15:44:55 PST 2012


On Thu, Feb 02, 2012 at 12:09:01AM +0100, dsimcha wrote:
> On Wednesday, 1 February 2012 at 22:53:11 UTC, Richard Webb wrote:
> >For reference, the file i was testing with has ~50000 root nodes,
> >each of which has several children.
> >The number of nodes seems to have a much larger effect on the
> >speed than the amount of data.
> >
> 
> Sounds about right.  For very small allocations sweeping time
> dominates the total GC time.  You can see the breakdown at
> https://github.com/dsimcha/druntime/wiki/GC-Optimizations-Round-2 .
> The Tree1 benchmark is the very small allocation benchmark.
> Sweeping takes time linear in the number of memory blocks allocated
> and, for blocks <1 page, constant time in the size of the blocks.

Out of curiosity, is there a way to optimize for the "many small
allocations" case? For example, suppose a function builds a tree with a
large number of nodes as temporary storage; the whole tree becomes
garbage as soon as the function returns. Is there a way to reclaim the
entire space used by the tree in one go, rather than sweeping each node
individually?

Not sure if such a thing is possible.
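
Something like the following (untested, hand-rolled sketch with made-up
names -- as far as I know nothing in druntime provides this today) is
roughly what I'm imagining: a bump-the-pointer region that all the
temporary nodes are allocated from, so the whole tree is released with a
single free() instead of being swept one block at a time:

    import core.stdc.stdlib : malloc, free;

    // Hypothetical bump-the-pointer region: every node comes out of one
    // malloc'd block. Error handling omitted for brevity.
    struct Region
    {
        ubyte* base;
        size_t used;
        size_t capacity;

        static Region create(size_t bytes)
        {
            Region r;
            r.base = cast(ubyte*) malloc(bytes);
            r.capacity = bytes;
            return r;
        }

        void* alloc(size_t bytes)
        {
            bytes = (bytes + 15) & ~cast(size_t) 15;  // keep 16-byte alignment
            if (used + bytes > capacity)
                return null;   // out of space; a real region would chain blocks
            void* p = base + used;
            used += bytes;
            return p;
        }

        void release()
        {
            free(base);        // the entire tree goes away in one call
            base = null;
            used = capacity = 0;
        }
    }

    struct Node
    {
        int value;
        Node* left, right;
    }

    Node* makeNode(ref Region r, int value)
    {
        auto n = cast(Node*) r.alloc(Node.sizeof);
        *n = Node(value, null, null);
        return n;
    }

    void main()
    {
        auto r = Region.create(1 << 20);   // 1 MiB of scratch space
        auto root = makeNode(r, 1);
        root.left  = makeNode(r, 2);
        root.right = makeNode(r, 3);
        // ... build and use the rest of the temporary tree ...
        r.release();
    }

The obvious catches are that the nodes must not hold references to
GC-managed memory (or the region would have to be registered as a GC
root), and nothing may keep a pointer into the region past release().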


T

-- 
Tell me and I forget. Teach me and I remember. Involve me and I understand. -- Benjamin Franklin

