[Issue 5623] Slow GC with large heaps

d-bugmail at puremagic.com
Wed Feb 23 20:12:52 PST 2011


http://d.puremagic.com/issues/show_bug.cgi?id=5623


Steven Schveighoffer <schveiguy at yahoo.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |schveiguy at yahoo.com


--- Comment #11 from Steven Schveighoffer <schveiguy at yahoo.com> 2011-02-23 20:10:03 PST ---
From a cursory glance at the patch, it looks like it won't affect array
appending.

BTW, a while ago I had a very similar thought about storing a value that lets
you jump back to find the start of a PAGEPLUS block, but I had a different
method in mind.

First, a Bins value is already stored for every page; it's an int, and we're
using exactly 13 of the 4 billion possible values.  My idea was to remove
B_PAGEPLUS from the enum: any Bins value other than the defined enum members
would be interpreted as B_MAX plus the number of pages to jump back.

This saves having to keep/update a separate array.
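
Here's a rough sketch of what I mean (names are hypothetical; the real
druntime enum differs slightly):

enum Bins : uint
{
    B_16, B_32, B_64, B_128, B_256, B_512, B_1024, B_2048,
    B_PAGE,   // first page of a large allocation
    B_FREE,   // free page
    B_MAX     // entries >= B_MAX encode jump-back distances
}

// Encode "this page is n pages past the block start" (n >= 1).  If n
// is too large to represent, store the maximum; lookup then hops back
// in several steps (see below).
uint encodeJumpBack(size_t n)
{
    size_t entry = Bins.B_MAX + n;
    return entry > uint.max ? uint.max : cast(uint) entry;
}

// Decode a pagetable entry: 0 for ordinary bins, otherwise the number
// of pages to jump back.
size_t decodeJumpBack(uint entry)
{
    return entry < Bins.B_MAX ? 0 : entry - Bins.B_MAX;
}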

In addition, your observation that we only get 16 TB of space doesn't matter.
It means the *jump size* is capped at 16 TB (2^32 pages times the 4 kB page
size).  That is, if a block exceeds 16 TB, you just store the maximum.  The
algorithm just has to be adjusted to jump back that amount, check the page at
that location (which will also know how far to jump back), and continue on.
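
The lookup then becomes a short loop instead of a linear scan (again a
sketch, reusing the hypothetical encoding above):

// Find the starting page of the large block containing pageIndex.
// Assumes pageIndex points at a B_PAGE entry or a continuation entry.
size_t findBlockStart(const(uint)[] pagetable, size_t pageIndex)
{
    while (pagetable[pageIndex] != Bins.B_PAGE)
    {
        // The stored jump may be capped at the maximum representable
        // value, so one hop may land on another continuation entry;
        // keep hopping until we reach the B_PAGE that starts the block.
        pageIndex -= pagetable[pageIndex] - Bins.B_MAX;
    }
    return pageIndex;
}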

Can you imagine how awesome the performance would be with the linear search
on a system holding a 16 TB block? ;)

I think this patch should be applied (will be voting shortly).
