I've been looking for more low-hanging fruit to improve GC performance, and I think I may have found yet another huge performance bug, but I want to make sure I'm not horribly misunderstanding the code before I try to "fix" it. This one has nothing to do with the patch I already submitted. <br>
<br>Would someone please take a look at the loop on lines 2479-2500 of gcx.d (<a href="https://github.com/D-Programming-Language/druntime/blob/master/src/gc/gcx.d">https://github.com/D-Programming-Language/druntime/blob/master/src/gc/gcx.d</a>)? It looks to me like B_PAGEPLUS blocks of memory are <b>getting scanned O(N^2) times, completely by accident.</b> As far as I can tell, the variable u is the number of pages in the block currently being examined, and o is a pointer at some offset from the base of the pool. Since each o that lands anywhere in a big block appears to trigger a mark() over the length of the whole block, it looks like we're scanning the same memory over and over for absolutely no reason. I also wonder whether memory that isn't supposed to be scanned is getting scanned: o can be greater than the base of the block, yet we mark from o to o + u * PAGESIZE, where u * PAGESIZE is the length of the whole block, so the range can run past the block's end.<br>
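<br>To make the cost concrete, here is a toy, self-contained simulation of the pattern as I read it. This is NOT the druntime code: the pagetable layout, the assumption that every page of the block has a pointer queued for scanning, and all the names here (simulate, blockEnd, overshoot) are made up purely to count how many bytes would end up being handed to mark().<br>

// Toy simulation (not gcx.d itself) of the suspected B_PAGEPLUS scanning pattern.
import std.algorithm : max;
import std.stdio;

enum size_t PAGESIZE = 4096;
enum Bins : ubyte { B_PAGE, B_PAGEPLUS, B_FREE }

void simulate(size_t npages)
{
    // One big allocation: page 0 is B_PAGE, the rest are B_PAGEPLUS.
    auto pagetable = new Bins[npages];
    pagetable[0] = Bins.B_PAGE;
    pagetable[1 .. $] = Bins.B_PAGEPLUS;

    immutable size_t blockEnd = npages * PAGESIZE;
    size_t total = 0;      // total bytes handed to the pretend mark()
    size_t overshoot = 0;  // how far past the block's end we ever "scan"

    // Assume a pointer into every page of the block got queued for scanning,
    // so the mark loop visits each page of the block once.
    foreach (size_t hit; 0 .. npages)
    {
        size_t o  = hit * PAGESIZE;  // offset of the scan hit from the pool base
        size_t pn = hit;

        // Walk back to the B_PAGE that starts the block (gcx.d backs pn up in
        // a similar way when it sees B_PAGEPLUS).
        if (pagetable[pn] == Bins.B_PAGEPLUS)
            while (pagetable[pn] != Bins.B_PAGE)
                pn--;

        // u ends up being (roughly) the length of the whole block, in pages.
        size_t u = 1;
        while (pn + u < npages && pagetable[pn + u] == Bins.B_PAGEPLUS)
            u++;

        // Stand-in for mark(o, o + u * PAGESIZE): the range starts at the hit,
        // not at the block base, but its length is that of the whole block.
        immutable size_t scanEnd = o + u * PAGESIZE;
        total += scanEnd - o;
        overshoot = max(overshoot, scanEnd > blockEnd ? scanEnd - blockEnd : 0);
    }

    writefln("block %8s bytes: scanned %12s bytes, worst overshoot %8s bytes",
             blockEnd, total, overshoot);
}

void main()
{
    foreach (size_t npages; [16, 64, 256, 1024])
        simulate(npages);
}

<br>If my reading is right, the totals this prints (dmd -run) grow as the square of the block size, and the worst-case range reaches almost a whole block past the end, which is exactly the behavior I'm worried the real loop has for big blocks full of pointers.<br>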
<br>