GC.calloc with random bits causes slowdown, also seen in built-in AA

Moritz Warning moritzwarning at web.de
Thu Mar 11 05:02:23 PST 2010


Hi,

The reason for using calloc in PyDict was that the GC slows down
allocation far too much.
Maybe it is/was a bug in the GC. I used manual memory management
to work around this problem and got a huge speed improvement.
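Roughly, the manual-management version allocates the entry table with
C calloc and registers it with the GC so that any GC-managed keys or
values stored in it are still scanned. A minimal sketch, assuming
D2/druntime imports (the D1 equivalents differ slightly); it is not
the actual PyDict source and the names are placeholders:

import core.memory : GC;
import core.stdc.stdlib : calloc, free;

struct Entry(K, V)
{
    size_t hash;
    K key;
    V value;
}

// Zero-initialized table of 'size' entries, allocated outside the GC heap.
Entry!(K, V)* allocTable(K, V)(size_t size)
{
    auto p = cast(Entry!(K, V)*) calloc(size, Entry!(K, V).sizeof);
    assert(p !is null);
    // Register the block so the GC still scans pointers held in keys/values.
    GC.addRange(p, size * Entry!(K, V).sizeof);
    return p;
}

void freeTable(K, V)(Entry!(K, V)* p)
{
    GC.removeRange(p);
    free(p);
}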


On Thu, 11 Mar 2010 01:33:32 +0000, Michael Rynn wrote:

> Looked at the Associative Arrays project on http://www.dsource.org/
> projects/aa, with the idea of testing and maybe uploading a trie class
> (radix tree)..
Nice. :-)

> In the PyDictD1 test code there was a call to disable the garbage
> collector. That's cheating.
Looks like it was forgotten.
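Presumably something like this was left over in the test driver (a
guess at the shape only, not the actual benchmark code):

import core.memory : GC;   // D1 used std.gc instead

void runBenchmark()
{
    GC.disable();   // collections never run, so GC cost vanishes from the timings
    // ... insert/lookup loop being measured ...
    GC.enable();
}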


> I looked at the code for PyDictD2.d and I decided that the table of
> struct Entry, holding the key and value, for which calloc is used,
> could be replaced by a more D-like (it's a template anyway) call to new
> Entry[size].  The size is always a power of 2.
The aim was to simplify the code and to avoid hidden calls to
runtime functions.
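For comparison, the GC-backed version you describe would look roughly
like this (a sketch reusing the Entry struct from above); the point
about hidden calls is that the array allocation compiles down to a
druntime call, while the calloc version keeps that cost explicit:

// Reusing the Entry struct from the earlier sketch.
Entry!(K, V)[] allocTableGC(K, V)(size_t size)
{
    // Default-initialized to Entry.init; the allocation is a hidden
    // druntime call and the memory is scanned and collected by the GC.
    return new Entry!(K, V)[size];
}

size_t slotFor(size_t hash, size_t size)
{
    // size is always a power of 2, so masking replaces a modulo.
    return hash & (size - 1);
}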


> Ran the test again after removing the calloc, and the speed improved and
> the progressive run time increase went away.
Well, I got a slowdown instead the last time I tested this.
Maybe a bug has been fixed since then?


> So now the PyDict does not use calloc and is faster than the built-in.
> I added a few missing properties (length, opIndex)
"public size_t size()" is the length you were looking for.
Also, opIndex might crash if the key is not present.
I don't write D2 code, but shouldn't opIndex return a pointer to the 
entry?
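
To illustrate the distinction (a rough D2 sketch against a stand-in
dict type, not the PyDict code): an AA-style opIndex assumes the key
exists, while a pointer-returning lookup lets the caller check for
misses.

struct Dict(K, V)
{
    private V[K] impl;  // stand-in storage, just for this sketch

    // AA-style indexing: fails at runtime (RangeError) if the key is missing.
    V opIndex(K key)
    {
        return impl[key];
    }

    // 'in'-style lookup: returns a pointer, or null for a missing key.
    V* opIn_r(K key)
    {
        return key in impl;
    }

    size_t length()
    {
        return impl.length;
    }
}

With the pointer version a caller can write if (auto p = key in dict)
instead of risking a failed lookup.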




