AA Performance in Benchmarks
Chris via Digitalmars-d
digitalmars-d at puremagic.com
Fri Apr 24 02:18:47 PDT 2015
On Thursday, 23 April 2015 at 14:40:58 UTC, Shammah Chancellor
wrote:
> So, I was tooling around with one of the benchmarks I saw
> listed on twitter:
>
> https://github.com/kostya/benchmarks/tree/master/brainfuck
>
> This D benchmark spends most of its time on AA lookups.
> Discussions about the implementation aside, I am curious as to
> why the D AA is so slow even after calling AA.rehash.
>
> I took a look at the Nim tables implementation, and it looks
> like they use power-of-two-sized tables and the property of
> primitive roots modulo n to get the next slot in the case of a
> collision:
>
> ```
> proc nextTry(h, maxHash: THash): THash {.inline.} =
>   result = ((5 * h) + 1) and maxHash
> ```
>
> So, my questions:
>
> 1) What are the general-purpose performance implications of
> something like this?
> 2) I saw a thread a while ago where someone had posted an
> updated implementation of the AA with benchmarks showing
> that it was substantially faster in all cases. I can't find
> the thread again -- what happened with this work?
>
> -Shammah
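
For context, that nextTry is an open-addressing probe over a
power-of-two table: because the multiplier 5 is congruent to 1 mod 4
and the increment 1 is odd, the recurrence (5*h + 1) mod 2^k has full
period, so the probe visits every slot before wrapping around. Here is
a minimal D sketch of the same idea (the function name, the toy
capacity and the mask are mine for illustration, not Nim's or
druntime's actual code):

```
import std.stdio;

// Same recurrence as the quoted Nim code: (5*h + 1) mod 2^k,
// implemented with a mask because the capacity is a power of two.
size_t nextTry(size_t h, size_t mask)
{
    return (5 * h + 1) & mask;
}

void main()
{
    enum size_t capacity = 8;        // must be a power of two
    enum size_t mask = capacity - 1;

    // Starting from any slot, the probe sequence visits all
    // `capacity` slots exactly once before cycling back.
    size_t slot = 3;
    foreach (i; 0 .. capacity)
    {
        writef("%s ", slot);
        slot = nextTry(slot, mask);
    }
    writeln();
}
```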
Interesting. While doing some benchmarking I noticed that looking
up data stored in an AA was slower than generating the data on the
fly. I was really surprised.
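
Roughly the kind of comparison I mean, as a minimal sketch (the
compute function below is an arbitrary stand-in, not the workload I
was actually benchmarking):

```
import core.time : MonoTime;
import std.stdio;

// Stand-in for whatever the data actually costs to produce.
int compute(int key)
{
    int v = key;
    foreach (i; 0 .. 16)
        v = v * 31 + 7;
    return v;
}

void main()
{
    enum n = 1_000_000;

    // Precompute every answer into an AA, used as a cache.
    int[int] cache;
    foreach (key; 0 .. n)
        cache[key] = compute(key);
    cache.rehash;

    long lookupSum, computeSum;

    auto t0 = MonoTime.currTime;
    foreach (key; 0 .. n)
        lookupSum += cache[key];      // AA lookup
    auto t1 = MonoTime.currTime;

    foreach (key; 0 .. n)
        computeSum += compute(key);   // recompute on the fly
    auto t2 = MonoTime.currTime;

    writefln("AA lookup: %s ms, recompute: %s ms (sums: %s, %s)",
             (t1 - t0).total!"msecs", (t2 - t1).total!"msecs",
             lookupSum, computeSum);
}
```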