(Improved) Benchmark for Phobos Sort Algorithm

Matthias Walter xammy at xammy.homelinux.net
Thu Dec 16 19:37:10 PST 2010


On 12/16/2010 09:36 PM, Craig Black wrote:
> It was brought to my attention that the quick sort has a very bad
> worst case, so I implemented a simple fix for it.  Now the worst case
> (completely ordered) is the best case, and it only slows down the
> general case by a small percentage.  I thought to myself, "it can't be
> this easy to fix quick sort".  Does anyone see a flaw in this simple
> fix?  Performs much better than Phobos in completely random and
> completely sorted data.  Perhaps there is another case where it
> doesn't do as well?

Yes, there is a "flaw": there are still arrays for which the pivot
element ends up being one of the largest or one of the smallest elements
in *every* call. This means that an array of length n is split not into
two subarrays of roughly n/2 each, but into O(1) and n - O(1). This
implies a running time of O(n^2) (in contrast to O(n log n)), which is
obviously bad.

I don't know how std.algorithm.sort works, but the C++ STL uses
introspective sort, a quick-sort variant like yours, except that it
detects when the above worst case is developing (e.g. by watching the
recursion depth) and switches to heap-sort in that case. [1]
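The idea can be sketched in a few lines of C++ (a hedged illustration, not the actual libstdc++/libc++ implementation; the depth budget of 2*log2(n) and the Lomuto partition are common textbook choices):

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// Introsort sketch: quicksort until the recursion depth budget runs out,
// then fall back to heapsort, which guarantees O(n log n) worst case.
void introsort_impl(std::vector<int>& a, int lo, int hi, int depth) {
    if (hi - lo <= 1) return;
    if (depth == 0) {
        // Depth budget exhausted: the quicksort splits have been too
        // unbalanced, so finish this range with heapsort instead.
        std::make_heap(a.begin() + lo, a.begin() + hi);
        std::sort_heap(a.begin() + lo, a.begin() + hi);
        return;
    }
    // Lomuto partition with the last element as pivot.
    int pivot = a[hi - 1];
    int i = lo;
    for (int j = lo; j < hi - 1; ++j)
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi - 1]);
    introsort_impl(a, lo, i, depth - 1);
    introsort_impl(a, i + 1, hi, depth - 1);
}

void introsort(std::vector<int>& a) {
    int n = (int)a.size();
    int depth = 2 * (int)std::log2(std::max(n, 1)) + 1;
    introsort_impl(a, 0, n, depth);
}
```

With this guard, even an input that drives every pivot to an extreme only wastes O(log n) levels of bad quicksort before heapsort takes over.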

Matthias

[1] http://en.wikipedia.org/wiki/Introsort


More information about the Digitalmars-d mailing list