The Quick Mom Algorithm

Andrei Alexandrescu via Digitalmars-d digitalmars-d at puremagic.com
Fri Jan 29 14:47:44 PST 2016


http://dpaste.dzfl.pl/05a82699acc8

So over the past few days I've been in the zone working on a smooth 
implementation of "Median of Medians" 
(https://en.wikipedia.org/wiki/Median_of_medians). Its performance is 
much better than that of the straightforward implementation. However, in 
practice it's very hard to get it to beat simple heuristics (such as 
median of five, random pivot, etc.). These heuristics take `n` comparisons 
for an input of length `n`, whereas MoM needs over twice as many 
comparisons. It also does a bunch of swapping of the data. Overall I 
got it within 2.5x of the heuristic-based topN for most data sizes up to 
tens of millions.

While thinking about MoM and the core reasons it is slow (it adds nice 
structure to its input and then "forgets" most of it when recursing), I 
stumbled upon a different algorithm. It's much simpler, also 
deterministic, and faster than MoM for many (most?) inputs. But it's not 
guaranteed to be linear. After having pounded at this for many hours, it 
is clear that I am in need of some serious due destruction. I call it a 
"quick median of medians", or in short the "quick mom".

Consider the algorithm defined as follows over a range `r` of length 
`n`. It returns the index of an element likely to be in the second 
tertile of r. (A tertile is a third of the range. So the expectation is 
that the returned index x is such that e<=r[x] for at least one third of 
the elements e, and r[x]<=e for at least another third.) A rough sketch 
in D follows the steps below.

0. If n<=3, compute the median by rote and return its index.

1. Divide r into three equal adjacent subranges r0 = r[0 .. $/3], r1 = 
r[$/3 .. $*2/3], r2 = r[$*2/3 .. $].

2. Recurse to get the medians of these three subranges; call them m0, 
m1, m2.

3. Return the median of m0, m1, m2.
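
For concreteness, a bare-bones version of those steps might look like 
this (not the dpaste code, just an illustration; medianUpTo3 is an ad 
hoc helper name):

    // Median, by rote, of a range of length 1 to 3, returned as an index.
    size_t medianUpTo3(T)(const(T)[] r)
    {
        assert(r.length >= 1 && r.length <= 3);
        if (r.length == 1) return 0;
        if (r.length == 2) return r[0] <= r[1] ? 0 : 1;
        // Length 3: return the index of the middle value.
        if (r[0] <= r[1])
            return r[1] <= r[2] ? 1 : (r[0] <= r[2] ? 2 : 0);
        return r[0] <= r[2] ? 0 : (r[1] <= r[2] ? 2 : 1);
    }

    // Steps 0 through 3. No element is moved; only an index is returned.
    size_t quickMom(T)(const(T)[] r)
    {
        if (r.length <= 3)
            return medianUpTo3(r);                          // step 0
        // Steps 1 and 2: medians of the three adjacent tertiles.
        immutable m0 = quickMom(r[0 .. $ / 3]);
        immutable m1 = r.length / 3 + quickMom(r[$ / 3 .. $ * 2 / 3]);
        immutable m2 = r.length * 2 / 3 + quickMom(r[$ * 2 / 3 .. $]);
        // Step 3: index of the median of r[m0], r[m1], r[m2].
        size_t[3] idx = [m0, m1, m2];
        T[3] vals = [r[m0], r[m1], r[m2]];
        return idx[medianUpTo3(vals[])];
    }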

Note that no data has been written yet; we only have an estimate of a 
pivot. On random inputs the expectation is that the median thus obtained 
is greater than 1/6 + 1/6 = 1/3 of the elements (half of its own tertile 
plus half of one other tertile), and less than the same fraction.
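
A quick way to eyeball that claim on random data (using the quickMom 
sketch above; the sizes and value range are arbitrary):

    void main()
    {
        import std.algorithm.searching : count;
        import std.random : uniform;
        import std.stdio : writefln;

        enum n = 100_000;
        enum trials = 100;
        size_t hits;
        foreach (t; 0 .. trials)
        {
            auto a = new int[n];
            foreach (ref x; a) x = uniform(0, 1_000_000);
            // Rank of the chosen element = how many elements are below it.
            immutable pivot = a[quickMom(a)];
            immutable rank = a.count!(x => x < pivot);
            if (rank >= n / 3 && rank <= n * 2 / 3) ++hits;
        }
        writefln("pivot rank in the middle tertile in %s of %s runs",
            hits, trials);
    }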

The algorithm completes in linear time (the work satisfies T(n) = 
3*T(n/3) + O(1), which solves to O(n)).

However, the pivot obtained is just an approximation. In the worst case 
it's possible that e.g. all recursions return the leftmost of the 
allowed indexes, so the bound deteriorates by 1/3 for each level of 
recursion, or generally to (1/3)^^log3(n).

Not good! That's where the quick mom pays attention. After the data has 
been partitioned around the pivot obtained by the quick mom method, the 
partitioning stage checks whether the pivot's final position falls 
within bounds. If not, it runs a precise selection method (such as a 
proper median of medians - the "thorough mom") to bring the pivot where 
it needs to be. The data patterns that cause the quick mom to fail 
systematically are rather odd, but the worst case is what it is.

Overall the partitioning either succeeds with the quick mom pivot in 
linear time, or does one more linear pass if "unlucky". So overall the 
partition is linear. (Micro-optimization: not all data needs to be 
repartitioned, only the part beyond the not-so-good pivot.) Unlike the 
quick mom, the partition guarantees a pivot in the mid tertile.
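
In code, that check-and-fall-back partition could look roughly like this 
(a sketch only: a plain Lomuto scheme stands in for the real 
partitioning, and Phobos' topN stands in for the exact "thorough mom" 
fallback, even though topN does not have MoM's linear-time guarantee):

    // Partition r around r[pivotIndex]; returns the pivot's final position.
    size_t partitionAround(T)(T[] r, size_t pivotIndex)
    {
        import std.algorithm.mutation : swap;
        assert(r.length > 0);
        swap(r[pivotIndex], r[$ - 1]);
        immutable pivot = r[$ - 1];
        size_t store = 0;
        foreach (i; 0 .. r.length - 1)
            if (r[i] < pivot)
            {
                swap(r[i], r[store]);
                ++store;
            }
        swap(r[store], r[$ - 1]);
        return store;
    }

    // Partition with the quick mom estimate; if the pivot lands outside
    // the middle tertile, redo it around an exactly selected median.
    size_t quickMomPartition(T)(T[] r)
    {
        import std.algorithm.sorting : topN;
        immutable p = partitionAround(r, quickMom(r));
        if (p >= r.length / 3 && p <= r.length * 2 / 3)
            return p;               // lucky: one linear pass was enough
        // Unlucky: one more pass around a pivot certain to be in bounds.
        topN(r, r.length / 2);      // stand-in for the thorough mom
        return r.length / 2;
    }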

After partitioning, the classic quickselect algorithm can be implemented 
on top.
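
Sketch of that, on top of quickMomPartition from above:

    // Reorder r so that r[nth] is the nth smallest element, with
    // everything before it <= r[nth] and everything after it >= r[nth].
    void quickMomSelect(T)(T[] r, size_t nth)
    {
        assert(nth < r.length);
        while (r.length > 1)
        {
            immutable p = quickMomPartition(r);
            if (p == nth) return;
            if (nth < p)
            {
                r = r[0 .. p];
            }
            else
            {
                r = r[p + 1 .. $];
                nth -= p + 1;
            }
        }
    }

Because the partition point is guaranteed to land in the middle tertile, 
each round discards at least a third of the remaining elements, so the 
rounds' costs form a geometric series and the whole selection stays 
linear.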

There's one more _really_ juicy detail. In fact, the quick mom may 
finish without looking at all of the data. Look at the implementation - 
there are two overloads of quickMom. The second gets bounds and works as 
follows: if you've already computed two of the three elements whose 
median is needed, the third can be the median only if it falls in 
between those two. Otherwise, all you care about is whether it's smaller 
or larger than the other two. This allows the algorithm to finish 
certain recursion branches early.
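
To make that concrete, one possible shape for such a bounded overload is 
below (ad hoc names and bookkeeping, not the dpaste code; it reuses 
medianUpTo3 from the earlier sketch). It reports either an exact index, 
or merely on which side of the bounds the subrange's median falls - 
which is all the caller needs in order to pick the median of three:

    // Result of the bounded overload.
    struct Bounded
    {
        enum Kind { below, inside, above }
        Kind kind;
        size_t index;   // meaningful only when kind == inside
    }

    Bounded quickMomBounded(T)(const(T)[] r, T lo, T hi)
    {
        import std.algorithm.comparison : min, max;
        import std.algorithm.sorting : sort;

        if (r.length <= 3)
        {
            immutable i = medianUpTo3(r);
            if (r[i] < lo) return Bounded(Bounded.Kind.below);
            if (r[i] > hi) return Bounded(Bounded.Kind.above);
            return Bounded(Bounded.Kind.inside, i);
        }

        // Re-express a subrange's result as an index into r.
        static Bounded shifted(Bounded x, size_t offset)
        {
            if (x.kind == Bounded.Kind.inside) x.index += offset;
            return x;
        }

        auto a = quickMomBounded(r[0 .. $ / 3], lo, hi);
        auto b = shifted(quickMomBounded(r[$ / 3 .. $ * 2 / 3], lo, hi),
            r.length / 3);

        // Early finish: if two of the three tertile medians fall on the
        // same side of the bounds, so does their median - the last
        // tertile is never even looked at.
        if (a.kind == b.kind && a.kind != Bounded.Kind.inside)
            return a;

        // If the first two are exact, the third matters only when it
        // falls in between them, so tighten its bounds before recursing.
        auto lo2 = lo;
        auto hi2 = hi;
        if (a.kind == Bounded.Kind.inside && b.kind == Bounded.Kind.inside)
        {
            lo2 = max(lo, min(r[a.index], r[b.index]));
            hi2 = min(hi, max(r[a.index], r[b.index]));
        }
        auto c = shifted(quickMomBounded(r[$ * 2 / 3 .. $], lo2, hi2),
            r.length * 2 / 3);

        // Median of the three partial answers: "below" sorts before any
        // exact answer, which sorts before "above"; exact answers compare
        // by element value.
        Bounded[3] all = [a, b, c];
        all[].sort!((p, q) => p.kind != q.kind
            ? p.kind < q.kind
            : p.kind == Bounded.Kind.inside && r[p.index] < r[q.index]);
        return all[1];
    }

The first, unbounded overload would then compute the first two tertile 
medians exactly and call this one on the last tertile with those two as 
bounds, settling for the smaller or larger of the two when the answer 
comes back "below" or "above".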


Destroy!

Andrei

