What complexities have a log(sum) shape?

H. S. Teoh via Digitalmars-d digitalmars-d at puremagic.com
Tue Dec 8 17:02:23 PST 2015


On Tue, Dec 08, 2015 at 09:30:09PM +0000, Charles via Digitalmars-d wrote:
> On Tuesday, 8 December 2015 at 20:56:28 UTC, cym13 wrote:
> >On Tuesday, 8 December 2015 at 17:33:49 UTC, Andrei Alexandrescu wrote:
> >>On 12/08/2015 12:12 PM, Timon Gehr wrote:
> >>>O(log(n+m)) = O(log(n)+log(m)).
> >>
> >>Noice. Yes I did miss it. Thx!! -- Andrei
> >
> >Surely I'm missing something obvious but why is it true exactly?
> 
> I'm confident it isn't. I think the rule he was thinking of was
> O(log(ab)) = O(log(a)+log(b)), which is just a basic property of
> logarithms. It's pretty easy to derive a contradiction between those
> two rules.

There is no contradiction.  Big-O notation deals with asymptotic
behaviour as the variables approach infinity, so behaviour at 'small'
values of m and n is irrelevant.  The idea is that if m grows
fundamentally faster than n, then for sufficiently large values of m and
n, the actual value is dominated by m, and the contribution from n is
negligible. The same argument applies with the roles of m and n swapped.
Hence, O(n+m) = O(max(n,m)).
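A quick numeric illustration (my own sketch, not from the thread): for
non-negative n and m, n+m is always sandwiched between max(n,m) and
2*max(n,m), which is exactly why the two are interchangeable inside big-O.

```python
# Sketch: max(n,m) <= n+m <= 2*max(n,m) for non-negative n, m,
# so O(n+m) and O(max(n,m)) describe the same growth class
# (they differ by at most a constant factor of 2).
for n, m in [(10, 1_000), (1_000_000, 3), (500, 500)]:
    assert max(n, m) <= n + m <= 2 * max(n, m)
```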

Now, for O(log(n+m)): at sufficiently large values of n and m, what
dominates the argument to log is max(n,m), so O(log(n+m)) =
O(log(max(n,m))). Since log is monotonically increasing,
log(max(n,m)) = max(log(n),log(m)). But we've already established that
O(f(x)+g(x)) = O(max(f(x),g(x))), so O(max(log(n),log(m))) =
O(log(n)+log(m)).
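The two-sided bound behind this can be checked numerically (a sketch of
my own, valid for n, m >= 2): log(n+m) <= log(nm) = log(n)+log(m), and
conversely log(n)+log(m) = log(nm) <= log((n+m)^2) = 2*log(n+m), so the
two expressions are within a constant factor of each other.

```python
import math

# For n, m >= 2: log(n+m) <= log(n)+log(m) <= 2*log(n+m),
# i.e. the two sides differ by at most a constant factor,
# which is all big-O equality requires.
for n, m in [(2, 2), (10, 1_000_000), (12_345, 67_890)]:
    lhs = math.log(n + m)
    rhs = math.log(n) + math.log(m)
    assert lhs <= rhs <= 2 * lhs
```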

Note that the step O(log(n+m)) = O(log(max(n,m))) only works because log
grows slowly: since n+m <= 2*max(n,m), we get log(n+m) <= log(2) +
log(max(n,m)), and big-O absorbs the additive constant. This is no
longer true for functions that grow too fast: exp(n+m) = exp(n)*exp(m)
is not within a constant factor of exp(max(n,m)), so big-O expressions
involving exp cannot be simplified in this way.
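To make the contrast concrete (again a sketch of mine): for log the gap
between the combined and separate forms stays bounded, while for exp the
ratio exp(x+y) / (exp(x)+exp(y)) blows up as x and y grow.

```python
import math

# log: log(x+y) stays within an additive constant of log(max(x,y)),
# so the combined and separate forms are in the same big-O class.
# exp: exp(x+y) = exp(x)*exp(y) dwarfs exp(x)+exp(y) for large x, y.
for x, y in [(5, 5), (10, 10), (20, 20)]:
    assert math.exp(x + y) > math.exp(x) + math.exp(y)

# The ratio grows without bound: here exp(40)/(2*exp(20)) = exp(20)/2.
ratio = math.exp(40) / (math.exp(20) + math.exp(20))
assert ratio > 1e7
```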


T

-- 
We've all heard that a million monkeys banging on a million typewriters will eventually reproduce the entire works of Shakespeare.  Now, thanks to the Internet, we know this is not true. -- Robert Wilensky
