Complexity nomenclature

Ola Fosheim Grøstad via Digitalmars-d digitalmars-d at puremagic.com
Fri Dec 4 01:50:15 PST 2015


On Friday, 4 December 2015 at 09:33:42 UTC, ZombineDev wrote:
> Well, I agree, but I didn't say hard real-time, only 
> performance requirements that are hard to achieve because the 
> system is large, and because it would cost even more if the 
> system were larger (slower).

Yes, if you are writing for large scalable systems then it is 
interesting to know what the ceiling is, but then you probably 
also want something that is no worse than O(log N). Say, for a 
chat service, you'd know that the more expensive-to-develop 
solution can scale to 10 million users (O(1)), while the less 
expensive-to-develop solution becomes very expensive to run 
when you hit 1 million (O(log N)).
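
To make the shapes concrete, here is a tiny D sketch (the unit 
costs are invented, only the growth curves matter) printing how 
per-request work grows under O(1) versus O(log N):

import std.math : log2;
import std.stdio : writefln;

void main()
{
    // Hypothetical per-request cost in arbitrary units; the
    // absolute numbers are made up, only the shapes matter.
    foreach (users; [10_000, 100_000, 1_000_000, 10_000_000])
    {
        double flat = 1.0;                             // O(1) solution
        double logarithmic = log2(cast(double) users); // O(log N) solution
        writefln("N = %10d   O(1) = %4.1f   O(log N) = %4.1f",
                 users, flat, logarithmic);
    }
}

At 1 million users the O(log N) per-request cost is already 
roughly 20x the O(1) cost, and the gap keeps widening.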

So Big-Oh can be useful when you work with very large sizes and 
large variations, but in most cases I think it can be misleading 
without a good description of the actual implementation.
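
As a toy illustration of why the implementation matters (the 
sizes and data structures here are arbitrary, not from any real 
system): a plain O(n) scan over a small array can beat an O(1) 
hash lookup because of cache behavior and per-call overhead.

import std.datetime.stopwatch : benchmark;
import std.stdio : writeln;

void main()
{
    enum n = 32;               // small working set
    int[n] arr;
    int[int] aa;               // built-in hash table: O(1) lookup
    foreach (i; 0 .. n) { arr[i] = i; aa[i] = i; }

    int sink;                  // keeps the work from being optimized away
    auto times = benchmark!(
        // O(n) linear scan: bad asymptotics, tiny constant factor
        () { foreach (i; 0 .. n)
                 foreach (v; arr[])
                     if (v == i) { sink += v; break; } },
        // O(1) hash lookup: good asymptotics, hashing cost per call
        () { foreach (i; 0 .. n) sink += aa[i]; }
    )(10_000);
    writeln("linear scan: ", times[0]);
    writeln("hash lookup: ", times[1]);
}

On small inputs the scan can win outright; the Big-Oh labels 
alone don't tell you that.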

Another very useful measure in real-time work is the amount of 
variation between repeated calls. Say, if there is a guarantee 
of <50% variation, you can measure the actual running time and 
add 50% headroom to the hardware requirements. Some algorithms 
have fixed-size tables they need to clear out every once in a 
while, and that periodic cleanup can be very bad for variation.
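
A rough sketch of that kind of measurement (the workload and the 
100-run sample count are placeholders for the real task):

import std.algorithm.searching : maxElement, minElement;
import std.datetime.stopwatch : StopWatch, AutoStart;
import std.stdio : writefln;

double sink; // module-level so the compiler can't drop the work

void work()  // stand-in for the real-time task being characterized
{
    import std.math : sqrt;
    double acc = 0;
    foreach (i; 1 .. 100_000)
        acc += sqrt(cast(double) i);
    sink = acc;
}

void main()
{
    long[] samples;
    foreach (run; 0 .. 100)
    {
        auto sw = StopWatch(AutoStart.yes);
        work();
        sw.stop();
        samples ~= sw.peek.total!"usecs";
    }
    immutable best  = samples.minElement;
    immutable worst = samples.maxElement;
    writefln("best %s us, worst %s us, variation %.0f%%",
             best, worst, 100.0 * (worst - best) / best);
    // If the <50% variation guarantee holds, provision hardware
    // for the measured running time plus 50% headroom.
    writefln("provision for: %s us", cast(long)(worst * 1.5));
}

The periodic table-clearing problem shows up in exactly this kind 
of trace: most runs cluster tightly, then every k-th run spikes, 
and the spike, not the average, is what the headroom has to cover.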


