<br><br><div class="gmail_quote">2010/10/8 "Jérôme M. Berger" <span dir="ltr"><<a href="mailto:jeberger@free.fr">jeberger@free.fr</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="im">Steven Schveighoffer wrote:<br>
> On Thu, 07 Oct 2010 16:23:47 -0400, Rainer Deyke <<a href="mailto:rainerd@eldwood.com">rainerd@eldwood.com</a>><br>
> wrote:<br>
><br>
>> On 10/7/2010 13:57, Andrei Alexandrescu wrote:<br>
>>> On 10/7/10 14:40 CDT, bearophile wrote:<br>
>>>> Another solution is just to accept O(n) as the worst complexity for<br>
>>>> the "in" operator. I don't understand what's the problem in this.<br>
>>><br>
>>> That means we'd have to define another operation, i.e. "quickIn" that<br>
>>> has O(log n) bound.<br>
>><br>
>> Why?<br>
>><br>
>> I can't say I've ever cared about the big-O complexity of an operation.<br>
><br>
> Then you don't understand how important it is.<br>
<br>
</div> If big O complexity is so important, then why does everyone use<br>
quicksort (which is O(n**2)) and not heap sort or merge sort (which<br>
are O(n*log(n)))?<br>
<br>
Jerome<br>
<font color="#888888">--<br>
mailto:<a href="mailto:jeberger@free.fr">jeberger@free.fr</a><br>
<a href="http://jeberger.free.fr" target="_blank">http://jeberger.free.fr</a><br>
Jabber: <a href="mailto:jeberger@jabber.fr">jeberger@jabber.fr</a><br>
<br>
</font></blockquote></div><br><div><br></div><div>Because quicksort's average case is O(n*log(n)) with smaller constant factors than heap sort, and it sorts in place, whereas merge sort needs O(n) auxiliary space. Also, trivial guards (a randomized or median-of-three pivot) make quicksort's worst case on pre-sorted data vanishingly unlikely in practice.</div>
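<div><br></div><div>For illustration only (this code is not from the thread), here is a minimal Python sketch of one such guard: choosing the pivot at random, so that no fixed input, pre-sorted or otherwise, reliably triggers the O(n**2) case. The recursion always descends into the smaller partition, keeping stack depth O(log n).</div>

```python
import random

def quicksort(a):
    """In-place quicksort with a randomized pivot (Lomuto partition).

    The random pivot makes the O(n**2) worst case vanishingly
    unlikely for any fixed input, including pre-sorted data.
    """
    def sort(lo, hi):
        while lo < hi:
            # Move a randomly chosen pivot to the end, then partition.
            p = random.randint(lo, hi)
            a[p], a[hi] = a[hi], a[p]
            pivot, i = a[hi], lo
            for j in range(lo, hi):
                if a[j] < pivot:
                    a[i], a[j] = a[j], a[i]
                    i += 1
            a[i], a[hi] = a[hi], a[i]
            # Recurse on the smaller half; loop on the larger half
            # so the stack stays O(log n) deep.
            if i - lo < hi - i:
                sort(lo, i - 1)
                lo = i + 1
            else:
                sort(i + 1, hi)
                hi = i - 1
    sort(0, len(a) - 1)
    return a
```

<div>Real-world libraries go one step further: introsort (used by C++'s std::sort) starts with quicksort and falls back to heap sort if recursion gets too deep, guaranteeing O(n*log(n)) while keeping quicksort's average-case speed.</div>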