Why Strings as Classes?

Nick Sabalausky a at a.a
Wed Aug 27 21:33:17 PDT 2008


"Dee Girl" <deegirl at noreply.com> wrote in message 
news:g94j7a$2875$1 at digitalmars.com...
> Nick Sabalausky Wrote:
>
>> "Dee Girl" <deegirl at noreply.com> wrote in message
>> news:g943oi$11f4$1 at digitalmars.com...
>> > Benji Smith Wrote:
>> >
>> >> Dee Girl wrote:
>> >> > Michiel Helvensteijn Wrote:
>> >> >> That's simple. a[i] looks much nicer than a.nth(i).
>> >> >
>> >> > It is not nicer. It is more deceiving. If you look at the code, it
>> >> > looks like array code.
>> >> >
>> >> > foreach (i; 0 .. a.length)
>> >> > {
>> >> >     a[i] += 1;
>> >> > }
>> >> >
>> >> > For an array it works nicely. But for a list it is terrible! Many
>> >> > operations just to increment a small list.
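
To make the quoted point concrete, here is a minimal C++ sketch (the
container and values are made up for illustration). std::list deliberately
has no operator[]: indexed access has to walk from the head, so the indexed
loop below does O(n^2) work where plain iterator traversal does O(n).

    #include <cstddef>
    #include <iostream>
    #include <iterator>
    #include <list>

    int main() {
        std::list<int> a = {1, 2, 3, 4, 5};

        // Indexed access: std::next(a.begin(), i) walks i nodes from the
        // head, so this loop performs 0 + 1 + ... + (n-1) increments in
        // total -- O(n^2), even though it "looks like array code".
        for (std::size_t i = 0; i < a.size(); ++i)
            *std::next(a.begin(), i) += 1;

        // Iterator traversal: one O(1) increment per element, O(n) total.
        for (int& x : a)
            x += 1;

        for (int x : a) std::cout << x << ' ';
        std::cout << '\n';
    }
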
>> >>
>> >> Well, that's what you get with operator overloading.
>> >
>> > I am sorry. I disagree. I think that is what you get with bad design.
>> >
>> >> The same thing could be said for "+" or "-". They're inherently
>> >> deceiving, because they look like builtin operations on primitive data
>> >> types.
>> >>
>> >> For expensive operations (like performing division on an
>> >> unlimited-precision decimal object), should the author of the code use
>> >> "opDiv" or should he implement a separate "divide" function?
>> >
>> > The cost of + and - is proportional to the number of digits. For a
>> > small number of digits the computer does it fast in hardware. For many
>> > digits the cost grows. The number of digits is log n, so I think + and
>> > - are fine for a big integer. I am not surprised.
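
To see where that cost model comes from: schoolbook big-integer addition
makes one pass over the digits, so + costs O(d) for d digits, and d is only
about log10 of the value. A minimal C++ sketch; the digit-vector
representation and the name "add" are assumptions for illustration, not
anyone's actual library.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Digits stored least-significant first, base 10.
    using BigInt = std::vector<int>;

    // One pass over the digits: cost proportional to the digit count,
    // i.e. O(log n) in the magnitude n of the value.
    BigInt add(const BigInt& a, const BigInt& b) {
        BigInt sum;
        int carry = 0;
        for (std::size_t i = 0;
             i < std::max(a.size(), b.size()) || carry; ++i) {
            int digit = carry;
            if (i < a.size()) digit += a[i];
            if (i < b.size()) digit += b[i];
            sum.push_back(digit % 10);
            carry = digit / 10;
        }
        return sum;
    }
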
>> >
>> >> Forget opIndex for a moment, and ask the more general question about
>> >> all overloaded operators. Should they imply any sort of asymptotic
>> >> complexity guarantee?
>> >
>> > I think it depends on good design. For example, consider ++ or -- for
>> > an iterator. If it is O(n), it is bad design. Bad design makes people
>> > say, like you, "This is what you get with operator overloading".
>> >
>> >> Personally, I don't think so.
>> >>
>> >> I don't like "nth".
>> >>
>> >> I'd rather use the opIndex. And if I'm using a linked list, I'll be
>> >> aware of the fact that it'll exhibit linear-time indexing, and I'll be
>> >> cautious about which algorithms to use.
>> >
>> > But inside the algorithm you do not know if you use a linked list or a
>> > vector. You lost that information in a bad abstraction. The abstraction
>> > is also bad because if you change the data structure you get conceptual
>> > errors that still compile. And run until tomorrow ^_^.
>> >
>>
>> A generic algorithm has absolutely no business caring about the
>> complexity of the collection it's operating on. If it does, then you've
>> created a concrete algorithm, not a generic one.
>
> I appreciate your viewpoint. Please allow me to explain. That viewpoint
> is in opposition to STL. In STL, each algorithm defines what kind of
> iterator it operates with, and what complexity it requires of that
> iterator.
>
> I agree that another design could be made. But STL has that design. In my
> opinion it is a large part of what makes STL so successful.
>
> I disagree that an algorithm that knows the complexity of its iterator is
> concrete. I think exactly the contrary. Maybe it is good that you read
> the book about STL by Josuttis. STL algorithms are the most generic I
> have ever found in any language. I hope std.algorithm in D will be
> better. But right now std.algorithm works only with arrays.
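
For reference, the mechanism STL uses to encode that knowledge is the
iterator category: an algorithm can dispatch on it and pick an
implementation whose cost matches the iterator. A minimal sketch of the
standard tag-dispatch idiom (my_advance is a hypothetical name, but the
real std::advance works essentially this way):

    #include <iterator>

    // O(1) version, selected for random-access iterators (e.g. vector).
    template <typename It, typename Dist>
    void my_advance_impl(It& it, Dist n, std::random_access_iterator_tag) {
        it += n;
    }

    // O(n) fallback for weaker iterators (e.g. list); assumes n >= 0.
    template <typename It, typename Dist>
    void my_advance_impl(It& it, Dist n, std::input_iterator_tag) {
        while (n-- > 0) ++it;
    }

    template <typename It, typename Dist>
    void my_advance(It& it, Dist n) {
        my_advance_impl(it, n,
            typename std::iterator_traits<It>::iterator_category{});
    }
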
>
>> If an algorithm uses [] and doesn't know the complexity of the []...
>> good! It shouldn't know, and it shouldn't care. It's the code that sends
>> the collection to the algorithm that knows and cares.
>
> I think this is a mistake. The algorithm should know. Otherwise "linear
> find" is not "linear find"! It is "quadratic find". If you want to define
> something called linear find then you must know the iterator complexity.
>
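
The point in the quote above, in code: a hypothetical find written against
indexed access (the names are made up for illustration). The function
itself performs n index operations, which looks linear; whether the total
is O(n) or O(n^2) depends entirely on what operator[] costs.

    #include <cstddef>

    // n calls to operator[]. With an O(1) operator[] (array, vector) the
    // total is O(n); with an O(i) operator[] (a linked list walking from
    // the head) the same code silently becomes O(n^2).
    template <typename Container, typename T>
    std::size_t index_find(const Container& c, const T& value) {
        for (std::size_t i = 0; i < c.size(); ++i)
            if (c[i] == value) return i;
        return c.size(); // not-found sentinel
    }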

If a generic algorithm describes itself as "linear find" then I know damn 
well that it's referring to the behavior of *just* the function itself, and 
is not a statement that the function *combined* with the behavior of the 
collection and/or a custom comparison is always going to be O(n).
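
To pin down what I mean, here is a hypothetical instrumented find: for a
range of n elements it performs at most n increments and n comparisons,
and *that* count is the linear guarantee. What each individual ++ or ==
costs is the caller's contribution, not the algorithm's.

    #include <cstddef>

    // Reports its own work: at most n iterator increments and n equality
    // comparisons, no matter how expensive those operations are
    // individually.
    template <typename It, typename T>
    It counting_find(It first, It last, const T& value,
                     std::size_t& increments, std::size_t& comparisons) {
        increments = comparisons = 0;
        for (; first != last; ++first, ++increments) {
            ++comparisons;
            if (*first == value) return first;
        }
        return last;
    }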

A question about STL: If I create a collection that, internally, is like a 
linked list, but starts each indexing operation from the position of the 
last indexing operation (so that a "find first" would run in O(n) instead 
of O(n*n)), is it possible to send that collection to STL's generic 
"linear find first"? I would argue that it should somehow be possible 
*even* if STL's generic "linear find first" guarantees a *total* 
performance of O(n) (since, in this case, it would still be O(n) anyway). 
Otherwise, the STL wouldn't be very extensible, which would be a bad thing 
for a library of "generic" algorithms.

Another STL question: Is it possible to use STL to do a "linear find" 
using a custom comparison? If so, is it possible to make STL's "linear 
find" function use a comparison that just happens to be O(n)? If so, 
doesn't that violate the linear-time guarantee, too? If not, how does it 
know that the custom comparison is O(n) instead of O(1) or O(log n)?
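
For concreteness, the STL spelling of a find with a custom test is
std::find_if (std::find itself compares against a value). Its complexity
clause promises at most last - first *applications* of the predicate; it
counts applications, it does not price them. So an expensive predicate
stretches the total cost without violating the stated guarantee. A small
sketch (the data and predicate are made up):

    #include <algorithm>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> words = {"alpha", "beta", "gamma"};

        // At most words.size() applications of the predicate -- that is
        // the whole guarantee. Each application here is an O(m) substring
        // scan, so the total is O(n * m), and the standard has nothing to
        // say about the m.
        auto it = std::find_if(words.begin(), words.end(),
            [](const std::string& w) {
                return w.find("mm") != std::string::npos;
            });

        return it != words.end() ? 0 : 1;
    }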




