Why Strings as Classes?

Nick Sabalausky a at a.a
Wed Aug 27 13:33:24 PDT 2008


"Dee Girl" <deegirl at noreply.com> wrote in message 
news:g943oi$11f4$1 at digitalmars.com...
> Benji Smith Wrote:
>
>> Dee Girl wrote:
>> > Michiel Helvensteijn Wrote:
>> >> That's simple. a[i] looks much nicer than a.nth(i).
>> >
>> > It is not nicer. It is more deceiving. If you look at the code, it
>> > looks like array code.
>> >
>> > foreach (i; 0 .. a.length)
>> > {
>> >     a[i] += 1;
>> > }
>> >
>> > For an array that works nicely. But for a list it is terrible! Many
>> > operations just to increment a small list.
>>
>> Well, that's what you get with operator overloading.
>
> I am sorry. I disagree. I think that is what you get with bad design.
>
>> The same thing could be said for "+" or "-". They're inherently
>> deceiving, because they look like builtin operations on primitive data
>> types.
>>
>> For expensive operations (like performing division on an
>> unlimited-precision decimal object), should the author of the code use
>> "opDiv" or should he implement a separate "divide" function?
>
> The cost of + and - is proportional to the number of digits. For a small 
> number of digits the computer does it fast in hardware. For many digits the 
> cost grows. The number of digits is log n. I think + and - are fine for big 
> integers. I am not surprised.
>
>> Forget opIndex for a moment, and ask the more general question about all
>> overloaded operators. Should they imply any sort of asymptotic
>> complexity guarantee?
>
> I think it depends on good design. For example, consider ++ or -- for an 
> iterator. If it is O(n), it is bad design. Bad design makes people say, like 
> you, "This is what you get with operator overloading".
>
>> Personally, I don't think so.
>>
>> I don't like "nth".
>>
>> I'd rather use the opIndex. And if I'm using a linked list, I'll be
>> aware of the fact that it'll exhibit linear-time indexing, and I'll be
>> cautious about which algorithms to use.
>
> But inside the algorithm you do not know whether you are using a linked list 
> or a vector. You lost that information in a bad abstraction. The abstraction 
> is also bad because if you change the data structure you get conceptual 
> errors that still compile. And run until tomorrow ^_^.
>
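
For anyone skimming, here is roughly the situation described in the quotes
above: a hypothetical LinkedList whose opIndex walks from the head, so a loop
that looks like array code silently becomes quadratic. This is only an
illustrative sketch (the class and its members are made up for the example),
not code from any of these posts or from Phobos:

    class LinkedList
    {
        private static class Node
        {
            int value;
            Node next;
        }

        private Node head;
        private size_t count;

        void insertFront(int v)
        {
            auto n = new Node;
            n.value = v;
            n.next = head;
            head = n;
            ++count;
        }

        size_t length() { return count; }

        // O(i): every single call walks from the head.
        ref int opIndex(size_t i)
        {
            auto n = head;
            foreach (step; 0 .. i)
                n = n.next;
            return n.value;
        }
    }

    void main()
    {
        auto a = new LinkedList;
        foreach (i; 0 .. 1000)
            a.insertFront(i);

        // Looks exactly like the array loop, but each a[i] is O(i),
        // so the whole loop does on the order of n*n/2 node hops
        // instead of n.
        foreach (i; 0 .. a.length)
            a[i] += 1;
    }
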

A generic algorithm has absolutely no business caring about the complexity of 
the collection it's operating on. If it does, then you've created a concrete 
algorithm, not a generic one. If an algorithm uses [] and doesn't know the 
complexity of the []...good! It shouldn't know, and it shouldn't care. It's 
the code that sends the collection to the algorithm that knows and cares. 
Why? Because "which algorithm is best?" depends on far more than just what 
type of collection is used. It depends on "Will the collection ever be 
larger than X elements?". It depends on "Is it a standard textbook list, or 
does it use trick 1 and/or trick 2?". It depends on "Is it usually mostly 
sorted or mostly random?". It depends on "What do I do with it most often? 
Sort, append, search, insert or delete?". And it depends on other things, 
too.
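
To make that concrete, here is the kind of generic code I mean (indexOf and
its signature are invented for illustration): it is written purely against
length and [], and nothing in it can tell, or needs to tell, whether a[i]
costs O(1) or O(n):

    // Generic: compiles against anything that has length and opIndex.
    // Nothing in here knows, or can know, what a[i] costs.
    size_t indexOf(Collection)(Collection a, int needle)
    {
        foreach (i; 0 .. a.length)
        {
            if (a[i] == needle)
                return i;
        }
        return a.length; // "not found"
    }

    void main()
    {
        int[] arr = [3, 1, 4, 1, 5];
        assert(indexOf(arr, 4) == 2); // O(1) indexing underneath
    }

The same template instantiates just as happily for the LinkedList sketch
earlier; only the cost differs, and that cost is invisible from inside the
algorithm.
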

Using "[]" versus "nth()" can't tell the algorithm *any* of those things. But 
those things *must* be known in order to make an accurate decision of "Is 
this the right algorithm or not?" Therefore, a generic algorithm can *never* 
know for certain that it's the right algorithm, *even* if you say "[]" means 
"O(log n) or better". Therefore, the algorithm should not be designed to 
work only with certain types of collections. The code that sends the 
collection to the algorithm is the *only* code that knows the answers to all 
of the questions above, so it is the only code that should ever decide "I 
should use this algorithm; I shouldn't use that algorithm."
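
In code, that decision ends up at the call site, something like this (find,
keptSorted, and the cutoff of 32 are all invented for the example; this is a
sketch of the division of responsibility, not a recommendation of any
particular threshold):

    // The *caller* knows the answers (rough size, whether the data is
    // kept sorted, what the structure really is), so the caller picks
    // the algorithm. A generic routine never could.
    size_t find(Collection)(Collection a, int needle, bool keptSorted)
    {
        if (keptSorted && a.length > 32)
        {
            // Binary search: only correct because the caller knows the
            // data is kept sorted.
            size_t lo = 0, hi = a.length;
            while (lo < hi)
            {
                auto mid = lo + (hi - lo) / 2;
                if (a[mid] < needle)      lo = mid + 1;
                else if (a[mid] > needle) hi = mid;
                else                      return mid;
            }
            return a.length; // not found
        }

        // Otherwise, a plain linear scan is the honest default.
        foreach (i; 0 .. a.length)
        {
            if (a[i] == needle)
                return i;
        }
        return a.length; // not found
    }

The point isn't the particular cutoff; it's that the branch lives in the code
that already knows the data, not inside a generic routine.
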

> I also have things I like and do not like. But a good reason can convince 
> me. Thank you, Dee Girl.
