Why Strings as Classes?

Dee Girl deegirl at noreply.com
Wed Aug 27 10:32:45 PDT 2008


Michiel Helvensteijn Wrote:

> Dee Girl wrote:
> 
> >> Yes, the first 'trick' makes it a different datastructure. The second
> >> does not. Would you still be opposed to using opIndex if its
> >> time-complexity is O(log n)?
> > 
> > This is different question. And tricks are not answer for problem. Problem
> > is list has other access method than array.
> 
> And what's the answer?

I accept logarithmic complexity with []. A logarithm grows slowly.

> >> With that second trick the loop would have the same complexity for lists.
> > 
> > Not for singly linked lists.
> 
> Yeah, also for singly linked lists.

Maybe it is not interesting to discuss tricks more. I am sure many tricks can be done, and many serious things too. They can be done and have been done. But they make a list not a list any more.

> >> But putting that aside for the moment, are you saying you would allow
> >> yourself to be deceived by a syntax detail? No, mentally attaching O(1)
> >> to the *subscripting operator* is simply a legacy from C, where it is
> >> syntactic sugar for pointer arithmetic.
> > 
> > I do not think so. I am sorry. If a[n] is not allowed then other array
> > access primitive is allowed. Give index(a, n) as example. If language say
> > index(a, n) is array access then it is big mistake for list to also define
> > index(a, n). List maybe should define findAt(a, n). Then array also can
> > define findAt(a, n). It is not mistake.
> 
> Yes, I agree for "array access". That term implies O(1), since it uses the
> word "array". But I was arguing against the subscripting operator being
> forced into O(1).

I am sorry, I do not understand your logic. My logic was this: the language has a[n] for array indexing. My opinion was that a[n] should then not be a linear search. I said also you can replace a[n] with index(a, n) and my reason stays the same. What are you arguing?

I did not want to get into this discussion. I see how it gets confusing fast ^_^.

> >> Lists allocate memory for bare nodes, but never have to copy their
> >> elements. Arrays have to move their whole content to a larger memory
> >> location each time they are outgrown. For more complex data-types that
> >> means potentially very expensive copies.
> > 
> > I think this is mistake. I think you should google "amortized complexity".
> > Maybe that can help much.
> 
> Amortized complexity has nothing to do with it. Dynamic arrays have to copy
> their elements and lists do not. It's as simple as that.

No, it is not. I am sorry! In STL there is copy. In D there is std.move. I think it only copies the data by bits and clears the source. And amortized analysis shows there is O(1) bit copying per append over many appends.

> >> But don't you understand that if this tool did exist, and the language
> >> had a standard notation for time/space-complexity, I could simply write:
> >> 
> >> sequence<T> s;
> >> /* fill sequence */
> >> sort(s);
> >> 
> >> And the compiler (in cooperation with this 'tool') could automatically
> >> find the most effective combination of data-structure and algorithm. The
> >> code would be more readable and efficient.
> > 
> > Michiel-san, STL does that. Or I misunderstand you?
> 
> STL will choose the right sorting algorithm, given a specific
> data-structure. But I am saying it may be possible also for the
> data-structure to be automatically chosen, based on what the programmer
> does with it.

I think this is interesting. But then why argue for a bad container design? I do not understand. Thank you, Dee Girl.


