Why Strings as Classes?

superdan super at dan.org
Wed Aug 27 06:00:51 PDT 2008


Michiel Helvensteijn Wrote:

> superdan wrote:
> 
> > my point was opIndex should not be written for a list to begin with.
> 
> Ok. So you are not opposed to the random access operation on a list, as long
> as it doesn't use opIndex but a named function, correct?

Correctamundo.

> You are saying that there is a rule somewhere (either written or unwritten)
> that guarantees a time-complexity of O(1) for opIndex, wherever it appears.

Yeppers. Amend that to O(log n). In D, that rule is a social contract derived from the built-in array and associative-array indexing syntax.
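Both built-in indexed types honor that contract (a minimal illustration, assuming D 2.x syntax):

    int[] arr = [1, 2, 3];     // dynamic array: arr[i] is O(1)
    int[string] map;           // associative array: a hash table
    map["one"] = 1;
    auto x = arr[1];           // constant-time indexing
    auto y = map["one"];       // expected-constant-time lookup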

> This of course means that a linked list cannot define opIndex, since a
> random access operation on it will take O(n) (there are tricks that can
> make it faster in most practical cases, but I digress).

It oughtn't, and you digress in the wrong direction. You can't prove that a majority of "practical cases" won't suffer a performance hit. The right direction is to define the right abstraction for forward iteration, as sketched below.
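Here is a minimal sketch of such an abstraction, assuming D 2.x syntax; the names (ListRange, count) are illustrative, not an existing API. Generic code needs only the three primitives empty/front/popFront, and its cost is honest: count below is O(n) by construction.

    // a singly linked list exposed through a forward-iteration
    // interface: no indexing, only "next element, please"
    struct Node(T) { T value; Node!(T)* next; }

    struct ListRange(T)
    {
        Node!(T)* current;
        bool empty() { return current is null; }
        T front() { return current.value; }
        void popFront() { current = current.next; }
    }

    // a generic algorithm written against those three primitives
    size_t count(Range)(Range r)
    {
        size_t n = 0;
        for (; !r.empty(); r.popFront())
            ++n;
        return n;
    }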

I mean, optimizing opIndex is making a shitty design run better. Why not make a good design to start with?

> That, in turn, means that a linked list and a dynamic array can not share a
> common interface that includes opIndex.

So what? They can share a common interface that includes nth(). What exactly is your problem with that? Something like the sketch below.
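A hypothetical version of that shared interface (Sequence, Array, List are made-up names); each implementation documents its own cost for nth():

    interface Sequence(T)
    {
        T nth(size_t i);
    }

    class Array(T) : Sequence!(T)
    {
        T[] data;
        T nth(size_t i) { return data[i]; }   // O(1)
    }

    class List(T) : Sequence!(T)
    {
        static struct Node { T value; Node* next; }
        Node* head;
        T nth(size_t i)                       // O(n), and says so
        {
            auto p = head;
            while (i--) p = p.next;
            return p.value;
        }
    }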

> Aren't you making things difficult for yourself with this rule?

Not at all. If I want O(n) index access, I use nth(), I know it's going to cost me O(n), and I design my higher-level algorithm accordingly. If random access helps my particular algorithm, then is(typeof(a[n])) tells me that a supports random access. If I can't live without a[n], my algorithm won't compile. Everyone's happy.
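That compile-time test looks like this (a sketch; process is a made-up name):

    // does this container support a[n]? decided at compile time
    void process(Container)(Container a)
    {
        static if (is(typeof(a[0])))
        {
            // random access available: free to use a[n]
        }
        else
        {
            // sequential access only: use nth() and budget O(n)
        }
    }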

> A list and an array are very similar data-structures and it is natural for
> them to share a common interface.

Sure. Both are sequence containers. But opIndex isn't part of a sequence-container interface.

> The main differences are:
> * A list takes more memory.
> * A list has slower random access.

No, no, no. O(n) versus O(1), to be precise. That's not "slower"; that's saying "a list doesn't have random access, you might as well get off your lazy ass and do a linear search by calling nth()". Turn that on its head: if I give you a container and say "it has random access", you'd rightly expect better than a linear search. "Slower" is for things like a deque, which has slower random access than a vector at the same O(1) complexity.

> * A list has faster insertions and growth.

O(1) insertion, but only if you already have the insertion point. And both list and vector have O(1) growth (amortized, in the vector's case).

A list also has O(1) splicing. That's important; see the sketch below.
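A minimal sketch of why splicing is O(1) on a doubly linked list, given the boundary nodes: relinking is a fixed number of pointer updates no matter how many elements move (illustrative code, not a library API; assumes pos is not inside [first .. last]):

    struct DNode(T) { T value; DNode!(T)* prev, next; }

    // move the chain [first .. last] so it sits right after pos
    void splice(T)(DNode!(T)* pos, DNode!(T)* first, DNode!(T)* last)
    {
        // unlink the chain from its source list
        if (first.prev !is null) first.prev.next = last.next;
        if (last.next !is null)  last.next.prev  = first.prev;
        // relink after pos: constant work, no element copies
        last.next = pos.next;
        if (pos.next !is null) pos.next.prev = last;
        pos.next = first;
        first.prev = pos;
    }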

So we get to the point where we realize the two are fundamentally different structures built around fundamentally different tradeoffs. They do satisfy a common interface; it just isn't the vector interface. It's a sequential-access interface, not a random-access interface.

> But the interface shouldn't necessarily make any complexity guarantees.

The problem is that many said so, until the STL came along and said enough is enough: for fundamental data structures and algorithms, complexity /must/ be part of the specification and the design. Otherwise all you get is a mishmash of incompatible pieces, and you simply can't do generic programming with a mishmash, as other container/algorithm libraries have copiously shown. That approach is impressive because it challenged some stupid taboos and proved them worthless. It was contrarian, and to great effect; for that alone the STL puts previous container/algorithm libraries to shame. I'd used half a dozen of those and written a couple, and thought the whole container/algorithm design problem was old hat. When the STL came along I was like, holy effin' guacamole. That's why I say: even if you don't use C++, for its many faults, understand the STL, because it's the shiznit.

> The
> implementations should. And any programmer worth his salt will be able to
> use this wisely and choose the right sorting algorithm for the right
> data-structure.

Here's where the thing blows apart. I'd agree with choosing manually if I didn't want to do generic programming. But if you want to do generic programming, you want help from the compiler in mixing and matching pieces. It's not about the programmer's salt-worthiness; it's about writing generic code. See the sketch below.
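For example, a generic binary search demands O(1) indexing simply by using it; hand it a list with no opIndex and it won't compile, instead of silently degrading to O(n log n) (a sketch; containsSorted is a made-up name):

    // a[mid] must be O(1) for the O(log n) promise to hold
    bool containsSorted(Container, T)(Container a, size_t len, T x)
    {
        size_t lo = 0, hi = len;
        while (lo < hi)
        {
            auto mid = lo + (hi - lo) / 2;
            if (a[mid] < x) lo = mid + 1;
            else            hi = mid;
        }
        return lo < len && a[lo] == x;
    }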

> There are other algorithms, I'm sure, that work equally
> well on either. Of course, any algorithm should give its time-complexity in
> terms of the complexity of the operations it uses.
> 
> I do understand your point, however. And I believe my argument would be
> stronger if there were some sort of automatic complexity analysis tool.

The STL makes do without an automatic tool; the complexity guarantees live in the specification, and programmers compose against them.

> This could either warn a programmer in case he makes the wrong choice, or
> even take the choice out of the programmers hands and automatically choose
> the right sorting algorithm for the job. That's a bit ambitious. I guess a
> profiler is the next best thing.

I have no doubt the STL had big ambitions, and for what I can tell it fulfilled them, though C++ makes higher-level algorithms look arcane. So I'm happy with the lambdas in std.algorithm, and I can't figure out why containers don't come along. Walter?
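For reference, the std.algorithm string-lambda style (a minimal, runnable example against Phobos 2; sort takes its predicate as a compile-time string):

    import std.algorithm;

    void main()
    {
        auto a = [5, 3, 1, 4, 2];
        sort!("a < b")(a);            // string lambda as the predicate
        assert(a == [1, 2, 3, 4, 5]);
    }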



