[Issue 1323] Implement opIn_r for arrays

d-bugmail at puremagic.com
Tue Oct 30 22:00:06 PDT 2012


http://d.puremagic.com/issues/show_bug.cgi?id=1323


Jonathan M Davis <jmdavisProg at gmx.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jmdavisProg at gmx.com


--- Comment #16 from Jonathan M Davis <jmdavisProg at gmx.com> 2012-10-30 22:00:03 PDT ---
> I think this absolutely needs to be reconsidered. The success of the same
> semantics in Python should be enough proof of the convenience and low risk of
> this feature.

> On top of that, the `in' operator is overloadable anyway, meaning it already
> has semantic ambiguity for different types. Should `~' be considered an issue
> since it can accept both T and T[]?

> Practicality seems preferable here. I think this should be reopened.
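(For reference, the overloadability mentioned above looks like this - a minimal
sketch with a hypothetical IntSet type, using opBinaryRight, the D2 spelling of
the old opIn_r hook named in this issue's title:)

struct IntSet
{
    bool[int] impl;

    // D2 overloads `in' via opBinaryRight (opIn_r is the older D1 hook).
    bool opBinaryRight(string op : "in")(int key) const
    {
        return (key in impl) !is null;
    }
}

void main()
{
    IntSet s;
    s.impl[42] = true;
    assert(42 in s);
    assert(!(7 in s));
}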

We care a lot more about efficiency than Python does, and we hold to one of the
key concepts that C++ does: that certain operators and operations must have a
maximum algorithmic complexity. A prime example of this is std.container. All
of the common operations have a maximum algorithmic complexity, and any
container which can't implement them within that complexity doesn't have them.
For instance, SList and DList don't have a length property, because they can't
implement it in O(1). The same goes for range-based operations: front,
popFront, length, slicing, etc. have to be O(1). If they can't be, then they're
not implemented. That's why narrow strings aren't sliceable, have no length,
and don't provide random access - they can't do any of that in O(1).
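
That policy is visible directly in Phobos's range traits - a minimal check you
can compile yourself (all three traits are defined to reject narrow strings):

import std.range : hasLength, hasSlicing, isRandomAccessRange;

// Narrow (UTF-8/UTF-16) strings iterate by decoded code point, so none of
// the O(1) primitives can be offered; the traits report accordingly.
static assert(!hasLength!string);
static assert(!hasSlicing!string);
static assert(!isRandomAccessRange!string);

void main() {}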

In the case of in, it's required to be at most O(log n) - what it takes to
search for an element in a balanced binary tree. AAs can do it in O(1), which
is even better, so they get it. Dynamic arrays would require O(n), which is
worse than O(log n), so they don't have it and never will. If we allowed them
to have in, then functions could not rely on in's algorithmic complexity, which
would make it useless for generic code.
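
To make that concrete: with an AA, in is a hash lookup; with a dynamic array,
you have to ask for the linear scan explicitly via std.algorithm's canFind:

import std.algorithm : canFind;

void main()
{
    int[string] aa = ["one" : 1, "two" : 2];
    assert("one" in aa);       // hash lookup, O(1)

    int[] arr = [1, 2, 3];
    // assert(3 in arr);       // doesn't compile - and per the above, never will
    assert(arr.canFind(3));    // the O(n) search is spelled out explicitly
}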

Of course, it's true that anyone can overload an operator or define a function
with worse algorithmic complexity than it's required to have (e.g. defining
length on a range which has to count all of its elements, making it O(n)
instead of the required O(1)). But in that case, the programmer screwed up, and
it's their fault when algorithms using their type have horrible performance. As
long as they implement those functions with the correct algorithmic complexity,
algorithms using those functions can guarantee a certain level of performance.
They can't do that if they can't rely on the maximum algorithmic complexity of
the functions and operations that they use.
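
Here's a hypothetical example of generic code depending on that guarantee
(uniqueInOrder is made up for illustration; RedBlackTree is std.container's).
The loop makes n calls to in, so with the guaranteed O(log n) it runs in
O(n log n) - an O(n) array in would silently degrade it to O(n^2):

import std.container : RedBlackTree;

// Keeps the first occurrence of each element, preserving order.
T[] uniqueInOrder(T, Set)(T[] items, Set seen)
    if (is(typeof(T.init in seen)))
{
    T[] result;
    foreach (item; items)
    {
        if (item in seen)      // relied upon to be O(log n) at worst
            continue;
        seen.insert(item);
        result ~= item;
    }
    return result;
}

void main()
{
    auto seen = new RedBlackTree!int;
    assert(uniqueInOrder([3, 1, 3, 2, 1], seen) == [3, 1, 2]);
}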

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email