Clay language

Andrei Alexandrescu SeeWebsiteForEmail at erdani.org
Thu Dec 30 09:52:32 PST 2010


On 12/30/10 11:08 AM, Steven Schveighoffer wrote:
> I'd have to see how it works. I also thought the new operator
> overloading scheme was reasonable -- until I tried to use it.

You mean until you tried to use it /once/.

> Note this is even more bloated because you generate one function per
> pair of types used in concatenation, vs. one function per class defined.

That function is inlined and vanishes out of existence. I wish one day 
we'd characterize this bloating issue more precisely. Right now anything 
generic has the "bloated!!" alarm stuck to it indiscriminately.

>> How do you mean bloated? For documentation you specify in the
>> documentation of the type what operators it supports, or for each
>> named method you specify that operator xxx forwards to it.
>
> I mean bloated because you are generating template functions that just
> forward to other functions. Those functions are compiled in and take up
> space, even if they are inlined out.

I think we can safely leave this matter to compiler technology.

> Let's also realize that the mixin is going to be required *per
> interface* and *per class*, meaning even more bloat.

The bloating argument is a complete red herring in this case. I do agree 
that generally it could be a concern and I also agree that the compiler 
needs to be improved in that regard. But by and large I think we can 
calmly and safely think that a simple short function is not a source of 
worry.

> I agree if there is a "standard" way of forwarding with a library mixin,
> the documentation will be reasonable, since readers should be able to
> get used to looking for the 'alternative' operators.

Whew :o).

>>> The thing I find ironic is that with the original operator overloading
>>> scheme, the issue was that for types that define multiple operator
>>> overloads in a similar fashion, forcing you to repeat boilerplate code.
>>> The solution to it was a mixin similar to what you are suggesting.
>>> Except now, even mundane and common operator overloads require verbose
>>> template definitions (possibly with mixins), and it's the uncommon case
>>> that benefits.
>>
>> Not at all. The common case is shorter and simpler. I wrote the
>> chapter on operator overloading twice, once for the old scheme and
>> once for the new one. It uses commonly-encountered designs for its
>> code samples. The chapter and its code samples got considerably
>> shorter in the second version. You can't blow your one example into an
>> epic disaster.
>
> The case for overloading a single operator is shorter and simpler with
> the old method:
>
> auto opAdd(Foo other)
>
> vs.
>
> auto opBinary(string op)(Foo other) if (op == "+")
>
> Where the new scheme wins in brevity (for written code at least, and
> certainly not simpler to understand) is cases where:
>
> 1. inheritance is not used
> 2. you can consolidate many overloads into one function.
>
> So the question is, how many times does one define operator overloading
> on a multitude of operators *with the same code* vs. how many times does
> one define a few operators or defines the operators with different code?
>
> In my experience, I have not yet defined a type that uses a multitude of
> operators with the same code. In fact, I have only defined the "~=" and
> "~" operators for the most part.

Based on extensive experience with operator overloading in C++ and on 
having read related code in other languages, I can firmly say that both 
(1) and (2) are overwhelmingly the common case.
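To make the "bundled" common case concrete, here is a minimal sketch of 
the style under discussion (the Fixed type and its members are my own 
illustration, not code from this thread): one opBinary template serves 
several operators by mixing in the operator string.

```d
// Hypothetical fixed-point type; one opBinary covers both + and -
// by splicing the operator token into the expression via mixin.
struct Fixed
{
    long rep; // value scaled by 1000

    Fixed opBinary(string op)(Fixed rhs)
        if (op == "+" || op == "-")
    {
        return Fixed(mixin("rep " ~ op ~ " rhs.rep"));
    }
}

unittest
{
    auto a = Fixed(1500), b = Fixed(500);
    assert((a + b).rep == 2000);
    assert((a - b).rep == 1000);
}
```

Adding another operator to the bundle is a one-token change to the 
constraint, whereas the old scheme required a new opXxx function each time.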

> So I'd say, while my example is not proof that this is a disaster, I
> think it shows the change in operator overloading cannot yet be declared
> a success. One good example does not prove anything just like one bad
> example does not prove anything.

Many good examples do prove a ton though. Just off the top of my head:

- complex numbers

- checked integers

- checked floating point numbers

- ranged/constrained numbers

- big int

- big float

- matrices and vectors

- dimensional analysis (SI units)

- rational numbers

- fixed-point numbers
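Most entries on this list follow the same pattern. As one sketch (type 
name and overflow policy are mine, purely illustrative), a checked 
integer bundles +, -, and * into a single template:

```d
// Hypothetical checked integer: every bundled operator is computed in
// a wider type, then verified to fit before narrowing back to int.
struct CheckedInt
{
    int value;

    CheckedInt opBinary(string op)(CheckedInt rhs)
        if (op == "+" || op == "-" || op == "*")
    {
        immutable long wide = mixin("cast(long) value " ~ op ~ " rhs.value");
        assert(wide >= int.min && wide <= int.max, "integer overflow");
        return CheckedInt(cast(int) wide);
    }
}
```

Under the old scheme the identical check would be repeated across 
opAdd, opSub, and opMul.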

If there's one thing I agree with, it's that opCat is an oddity here, 
as it doesn't usually group with the others. It probably would have 
helped if opCat had been left as a named function (just like opEquals 
or opCmp), but then uniformity has its advantages too. I don't think 
it's a disaster one way or another, but I do understand how opCat in 
particular is annoying in your case.

>>> So really, we haven't made any progress (mixins are still
>>> required, except now they will be more common). I think this is one area
>>> where D has gotten decidedly worse. I mean, just look at the difference
>>> above between defining the opcat operator in D1 and your mixin solution!
>>
>> I very strongly believe the new operator overloading is a vast
>> improvement over the existing one and over most of today's languages.
>
> I haven't had that experience. This is just me talking. Maybe others
> believe it is good.
>
> I agree that the flexibility is good, I really think it should have that
> kind of flexibility. Especially when we start talking about the whole
> opAddAssign mess that was in D1. It also allows making wrapper types
> easier.
>
> The problem with flexibility is that it comes with complexity. Most
> programmers looking to understand how to overload operators in D are
> going to be daunted by having to use both templates and template
> constraints, and possibly mixins.

Most programmers looking to understand how to overload operators in D 
will need to bundle them (see the common case argument above) and will 
go with the TDPL examples, which are clear, short, simple, and useful.

> There once was a discussion on how to improve operators on the phobos
> mailing list (don't have the history, because I think it was on
> erdani.com). Essentially, the two things were:
>
> 1) let's make it possible to easily specify template constraints for
> typed parameters (such as string) like this:
>
> auto opBinary("+")(Foo other)
>
> which would look far less complex and verbose than the current
> incarnation. And simple to define when all you need is one or two
> operators.

I don't see this slight syntactic special case as a net improvement 
over what we have.

> 2) make template instantiations that provably evaluate to a single
> instance virtual. Or have a way to designate they should be virtual.
> e.g. the above operator syntax can only have one instantiation.

This may be worth exploring, but since template constraints are 
arbitrary expressions I fear it will become a mess of special cases 
designed to avoid the Turing tarpit.

>> We shouldn't discount all of its advantages and focus exclusively on
>> covariance, which is a rather obscure facility.
>
> I respectfully disagree. Covariance is very important when using class
> hierarchies, because to have something that returns itself degrade into
> a basic interface is very cumbersome. I'd say dcollections would be
> quite clunky if it weren't for covariance (not just for operator
> overloads). It feels along the same lines as inout -- where inout allows
> you to continue using your same type with the same constancy, covariance
> allows you to continue to use the most derived type that you have.

Okay, I understand.

>> Using operator overloading in conjunction with class inheritance is rare.
>
> I don't use operator overloads and class inheritance, but I do use
> operator overloads with interfaces. I think rare is not the right term,
> it's somewhat infrequent, but chances are if you do a lot of interfaces,
> you will encounter it at least once. It certainly doesn't dominate the
> API being defined.

Maybe a more appropriate characterization is that you use catenation 
with interfaces.

>> Rare as it is, we need to allow it and make it convenient. I believe
>> this is eminently possible along the lines discussed in this thread.
>
> Convenience is good. I hope we can do it at a lower exe footprint cost
> than what you have proposed.

We need to destroy Walter over that code bloating thing :o).

>>> As a compromise, can we work on a way to forward covariance, or to have
>>> the compiler reevaluate the template in more derived types?
>>
>> I understand. I've felt this lure a few times, too. The concern there
>> is that this is a potentially surprising change.
>
> Actually, the functionality almost exists in template this parameters.
> At least, the reevaluation part is working. However, you still must
> incur a performance penalty to cast to the derived type, plus the
> template nature of it adds unnecessary bloat.

Saw that. I have a suspicion that we'll see a solid solution from you soon!
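For the record, the template this parameter mechanism referred to above 
can be sketched as follows (the Stack/ArrayStack names are mine, for 
illustration only): the method is re-instantiated for the static type 
of the receiver, which preserves the derived type without covariant 
overrides, at the cost of the cast Steven mentions and one 
instantiation per receiver type.

```d
interface Stack
{
    void push(int x);

    // This binds to the static type of the receiver at the call site,
    // so the derived type survives chaining -- but a cast is required.
    final This pushed(this This)(int x)
    {
        push(x);
        return cast(This) this;
    }
}

class ArrayStack : Stack
{
    int[] data;
    void push(int x) { data ~= x; }
}

// auto s = new ArrayStack;
// ArrayStack t = s.pushed(1).pushed(2); // static type stays ArrayStack
```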


Andrei

