Proposition for change in D regarding Inheriting overloaded methods

Walter Bright newshound1 at digitalmars.com
Tue Aug 7 12:59:43 PDT 2007


Regan Heath wrote:
> Walter Bright wrote:
>> The next update of the compiler will throw a runtime exception for 
>> this case.
> 
> So, in this case:
> 
> class B
> {
>     long x;
>     void set(long i) { x = i; }
>     void set(int i) { x = i; }
>     long squareIt() { return x * x; }
> }
> class D : B
> {
>     long square;
>     void set(long i) { B.set(i); square = x * x; }
>     long squareIt() { return square; }
> }
> long foo(B b)
> {
>     b.set(3);
>     return b.squareIt();
> }
> 
> when the call to b.set(3) is made you insert a runtime check which looks 
> for methods called 'set' in <actual type of object>, if none of them 
> take a <insert types of parameters> you throw an exception.

There is no runtime check or cost for it. The compiler just inserts a 
call to a library support routine into class D's vtbl[] entry for B.set(int).

> Is this done at runtime instead of compile time because the parameters 
> cannot always be determined at compile time?

Yes.

> 
>>> The second possibility is that the author fully intended to allow the 
>>> base class to define foo(char[]), but forgot to define the alias.  
>>> Again, since the compiler gives no error, he is unaware that he is 
>>> releasing buggy code to the world.  I believe the correct assumption 
>>> of the compiler should be that the user wanted the alias for the base 
>>> class' foo(char[]), and should alias it implicitly if and only if no 
>>> suitable match exists on the derived class.  In the case where the 
> >>> author did not notice foo(char[]) existed, he probably doesn't mind 
>>> that foo(char[]) is defined by the base class.
>>
>> The problem with code that looks like a mistake, but the compiler 
>> makes some assumption about it and compiles it anyway, is that the 
>> code auditor cannot tell if it was intended behavior or a coding 
>> error. Here's a simple example:
>>
>> void foo()
>> { int i;
>>   ...
>>   { int i;
>>     ...
>>     use i for something;
>>   }
>> }
>>
>> To a code auditor, that shadowing declaration of i looks like a 
>> mistake, because possibly the "use i for something" code was meant to 
>> refer to the outer i, not the inner one. (This can happen when code 
>> gets updated by multiple people.) To determine if it was an actual 
>> mistake, the code auditor is in for some serious spelunking. This is 
>> why, in D, shadowing declarations are illegal. It makes life easier 
>> for the auditor, because code that looks like a mistake is not allowed.
> 
> It took me a while (because the example seems to be about something 
> totally different) but I think the argument you're making is that you 
> would prefer an error, requiring the author to specify what they want 
> explicitly, rather than for the compiler to make a potentially incorrect 
> assumption, silently. Is that correct?

Yes.

> 
> In the original example (trimmed slightly):
> 
> class A
> {
>    int foo(int x) { ... }
>    int foo(long y) { ... }
>    int foo(char[] s) { ... }
> }
> 
> class B : A
> {
>   override int foo(long x) { ... }
> }
> 
> void test()
> {
>   B b = new B();
>   A a = b;
> 
>   b.foo("hello");     // generates a compiler error
>   a.foo("hello");     // calls A.foo(char[])
> }
> 
> you're already making an assumption: you're assuming the author of B 
> does not want to expose foo(char[]), and it's the fact that this 
> assumption is wrong that has caused this entire debate.

The language is assuming things on the conservative side, not the 
expansive side, based on the theory that it is better to generate an 
error for questionable (and easily correctable) constructs than to make 
a silent (and erroneous) assumption.


> As others have mentioned, this assumption destroys the "is-a" 
> relationship of inheritance because "foo(char[])" is a method of A but 
> not a method of B.

We should not take rules as absolutes when they don't give us desirable 
behavior.


> Meaning B "isn't-a" A any more... unless you're referring to a B with 
> a reference to an A, when suddenly, it is.

That will generate a runtime error.

> Crazy idea, could the compiler (when it fails to match this overload) 
> cast the object to its base class and try again, repeating until you hit 
> Object.  I guess this would essentially be a modification of the method 
> lookup rules ;)
> 
> 
> Making the opposite assumption (implicitly aliasing the "foo(char[])") 
> doesn't introduce any silent bugs (that I am aware of) and restores the 
> "is-a" relationship.
> 
> If the author really didn't want to expose "foo(char[])" then why were 
> they deriving their class from A?  It goes against the whole idea of 
> inheritance, doesn't it?

The problem is when the base class implementor wants to add some 
functionality (or specialization) with a new overload. A's implementor 
may be a third party, and has no idea about or control over B. His hands 
shouldn't be tied.


