Slow performance compared to C++, ideas?

Steven Schveighoffer schveiguy at yahoo.com
Tue Jun 4 09:31:25 PDT 2013


On Tue, 04 Jun 2013 01:16:22 -0400, Manu <turkeyman at gmail.com> wrote:

> On 4 June 2013 14:16, Steven Schveighoffer <schveiguy at yahoo.com> wrote:
>
>> Since when is that on the base class author?  Doctor, I overrode this
>> class, and it doesn't work.  Well, then don't override it :)
>>
>
> Because it wastes your time (and money). And perhaps it only fails/causes
> problems in edge cases, or obscure side effects, or in internal code that
> you have no ability to inspect/debug.
> You have no reason to believe you're doing anything wrong; you're using  
> the
> API in a perfectly valid way... it just happens that it is wrong (the
> author never considered it), and it doesn't work.

Technically and narrow-mindedly, yes: it will not waste your time and  
money to try extending it -- you will know right up front that you can't  
use it via extension, and therefore cannot use the library at all if it  
doesn't fit exactly what you need.  You will simply waste that time and  
money re-implementing it instead.

There is also a quite likely possibility that you have the source to the  
base class, in which case you can determine whether it's possible to  
extend.

This view that you've taken is that if I can do something, then the  
library developer has expected that usage, simply by it being possible.   
This is a bad way to look at APIs.  Documentation and intent are important  
to consider.

> Also there is the possibility that a class that isn't designed from the
>> start to be overridden.  But overriding one or two methods works, and  
>> has
>> no adverse effects.  Then it is a happy accident.  And it even enables
>> designs that take advantage of this default, like mock objects.  I would
>> point out that in Objective-C, ALL methods are virtual, even class  
>> methods
>> and properties.  It seems to work fine there.
>>
>
> Even apple profess that Obj-C is primarily useful for UI code, and they  
> use
> C for tonnes of other stuff.

First, I've never heard that statement or read it anywhere (do you have a  
link?).  Second, the idea that if you use Objective-C objects for your  
API, then you must use method calls for EVERYTHING is ridiculous.  Pretty  
much all the OS functionality is exposed via Objective-C objects.  It  
doesn't mean the underlying implementation is pure objects, like wrapping  
ints in objects or something.  I don't know of any language that would do  
that.  The public API is all virtual, including networking, I/O, image  
processing, threading, etc. and it works quite well.

C is a subset of Objective-C, so it's quite easy to switch back and forth.

> What I'm really trying to say is, when final is the default, and you  
> really
>> should have made some method virtual (but didn't), then you have to pay  
>> for
>> it later when you update the base class.
>
>
> I recognise this, but I don't think that's necessarily a bad thing. It
> forces you a moment of consideration wrt making the change, and if it  
> will
> affect anything else. If it feels like a significant change, you'll treat
> it as such (which it is).
> Even though you do need to make the change, it's not a breaking change,  
> and
> you don't risk any side effects.

I find this VERY ironic :)

Library Author: After careful consideration, we have decided that we are  
going to make all our classes virtual, to allow more flexibility.

Library user Manu: NOOOOO! That will make all my code horribly slow!

Library Author: Don't worry!  Your code will still compile and work!  It's  
a non-breaking change with no risk of side effects.

>> When virtual is the default, and you really wanted it to be final (but
>> didn't do that), then you have to pay for it later when you update the  
>> base
>> class.  There is no way that is advantageous to *everyone*.
>>
>
> But unlike the first situation, this is a breaking change. If you are not
> the only user of your library, then this can't be done safely.

I think it breaks both ways, just in different ways.

>> It's advantageous to a particular style of coding.  If you know  
>> everything
>> is virtual by default, then you write code expecting that.  Like mock
>> objects.  Or extending a class simply to change one method, even when  
>> you
>> weren't expecting that to be part of the design originally.
>>
>
> If you write code like that, then write 'virtual:', it doesn't hurt  
> anyone
> else. The converse is not true.

This really is simply a matter of preference.  Your preference for  
performance over flexibility is biasing your judgment.  You can just as  
easily write 'final'.  The default is an arbitrary decision.
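
For what it's worth, the opt-out already exists in today's D.  A quick  
sketch (Widget and its methods are made-up names) of marking methods  
final, either individually or as a group:

class Widget
{
    // methods are virtual by default in D...
    void draw() { }

    // ...but any single method can opt out:
    final void id() { }

    // ...or everything from here down can be finalized at once:
final:
    int width() { return 0; }
    int height() { return 0; }
}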

When I first came across D, I was experiencing "D euphoria" and I  
wholeheartedly considered the decision to have virtual-by-default a very  
wise one.  At this point, I'm indifferent.  It could have been either way,  
and I think we would be fine.

But to SWITCH mid-stream would be a horrible breaking change, and needs to  
have a very compelling reason.

>>
>> I think it is unfair to say most classes are not base classes.  This  
>> would
>> mean most classes are marked as final.  I don't think they are.  One of  
>> the
>> main reasons to use classes in the first place is for extendability.
>>
>
> People rarely use the final keyword on classes, even though they could  
> 90%
> of the time.

Let me fix that for you:

"People rarely use the final keyword on classes, even though I wish they  
would 90% of the time."

A non-final class is, by definition, a base class.  To say that a  
non-final class is not a base class because it 'could be' final is just  
denial :)
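
In D terms (a tiny sketch; Leaf, Node and MyNode are invented names):  
only an explicit 'final' actually closes a class off; anything left  
unmarked is open to extension, whether or not the author planned for it:

final class Leaf
{
    void f() { }
}

// class BiggerLeaf : Leaf { }   // won't compile: Leaf is final

class Node          // not marked final, so it is -- by definition -- a base class
{
    void f() { }
}

class MyNode : Node
{
    override void f() { }   // perfectly legal, planned for or not
}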


>> The losses are that if category 3 were simply always final, some other
>> anti-Manu who wanted to extend everything has to contact all the  
>> original
>> authors to get them to change their classes to virtual :)
>>
>
> Fine, they'll probably be receptive since it's not a breaking change.
> Can you guess how much traction I have when I ask an author of a popular
> library to remove some 'virtual' keywords in C++ code?
> "Oh we can't really do that, it could break any other users!", so then we
> rewrite the library.

This is a horrible argument.  C++ IS final by default.  Authors HAVE TO  
opt in to virtual.  You have been spending all this time arguing that we  
should go the C++ route, only to tell me that your experience with C++ is  
that you can't get what you want there either?!!!

Alternatively, we can say the two situations aren't the same.  In the C++  
situation, the author opted for virtuality.  In the D case, the author may  
have simply not cared.  In the not caring case, they may be much more open  
to adding final (I did).  In the case where they specifically want  
virtuality, they aren't going to drop it whether it's the default or not.

> BTW, did you know you can extend a base class and simply make the  
> extension
>> final, and now all the methods on that derived class become non-virtual
>> calls?  Much easier to do than making the original base virtual (Note I
>> haven't tested this to verify, but if not, it should be changed in the
>> compiler).
>>
>
> One presumes that the library that defines the base class deals with its
> own base pointers internally, and as such, the functions that I may have
> finalised in my code will still be virtual in the place that it counts.

Methods take the base pointer, but will be inlinable on a final class, and  
any methods they call will be inlinable and final.
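
A rough sketch of both points (LibWidget and FastWidget are hypothetical  
names, not from any real library):

// the library's class, virtual by default
class LibWidget
{
    void update() { step(); }   // library-internal call through the base type
    void step() { }
}

// a user-side final subclass: no further overrides are possible, so calls
// made through a FastWidget reference can be bound (and inlined) statically
final class FastWidget : LibWidget
{
    override void step() { /* fast path */ }
}

void main()
{
    auto w = new FastWidget;
    w.step();            // static type is final: the compiler may devirtualize this

    LibWidget b = w;
    b.update();          // inside the library, step() is still dispatched virtually
}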

Any closed-source code is already compiled, and it's too bad you can't fix  
it.  But that is simply a missed optimization by the library writer.   
It's no different than someone having a poorly implemented algorithm, or  
doing something stupid like unaligned SIMD loads :)

-Steve

