Slow performance compared to C++, ideas?

Manu turkeyman at gmail.com
Mon Jun 3 22:16:22 PDT 2013


On 4 June 2013 14:16, Steven Schveighoffer <schveiguy at yahoo.com> wrote:

> On Mon, 03 Jun 2013 23:39:25 -0400, Manu <turkeyman at gmail.com> wrote:
>
>  On 4 June 2013 12:50, Steven Schveighoffer <schveiguy at yahoo.com> wrote:
>>
>>  On Mon, 03 Jun 2013 12:25:11 -0400, Manu <turkeyman at gmail.com> wrote:
>>>
>>>> You won't break every single method; they already went through that
>>>> recently when override was made a requirement.
>>>> It will only break the base declarations, which are far less numerous.
>>>>
>>>>
>>> Coming off the sidelines:
>>>
>>> 1. I think in the general case, virtual by default is fine.  In code
>>> that is not performance-critical, it's not a big deal to have virtual
>>> functions, and it's usually more useful to have them virtual.  I've
>>> experienced plenty of times with C++ where I had to go back and
>>> 'virtualize' a function.  Any time you change that, you must recompile
>>> everything, it's not a simple change.  It's painful either way.  To me,
>>> this is simply a matter of preference.  I understand that it's difficult
>>> to go from virtual to final, but in practice, breakage happens rarely,
>>> and will be loud with the new override requirements.
>>>
>>>
>> I agree that in the general case, it's 'fine', but I still don't see how
>> it's a significant advantage. I'm not sure what the loss is, but I can see
>> clear benefits to being explicit from an API point of view about what is
>> safe to override, and implicitly, how the API is intended to be used.
>> Can you see my point about general correctness? How can a class be correct
>> if everything can be overridden, but it wasn't designed for that, and has
>> certainly never been tested that way?
>>
>
> Since when is that on the base class author?  Doctor, I overrode this
> class, and it doesn't work.  Well, then don't override it :)
>

Because it wastes your time (and money). And perhaps it only fails or causes
problems in edge cases, through obscure side effects, or in internal code
that you have no ability to inspect or debug.
You have no reason to believe you're doing anything wrong; you're using the
API in a perfectly valid way... it just happens to be wrong (the author
never considered it), and it doesn't work.


> Also there is the possibility of a class that isn't designed from the
> start to be overridden.  But overriding one or two methods works, and has
> no adverse effects.  Then it is a happy accident.  And it even enables
> designs that take advantage of this default, like mock objects.  I would
> point out that in Objective-C, ALL methods are virtual, even class methods
> and properties.  It seems to work fine there.
>

Even Apple professes that Obj-C is primarily useful for UI code, and they
use C for tonnes of other stuff.
UI code is extremely low frequency by definition. I can't click my mouse
very fast ;)


What I'm really trying to say is, when final is the default, and you really
> should have made some method virtual (but didn't), then you have to pay for
> it later when you update the base class.


I recognise this, but I don't think that's necessarily a bad thing. It
forces a moment of consideration about making the change and whether it
will affect anything else. If it feels like a significant change, you'll
treat it as such (which it is).
Even though you do need to make the change, it's not a breaking change, and
you don't risk any side effects.
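
To make the asymmetry concrete, here's roughly what that later update looks
like under final-by-default, using the 'virtual' keyword proposed in this
thread (the names are invented):

    class Widget
    {
        // v1: the author never anticipated overriding, so it is final by
        // default and callers get direct calls.
        int width() { return _width; }

        // v2: a user asks for an extension point, so the author opts in
        // with the proposed keyword:
        //     virtual int width() { return _width; }
        // Existing subclasses and callers keep compiling unchanged; the
        // change is purely additive.

        private int _width;
    }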



> When virtual is the default, and you really wanted it to be final (but
> didn't do that), then you have to pay for it later when you update the base
> class.  There is no way that is advantageous to *everyone*.
>

But unlike the first situation, this is a breaking change. If you are not
the only user of your library, then this can't be done safely.
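
In current D (virtual by default), the same optimisation going the other
direction is a compile-time break for anyone who already overrode the
method, for example:

    class Base
    {
        int cost() { return 1; }            // implicitly virtual today
    }

    class UserDerived : Base                // written by a downstream user
    {
        override int cost() { return 2; }   // perfectly legal today
    }

    // If the base author later declares `final int cost() { return 1; }`
    // to optimise, UserDerived stops compiling (it can no longer override
    // a final function), and the author has no way of knowing who breaks.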


>>> 2. I think your background may bias your opinions :)  We aren't all
>>> working on making lightning fast bare-metal game code.
>>>
>>>
>> Of course it does. But what I'm trying to do is show the relative merits
>> of one default vs the other. I may be biased, but I feel I've presented a
>> fair few advantages to final-by-default, and I still don't know what the
>> advantages to virtual-by-default are, other than people who don't care
>> about the matter feel it's an inconvenience to type 'virtual:'. But that
>> inconvenience is going to be forced upon one party either way, so the
>> choice needs to be based on relative merits.
>>
>
> It's advantageous to a particular style of coding.  If you know everything
> is virtual by default, then you write code expecting that.  Like mock
> objects.  Or extending a class simply to change one method, even when you
> weren't expecting that to be part of the design originally.
>

If you write code like that, then write 'virtual:'; it doesn't hurt anyone
else. The converse is not true.
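
Under the proposal, that style costs exactly one label ('virtual:' being
the proposed syntax; the names below are invented):

    class Database
    {
    virtual:    // proposed: everything below is explicitly overridable
        string query(string sql) { return "real result for " ~ sql; }
        void close() { /* release the real connection */ }
    }

    class MockDatabase : Database
    {
        override string query(string sql) { return "canned result"; }
        override void close() { }
    }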


I look at making methods final specifically for optimization.  It doesn't
> occur to me that the fact that it's overridable is a "leak" in the API,
> it's at your own peril if you want to extend a class that I didn't intend
> to be extendable.  Like changing/upgrading engine parts in a car.
>

Precisely, this highlights one of the key issues. Optimising has now become
a dangerous breaking process.
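
And the optimisation in question is nothing exotic; it's exactly the
trivial accessors that dominate most classes (an illustrative type):

    class Entity
    {
        float x() { return _x; }          // virtual: an indirect call through
                                          // the vtable, opaque to the inliner

        final float y() { return _y; }    // final: a direct call the compiler
                                          // can trivially inline

        private float _x, _y;
    }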


>>> 3. It sucks to have to finalize all but N methods.  In other words, we
>>> need a virtual *keyword* to go back to virtual-land.  Then, one can put
>>> final: at the top of the class declaration, and virtualize a few methods.
>>>  This shouldn't be allowed for final classes though.
>>>
>>>
>> The thing that irks me about that is that most classes aren't base classes,
>> and most methods are trivial accessors and properties... why cater to the
>> minority case?
>>
>
> I think it is unfair to say most classes are not base classes.  This would
> mean most classes are marked as final.  I don't think they are.  One of the
> main reasons to use classes in the first place is for extendability.
>

People rarely use the final keyword on classes, even though they could 90%
of the time.
Class hierarchies typically only extend to a certain useful depth, but
people usually leave the option to go further anyway. And the deeper the
average hierarchy, the more leaves there are, and the less drastic this
change seems in contrast.


Essentially, making virtual the default enables the *extender* to determine
> whether it's a good base class, when the original author doesn't care.
>
> I think classes fall into 3 categories:
>
> 1. Declared a base class (abstract)
> 2. Declared NOT a base class (final)
> 3. Don't care.
>
> I'd say most classes fall in category 3.  For that, I think having virtual
> by default isn't a hindrance, it's simply giving the most flexibility to
> the user.


Precisely, we're back again at the only real argument for
virtual-by-default: it'll slightly annoy some people to type 'virtual', but
that goes both ways. I don't think this supports one position or the other.


>> It also doesn't really address the problem where programmers just won't do
>> that. Libraries suffer, I'm still inventing wheels 10 years from now, and
>> I'm wasting time tracking down slip-ups.
>> What are the relative losses if it were geared the other way?
>>
>
> The losses are that if category 3 were simply always final, some other
> anti-Manu who wanted to extend everything has to contact all the original
> authors to get them to change their classes to virtual :)
>

Fine, and they'll probably be receptive, since it's not a breaking change.
Can you guess how much traction I get when I ask the author of a popular
library to remove some 'virtual' keywords from C++ code?
"Oh, we can't really do that, it could break other users!" So then we
rewrite the library.

Who has been more inconvenienced in this scenario?

Additionally, if it's the sort of library that's so polymorphic as you
suggest, then what are the chances it also uses a lot of templates, and
therefore you have the source code...
I think the type of library you describe has a MUCH higher probability of
being open-source, or that you have the source available.

BTW, did you know you can extend a base class and simply make the extension
> final, and now all the methods on that derived class become non-virtual
> calls?  Much easier to do than making the original base virtual (Note I
> haven't tested this to verify, but if not, it should be changed in the
> compiler).
>

One presumes that the library defining the base class deals with its own
base pointers internally, and as such, the functions that I may have
finalised in my code will still be called virtually in the places where it
counts.
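
In other words, the trick only helps calls whose static type is the final
derived class. A rough sketch (hypothetical names):

    class LibraryNode
    {
        void update() { /* the library's default behaviour */ }
    }

    final class MyNode : LibraryNode
    {
        override void update() { /* my behaviour */ }
    }

    void mine(MyNode n)
    {
        n.update();     // static type is a final class: the compiler is
                        // free to call (and inline) this directly
    }

    void insideLibrary(LibraryNode n)
    {
        n.update();     // the library only holds LibraryNode references,
                        // so internally this is still a virtual call
    }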


>>> My one real experience on this was with dcollections.  I had not declared
>>> anything final, and I realized I was paying a performance penalty for it.
>>> I then made all the classes final, and nobody complained.
>>>
>>>
>> The userbase of a library will grow with time. Andrei wants a million D
>> users; that's a lot more opportunities to break people's code and gather
>> complaints.
>> Surely it's best to consider these sorts of changes sooner rather than later?
>>
>
> I think it vastly depends on the intent of the code.  If your classes
> simply don't lend themselves to extending, then making them final is a
> non-issue.
>
>
>> And where is the most likely source of those 1 million new users to
>> migrate from? Java?
>>
>
> From all over the place, I would say.  D seems to be an island of misfit
> programmers.
>
> -Steve
>