Slow performance compared to C++, ideas?

Manu turkeyman at gmail.com
Tue Jun 4 01:36:02 PDT 2013


On 4 June 2013 15:22, Andrei Alexandrescu <SeeWebsiteForEmail at erdani.org> wrote:

> On 6/4/13 12:53 AM, Manu wrote:
>
>> I don't buy the flexibility argument as a plus. I think that's a
>> mistake, but I granted that's a value judgement.
>>
>
> Great.


That doesn't mean it's wrong, just that there are other opinions.

>> But it's a breaking change to the API no matter which way you slice it,
>> and I suspect this will be the prevalent pattern.
>> So it basically commits to a future of endless breaking changes when
>> someone wants to tighten up the performance of their library, and
>> typically only after it has had time in the wild to identify the problem.
>>
>
> You're framing the matter all wrongly. Changing a method from virtual to
> final breaks the code of people who chose to override it - i.e. EXACTLY
> those folks who found it useful to TAP into the FLEXIBILITY of the design.
>
> Do you understand how you are wrong about this particular little thing?


Well first, there's a very high probability that the number of people in that
group is precisely zero, but since you can't know the size of your audience,
library devs will almost always act conservatively on that matter.
In the alternate universe, those folks who really want to extend the class in
unexpected ways may need to contact the author and request the change.
Unlike the situation where I need to ask for a method to be made final (where
the request will probably be rejected), the author will either give them
advice about a better solution, or will probably be happy to help and make the
change, since it's not a breaking change and there's no risk of collateral
damage.
There's a nice side-effect that comes from the inconvenience too: the author
now has more information from his customers about how his library is being
used, and can factor that into future thought/design.

Surely you can see this point, right?
Going virtual is a one-way change.

>> Situation: I have a closed source library I want to use. I test and find
>> that it doesn't meet our requirements for some trivial matter like
>> performance (super common, I assure you).
>> The author is not responsive, possibly because it would be a potentially
>> breaking change to all the other customers of the library, I've now
>> wasted a month of production time in discussions in an already tight
>> schedule, and I begin the process of re-inventing the wheel.
>> I've spent 10 years repeating this pattern. It will still be present
>> with final-by-default, but it will be MUCH WORSE with
>> virtual-by-default. I don't want to step backwards on this front.
>>
>
> Situation: I have a closed source library I want to use. I test and find
> that it doesn't meet our requirements for some trivial matter like the
> behavior of a few methods (super common, I assure you).
>
> The author is not responsive, possibly because it would be a potentially
> breaking change to all the other customers of the library, I've now wasted
> a month of production time in discussions in an already tight schedule, and
> I begin the process of re-inventing the wheel.
> I've spent 10 years repeating this pattern. It will still be present with
> virtual-by-default, but it will be MUCH WORSE with final-by-default. I
> don't want to step backwards on this front.
>
> Destroyed?


What? I don't really know what you're saying here, other than mocking me
and trivialising the issue.
This is a very real and long-term problem.

>> Even with C++ final-by-default, we've had to avoid libraries because C++
>> developers can be virtual-tastic sticking it on everything.
>>
>
> Oh, so now the default doesn't matter. The amount of self-destruction is
> high in this post.


No, you're twisting my words and subverting my point. I'm saying that
virtual-by-default will _make the problem much worse_. It's already enough of
a problem.
Where once there might have been one or two important methods that couldn't be
used inside a loop, now we can't even call 'thing.length' or 'entity.get',
which look completely benign but are virtual accessors.
This extends the problem into the realm of the most trivial of loops, and the
most basic interactions with the class in question.
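
To make the 'thing.length' case concrete, here's a minimal sketch in today's
D (virtual by default), using hypothetical names:

    class Entity
    {
        private float[] payload;

        // Virtual by default in today's D: every call through an Entity
        // reference is an indirect vtable call and can't be inlined.
        float get(size_t i) { return payload[i]; }

        // 'final' removes the indirection, so the optimiser can inline it.
        final size_t length() { return payload.length; }
    }

    float sum(Entity e)
    {
        float total = 0;
        foreach (i; 0 .. e.length)   // direct call: length is final
            total += e.get(i);       // indirect call on every iteration
        return total;
    }

Unless the author remembered to write 'final', even the benign-looking
accessor carries that indirect-call cost on every iteration.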

The point of my comment is to demonstrate that it's a REAL problem that
does happen, and under the virtual-by-default standard, it will become much
worse.

>> D will magnify this issue immensely with virtual-by-default.
>>
>
> It will also magnify the flexibility benefits.


And this (dubious) point alone is compelling enough to negate everything
I've presented?

Tell me honestly, when was the last time you were working with a C++ class,
and you wanted to override a method that the author didn't mark virtual?
Has that ever happened to you?
It's never happened to me in 15 years. So is there a real loss of
flexibility, or just a theoretical one?

>> At least in
>> C++, nobody ever writes virtual on trivial accessors.
>> virtual accessors/properties will likely eliminate many more libraries
>> on the spot for being used in high frequency situations.
>>
>
> I don't think a "high frequency situation" would use classes designed
> naively. Again, the kind of persona you are discussing are very weird chaps.


No, it wouldn't, but everyone needs to make use of 3rd-party code.
And even internal code is prone to forgetfulness and mistakes, as I've said
countless times, which cost time and money to find and fix.

>> Again, refer to Steven's pattern. Methods will almost always be virtual
>> in D (because the author didn't care), until someone flags the issue
>> years later... and then can it realistically be changed? Is it too late?
>> Conversely, if virtual needs to be added at a later time, there are no
>> such nasty side effects. It is always safe.
>>
>
> Again:
>
> - changing a method final -> overridable is nonbreaking. YOU ARE RIGHT
> HERE.
>
> - changing a method overridable -> final will break PRECISELY code that
> was finding that design choice USEFUL. YOU SEEM TO BE MISSING THIS.
>

No, it won't break, because the overriding code wouldn't exist in the first
place; the function was never virtual.
I realise I've eliminated a (potentially dangerous) application of the class,
but the author is more than welcome to use 'virtual:' if that flexibility is
important to them.
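
A small sketch of the asymmetry, in current D syntax and with hypothetical
names (not code from any real library):

    // Library, version 1: the author marked the method final.
    class Widget
    {
        final int size() { return 42; }
    }

    // Existing clients only call it:
    int use(Widget w) { return w.size(); }

    // If the author later removes 'final' (final -> overridable), use()
    // still compiles and behaves identically: a non-breaking change.
    //
    // In the other direction, if size() had started out virtual and some
    // client had written:
    //
    //     class BigWidget : Widget
    //     {
    //         override int size() { return 1000; }
    //     }
    //
    // then adding 'final' afterwards breaks exactly that client, which is
    // why authors are so reluctant to tighten things up once the library is
    // in the wild.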

I also think that saying people might want to override something is purely
theoretical; I've certainly never encountered a problem of this sort in C++.
In my opinion, C++ users tend to over-use virtual if anything, and I expect
that practice would continue unchanged.

And you've missed (or at least not addressed) why I actually think this is a
positive.
To repeat myself: I think this sort of code is much more likely to be open
source, and much more likely to contain templates (in which case the source is
available anyway). Beyond those points, it's also of some benefit for the user
who wants to bend an object in an unexpected direction to have some contact
with the author. The author will surely have an opinion on the new usage
pattern, will now know the library is being used in a previously unexpected
way, and can consider that user-base in the future.

Again, both are still possible. But which should be the DEFAULT?
Which is a more dangerous default?

>>         And I argue the subjective opinion, that code can't possibly be
>>         correct
>>         if the author never considered how the API may be used outside his
>>         design premise, and can never test it.
>>
>>
>>     I think you are wrong in thinking traditional procedural testing
>>     methods should apply to OOP designs. I can see how that fails indeed.
>>
>>
>> Can you elaborate?
>> And can you convince me that an author of a class that can be
>> transformed/abused in any way that he may have never even considered,
>> can realistically reason about how to design his class well without
>> being explicit about virtuals?
>>
>
> I can try. You don't understand at least this aspect of OOP (honest
> affirmation, not intended to offend). If class A chooses to inherit class
> B, it shouldn't do so to reuse B, but to be reused by code that manipulates
> Bs. In a nutshell: "inherit not to reuse, but to be reused". I hope this
> link works: http://goo.gl/ntRrt


I understand the scripture, but I don't buy it outright. In practice, people
derive to 'reuse' just as often as (or even more often than) they derive to be
reused.
APIs are often designed to be used by deriving and implementing some little
thing. Java is probably the most guilty of this pattern I've ever seen: you
typically need to derive a class to do something trivial like provide a
delegate.
I'm not suggesting it should be that way, just that it's often not that way in
practice.
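
To illustrate the delegate case with a hedged sketch (hypothetical names,
written as D since Java isn't the point): the listener shape forces the caller
to derive a class just to hand over one piece of behaviour, whereas a delegate
parameter needs no inheritance at all.

    // The Java-ish shape: the API is 'used' by deriving from it.
    abstract class ClickListener
    {
        abstract void onClick(int x, int y);
    }

    void register(ClickListener l) { /* store it somewhere */ }

    class MyListener : ClickListener
    {
        override void onClick(int x, int y) { /* react */ }
    }

    // The same thing without inheritance: just pass a delegate.
    void registerCallback(void delegate(int x, int y) cb) { /* store it */ }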

And regardless, I don't see how default virtual-ness interferes with the
reuse of A in any way. Why do these principles require that EVERYTHING be
virtual?

> (If all A wants is to reuse B, it just uses composition.)
>
> You should agree as a simple matter that there's no reasonable way one can
> design a software library that would be transformed, abused, and misused.
> Although class designers should definitely design to make good use easy and
> bad use difficult, they routinely are unable to predict all different ways
> in which clients would use the class, so designing with flexibility in mind
> is the safest route (unless concerns for performance overrides that). Your
> concern with performance overrides that for flexibility, and that's
> entirely fine. What I disagree with is that you believe what's best for
> everybody.


D usually has quite an obsession with correctness, so how can it be safe to
encourage the use of classes in ways they were never designed or considered
for? Outside of the simplest of classes, I can't imagine any designer can
consider all possibilities; they will have had a very specific usage pattern
in mind. At best, your 'creative' application won't have been tested.
As new usage scenarios develop, it's useful for the author to know about them
and consider them in future.

But this isn't a rule, only a default (in this case, paranoid safety first, a
typical pattern for D). A class that wants to offer the flexibility you desire
can easily use 'virtual:', or, if the author is sufficiently confident that
any part can be safely extended, they're perfectly welcome to make everything
virtual. There's no loss of possibility; it's just that the default would
offer some more confidence that your usage of a given API is correct: you'll
get a compile error if you go beyond the author's intent.
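
For what it's worth, today's D already offers the one-line opt-out in the
other direction via a 'final:' label; the proposal simply mirrors it. A
minimal sketch with a hypothetical class ('virtual' as a keyword is only the
proposed syntax, not current D):

    class RenderNode
    {
        private int w, h;

    final:                        // everything after this label is non-overridable
        int width()  { return w; }
        int height() { return h; }
    }

    // Under final-by-default, the mirror image for a deliberately extensible
    // class would be:
    //
    //     class FlexibleNode
    //     {
    //     virtual:              // proposed keyword; not valid in current D
    //         ...
    //     }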

>> I've made the point before that the sorts of super-polymorphic classes
>> that might have mostly-virtuals are foundational classes, written once
>> and used many times.
>>
>
> I don't know what a super-polymorphic class is, and google fails to list
> it: http://goo.gl/i53hS
>
>
>> These are not the classes that programmers sitting at their desk are
>> banging out day after day. This are not the common case. Such a
>> carefully designed and engineered base class can afford a moment to type
>> 'virtual:' at the top.
>>
>
> I won't believe this just because you said it (inventing terminology in
> the process), it doesn't rhyme with my experience, so do you have any
> factual evidence to back that up?


It's very frustrating working with proprietary code; I can't paste a class
diagram or anything, but I'm sure you've seen a class diagram before.
You understand that classes have a many:1 relationship with their base class?
So logically, for every one day spent writing a base, there are 'many' days
spent working on specialisations. So which is the common case?