Slow performance compared to C++, ideas?

Manu turkeyman at gmail.com
Tue Jun 4 04:15:13 PDT 2013


On 4 June 2013 15:58, Andrei Alexandrescu <SeeWebsiteForEmail at erdani.org> wrote:

> On 6/4/13 1:16 AM, Manu wrote:
>
>> But unlike the first situation, this is a breaking change. If you are
>> not the only user of your library, then this can't be done safely.
>>
>
> Same fallacy all over again, for the third time in this thread. You keep
> on going about "breaking change" without recognizing that the now broken
> code was happily taking advantage of the very flexibility that you argue
> was useless and needed fixing.
>

And the same fallacious response.
The code you refer to wouldn't exist, because it couldn't have been written
in the first place: what they have allegedly overridden isn't virtual.
Nobody's code can break.
They may have lost an opportunity to twist some code into an unexpected use
case, but chances are it wasn't their only possible solution, and it's quite
possible it would even have been dangerous.

> This is exactly the kind of argumentation that I disagree with,
> philosophically. Instead of carefully pondering the goods and the bads in a
> complex web of tradeoffs, it just clumsily gropes for the desired
> conclusion in ignorance of all that doesn't fit.


And this pisses me off, because you're implying that it's okay for you to
disagree on some points in principle, but not for anyone else. I've clearly
(and repeatedly) given my reasoning, and you haven't responded to much of
it head-on.
I don't agree the sacrifice is anywhere near as significant as you suggest;
other than a breakage, which we can quantify, it's entirely theoretical for
a start, whereas my points are all drawn from a decade of experience that I
don't want to see get worse in the future.
I've also acknowledged that these are my opinions, but you can't say that I
haven't put any thought into my position, or that I'm 'clumsily groping for
desired conclusions' out of ignorance. That's just plain offensive.

If I'm found to be wrong by the majority here, that will become evident
soon enough.

>> If you write code like that, then write 'virtual:', it doesn't hurt
>> anyone else. The converse is not true.
>>
>
> Fourth.
>
>      I look at making methods final specifically for optimization.  It
>>     doesn't occur to me that the fact that it's overridable is a "leak"
>>     in the API, it's at your own peril if you want to extend a class
>>     that I didn't intend to be extendable.  Like changing/upgrading
>>     engine parts in a car.
>>
>>
>> Precisely, this highlights one of the key issues. Optimising has now
>> become a dangerous breaking process.
>>
>
> Fifth.
>
> [snip]
>
> Allow me to summarize my understanding of the most valuable parts your
> argument.
>
> * At the top level you believe ultimate efficiency should be default and
> OOP flexibility should be opt-in.
>
> * Classes routinely should make most methods final because it's hard to
> imagine why one would override all but a few. Since those are a minority,
> it's so much the better to make final the default.
>
> * Even the most performance-conscious people would not care to annotate
> classes and methods. It just doesn't happen. In contrast, people who want
> flexibility will annotate things for the simple reason they have to,
> otherwise overriding won't work.
>
> * You don't consider it a problem that one must go back to base classes
> and change methods from final to overridable, whenever such a need arises.
>
> (It would be awesome if you had a similar list with the opposite
> arguments.)
>
> If the above is an accurate summary, I'd say it's a matter in which
> reasonable people might disagree. I take issue with each of the points
> above (not flat out disagree with each, more like amend and qualify etc).
>
> Unless fresh arguments, facts, or perspectives come about, I am personally
> not convinced, based on this thread so far, that we should operate a
> language change.



I'll summarise my arguments, though I've done this at least three times now.
Sorry, I 'value' more of my points than you do, so my summary is quite a bit
longer.
These are all supporting reasons why I think it would be a good change, and
naturally, some are of lower significance than others.
I'd like to think that most of them would have to be objectively rejected, or
the counter-arguments list would have to grow a whole lot, to justify the
insults you offer:

* At the top level, I believe D aspires to be a systems language, and
performance should certainly be a key concern.
  - 'Flexibility' [at the expense of performance] should be opt-in; as the
default, it comes at the expense of what I presume should be a core audience
for a systems language.
  - x86 is the most tolerant architecture _by far_ where this sort of
indirection is concerned, and we're committing to a cost whose magnitude
isn't even known yet on the vast majority of computers in the world.

* virtual is a one-way trip. It can't be undone without risking breaking
code once it's released to the wild. How can that state be a sensible
default? (A minimal sketch of the problem follows below.)
  - It cannot be undone by the compiler/linker like it can in other
(dynamic) languages. No sufficiently smart compiler can ever address this
problem as an optimisation.
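
To make that concrete, here's a minimal sketch in today's D; Widget,
FancyWidget and draw() are hypothetical names, not anything from this
thread:

// Library v1 as shipped: draw() is virtual by default in today's D.
class Widget
{
    void draw() { /* default rendering */ }
}

// Some downstream user overrides it -- perfectly legal, whether or not
// the library author ever intended draw() to be a customisation point.
class FancyWidget : Widget
{
    override void draw() { /* custom rendering */ }
}

void main()
{
    Widget w = new FancyWidget();
    w.draw(); // dispatched through the vtable to FancyWidget.draw
}

// Library v2: if the author later declares `final void draw()` as an
// optimisation, FancyWidget stops compiling, because a final method can't
// be overridden. Once shipped as virtual, it's effectively stuck that way.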

* Said performance concern has a real cost in time and money for at least
one of D's core audiences (realtime systems programming).
  - I don't believe the converse case, final-by-default, would present any
comparable loss for users who want to write 'virtual:' at the top of their
class.
  - 'Opportunistic de-virtualisation' is a time-consuming and tedious
process, and it tends to come up only during crunch times.

* "Classes routinely should make most methods final because it's hard to
imagine why one would override [the intended] few. Since those are a
minority, it's so much the better to make final the default."
  - The majority of classes are leaf's, and there's no reason for leaf
methods to be virtual by default. Likewise, most methods are trivial
accessors (the most costly) which have no business being virtual either.
  - It's also self-documenting. It makes it clear to a customer how the API
is to be used.
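
As an illustration of the accessor point, a hypothetical sketch in today's
D (Sprite and its members are made-up names):

class Sprite
{
    private float x_ = 0;

    // Virtual by default in today's D: calls through a Sprite reference
    // are indirect and generally can't be inlined, because a subclass
    // somewhere might override this one-line getter.
    float x() { return x_; }

    // Explicitly opting out: a direct call the compiler is free to inline.
    final float scale() { return 1; }
}

// In a hot loop the cost is per element: an indirect call (and a lost
// inlining opportunity) for every sprite, all for a trivial accessor.
float sumX(Sprite[] sprites)
{
    float total = 0;
    foreach (s; sprites)
        total += s.x();
    return total;
}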

* Libraries written in D should hope to be made available to the widest
audience possible.
  - Library authors certainly don't consider (or care about) everyone's
use cases, but they often write useful code that many people want to make
use of. This is the definition of a library.
  - They are almost certainly not going to annotate their classes with lots
of 'final'.
  - Given hard experience, when asked to revoke virtual, even if authors
agree in principle, they will refuse to do it given the risk of breakage
for unknown customers.
  - Adding final as an optimisation is almost always done post-release, so
it will almost always run the risk of breaking someone's code somewhere.

* Experience has shown that programmers coming from C++/C# don't annotate
'final' even when they know they should. Users coming from Java don't do it
either, but mainly because they don't consider it important.
  - Note: 'the most performance-conscious users' that you refer to are
often not the ones writing the code. Programmers work in teams, sometimes
those teams are large, and many programmers are inexperienced.

* final-by-default promotes awareness of virtual-ness and its associated
costs.
  - If it's hidden, it will soon be forgotten or dismissed as a trivial
detail. It's not... at least, not in a systems language that attracts
programmers writing high-frequency code.

* 'Flexibility' may actually be a fallacy anyway. I personally like the
idea of requiring an explicit change to 'virtual' in the base class when a
new and untested usage pattern is to be exploited; it gives me confidence.
  - People are usually pretty permissive when marking functions virtual in
C++, and people like to consider many possibilities.
    - When was the last time you wanted to override a function in C++, but
the author hadn't marked it virtual? Is there actually a reduction in
flexibility in practice? Is this actually a frequent reality?
  - Overriding unintended functions may lead to dangerous behaviours never
considered by the author in the first place (see the sketch below).
    - How can I be confident in an API when I know the author couldn't
possibly have tested all the obscure possibilities available? And how can I
know the extent of their consideration of usage scenarios when authoring
the class?
      - At best, my obscure use case has never been tested.
    - 'virtual' is self-documenting, succinctly communicating the author's
design/intent.
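
A hypothetical sketch of that danger (Cache, clear() and reset() are
made-up names): the author never intended reset() to be a customisation
point, but in today's D it is overridable anyway.

class Cache
{
    private int entries;

    // The author's contract: after clear(), the cache is empty and the
    // internal bookkeeping has been redone.
    void clear()
    {
        entries = 0;
        reset(); // relies on reset() doing that bookkeeping
    }

    // Meant as an internal step, but virtual by default, so it is
    // silently part of the overridable surface of the class.
    void reset() { /* rebuild internal state */ }
}

class CreativeCache : Cache
{
    // An override the author never considered or tested; it quietly
    // skips the bookkeeping that clear() depends on.
    override void reset() { }
}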

* Bonus: improved interoperability with C++, which I will certainly
appreciate, but this point actually came from the D-DFE guys at DConf.


And I'll summarise my perception of the counter-arguments:

* It's a breaking change.

* 'Flexibility': someone somewhere might want to make use of a class in a
creative way that wasn't intended or tested. They shouldn't be prohibited
from this practice _by default_, in principle.
  - They would have to contact the author to request that a method be made
virtual, in the unlikely event that the source isn't available and they
want to use it in some obscure fashion the author never considered.
    - Note: this point exists on both sides, but on this side the author is
likely to be accommodating to their requests.
  - Authors would have to write 'virtual:' if they want to offer this style
of fully extensible class; a sketch of that opt-in follows below.
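
For reference, a sketch of the annotation burden on each side. The 'final:'
form compiles in today's D; the 'virtual:' form is only the proposed
spelling of the opt-in, not current syntax.

// Today's D: a performance-conscious author has to opt out explicitly,
// e.g. with a `final:` label covering the rest of the class.
class TodayWidget
{
final:
    void update() { }
    float width() { return 0; }
}

// Under final-by-default the situation flips, and an author who wants a
// fully extensible class opts back in instead (sketch only; `virtual` is
// not a keyword in today's D):
//
//     class ProposedWidget
//     {
//     virtual:
//         void update() { }
//         float width() { return 0; }
//     }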