Broken?

Manu turkeyman at gmail.com
Wed Mar 12 12:38:51 PDT 2014


On 13 March 2014 03:47, Andrei Alexandrescu
<SeeWebsiteForEmail at erdani.org> wrote:

> On 3/11/14, 8:04 PM, Manu wrote:
>
>> I'm really trying to keep my lid on here...
>>
>
> Yep, here we go again :o).


*sigh*

>> I'll just remind that in regard to this particular point which sounds
>> reasonable, it's easy to forget that *all library code where the author
>> didn't care* is now unusable by anybody who does. The converse would not
>> be true if the situation was reversed.
>>
>
> There's an asymmetry introduced by the fact that there's code in use today.


Do you think the deprecation path is particularly disruptive? It can be
implemented over a reasonably long time.

>> virtual-by-default is incompatible with optimisation, and it's reliable
>> to assume that anybody who doesn't explicitly care about this will stick
>> with the default, which means many potentially useful libraries may be
>> eliminated for use by many customers.
>>
>
> Virtual by default is, however, compatible with customization and
> flexibility.
>

I completely disagree with the everything-should-be-virtual idea in
principle; I think it's dangerous and irresponsible API design, but that's
my opinion.
Whether you are into that or not, I see it as a design decision, and I
think it's reasonable to make that decision explicit by typing 'virtual:'.

What's not subjective is that the optimiser can't optimise virtual-by-default:
it can't devirtualise or inline a call unless it can prove the exact type of
the receiver, and across library boundaries it almost never can. That's just a
fact, and one which I care about deeply.
I think it's also statistically reliable that people will stick with the
default in almost all cases.
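
To make that concrete, here's a minimal sketch (Widget is a made-up class,
not from any real library) of the choice a D author faces today:

    class Widget
    {
        int w, h;

        // Virtual by default: an indirect call through the vtable whenever
        // the compiler can't prove the exact type of the object.
        int area() { return w * h; }

        // Explicitly final: the target is known at compile time, so the
        // call can be direct and is a candidate for inlining.
        final int perimeter() { return 2 * (w + h); }
    }

    int measure(Widget wg)
    {
        // area(): indirect call; perimeter(): direct call, inlinable.
        return wg.area() + wg.perimeter();
    }

The only difference between the two is that the author remembered to type
'final' on one of them.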

Unstated assumption: "many potential useful libraries" assumes many
> libraries use traditional OO design in their core components.
>

In my experience, physics, sound, scene graphs... these sorts of things are
common libraries, and also heavy users of OO. Each of them is broken into
small pieces implemented as many objects.
If people then make use of properties (which are virtual by default, like any
other method), we're in a situation that's much worse than what we already
struggle with in C++.
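
A trivial sketch of the kind of code I mean (Node is hypothetical, but the
shape is typical) - the sort of accessor that gets hit per object, per frame:

    // Written the way a PC-focused author would naturally write it.
    class Node
    {
        private float[3] pos;

        // Virtual by default, so even this trivial accessor is an indirect
        // call that can't be inlined at the call site.
        @property float[3] position() { return pos; }
    }

    float accumulate(Node[] nodes)
    {
        float total = 0;
        foreach (n; nodes)
        {
            // One vtable call per node, per frame, just to read a field.
            auto p = n.position;
            total += p[0];
        }
        return total;
    }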

Unstated assumption: "many customers".


Do I need to quantify?

I work in a gigantic industry. You might call it niche, but it's really,
really big.
I often attend an annual conference called GDC, which attracts tens of
thousands of developers each year. It's probably the biggest software
developer conference in the world.
A constantly recurring theme at those conferences is low-level performance
on embedded hardware, and specifically, the mistakes that PC developers
make when first moving to embedded architectures.
There's a massive audience for these topics, because everyone is suffering
the same problems. Virtual dispatch is one of the most expensive hazards,
just not on x86.
Most computers in the world today don't run x86 processors.

>> Also, as discussed at length, revoking virtual from a function is a
>> breaking change, adding virtual is not.
>>
>
> Changing the default is a breaking change.


Yes, but there is an opportunity here for a smooth transition that eliminates
the problem, rather than committing to recurring library breakage in the
future whenever anyone wants to optimise in this way.
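
To spell out the breakage I mean, here's a minimal sketch (Stream and
LoggingStream are made-up names, not anything real):

    // Library, version 1: the author never thought about virtual at all.
    class Stream
    {
        void write(const(ubyte)[] data) { /* ... */ }  // virtual by default
    }

    // Client code, written against version 1. Perfectly legal.
    class LoggingStream : Stream
    {
        override void write(const(ubyte)[] data)
        {
            // log the write, then forward it
            super.write(data);
        }
    }

    // Library, version 2: a performance-minded customer shows up, so the
    // author marks write() final to get direct calls. LoggingStream no
    // longer compiles, and the vtable layout changes - an API and an ABI
    // break. Going the other way (a non-virtual function becoming virtual)
    // breaks nobody.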

>> Which means that, instead of
>> making a controlled breaking change with a clear migration path here and
>> now, we are committing every single instance of any user's intent to
>> 'optimise' their libraries (by finalising unnecessary virtuals) to
>> breaking changes in their ABI - which *will* occur, since virtual is the
>> default.
>>
>
> Unstated assumption: "every single instance" assumes again that people
> interested in writing fast libraries have virtual calls as a major
> bottleneck, and furthermore they didn't care about speed to start with, to
> wake up later. This pictures library designers as quite incompetent people.


YES! This is absolutely my professional experience! I've repeated this many
times.
Many (if not most) libraries I've wanted to use in the past were written for
a PC, with rarely any real consideration for low-level performance.
Those that are tested for cross-compiling were often _originally_ written for
a PC; the API is architecturally predisposed to poor performance.

This is precisely the sort of thing that library authors don't care about
until some subset of customers comes along that does. At that point, they are
faced with a conundrum: break the API, or ignore the minority - which can
often take years to resolve, meanwhile buggering up our schedule or wasting
our time reinventing some wheel.
PC programmers are careless programmers on average. Because x86 is by far the
most tolerant architecture WRT low-level performance, unless library authors
actively test their software on a wide variety of machines, they have no real
basis for judging their code.

>> According to semantic versioning, this requires bumping the major
>> version number... that's horrible!
>>
>
> Appeal to emotion.


Bumping major version numbers is not an emotional expression. People take
semantic versioning very seriously.

>> What's better: implementing a controlled deprecation path now, or
>> leaving it up to any project that ever uses the 'class' keyword to
>> eventually confront breaking changes in their API when they encounter a
>> performance oriented customer?
>>
>
> It's better to leave things be. All I see is the same anecdote getting
> vividly told again whenever the topic comes up.


Whatever.