Slow performance compared to C++, ideas?

Manu turkeyman at gmail.com
Sun Jun 2 06:59:37 PDT 2013


On 2 June 2013 19:53, Jacob Carlborg <doob at me.com> wrote:

> On 2013-06-01 23:08, Jonathan M Davis wrote:
>
>> If you don't need polymorphism, then in general, you shouldn't use a
>> class (though sometimes it might make sense simply because it's an easy
>> way to get a reference type). Where it becomes more of a problem is
>> when you need a few polymorphic functions and a lot of non-polymorphic
>> functions (e.g. when a class has a few methods which get overridden and
>> then a lot of properties which it makes no sense to override). In that
>> case, you have to use a class, and then you have to mark a lot of
>> functions as final. This is what folks like Manu and Don really don't
>> like, particularly when they're in environments where the extra cost of
>> the virtual function calls actually matters.
>
> If a reference type is needed but not a polymorphic type, then a final
> class can be used.
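
Sure, and where reference semantics are all you need, that works. A
minimal sketch (Handle is just a made-up type for illustration):

    // Reference semantics without polymorphism. 'final' on the class
    // means nothing can derive from it, so the compiler is free to make
    // direct (non-virtual) calls to its methods.
    final class Handle
    {
        private int id;

        this(int id) { this.id = id; }

        int getId() { return id; }
        void setId(int newId) { id = newId; }
    }

    void main()
    {
        auto a = new Handle(1);
        auto b = a;              // reference copy; both refer to one object
        b.setId(2);
        assert(a.getId() == 2);
    }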


I've never said that virtuals are bad. The key function of a class is
polymorphism.
But the reality is that outside of tool or container/foundational classes
(which are typically write-once, use-lots; you don't tend to write those
daily), a typical class has a couple of virtuals and a whole bunch of
properties. The majority of functions in OO code (in the sorts of classes
you write daily, i.e. the common case) are trivial accessors or properties.
The cost of forgetting to type 'final' is severe, particularly on a
property, and there is absolutely nothing the compiler can do to help you.
Nor is there any reasonable warning it could offer; it must presume you
intended the method to be virtual.

Coders from at least C++ and C# are trained by habit to type 'virtual'
explicitly, so they will forget to write 'final' all the time.
I can tell you from hard experience that, despite being trained to write
'final', programmers have actually done so precisely ZERO TIMES EVER.
People don't just forget the odd 'final' here or there; in practice,
they've never written it yet.

The problem really goes pear-shaped when faced with the task of
opportunistic de-virtualisation - that is, de-virtualising functions that
should never have been virtual in the first place, but are; perhaps because
the code has changed/evolved, but more likely because uni tends to produce
programmers who are obsessed with the possibility of overriding everything
;)
It becomes even worse than what we already have in C++, because now, in D,
I have to consider every single method and manually attempt to determine
whether it should actually be virtual or not - a time-consuming and
certainly dangerous/error-prone task when I didn't author the code! In C++
I can at least identify the methods I need to consider by searching for
'virtual', which by contrast saves maybe 80% of that horrible task.
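
A contrived sketch of what that audit looks like (Widget is invented for
illustration):

    // Imagine inheriting this from another author. Every method is
    // virtual unless proven otherwise, so each one is a separate manual
    // investigation before 'final' can safely be added:
    class Widget
    {
        void draw() {}               // virtual - intended? who knows
        int width()  { return 0; }   // virtual - almost certainly not meant
        int height() { return 0; }   // virtual - ditto
    }
    // The equivalent audit in C++ starts with a grep for 'virtual';
    // here there is no keyword to search for.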

But there are other compelling reasons too. For instance, in conversations
with Daniel Murphy and others, it was noted that explicit 'virtual' will
enhance interoperation with C++ (a key target for improvement flagged
recently), and further enable the D DFE.
I also think that, with explicit 'override' already a requirement, it's
rather non-orthogonal not to require explicit 'virtual' too.
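
That is, as things stand (a trivial sketch):

    class Base
    {
        // D has no 'virtual' keyword; this is virtual implicitly...
        void foo() {}
    }

    class Derived : Base
    {
        // ...yet this end of the relationship must be spelled out.
        override void foo() {}
    }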

So, consider this reasonably. Don and I, at least, have both made strong
claims to this end... and we're keen to pay for it by fixing the broken
base classes.
Is it REALLY that much of an inconvenience to be explicit with 'virtual'
(as people already are with 'override')?
And is catering to that inconvenience worth the demonstrable cost? I'm not
talking about minor nuisance; I'm talking about time and money.