Faster Virtual Method Dispatch

Craig Black cblack at ara.com
Mon Apr 24 21:23:37 PDT 2006


"Walter Bright" <newshound at digitalmars.com> wrote in message 
news:e2k12i$19bi$1 at digitaldaemon.com...
> Craig Black wrote:
>> "BCS" <BCS_member at pathlink.com> wrote in message 
>> news:e2isuc$2jmd$1 at digitaldaemon.com...
>>> In article <e2inmo$2bjp$1 at digitaldaemon.com>, Craig Black says...
>>>> "Dan" <Dan_member at pathlink.com> wrote in message
>>>> news:e2im9o$29gi$1 at digitaldaemon.com...
>>>>> Ultimately though, I think the GC causes the highest level of
>>>>> performance troubles on the benchmarks.  I love it to death though
>>>>> and wish someone would get their hands dirty making it smaller and
>>>>> sexier.
>>>> Or (truly) optional. :)  The notion that the GC is optional in D is
>>>> really misleading.  Everything uses it.  While you could certainly
>>>> make it faster at the expense of more complexity, I don't think a GC
>>>> that scans the stack testing for non-null pointers will ever provide
>>>> the performance of C's malloc and free.  GC is simply performing
>>>> much more work.  I have heard many arguments to the contrary, but I
>>>> still don't see how it is possible that GC could compete with malloc
>>>> and free.
>>>>
>>>
>>> A bad GC will definitely make for bad performance, and any GC will
>>> be slower than an optimum malloc/free solution.  However (and I'm
>>> still undecided on this one) I think the argument is that in any
>>> non-trivial project, manually freeing all of the memory without any
>>> memory leaks or axing something too soon would require more overhead
>>> than using GC.  The trade-off is not in the freeing time but in the
>>> can-I-free-this check.
>>
>> I have heard similar.  "The code required to manually free everything
>> outweighs the overhead of automatic GC."  Unless you are doing
>> something ridiculous in your code when freeing heap objects, this
>> statement is false.  Manual freeing does not require gobs of code or
>> eat up lots of processor cycles.  Typically it involves testing a
>> pointer to see if it is null, calling free() if it is not, and then
>> setting it to null.  How could this ever be slower than scanning the
>> entire stack for non-null pointers?  GC is simply doing way more work.
>> IMO this statement is simply absurd.
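
[For concreteness, the idiom described above is only a few lines; a
minimal C++ sketch, with made-up names:]

    #include <cstdlib>

    struct Node { int value; };

    int main()
    {
        Node* node = static_cast<Node*>(std::malloc(sizeof(Node)));
        if (node == NULL)
            return 1;
        node->value = 42;

        // Manual freeing: test for null, free, then null the pointer
        // so a second pass over the same pointer is harmless.
        if (node != NULL)
        {
            std::free(node);
            node = NULL;
        }
        return 0;
    }
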
>
> There's more to it than that. One of the big benefits of GC is that
> fewer allocations are necessary - and a sure way to better allocation
> performance is to do fewer of them. The reason fewer allocations are
> necessary is that the normal behavior in C/C++, when faced with two
> pointers to the same object, is to copy the object. That way, there's
> no grief with worrying about freeing the object that someone else
> points to.
>
> To see this effect at work, see www.digitalmars.com/d/cppstrings.html.
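
[A sketch of the two defaults Walter describes, with made-up names.
Under manual management the safe default is a deep copy; under a GC the
same routine could simply share the pointer:]

    #include <cstdlib>
    #include <cstring>

    // Manual management: the callee can't know who will free the
    // buffer, so the safe default is to take its own deep copy.
    char* keep_word_manual(const char* w)
    {
        char* copy = static_cast<char*>(std::malloc(std::strlen(w) + 1));
        if (copy != NULL)
            std::strcpy(copy, w);
        return copy;  // someone must remember to free() this later
    }

    // Under a GC the same routine could just store the original
    // pointer: no copy and no ownership bookkeeping, because the
    // collector frees the buffer once nobody points at it.
    const char* keep_word_gc(const char* w)
    {
        return w;
    }

    int main()
    {
        const char* word = "hello";
        char* owned = keep_word_manual(word);
        const char* shared = keep_word_gc(word);
        std::free(owned);  // the manual side must remember this
        (void)shared;      // the GC side would need no cleanup
        return 0;
    }
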
>
> This problem is bad enough that the notion of shared_ptr<> has appeared. 
> shared_ptr<> implements reference counting to eliminate the need for extra 
> copies. But it has its own overhead problems:
>
> 1) for every allocation, shared_ptr<> allocates an *extra* block of
> memory to hold its accounting information. So you've got, out of the
> box, twice as many allocations/deallocations happening. [see sketch 1
> below]
>
> 2) for every copy of the shared_ptr<>, like passing one to a function,
> returning one, etc., the reference count has to be incremented, then
> decremented. The inc/dec must be done in an exception-safe manner,
> i.e. they need to be unwindable. Even worse, for multithreaded apps
> these inc/dec's need to be synchronized. [see sketch 2 below]
>
> And even with all that falderol, shared_ptr<> still can't do slicing,
> which is an important efficiency improvement. [see sketch 3 below]
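
[Sketch 1, for point 1) above.  In 2006 this is boost::shared_ptr; it
is written here with the later std:: spelling, and the names are made
up:]

    #include <memory>

    struct Widget { int x; };

    int main()
    {
        // Allocation #1: the Widget itself.
        // Allocation #2: shared_ptr's separate block holding the
        // reference count.
        std::shared_ptr<Widget> p(new Widget());

        // (Later, std::make_shared was added largely to merge those
        // two allocations into one.)
        return 0;
    }
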
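[Sketch 2, for point 2) above: every by-value copy of a shared_ptr
bumps the reference count up and back down, even when the callee never
keeps the pointer:]

    #include <memory>

    struct Widget { int x; };

    // Pass by value: the count is incremented on entry and decremented
    // when the parameter is destroyed; in multithreaded builds those
    // operations also have to be synchronized.
    int read_by_value(std::shared_ptr<Widget> p)
    {
        return p->x;
    }

    // The usual workaround is to pass by const reference, which skips
    // the inc/dec entirely.
    int read_by_ref(const std::shared_ptr<Widget>& p)
    {
        return p->x;
    }

    int main()
    {
        std::shared_ptr<Widget> w(new Widget());
        read_by_value(w);  // count goes 1 -> 2 -> 1
        read_by_ref(w);    // count stays 1
        return 0;
    }
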
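[Sketch 3, for the slicing point.  By "slicing" Walter means D's
a[i..j] array slices: a pointer/length view into existing data, with no
allocation and no copy.  The nearest C++ analogue is carrying a pointer
and a length by hand:]

    #include <cstddef>

    struct Slice
    {
        const char* ptr;
        std::size_t len;
    };

    // A "slice" of an existing buffer: no allocation, no copy.  This is
    // only safe if something guarantees the buffer outlives the slice,
    // which is exactly what a GC provides for free; a slice points into
    // the middle of a block it does not own, which is what shared_ptr
    // (as described above) can't express.
    Slice slice(const char* s, std::size_t begin, std::size_t end)
    {
        Slice v;
        v.ptr = s + begin;
        v.len = end - begin;
        return v;
    }

    int main()
    {
        const char* text = "hello world";
        Slice word = slice(text, 6, 11);  // views "world", copies nothing
        (void)word;
        return 0;
    }
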

Hmmm.  I have glanced at the word count example many times but was not aware 
of why it was faster in D until now.  Is it really the GC paradigm that 
allows this performance?  I think I'm beginning to finally see some method 
to the madness.  Perhaps I need to rethink my stance on GC a little.

It would seem, then, that GC can come out on top if there is a lot of
sharing of allocated memory, e.g. multiple references to each object.
However, if you are certain that there will be only a single reference
to each object, manual memory management should always outperform GC.
Am I on the right track here?

-Craig