D vs C++ - Where are the benchmarks?

Jonathan M Davis jmdavisProg at gmx.com
Sun Jun 30 21:50:15 PDT 2013


On Monday, July 01, 2013 06:27:15 Marco Leise wrote:
> Am Sun, 30 Jun 2013 22:55:26 +0200
> 
> schrieb "Gabi" <galim120 at bezeqint.net>:
> > I wonder why is that.. Why would deleting 1 million objects in
> > C++ (using std::shared_ptr for example) have to be slower than
> > the garbage collection freeing a big chunk of million objects all
> > at once.
> 
> I have no numbers, but I think especially when you have complex graph
> structures linked with pointers, the GC needs a while to follow all
> the links and mark the referenced objects as still in use. And this
> will be done every now and then when you allocate N new objects.

The other thing to consider is that when the GC runs, it has to figure out 
whether anything needs to be collected. And regardless of whether anything 
actually needs to be collected, it has to trace all of the various references 
to mark the objects which are still reachable and then sweep the ones which 
aren't. With deterministic destruction, you don't have to do any of that. If 
you have a fairly small number of heap allocations in your program, it's 
generally not a big deal. But if you're constantly allocating and deallocating 
small objects, then the GC is going to run a lot more frequently, and it'll 
have a lot more objects to examine. So, having lots of small objects which are 
frequently being created and destroyed is pretty much guaranteed to tank your 
performance if they're being allocated by the GC. You really want reference 
counting for those sorts of situations.

> I think he referred to the DMD backend being faster than
> GDC/LDC sometimes, no ?

Maybe someone did, but if that's ever true, it's rare. It's pretty much always 
the case that the dmd backend is inferior with regard to optimizations. It 
just isn't being worked on like the other backends are. Where it shines is 
compilation speed.

- Jonathan M Davis
