Maybe D is right about GC after all!

H. S. Teoh hsteoh at quickfur.ath.cx
Wed Jan 3 22:45:07 UTC 2018


On Wed, Jan 03, 2018 at 03:28:15PM -0700, Jonathan M Davis via Digitalmars-d wrote:
[...]
> The problem is that there are some very vocal folks who complain about
> the GC, and then that often leads to folks thinking that there's a
> serious problem with the fact that D has a GC, when arguably, that
> isn't true at all. There are pros and cons to using a GC, there are
> some circumstances where it's not appropriate, and we can do better to
> have it be optional where appropriate, but in spite of what some of
> the vocal folks say, it has been a big boon to D to have a GC, and it
> gets increasingly annoying to have to deal with folks insisting that
> the GC should be excised from everywhere and avoided as much as
> possible.
> 
> So, if no one speaks up about how it's actually great to have a GC, it
> starts seeming like we all think that D shouldn't have a GC, which
> isn't the case at all.
[...]

+1.

Even though I'm well aware of the current GC's limitations, and have
implemented various workarounds in my own code, I totally agree that it
has been great to have a GC in D.  It frees up a big chunk of my mental
capacity while coding to actually think about the problem domain, rather
than being drained by constantly needing to think about memory
management issues, like I have to when writing C/C++ code.  It makes for
faster development time and fewer bugs (heck, it eliminates an entire
class of bugs that commonly plague C/C++ code).

The performance cost is usually not even noticeable unless you're
writing (1) memory-intensive, CPU-intensive code, or (2) real-time code
like medical appliance controllers where people may die if the code
doesn't respond within 1ms, or (3) 3D game engines where people may die
if they miss an animation frame or two, though only virtually. :-P

When the GC *does* become a source of performance concerns, I've found
that in 99% of cases all it takes is calling GC.disable and scheduling
your own GC.collect at strategic points in your code.  In one particular
memory-intensive, CPU-intensive program I wrote, all I needed was to
make a 3-line change:

	import core.memory : GC;	// if not already imported

	...
	GC.disable;	// suppress automatic collections
	...
	if (++counter % 1_000_000 == 0)	// or whatever number you wish
		GC.collect;	// collect only at points we choose

and I was able to obtain performance improvements upwards of 40%.

Furthermore, upon closer inspection, eliminating array allocations in
one specific hotspot in my inner loop by reusing a static buffer won me
another 15-20% performance improvement.  Again, it was a rather small
code change, and a standard practice for reducing GC load.
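
As a rough sketch of that kind of change (the names below are
illustrative, not from my actual program): instead of allocating a
fresh array on every call, keep one static buffer around and return
slices of it.

	// Hypothetical hotspot. Before: `new ubyte[](...)` on every call,
	// putting constant GC pressure on the inner loop.
	ubyte[] transform(const(ubyte)[] input)
	{
		// After: a thread-local buffer reused across calls; it
		// only allocates when it actually needs to grow.
		static ubyte[] buf;
		if (buf.length < input.length)
			buf.length = input.length;

		foreach (i, b; input)
			buf[i] = cast(ubyte)(b ^ 0xff); // stand-in for the real work

		return buf[0 .. input.length]; // slice of the reused buffer
	}

The caveat, of course, is that the returned slice is only valid until
the next call, which is exactly why this belongs in a profiled hotspot
and not sprinkled everywhere.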

All it took was to run a profiler to identify the hotspots, and fix the
few places in the code where it actually mattered (and usually, such
places only need a minor change).  The rest of the code was perfectly
fine using the GC just as it is. There was no need to completely excise
the GC, rewrite all of my code to use ref-counting or *shudder*
malloc/free, or any of the extreme stuff that the vocal minority seems
to advocate for.
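
On the profiler point: if you're on DMD, the built-in instrumentation
is usually enough for this kind of hunt (the flags below are
DMD-specific; other compilers differ):

	# Time per function: running the instrumented binary writes trace.log
	dmd -profile app.d && ./app

	# GC allocations: running the binary writes profilegc.log, which
	# lists each allocation site with the bytes allocated
	dmd -profile=gc app.d && ./app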

I'm not saying there aren't use cases where you *do* want to use @nogc
and completely avoid the GC (@nogc is there for a reason), but these
use cases aren't as common as some people would have us think.
The majority of application code won't even notice a difference, even
with the current supposedly poor GC.
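
And for completeness, here's a minimal sketch of the compile-time
guarantee @nogc gives you when you do need it:

	// @nogc is verified by the compiler: no GC allocation can occur
	// anywhere in this function or in anything it calls.
	@nogc int sum(const(int)[] xs)
	{
		int total;
		foreach (x; xs)
			total += x;
		return total;
	}

	// This would be rejected at compile time, since `new` allocates
	// from the GC:
	// @nogc int[] bad() { return new int[](10); }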


T

-- 
Windows: the ultimate triumph of marketing over technology. -- Adrian von Bidder

