Maybe D is right about GC after all!

H. S. Teoh hsteoh at quickfur.ath.cx
Tue Dec 19 19:42:37 UTC 2017


On Tue, Dec 19, 2017 at 01:54:05AM -0800, Walter Bright via Digitalmars-d wrote:
> "C, Python, Go, and the Generalized Greenspun Law"
> 
> http://esr.ibiblio.org/?p=7804

Coming from a strong C/C++ background, it took me a good long while to
accept the GC.  I had all the usual objections about lack of control
(e.g. over GC pauses), lack of determinism (you never know when
something will get collected), performance, etc.  But there's one thing
the GC gives you that no amount of clever programming can: productivity.

It's exactly as ESR says: once your code reaches a certain level of
complexity, in a manual memory management language like C/C++ a
disproportionate and ever-growing share of your coding time goes to
managing memory rather than to your problem domain -- that is, if you
wish to preserve code correctness.  It *is* possible to remain
productive past this point, but memory-related bugs become more frequent
and eventually overwhelm your effort to make progress in the problem
domain.

After I started writing non-trivial code in D, I found that my brain
was freed from the constant energy drain of having to think about
managing memory -- is that pointer still in scope, do I have to free it,
have I freed it already, are the parameters borrowed references or
transfer of ownership, should they be borrowed references instead,
should I use a refcounting wrapper instead, what happens to this
template if a refcounted type was passed in instead of something else,
what happens to the reference if an exception is thrown here, ad nauseam
-- now I can actually think about the algorithm as it directly pertains
to the problem domain, rather than constantly fiddling with the dirty
details of memory management.
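
To make that concrete, here's a minimal D sketch (the names are just
illustrative) of the kind of code where the GC silently answers every
one of those questions for you:

    import std.algorithm : filter;
    import std.array : array;

    // With the GC, returning a freshly allocated slice raises no
    // ownership questions: no caller has to remember to free it, and
    // no refcounting wrapper is needed to track who owns it.
    int[] evens(int[] xs)
    {
        return xs.filter!(x => x % 2 == 0).array;
    }

    void main()
    {
        auto result = evens([1, 2, 3, 4, 5, 6]);
        // result lives for as long as something references it; the
        // GC reclaims it eventually.  In C/C++ this same function
        // would force a decision about allocation and ownership
        // transfer before you could even write it.
    }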

As a result, not only did my productivity skyrocket, but the correctness
of my algorithms improved.  When you're constantly nagged by manual
memory management gotchas, you may well churn out *memory-correct* code,
but not necessarily code that's correct *in the problem domain*.  Being
freed from the fidgety concerns of memory management means more mental
resources can be directed at actually solving the problem at hand and
doing a better job of it.  Not to mention that the GC eliminates an
entire class of memory-related bugs outright.

That's why nowadays my opinion is that if you're not working with
performance-sensitive code (i.e., where every missed CPU cycle increases
the likelihood of a patient dying or the airplane crashing), and if you
haven't profiled your code to clearly show that the GC is a performance
bottleneck, then you really should just let the GC do its job.  There
*have* been cases where I found that GC performance was hindering my
code; so far, judicious, strategic use of GC.disable() and manual calls
to GC.collect() has sufficed to fix that, without needing to throw out
the GC baby with the bathwater altogether.  Having to go back to paying
the mental memory management tax on every other line of code I write is
just not a wise use of my time and energy.
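
For the curious, that pattern looks roughly like this (processBatch is
a hypothetical stand-in for whatever allocation-heavy work the profiler
flagged):

    import core.memory : GC;

    void processBatch() { /* hypothetical allocation-heavy work */ }

    void main()
    {
        // Keep collections out of the hot loop: disable the GC up
        // front, then trigger a collection at a point of our choosing.
        GC.disable();
        scope (exit) GC.enable();

        foreach (i; 0 .. 1_000)
            processBatch();

        GC.collect();  // collect once, at a convenient pause point
    }

The allocations still go through the GC heap; all we've taken control
of is *when* the collector is allowed to run.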


T

-- 
Those who don't understand D are condemned to reinvent it, poorly. -- Daniel N

