Bartosz Milewski seems to like D more than C++ now :)
H. S. Teoh
hsteoh at quickfur.ath.cx
Thu Sep 19 16:48:43 PDT 2013
On Fri, Sep 20, 2013 at 12:18:22AM +0200, Szymon Gatner wrote:
> I had similar thoughts when watching GoingNative 2013:
> http://bartoszmilewski.com/2013/09/19/edward-chands/
> I was getting more and more scared with every talk, and now I am
> value-izing my polymorphic types à la Sean Parent
Quote:
There was so much talk about how not to use C++ that it occurred
to me that maybe this wasn’t the problem of incompetent
programmers, but that straightforward C++ is plain wrong. So if
you just learn the primitives of the language and try to use
them, you’re doomed.
... [big snippage] ...
I can go on and on like this (and I often do!). Do you see the
pattern? Every remedy breeds another remedy. It’s no longer just
the C subset that should be avoided. Every new language feature
or library addition comes with a new series of gotchas. And you
know a new feature is badly designed if Scott Meyers has a talk
about it. (His latest was about the pitfalls of, you guessed it,
move semantics.)
This is sooo true. It reflects my experience with C++. Honestly, it got
to a point where I gave up trying to follow the remedy upon the patch
to another remedy to a third remedy that patches yet another remedy on
top of a fundamentally broken core. I just adopted my own C++ coding
style and stuck with it. Unfortunately, that approach is unworkable in
real-life projects involving more than one programmer.
At work, I dread every single time I need to look at the C++ code
(which, fortunately, is confined to a single module, although it's
also one of the largest). For "performance reasons" they eschewed the
built-in C++ try/catch constructs, and implemented their own
replacements using preprocessor macros. You bet there are memory leaks,
pointer bugs, and all sorts of nasty things just from this one
"optimization" alone. And it just goes downhill from there.
It's this endless cycle of a remedy upon a remedy upon a patch to a
remedy that drove me to look for something better. I found D. :)
One of the outstanding features of D, to me, is that code written with
simple language constructs is, surprisingly, actually correct. As
opposed to C++'s situation of code being wrong by default until you
learn the 8th-circle black-belt advanced-level C++ coding techniques.
Anyway, this bit sounds interesting:
It’s a common but false belief that reference counting (using
shared pointers in particular) is better than garbage
collection. There is actual research showing that the two
approaches are just two sides of the same coin. You should
realize that deleting a shared pointer may lead to an arbitrary
long pause in program execution, with similar performance
characteristics as a garbage sweep. It’s not only because every
serious reference counting algorithm must be able to deal with
cycles, but also because every time a reference count goes to
zero on a piece of data a whole graph of pointers reachable from
that object has to be traversed. A data structure built with
shared pointers might take a long time to delete and, except for
simple cases, you’ll never know which shared pointer will go out
of scope last and trigger it.
Sounds like D's decision to go with a GC may not be *that* bad after
all...
Let’s take a great leap of faith and assume that all these
things will be standardized and implemented by, say, 2015. Even
if that happens, I still don’t think people will be able to use
C++ for mainstream parallel programming. C++ has been designed
for single thread programming, and parallel programming requires
a revolutionary rather than evolutionary change. Two words: data
races. Imperative languages offer no protection against data
races — maybe with the exception of D.
Welp, time to get our act together and clean up that mess that is
'shared', so that D will actually stand a chance of lasting past the
next 10 years... ;-)
T
--
Ruby is essentially Perl minus Wall.