Why I chose D over Ada and Eiffel

H. S. Teoh hsteoh at quickfur.ath.cx
Thu Aug 22 13:00:55 PDT 2013


On Thu, Aug 22, 2013 at 03:28:34PM -0400, Nick Sabalausky wrote:
> On Wed, 21 Aug 2013 18:50:35 +0200
> "Ramon" <spam at thanks.no> wrote:
> > 
> > I am *not* against keeping an eye on performance, by any means.
> > Looking at Moore's law, however, and at the kind of computing
> > power available nowadays even in smartphones, not to mention 8-
> > and 12-core PCs, I feel that the importance of performance is way
> > overestimated (possibly following a tradition that was once
> > justified).
> > 
> 
> Even if we assume Moore's law is as alive and well as ever, a related
> note is that software tends to expand to fill the available
> computational power. When I can get slowdown in a text-entry box on a
> 64-bit multi-core, I know that hardware and Moore's law, practically
> speaking, have very little effect on real performance. At this point,
> it's code that affects performance far more than anything else. When
> we hail the great performance of modern web-as-a-platform by the fact
> that it allows an i7 or some such to run Quake as well as a Pentium 1
> or 2 did, then we know Moore's law effectively counts for squat -
> performance is no longer about hardware, it's about not writing
> inefficient software.

I've often heard the argument that inefficiencies in code are OK,
because you can just "ask the customer to upgrade to better hardware",
and "nobody runs a 386 anymore". Which, from a business POV, is a
profitable outlook -- if you're the one producing the hardware,
inefficient software is an incentive for the customer to pay you more
money for faster hardware to run it. Conversely, if your software runs
*too* well, then customers have no motivation to buy new hardware.

This sometimes goes to ludicrous extremes, where an O(n^2) algorithm
is justified because "the customer can just upgrade to better
hardware", or "next year's CPU will be able to handle this no
problem". Until they realize that when n is large (e.g., the customer
says "oh, I'm running your software with about n=8000"), doubling the
CPU speed every year just ain't gonna cut it -- you'd be waiting many
long years before your software becomes usable again.
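
To put rough numbers on that (my own illustration, not taken from the
customer example above): at n=8000 an O(n^2) pass does on the order of
64 million units of work, and every doubling of n quadruples it, so a
CPU that doubles in speed every year only keeps up with n growing
about 1.4x per year. Fixing the algorithm beats waiting for hardware.
Here's a minimal D sketch contrasting a quadratic duplicate check with
an O(n log n) sort-based one (names and sizes are made up):

import std.algorithm : sort;
import std.array : array;
import std.datetime.stopwatch : AutoStart, StopWatch; // older Phobos: std.datetime
import std.range : iota;
import std.stdio : writeln;

// O(n^2): compare every pair. At n=8000 that's ~32 million
// comparisons, and doubling n quadruples the work.
bool hasDupQuadratic(const(int)[] xs)
{
    foreach (i; 0 .. xs.length)
        foreach (j; i + 1 .. xs.length)
            if (xs[i] == xs[j])
                return true;
    return false;
}

// O(n log n): sort a copy, then scan adjacent elements once.
bool hasDupSorted(const(int)[] xs)
{
    auto s = xs.dup;
    s.sort();
    foreach (i; 1 .. s.length)
        if (s[i - 1] == s[i])
            return true;
    return false;
}

void main()
{
    auto data = iota(8000).array; // worst case for both: no duplicates
    auto sw = StopWatch(AutoStart.yes);
    hasDupQuadratic(data);
    writeln("quadratic:  ", sw.peek);
    sw.reset();
    hasDupSorted(data);
    writeln("sort-based: ", sw.peek);
}

At this size the sort-based version should win by a couple of orders
of magnitude, which no realistic number of hardware generations will
claw back for the quadratic one.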


> Now I'm certainly not saying that we should try to wring every last
> drop of performance out of every place where it doesn't even matter
> (like C++ tends to do). But software developers' belief in Moore's law
> has caused many of them to inadvertently cancel out, or even reverse,
> the hardware speedups with code inefficiencies (which are *easily*
> compoundable, and can and *do* exceed the 3x slowdown you claimed in
> another post was unrealistic) - and, as JS-heavy web apps prove, they
> haven't even gotten considerably more reliable as a result (Not that
> JS is a good example of a reliability-oriented language - but a lot of
> people certainly seem to think it is).

Heh. JS? reliable? in the same sentence? Heh.

On the flip side, though, it's true that the performance-conscious
crowd among programmers has a tendency toward premature optimization,
producing unmaintainable code in the process. I used to be one of
them, so I know. :) A profiler is absolutely essential for identifying
where the real bottlenecks are. But once a bottleneck is identified,
sometimes there's no way to make it faster except by going low-level
and writing it in a systems programming language. Like D. ;-)
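
For D specifically, dmd ships with a simple instrumenting profiler
built in. The workflow below is from memory, so treat it as a sketch
and check your compiler's docs:

    dmd -profile app.d
    ./app    # per-function call counts and timings end up in trace.log

For timing a single suspected hotspot by hand, StopWatch (as in the
sketch above) also does the job.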

And sometimes, there *is* no single bottleneck that you can address;
you just need the code to be closer to the hardware *in general* in
order to bridge that last 10% performance gap and reach your target.
All those convenient little indirections and virtual method lookups do
add up.
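
To make "indirections add up" concrete in D terms (a toy example of my
own, not anyone's real code): class methods in D are virtual by
default, so calls through a class reference generally go through the
vtable unless you mark them final, or use a struct:

import std.stdio : writeln;

class Counter
{
    int n;

    // Virtual by default: normally dispatched through the vtable,
    // unless the optimizer can prove the exact type and devirtualize.
    void bump() { ++n; }

    // final: no vtable dispatch, so the call can be direct and inlined.
    final void bumpFast() { ++n; }
}

struct FlatCounter
{
    int n;
    void bump() { ++n; } // struct methods are never virtual
}

void main()
{
    auto c = new Counter;
    foreach (_; 0 .. 1_000_000)
        c.bump(); // potentially one indirect call per iteration
    writeln(c.n);
}

No single one of those indirect calls matters; the point is that in a
hot loop they compound, and in that last 10% they can be the entire
gap.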


T

-- 
It is widely believed that reinventing the wheel is a waste of time;
but I disagree: without wheel reinventers, we would still be stuck
with wooden horse-cart wheels.

