Coolest D features

Walter Bright newshound at digitalmars.com
Thu Dec 28 12:08:39 PST 2006


Waldemar wrote:
> == Quote from Walter Bright (newshound at digitalmars.com)'s article
>> Georg Wrede wrote:
>>> Walter Bright wrote:
>>>> Jason House wrote:
>>>>> It was reported that D was slower than C by a factor of 1.5.
>>>> I bet that if they compared DMD with DMC, they'd have found no
>>>> difference.
>>> That makes it sound like DMC is 1.5× slower than your average C.
>> Your average C? No. It might be 1.5× slower than the specific C compiler
>> the benchmarker used for that specific application. Performance for
>> particular applications varies all over the map for different C compilers.
> 
> That may be, but this specific C compiler is most likely gcc on Linux or VS C++ on
> Windows.  The D compiler is probably dmd.

If he's using gcc, he should do benchmark comparisons with gdc.

> It's a bit shocking to see a 50%
> difference.  Is there information which compilers were used?  And is there any
> reason to believe the specifics of the benchmark could produce such a wide difference?

There's every reason to believe it. Often, people who write benchmarks 
never check to see exactly what they are actually benchmarking. I've 
seen all of the following:

1) using the wrong compiler switches
2) assuming one is testing string handling speed, when actually the 
benchmark was extremely sensitive to how the compiler handled the / 
operation on integers
3) assuming one is benchmarking some calculation speed, when one is 
actually benchmarking some innocuous looking C library function call
4) etc. etc.
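Pitfall 3 is easy to reproduce. Here's a hedged sketch in C (function names are made up for illustration): a loop that keeps strlen() in its termination condition spends nearly all its time re-scanning the string, so a "string handling" benchmark ends up measuring an innocuous looking library call instead.

```c
#include <ctype.h>
#include <string.h>

/* Looks like it benchmarks character handling, but strlen() in the
 * loop condition re-scans the whole string on every iteration, making
 * the loop O(n^2) -- the library call dominates the measurement. */
void upper_slow(char *s) {
    for (size_t i = 0; i < strlen(s); i++)
        s[i] = (char)toupper((unsigned char)s[i]);
}

/* Hoisting the call restores the intended O(n) single pass. */
void upper_fast(char *s) {
    size_t n = strlen(s);
    for (size_t i = 0; i < n; i++)
        s[i] = (char)toupper((unsigned char)s[i]);
}
```

Whether an optimizer can hoist the strlen() call on its own is exactly the kind of thing that varies from one C compiler to the next, which is how the "same" benchmark produces wildly different numbers.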

In other words, you don't know what you're benchmarking until you run a 
profiler on it. And nobody runs profilers <g>. The old adage that 90% of 
your code execution time is in 10% of the code applies to benchmarks, 
too. Unless you actually dig in and measure it, sure as heck that 10% 
will not be where you think it is.
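Even without a full profiler, the 90/10 rule can be checked crudely by timing each phase of a benchmark separately. A minimal sketch, with hypothetical stage names standing in for a real benchmark's phases:

```c
#include <time.h>

/* Hypothetical benchmark stages -- illustrative stand-ins only. */
static long stage_setup(long n) {
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += i;
    return sum;
}

static long stage_compute(long n) {
    long sum = 0;
    for (long i = 0; i < n; i++)
        for (long j = 0; j < 100; j++)
            sum += i ^ j;
    return sum;
}

/* Wrap one stage in clock() timers: a crude stand-in for running a
 * real profiler, but enough to show which phase owns the 90%. */
static double time_stage(long (*stage)(long), long n, long *result) {
    clock_t start = clock();
    *result = stage(n);
    clock_t stop = clock();
    return (double)(stop - start) / CLOCKS_PER_SEC;
}
```

Printing the per-stage times for a realistic n usually shows one stage dwarfing the rest, and it is rarely the one the benchmark's name suggests.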


