Updated D Benchmarks

Robert Clipsham robert at octarineparrot.com
Sun Mar 15 07:15:32 PDT 2009


bearophile wrote:
> Robert Clipsham:
> 
> I have seen you have put all the graphs on one page. This is probably better. When you have 10-20 benchmarks you may need thinner bars.
> 
> You can add the raw timings, formatted into an ASCII table, a bit like this (don't use an HTML table):
> http://zi.fi/shootout/rawresults.txt
> There's no strict need of a separated file, a <pre>...</pre> part in the page is enough too.
> It's useful for automatic processing of your data, for example with a small script.
> It's so useful that you can even use such a script to generate your HTML page from that table of numbers.

I've got a better idea. That page is automatically generated from an XML 
file, I'll just make that available instead.
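Something like the following sketch could bridge the two suggestions: read the XML results file and emit the plain-text table bearophile asked for. The element and attribute names (`benchmark`, `run`, `compiler`, `time`) are assumptions, since the real file's schema isn't shown here; adjust them to match.

```python
# Hypothetical sketch: turn an XML results file into a raw ASCII table.
# The schema below (benchmark/run elements, compiler/time attributes)
# is an assumption, not the actual format of the page's XML file.
import xml.etree.ElementTree as ET

SAMPLE = """<results>
  <benchmark name="nbody">
    <run compiler="ldc" time="0.69"/>
    <run compiler="dmd" time="0.63"/>
    <run compiler="gdc" time="0.63"/>
  </benchmark>
</results>"""

def xml_to_table(xml_text):
    root = ET.fromstring(xml_text)
    lines = []
    for bench in root.findall("benchmark"):
        cells = ["%-12s" % bench.get("name")]
        for run in bench.findall("run"):
            cells.append("%s %s" % (run.get("compiler"), run.get("time")))
        lines.append("  ".join(cells))
    return "\n".join(lines)

print(xml_to_table(SAMPLE))
```

Publishing either the XML itself or a `<pre>` table generated this way would keep the data easy to scrape with a small script.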

>> None of the ones that I'm currently using are.<
> I know, but here you can see C++ benchmarks that use 300+ MB:
> http://shootout.alioth.debian.org/u64/benchmark.php?test=all&lang=gpp&lang2=gpp&box=1

I would probably have to exclude tests that use that much memory; there 
isn't enough RAM in my server to go much higher than 256 MB in the 
benchmarks (without taking out all the services running on it first).

>> So you suggest I choose whichever performs best out of C or C++?<
> 
> Yep, it gives a more reliable reference. But if you don't like this suggestion do as you like. Using C++ only too is acceptable to me.

I'll probably go all C++, we'll see what other people want though.

>> What compiler would you recommend?< 
> GCC or LLVM-GCC seems fine. They aren't equal, as you may have seen from my benchmarks. GCC is probably better, more developed and more widespread.

I'll probably go with GCC then. Again, we'll see what anyone else thinks 
first.

> OK, I can probably find you 5-10 more small benchmarks.
> I think a private email is better for this (or I'll put a zip somewhere and give you a URL).

That'd be great! Thanks.

> Stripped only versions are enough too.

But if I go with both then I've got more data up there for not much more 
effort :P

>>> - What is the trouble of nbody with gdc?<<
> 
>> I can't remember off the top of my head; I seem to recall it was a linking error, though. I did try to debug it when the benchmarks were originally run, but I didn't manage to get anywhere with it.<
> 
> Such trouble can probably be fixed.

Probably. I'll look into it again before the next time I run the benchmarks.

>>> - From your results it seems ldc needs more memory to run the programs. The LDC team may take a look at this.<
> 
>> There doesn't seem to be that much difference,<
> 
> This is a small Python script with data scraped manually from your page (this is why having a raw table is useful):
> 
> data = """ldc 0.69     dmd 0.63     gdc 0.63
> ldc 30.7     dmd 30.64     gdc 30.65
> ldc 140.24     dmd 120.61     gdc 120.62
> ldc 16.68     dmd 16.62     gdc 16.63
> ldc 0.95     dmd 1.52     gdc 0.87    """
> 
> data = data.replace("ldc", "").replace("dmd", "").replace("gdc", "").splitlines()
> data = [map(float, line.split()) for line in data]
> results = [int(round(sum(line))) for line in zip(*data)]
> for comp_time in zip("ldc dmd gdc".split(), results):
>     print "%s: %d MB" % comp_time
> 
> 
> Its output:
> 
> ldc: 189 MB
> dmd: 170 MB
> gdc: 169 MB
> 
> To me it seems there's some difference.
> 

OK, it's more difference than I saw with a quick glance... You proved me 
wrong!
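For reference, bearophile's script above is Python 2; a rough Python 3 equivalent of the same computation (same numbers scraped from the page) would look like this:

```python
# Python 3 version of bearophile's script: sum each compiler's memory
# column and round to whole megabytes.
data = """ldc 0.69     dmd 0.63     gdc 0.63
ldc 30.7     dmd 30.64     gdc 30.65
ldc 140.24     dmd 120.61     gdc 120.62
ldc 16.68     dmd 16.62     gdc 16.63
ldc 0.95     dmd 1.52     gdc 0.87"""

# Strip the compiler labels, keep the numbers, one row per benchmark.
rows = [[float(tok) for tok in line.split() if tok not in ("ldc", "dmd", "gdc")]
        for line in data.splitlines()]
# Transpose so each column is one compiler, then total it.
totals = [round(sum(col)) for col in zip(*rows)]
for name, total in zip("ldc dmd gdc".split(), totals):
    print("%s: %d MB" % (name, total))
# Prints ldc: 189 MB, dmd: 170 MB, gdc: 169 MB -- the same totals as above.
```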
