Thoughts from newcomer

Joakim via Digitalmars-d digitalmars-d at puremagic.com
Sat Apr 15 21:19:56 PDT 2017


On Saturday, 15 April 2017 at 18:00:50 UTC, Isaac Gouy wrote:
> On Thursday, 13 April 2017 at 03:29:26 UTC, Joakim wrote:
>
>> Cooperative with what?  He chose not to include D anymore, 
>> which at one point dominated the shootout, and says we should 
>> just start our own site:
>>
>> https://forum.dlang.org/post/no8klt$1d1i$1@digitalmars.com
>
>
> When did D dominate?
>
> http://web.archive.org/web/20090303214521/http://shootout.alioth.debian.org:80/gp4/benchmark.php?test=all%26lang=all

Probably a year or so before that page was archived, when I 
played around with your results to see what languages popped up.  
It was likely the second time I came across D (the first may have 
been Yegge's blog post mentioning D as a possible Next Big 
Language: 
http://forum.dlang.org/thread/bjqcszlnohjjjersvojp@forum.dlang.org), 
when I found that D came out on top if I weighted time, memory, 
and source code size equally.  It was not always the highest, as 
Free Pascal would sometimes beat it, but D usually won.

You are only weighting time on that archived page, which favors 
languages that take hundreds of lines of code to do the same 
thing, because there's no penalty for verbosity.
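
To make concrete what I mean by weighting the three equally, 
here's a rough sketch in D (not the shootout's actual scoring; 
the numbers are made up, and I'm assuming gzip'd source bytes as 
the size measure):

import std.stdio : writefln;

// One row per language implementation: CPU seconds, peak memory
// in MB, and gzip'd source size in bytes.
struct Entry { string lang; double time, mem, size; }

// Score an entry against the best value seen for each metric,
// weighting all three equally; 1.0 would mean "best at
// everything", and bigger is worse.
double score(Entry e, Entry best)
{
    return (e.time / best.time + e.mem / best.mem +
            e.size / best.size) / 3.0;
}

void main()
{
    // Made-up numbers, purely to show the arithmetic.
    auto best = Entry("best-of-each", 1.0, 10.0, 400.0);
    auto entries = [
        Entry("terse-but-slower", 1.3, 12.0, 450.0),
        Entry("fast-but-verbose", 1.0, 10.0, 900.0),
    ];

    foreach (e; entries)
        writefln("%-18s %.2f", e.lang, score(e, best));
}

A time-only ranking gives "fast-but-verbose" the win; the equal 
weighting above does not.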

> On that archived page you can see a lot of language 
> implementations that I chose not to include on the "new" quad 
> core measurements, that began back in 2008 iirc.

D was in the top 10 for time alone on the page you linked, 
without even the benefit of an LLVM backend like it has today.  I 
wonder why you'd remove a language that performed so well.

> For newer languages like Crystal and Nim and Julia the tiny 
> benchmarks game programs have been used to provide performance 
> examples (without needing my involvement):
>
> https://github.com/kostya/crystal-benchmarks-game

kostya maintains another benchmark suite that includes D, and D 
does very well there:

https://github.com/kostya/benchmarks

> https://github.com/JuliaLang/julia/tree/master/test/perf/shootout

This one doesn't show any benchmark results, and it says the 
programs are from your game, so you are involved after all.

> but when it comes to D as-far-as-I-can-tell those efforts seem 
> to somehow disappoint the D community and the comparisons are 
> not publicized in the same way:
>
> https://forum.dlang.org/post/ihfqubwtadgvlxkvedbl@forum.dlang.org

I'm guessing that's because he tried to update the old D 
benchmarking code, while the C/C++ code has likely been optimized 
a lot more since then.  I took the source for the top C regex-dna 
benchmark late last year and compared it to the latest D 
implementation by Dmitry, the author of std.regex.

I found that the D benchmark easily beat the top C regex 
benchmark on a single core (not surprising, as Dmitry's regex 
always beats everyone else) but lost on multi-core, only because 
Dmitry's benchmark had not been parallelized.  The D source was 
an order of magnitude smaller, partially because both call 
external regex libraries that do most of the work, but mostly 
because the C version back then needed a lot more source to be 
parallelized.
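
To give an idea of why the parallel version stays so small in D, 
here's a minimal sketch along those lines (not Dmitry's actual 
submission: only a few of the patterns, and the FASTA header 
stripping the real benchmark does is skipped):

import std.algorithm : count;
import std.array : join;
import std.parallelism : parallel;
import std.regex : matchAll, regex;
import std.stdio : stdin, writefln;

void main()
{
    // Read the whole input sequence from stdin.
    string seq = stdin.byLineCopy.join;

    // A few of the benchmark's variant patterns, for illustration.
    auto patterns = [
        "agggtaaa|tttaccct",
        "[cgt]gggtaaa|tttaccc[acg]",
        "a[act]ggtaaa|tttacc[agt]t",
    ];

    // std.parallelism runs the per-pattern counting across all
    // cores with a one-word change to the foreach.
    auto counts = new size_t[patterns.length];
    foreach (i, p; parallel(patterns))
        counts[i] = matchAll(seq, regex(p)).count;

    foreach (i, p; patterns)
        writefln("%s %s", p, counts[i]);
}

Dropping the parallel() call gives you the single-threaded 
version, which is the kind of one-line difference the C source 
can't match.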

I see that benchmark has since been renamed to regex-redux and 
new C implementations have been added, though the benchmark 
description appears to be the same.

I suspect D would do just as well on the other benchmarks now.

