Optimizations and performance
Dave via Digitalmars-d
digitalmars-d at puremagic.com
Thu Jun 9 06:32:14 PDT 2016
On Thursday, 9 June 2016 at 00:25:28 UTC, Seb wrote:
> On Wednesday, 8 June 2016 at 22:32:49 UTC, Ola Fosheim Grøstad
> wrote:
>> On Wednesday, 8 June 2016 at 22:19:47 UTC, Bauss wrote:
>>> D definitely needs some optimizations, I mean look at its
>>> benchmarks compared to other languages:
>>> https://github.com/kostya/benchmarks
>>>
>>> I believe the first step towards better performance would be
>>> identifying the specific areas that are slow.
>>>
>>> I definitely believe D could do much better than what's shown.
>>>
>>> Let's find D's performance weaknesses and crack them down.
>>
>> I wouldn't put too much emphasis on that benchmark as the
>> implementations appear different? Note that Felix compiles to
>> C++, yet beats C++ in the same test? Yes, Felix claims to do
>> some high level optimizations, but doesn't that just tell us
>> that the C++ code tested wasn't optimal?
>
> While I appreciate that someone puts the time and effort in
> such benchmarks, what do they really say?
> In the brainfuck example the most expensive function call is
> the hash function, which is not needed at all!
> We can just allocate an array on demand and we beat the C++
> implementation by a factor of 2! (and use less memory).
>
> https://github.com/kostya/benchmarks/pull/87
> (n.b. that this small change makes D by far the fastest for
> this benchmark)
>
> So what do we learn?
>
> 1) Don't trust benchmarks!
> 2) D's dictionary hashing should be improved
I think you can trust benchmarks. The bigger key is
understanding what you are truly benchmarking. Most of the time
it's actually the compilers more than the language. From what I
see it's very difficult to actually 'benchmark' a language,
especially when the code shies away from the ways the language
was designed to be used.
A good example I see almost all of the time is how classes are
declared. In most languages a class is meant to be inherited
from and used polymorphically, yet the first thing you see
people do in benchmarking tests is slap 'final' or 'sealed' onto
the class to eliminate the virtual-dispatch overhead of a
traditional OOP class and get that extra performance. But that
is not how the language is generally used...
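To illustrate the kind of thing I mean, here is a made-up sketch
in D (not taken from any actual benchmark):

class Shape
{
    // virtual by default in D, as in Java and C#
    double area() { return 0; }
}

final class Circle : Shape
{
    double r;
    this(double r) { this.r = r; }
    override double area() { return 3.14159265358979 * r * r; }
}

void main()
{
    import std.stdio : writeln;

    // Because Circle is 'final', no further override of area() can
    // exist, so the compiler is free to devirtualize and inline the
    // call below -- that's the extra performance benchmarks chase.
    auto c = new Circle(2.0);
    double sum = 0;
    foreach (i; 0 .. 1_000_000)
        sum += c.area();
    writeln(sum);
}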
And at that point, if your implementation performs slower than
whatever you are benchmarking against (let's say, for the sake
of comparison, Java and C#, which the example above fits well),
it's most likely not one language being better than the other,
because they both accomplish the same thing the same way. It's
more likely some other feature, or, more likely still, some
detail of the compiler or intermediate language they use. The
real blemish shows in both languages: arguably classes should be
'final' or 'sealed' unless explicitly stated otherwise, so both
languages are slower here by default, and the implementations
with their workaround are really comparing something else.
Benchmarking is important, but it takes nuance and thought to
figure out what you actually benchmarked. For instance, the vast
majority of benchmark results I see have little to do with the
language itself and much more to do with how a library call was
implemented.
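The brainfuck case above is a good example: the hot path went
through D's built-in associative array, not through anything
inherent to the language. The idea behind that PR boils down to
growing a plain array on demand instead of hashing, roughly like
this (a sketch of the approach, not the actual PR code):

// Grow a plain dynamic array on demand instead of paying for a
// hash lookup on every tape access.
ref int cell(ref int[] tape, size_t pos)
{
    if (pos >= tape.length)
        tape.length = pos + 1;   // new cells are zero-initialized
    return tape[pos];
}

void main()
{
    import std.stdio : writeln;

    int[] tape;                  // instead of a built-in AA keyed by position
    cell(tape, 100) += 5;        // no hashing, just a bounds check and an index
    writeln(cell(tape, 100));    // prints 5
}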