Language performance benchmark updated 2019/11/09

Jon Degenhardt jond at noreply.com
Mon Nov 18 23:24:33 UTC 2019


On Monday, 18 November 2019 at 21:50:04 UTC, bachmeier wrote:
> On Monday, 18 November 2019 at 21:35:08 UTC, JN wrote:
>
>> I think it signifies a deeper problem with these kind of 
>> benchmarks. Most people would expect these benchmarks to 
>> measure idiomatic code, "every day" kind of code. Most people 
>> would write their code with associative arrays in this case. 
>> Sure, you can optimize it later, but just as well you can just 
>> drop into asm {} block and write hand optimized code.
>
> If you're in a position where you care about "fast as possible" 
> code, how fast your "every day" code runs isn't really helpful.
>
> Now, I do understand that you might want to measure the 
> performance of a piece of code written when you aren't 
> optimizing for execution speed. Someone in that position is 
> going to care about speed of execution and speed of 
> development, among other things. The problem is that you can't 
> learn anything useful in that case from a benchmark that 
> reports execution time and nothing else.

Yes, there are often multiple goals behind a benchmark like this, 
goals that may not be explicitly identified.

There is also the question of what "idiomatic" means. This can 
be quite subjective, especially in multi-paradigm languages. And, 
what "idiomatic" means to an individual may change as familiarity 
with the language grows. For D performance studies, an example is 
that it can take time to learn how to use lazy, range-based 
programming facilities. This is certainly one idiomatic D coding 
style. And, it often results in much better memory management and 
notable performance improvements. Code can, of course, move 
further from the most common paradigms, all the way to inline 
assembly blocks. This makes it difficult to say when versions of 
a program in different languages are similarly idiomatic.
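To make the point concrete, here is a small sketch (my own illustration, not code from the benchmark under discussion) contrasting an eager style that allocates intermediate arrays with the lazy, range-based style mentioned above. Both are arguably idiomatic D; the lazy pipeline simply avoids the intermediate allocations:

```d
import std.algorithm : filter, map, sum;
import std.array : array;
import std.range : iota;
import std.stdio : writeln;

void main()
{
    // Eager style: each step materializes an intermediate array,
    // so the GC sees two temporary allocations.
    auto evens = iota(1, 1_000).filter!(n => n % 2 == 0).array;
    auto squares = evens.map!(n => n * n).array;
    writeln(squares.sum);

    // Lazy, range-based style: no intermediate arrays; elements
    // flow through the filter/map pipeline one at a time and are
    // reduced on the fly.
    auto total = iota(1, 1_000)
        .filter!(n => n % 2 == 0)
        .map!(n => n * n)
        .sum;
    writeln(total);
}
```

Both versions print the same result; a benchmark comparing either one against another language's version would be measuring, in part, which of these styles the author happened to reach for.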


More information about the Digitalmars-d mailing list