I was wrong.

downs default_357-line at yahoo.de
Thu Aug 14 06:35:23 PDT 2008


superdan wrote:
> downs Wrote:
> 
>> To clear this up, I've been running a benchmark.
>>
>> module test91;
>>
>> import tools.time, std.stdio, tools.base, tools.mersenne;
>>
>> class A { void test() { } }
>> class B : A { final override void test() { } }
>> class C : A { final override void test() { } }
>>
>> A a, b, c;
>> static this() { a = new A; b = new B; c = new C; }
>>
>> A gen() {
>>   if (randf() < 1f/3f) return a;
>>   else if (randf() < 0.5) return b;
>>   else return c;
>> }
>>
>> void main() {
>>   const count = 1024*1024;
>>   for (int z = 0; z < 4; ++z) {
>>     writefln("Naive: ", time({
>>       for (int i = 0; i < count; ++i) gen().test();
>>     }()));
>>     writefln("Speculative for B: ", time({
>>       for (int i = 0; i < count; ++i) {
>>         auto t = gen();
>>         if (t.classinfo is typeid(B)) (cast(B)cast(void*)t).test();
>>         else t.test();
>>       }
>>     }()));
>>     writefln("Speculative for B/C: ", time({
>>       for (int i = 0; i < count; ++i) {
>>         auto t = gen();
>>         if (t.classinfo is typeid(B)) (cast(B)cast(void*)t).test();
>>         else if (t.classinfo is typeid(C)) (cast(C)cast(void*)t).test();
>>         else t.test();
>>       }
>>     }()));
>>   }
>> }
>>
>>
>> And as it turns out, virtual method calls were at least fast enough to not make any sort of difference in my calls.
>>
>> Here's the output of my little proggy in the last iteration:
>>
>> Naive: 560958
>> Speculative for B: 574602
>> Speculative for B/C: 572429
>>
>> If anything, naive is often a little faster.
>>
>> This kind of completely confuses my established knowledge on the matter. Looks like recent CPUs' branch predictions really are as good as people claim.
>>
>> Sorry for the confusion.
> 
> you are looking with a binocular at a coin a mile away and tryin' to figure quarter or nickel. never gonna work. most likely ur benchmark is buried in randf timing.
> 
> make iteration cost next to nothing. put objects in a medium size vector. then iterate many times over it.

I know. But if it doesn't matter even in that case, it most likely won't matter in practical situations either.
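
For reference, a sketch of the loop shape superdan describes could look like the following. This is untested and only an illustration: it reuses gen(), time() and the classes from the listing above, and the array size and pass count are arbitrary.

void benchOverArray() {
  // Build a medium-sized mix of A/B/C once, outside the timed region,
  // so the timed loops do little besides the call itself.
  A[] objs;
  for (int i = 0; i < 4096; ++i) objs ~= gen();
  const passes = 1024;
  writefln("Naive over array: ", time({
    for (int p = 0; p < passes; ++p)
      foreach (o; objs) o.test();                 // plain virtual dispatch
  }()));
  writefln("Speculative for B over array: ", time({
    for (int p = 0; p < passes; ++p)
      foreach (o; objs) {
        // guess B first, fall back to the virtual call otherwise
        if (o.classinfo is typeid(B)) (cast(B)cast(void*)o).test();
        else o.test();
      }
  }()));
}

That keeps the per-iteration overhead down to essentially the call being measured, which is what superdan is after.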

Nonetheless, here are some better timings, this time with a faster (dirtier) randf() function and a null pass that only generates the object.
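
(Purely as an illustration of what "dirtier" could mean, and not necessarily the function actually used: a cheap randf() replacement can be a 32-bit xorshift mapped to [0, 1), instead of the Mersenne Twister from tools.mersenne.)

uint rngState = 0x9E3779B9;

float randfCheap() {
  // illustrative 'dirty' generator: xorshift32, three shifts/xors per call
  rngState ^= rngState << 13;
  rngState ^= rngState >> 17;
  rngState ^= rngState << 5;
  // take the top 24 bits and scale into [0, 1)
  return (rngState >> 8) * (1.0f / 16_777_216.0f);
}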

Null: 729908
Naive: 1615314
Speculative for B: 1692860
Speculative for B/C: 1664040

Subtracting the null pass, the virtual-call loop costs roughly 885k, versus about 963k and 934k for the speculative variants, so plain virtual dispatch still comes out slightly ahead.


