I was wrong.

downs default_357-line at yahoo.de
Thu Aug 14 05:44:40 PDT 2008


To clear this up, I ran a benchmark.

module test91;

import tools.time, std.stdio, tools.base, tools.mersenne;

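// B and C override test() as final, so a call through a B or C reference can be bound statically (no vtable lookup).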
class A { void test() { } }
class B : A { final override void test() { } }
class C : A { final override void test() { } }

A a, b, c;
static this() { a = new A; b = new B; c = new C; }

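// Returns a, b or c with roughly equal probability, so the receiver's dynamic type is unpredictable.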
A gen() {
  if (randf() < 1f/3f) return a;
  else if (randf() < 0.5) return b;
  else return c;
}

void main() {
  const count = 1024*1024;
  for (int z = 0; z < 4; ++z) {
    writefln("Naive: ", time({
      for (int i = 0; i < count; ++i) gen().test();
    }()));
    writefln("Speculative for B: ", time({
      for (int i = 0; i < count; ++i) {
        auto t = gen();
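        // Speculate that t is exactly a B: the cast through void* skips the
        // dynamic-cast check, and B.test() is final, so the call is non-virtual.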
        if (t.classinfo is typeid(B)) (cast(B)cast(void*)t).test();
        else t.test();
      }
    }()));
    writefln("Speculative for B/C: ", time({
      for (int i = 0; i < count; ++i) {
        auto t = gen();
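        // Same trick, but speculate on both final subclasses before falling back to the virtual call.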
        if (t.classinfo is typeid(B)) (cast(B)cast(void*)t).test();
        else if (t.classinfo is typeid(C)) (cast(C)cast(void*)t).test();
        else t.test();
      }
    }()));
  }
}


And as it turns out, virtual method calls are at least fast enough not to make any measurable difference in my tests.

Here's the output of my little proggy for the last iteration:

Naive: 560958
Speculative for B: 574602
Speculative for B/C: 572429

If anything, the naive version is often a little faster.

This pretty much upends my established knowledge on the matter. It looks like the branch prediction in recent CPUs really is as good as people claim.

Sorry for the confusion.
