I was wrong.

superdan super at dan.org
Thu Aug 14 07:04:37 PDT 2008


downs Wrote:

> superdan wrote:
> > downs Wrote:
> > 
> >> To clear this up, I've been running a benchmark.
> >>
> >> module test91;
> >>
> >> import tools.time, std.stdio, tools.base, tools.mersenne;
> >>
> >> class A { void test() { } }
> >> class B : A { final override void test() { } }
> >> class C : A { final override void test() { } }
> >>
> >> A a, b, c;
> >> static this() { a = new A; b = new B; c = new C; }
> >>
> >> A gen() {
> >>   if (randf() < 1f/3f) return a;
> >>   else if (randf() < 0.5) return b;
> >>   else return c;
> >> }
> >>
> >> void main() {
> >>   const count = 1024*1024;
> >>   for (int z = 0; z < 4; ++z) {
> >>     writefln("Naive: ", time({
> >>       for (int i = 0; i < count; ++i) gen().test();
> >>     }()));
> >>     writefln("Speculative for B: ", time({
> >>       for (int i = 0; i < count; ++i) {
> >>         auto t = gen();
> >>         if (t.classinfo is typeid(B)) (cast(B)cast(void*)t).test();
> >>         else t.test();
> >>       }
> >>     }()));
> >>     writefln("Speculative for B/C: ", time({
> >>       for (int i = 0; i < count; ++i) {
> >>         auto t = gen();
> >>         if (t.classinfo is typeid(B)) (cast(B)cast(void*)t).test();
> >>         else if (t.classinfo is typeid(C)) (cast(C)cast(void*)t).test();
> >>         else t.test();
> >>       }
> >>     }()));
> >>   }
> >> }
> >>
> >>
> >> And as it turns out, virtual method calls were at least fast enough to not make any sort of difference in my calls.
> >>
> >> Here's the output of my little proggy in the last iteration:
> >>
> >> Naive: 560958
> >> Speculative for B: 574602
> >> Speculative for B/C: 572429
> >>
> >> If anything, naive is often a little faster.
> >>
> >> This kind of completely contradicts my established knowledge on the matter. Looks like recent CPUs' branch prediction really is as good as people claim.
> >>
> >> Sorry for the confusion.
> > 
> > you're looking through binoculars at a coin a mile away and trying to figure out quarter or nickel. never gonna work. most likely your benchmark is buried in randf timing.
> > 
> > make the iteration cost next to nothing. put the objects in a medium-size array, then iterate over it many times.
> 
> I know. But if it doesn't matter in that case, it most likely won't matter in practical situations.
> 
> Nonetheless, here are some better timings, including a faster (dirtier) randf() function, and a null pass that only generates the object.
> 
> Null: 729908
> Naive: 1615314
> Speculative for B: 1692860
> Speculative for B/C: 1664040

the whole premise of speculation is that it greases the common path. you have uniform probabilities, so whatever you gain by speculating on B you lose in the extra test when you misspeculate on the others. to really test speculation, make B something like 90% of the cases and retry.
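
here's a rough sketch of what i mean. untested, just the shape of it: the objects go into an array up front, B gets about 90% of the slots, and the timed loops do nothing but dispatch. the lcg is a stand-in for randf so the fill isn't a fixed pattern, and timing is left to tools.time or whatever timer you like.

module test92;

import std.stdio;

class A { void test() { } }
class B : A { final override void test() { } }
class C : A { final override void test() { } }

void main() {
    const N     = 64 * 1024;   // medium-size array of objects, built once
    const iters = 256;         // 64K * 256 calls, comparable to the 1024*1024 above

    A[] objs = new A[N];

    // dirt-cheap LCG stand-in for randf(); take the high bits since the
    // low bits of an LCG cycle quickly
    uint seed = 12345;
    uint next() { seed = seed * 1664525 + 1013904223; return seed >> 16; }

    // roughly 90% B, the rest split between A and C
    foreach (ref o; objs) {
        if (next() % 10 < 9) o = new B;
        else if (next() & 1) o = new A;
        else                 o = new C;
    }

    // wrap each loop below in time(...) from tools.time, or any timer you like

    // naive: plain virtual dispatch on every call
    for (int z = 0; z < iters; ++z)
        foreach (o; objs)
            o.test();

    // speculative: guess B (the common case), fall back to the vtable otherwise
    for (int z = 0; z < iters; ++z)
        foreach (o; objs) {
            if (o.classinfo is typeid(B)) (cast(B)cast(void*)o).test();
            else o.test();
        }

    writefln("done with ", objs.length, " objects");
}

with a mix like that the speculative version should finally have something to gain; with a uniform mix it can't.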


