std.benchmark is in reviewable state
Christophe
travert at phare.normalesup.org
Wed Sep 28 05:19:16 PDT 2011
"Robert Jacques" , dans le message (digitalmars.D:145443), a écrit :
> *sigh* By multiple runs, I mean multiple runs of the benchmark function, _including_ the loops. From my post last Tuesday:
>
> double min_f1 = int.max;
> double min_f2 = int.max;
> foreach (i; 0 .. 10_000) {
>     auto b = benchmark!(f1, f2)(3);
>     min_f1 = min(min_f1, b[0].to!("seconds", double));
>     min_f2 = min(min_f2, b[1].to!("seconds", double));
> }
>
> All the looping that benchmark currently does is to lengthen the total
> time a function takes. This makes it possible to avoid sampling errors
> with regard to the performance counter itself. But you are still only
> making a single, noisy measurement of a (meta)function. In order to
> get a robust result, you need to make multiple measurements and take
> the minimum.
Sorry to question you, but what makes you say that the minimum is more
interesting than, say, the mean plus the standard deviation? Is there
any paper supporting this?

I am not a benchmark specialist, but an average statistician who knows
that the minimum is not always very informative (a very low value can be
a stroke of luck). I am sure you may have good reasons to choose the
minimum, but I just want to make sure we make the right choice in
using the minimum of consecutive benchmarks.
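
For concreteness, here is a minimal sketch of what I mean, using the
same std.datetime benchmark as the quoted code (f1 here is only a
stand-in for whatever function is being measured). It collects both the
minimum and the mean/standard deviation over repeated runs, so the two
summaries could at least be compared on real data:

import std.algorithm : min;
import std.datetime : benchmark;
import std.math : sqrt;
import std.stdio : writefln;

void f1() { /* stand-in for the function being measured */ }

void main()
{
    enum runs = 10_000;
    double minTime = double.max;
    double sum = 0, sumSq = 0;

    foreach (i; 0 .. runs)
    {
        auto b = benchmark!(f1)(3);           // time 3 calls of f1
        auto t = b[0].to!("seconds", double); // elapsed time in seconds
        minTime = min(minTime, t);            // Robert's summary: the minimum
        sum   += t;                           // accumulate for the mean...
        sumSq += t * t;                       // ...and for the variance
    }

    auto mean   = sum / runs;
    auto stdDev = sqrt(sumSq / runs - mean * mean);
    writefln("min = %s s, mean = %s s, stddev = %s s",
             minTime, mean, stdDev);
}

(The single-pass variance formula is just for brevity; it can lose
precision on long runs, but for a quick comparison it should be fine.)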
--
Christophe