std.unittests for (final?) review

Jens Mueller jens.k.mueller at gmx.de
Mon Jan 3 07:34:53 PST 2011


> >>In fact (without looking at std.unittest) I think it should be grouped
> >>with a simple benchmark facility. That's what the homonym frameworks in
> >>Google's and Facebook's code base do.
> >
> >I'm afraid that I don't see what unit test helper functions have to do with
> >benchmarking.
> 
> They both help the decision: "Is this module good to go?" At work we
> almost always put benchmarks in the same file as the unit tests, and
> run them together (subject to a --benchmark flag because benchmarks
> tend to take a fair amount of time). I find that very natural,
> although I should add that the unittest and benchmark libraries are
> nominally distinct.

How do you do it? I've had similar thoughts: when writing tests, I also
want to add some performance tests. In D I thought about putting a
version(performance) block (or perhaps better, version(benchmark))
inside a unittest {} to assess performance. That way, compiling with
-unittest -version=performance will also execute the benchmarking code.
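A minimal sketch of that idea, assuming current dmd and today's module
path std.datetime.stopwatch (at the time of this post the StopWatch type
lived directly in std.datetime); sumSquares is a made-up function under
test:

```d
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio : writeln;

// Hypothetical function under test.
int sumSquares(int n)
{
    int total = 0;
    foreach (i; 0 .. n)
        total += i * i;
    return total;
}

unittest
{
    // Correctness checks always run under -unittest.
    assert(sumSquares(0) == 0);
    assert(sumSquares(3) == 0 + 1 + 4);

    // Benchmarking code runs only when additionally compiled with
    // -version=benchmark.
    version (benchmark)
    {
        auto sw = StopWatch(AutoStart.yes);
        foreach (_; 0 .. 1_000_000)
            sumSquares(100);
        sw.stop();
        writeln("sumSquares: ", sw.peek.total!"msecs", " ms");
    }
}
```

Building with `dmd -unittest -version=benchmark` then runs both the
assertions and the timing loop; a plain `dmd -unittest` build skips the
version(benchmark) block entirely.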

> >And I don't believe that we have a benchmarking module at the
> >moment regardless, so if you want to do that, we'd need to create one. The only
> >benchmarking stuff is the benchmarking stuff in std.datetime that
> >SHOO did, which isn't all that much code. I would have thought that unit test
> >helper functions would merit their own module, particularly when I don't see
> >what they have to do with benchmarks.
> 
> Adding basic support for benchmarking shouldn't be difficult and
> does not entail a lot of code, but I understand it if you're not all
> that motivated to work on that.

What do you think is needed for basic support? I found std.perf and used
it, but I think it's deprecated. std.datetime should do as well,
though I haven't looked at it yet.
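For reference, a sketch of what "basic support" via std.datetime looks
like, assuming the benchmark template from today's
std.datetime.stopwatch (originally part of std.datetime itself); the two
sum functions are made-up candidates to compare:

```d
import std.algorithm.iteration : sum;
import std.datetime.stopwatch : benchmark;
import std.range : iota;
import std.stdio : writefln;

// Two hypothetical implementations to compare.
int sumLoop()
{
    int total = 0;
    foreach (i; 0 .. 1000)
        total += i;
    return total;
}

int sumRange()
{
    return iota(1000).sum;
}

void main()
{
    // benchmark runs each function the given number of times and
    // returns one elapsed Duration per function.
    auto results = benchmark!(sumLoop, sumRange)(10_000);
    writefln("sumLoop:  %s", results[0]);
    writefln("sumRange: %s", results[1]);
}
```

That is roughly all a basic facility needs: run each candidate a fixed
number of iterations and report the elapsed time per candidate.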

Jens


More information about the Digitalmars-d mailing list