Unit tests in D

Lutger lutger.blijdestijn at gmail.com
Wed May 5 13:51:31 PDT 2010


bearophile wrote:

> dmd 2.045 improves the built-in unit tests: they now resume running when an
> assert fails (only the first failed assert of each unit test is reported).
> 
> There are many features that a professional unittest system is expected to
> offer today; I could write a long list. But in the past I have explained why
> it's a wrong idea to try to implement all those things in dmd.
> 
> So a good solution that has all the advantages is:
> - To add to dmd the "core" features that are both important and hard to
> implement nicely in an external library or IDE (or to make D flexible enough
> that writing such libs is possible, but this may not be easy).
> - To add to dmd the compile-time reflection, run-time reflection or hooks
> that external unittest libs/IDEs can use to extend the built-in unit testing
> functionality.
> 
> It's not easy to find such core features (ones that an IDE can use, but that
> are usable from the normal command line too). This is my first try, and I
> can be wrong. Feel free to add items you think are necessary, or to remove
> items you know can be implemented nicely in an external library. Later I can
> write an enhancement request.

I think that most of the features you mention can be implemented in a 
library, but at some cost. For example, tests get more verbose. Or, with the 
string mixin syntax of unaryFun and binaryFun, you are limited to the 
parameters and to the symbols visible from a select few Phobos modules. There 
is something to be said for the simplicity of how D natively handles 
unittesting, I think. 

Perhaps some issues with local template instantiation and/or mixin 
visibility can be sorted out to improve this; I'm not sure - it's a bit above 
my head. Anyway, I think that if unittests can get a name as a parameter and 
dmd lets the user set the AssertError handler, that will be a sufficient hook 
to provide some useful user extension of the unittest system. I'll post some 
more specific comments below: 
 
> ---------------------
> 
> 1) It's very useful to have a way to catch static asserts too, because
> templates and other constructs can contain static asserts, which can for
> example be used to test whether input types or constants are correct. When I
> write unittests for those templates, I want to test that they actually do
> statically assert when I use them in a wrong way.
> 
> A possible syntax (this has to act like static assert):
> static throws(foo(10), StaticException1);
> static throws(foo(10), StaticException1, StaticException2, ...);
> 
> A version that catches run-time asserts (this has to act like asserts):
> throws(foo(10), Exception1);
> throws(foo(10), Exception1, Exception2, ...);
> 
> 
> There are ways to partially implement this for run-time asserts, but badly:
> 
> void throws(Exceptions, TCallable, string filename=__FILE__,
>             int line=__LINE__)
>            (lazy TCallable callable) {
>     import std.conv: text;  // for the error message below
>
>     try
>         callable();
>     catch (Exception e) {
>         if (cast(Exceptions)e !is null)
>             return;  // the expected exception was thrown
>     }
>
>     // reached when nothing was thrown, or a wrong exception was swallowed
>     assert(0, text(filename, "(", line,
>                    "): doesn't throw any of the specified exceptions."));
> }
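
For illustration, a call to the template above could look like this 
(MyException and foo are made-up names, not from the post):

class MyException : Exception
{
    this(string msg) { super(msg); }
}

void foo(int x)
{
    if (x >= 5)
        throw new MyException("x too big");
}

unittest
{
    throws!(MyException)(foo(10));  // passes: foo(10) throws MyException
}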

I have set up a unittesting system in a hacky way that does something like 
this, but instead of asserting it just prints the error and collects the test 
result in global storage. When the program ends, this gets written to a JSON 
file. It seems to work well enough - what do you think the above lacks? I 
have wrapped it in something like this:

expectEx!SomeException(foo(10));
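
A minimal sketch of how such a wrapper might work (illustrative only - 
TestResult and results are made-up names, and the JSON writing is omitted):

import std.stdio;

struct TestResult
{
    string file;
    size_t line;
    bool passed;
    string message;
}

TestResult[] results;  // global storage, written to a JSON file at exit

void expectEx(E : Exception, T)(lazy T expr,
                                string file = __FILE__,
                                size_t line = __LINE__)
{
    bool caught = false;
    try
        expr();
    catch (E e)
        caught = true;
    if (!caught)
        writefln("%s(%s): expected exception was not thrown", file, line);
    results ~= TestResult(file, line, caught,
                          caught ? "" : "expected exception was not thrown");
}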

I have also done this for compile-time assertions, but it is more limited:

int a;
expectCompileError!(isSorted, a);
// or:
expectCompileError!(q{ isSorted(a) }, a);

One cool thing about D2, however, is that you can get the exact name and 
value of an alias parameter through local instantiation. For example, this 
test:

int[] numbers = [3,2,1,7];
expect!( isSorted, numbers );

prints:
test.d(99) numbers failed: isSorted(alias less = "a < b",Range) if 
(isForwardRange!(Range)) ( numbers ) :: numbers == [3 2 1 7]  
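
A hedged sketch of how such an expect can be built on alias parameters and 
local instantiation (the details here are guesses; as the output above shows, 
the real version also prints the predicate's full signature):

import std.algorithm;
import std.stdio;

void expect(alias pred, alias value)(string file = __FILE__,
                                     size_t line = __LINE__)
{
    if (!pred(value))
        writefln("%s(%s) %s failed: %s( %s ) :: %s == %s",
                 file, line, value.stringof,
                 __traits(identifier, pred), value.stringof,
                 value.stringof, value);
}

unittest
{
    int[] numbers = [3, 2, 1, 7];
    expect!(isSorted, numbers)();  // 'numbers' found via local instantiation
}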
 
> 2) Names for unittests. Giving names to things in the universe is a first
> essential step if you want to try to understand some part of it. The
> compiler makes sure that no two unittests in a module share the same name.
> An example:
> 
> int sqr(int x) { return 2 * x; }
> 
> /// asserts that it doesn't return a negative value
> unittest(sqr) {
>     assert(sqr(10) >= 0);
>     assert(sqr(-10) >= 0);
> }

I agree. As mentioned, I have done this by writing to some global state which 
records the currently running test; the 'assertions' then write to that same 
state. If only one thing in the native system could improve, I think this 
should be it.
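
In sketch form the idea is no more than this (all names are illustrative):

import std.stdio;

string currentTest;  // the named test that is currently running

void beginTest(string name) { currentTest = name; }

// the 'assertion': reports failures against the current test's name
void expectTrue(bool cond, string msg,
                string file = __FILE__, size_t line = __LINE__)
{
    if (!cond)
        writefln("%s(%s): test %s failed: %s", file, line, currentTest, msg);
}

int sqr(int x) { return x * x; }  // stand-in for the function under test

unittest
{
    beginTest("sqr");
    expectTrue(sqr(-10) >= 0, "sqr must not return a negative value");
}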

> ---------------------
> 
> 3) Each unittest error has to state the (optional) name of the unittest it
> is contained in. For example:
> 
> test4(sqr,6): unittest failure

One tiny tip: test.d(6): unittest sqr failed

This way IDEs and editors can parse it like regular D errors and jump to the 
failed test.

> ---------------------
> 
> 4) The dmd JSON output has to list all the unittests, with their optional
> names (because the IDE can use this information to do many important
> things).
> 
> ---------------------
> 
> 5) Optional ddoc text for unittests (to allow IDEs to show the programmer
> the purpose of a specific test that has failed).
> 
> Unittest ddocs don't show up inside the HTML generated with -D because the
> user of the module doesn't need to know the purpose of its unittests. So
> maybe they appear only inside the JSON output.

I think there's a report in Bugzilla from Andrei requesting that the 
unittests themselves can be turned into documentation. Together with 
preconditions in ddoc, that would seem very useful.

> ---------------------
> 
> 6a) A way to enable only unittests of a module. Because in a project there
> are several modules, and when I work on a module I often want to run only
> its unittests. In general it's quite useful to be able to disable
> unittests.

Shouldn't this be part of a tool that compiles and runs the tests of each 
module? rdmd has the option --main; together with -unittest you can easily do 
this.
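
For example, to run only the unittests of a single module (foo.d is a 
placeholder):

    rdmd --main -unittest foo.d

-unittest compiles in the unittest blocks and --main makes rdmd append an 
empty main(), so the module builds and runs stand-alone.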
 
...
> 
> 
> Three more half-baked things; if you know how to improve or design these
> ideas, you can tell me:
> 
> A) A serious unittest system needs a way to allow sharing of setup and
> shutdown code for tests.
> 
> From Python unittest: a test fixture is the preparation needed to perform
> one or more tests, and any associated cleanup actions. This may involve, for
> example, creating temporary or proxy databases, directories, or starting a
> server process, etc.
> 
> Fixtures can be supported at the package, module, class and function level.
> Setup always runs before any test (or groups of tests).
> 
> setUp(): Method called to prepare the test fixture.
> tearDown(): Method called immediately after the test method has been called
> and the result recorded.

I think this is standard xUnit. I have this:

class TestSuiteA : Fixture
{
    void setup() { /* initialize */}
    void teardown() { /* cleanup or restore */ }
    void test_foo() { /* test ...*/}
    void test_bar() { /* test ...*/}
}

// running setup, test_foo, test_bar and finally teardown:

unittest
{
    runSuite!TestSuiteA(); 
}


But it seems like a good idea to separate the fixture from the test suite, so 
they can easily be reused. 
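
A hedged sketch of how runSuite might discover and call the test methods 
with compile-time reflection (illustrative only; this is not the actual 
implementation):

import std.stdio;

class Fixture
{
    void setup() {}
    void teardown() {}
}

// Sketch: find all test_* methods at compile time and run them between
// one setup/teardown pair, matching the comment in the example above.
void runSuite(T : Fixture)()
{
    auto suite = new T;
    suite.setup();
    scope(exit) suite.teardown();

    foreach (member; __traits(derivedMembers, T))
    {
        static if (member.length > 5 && member[0 .. 5] == "test_")
        {
            writeln("running ", member);
            __traits(getMember, suite, member)();
        }
    }
}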
 
> ---------------------
> 
> B) There are situations when you don't want to count how many unittests
> have failed, but want to fix a bug with a debugger. For this, a command
> line switch to turn unittest asserts back into normal asserts can be
> useful.

This is possible if you replace the assert statement with your own, or 
perhaps hook up an assertHandler. At the moment the behavior is a bit weird; 
perhaps it is a bug?

unittest
{
    void inner()
    {
        assert( false, "bar" );
    }
    assert( false, "foo" );
    inner();
    assert( false, "baz" );
}

This unittest halts with an AssertError inside inner(): the top-level asserts 
are handled by the unittest machinery, but the one in the nested function 
apparently is not, so "baz" is never reached.
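
To illustrate the assertHandler route: I believe druntime's core.exception 
exposes setAssertHandler, though the exact name and signature may differ 
between releases, so treat this sketch as an assumption:

import core.exception;

// Assumption: core.exception.setAssertHandler with roughly this signature.
static this()
{
    setAssertHandler(function void(string file, size_t line, string msg)
    {
        // Record or log the failure here. If the handler returns instead
        // of throwing an AssertError, execution should continue past the
        // failed assert; alternatively, break into the debugger here.
    });
}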
 
> ---------------------
> 
> C) I'd like a way to associate a specific unittest to a function, class or
> module, or something else that is being tested, because this can be useful
> in several different ways. But this seems not easy to design. Keep in mind
> that tests can be in a different module. I don't know how to design this,
> or if it can be designed well.

Looks hard to me too. I think I like the idea that unittests should be close 
to the code they test; otherwise it is a different kind of test. Perhaps we 
should consider ddoc, contracts and unittests as much a part of the code as 
the code itself? 
 
> 
> Bye,
> bearophile


