Re: DMD unittest fail reporting…

Jacob Carlborg via Digitalmars-d digitalmars-d at puremagic.com
Sun Dec 6 03:11:08 PST 2015


On 2015-12-05 21:44, Russel Winder via Digitalmars-d wrote:

> For the purposes of this argument, let's ignore crashes or manually
> executed panics. The issue is the difference in behaviour between
> assert and Errorf. assert in languages that use it causes an exception
> and this causes termination which means execution of other tests does
> not happen unless the framework makes sure this happens. D unittest
> does not. Errorf notes the failure and carries on, which is crucially
> important for good testing using loops.

I think that the default test runner is completely broken in that it 
terminates the whole test suite when a single test fails. Although I do 
think it should terminate the rest of the test that failed. I also don't 
think one should test using loops.
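
To illustrate (a contrived sketch, the values are made up):

unittest
{
    foreach (input; [1, 2, 3, 4])
    {
        // Throws an AssertError at input == 2, so inputs 3 and 4 are
        // never checked, and the default runner stops the run here.
        assert(input % 2 == 1, "expected an odd input");
    }
}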

> Very true, and that is core to the issue here. asserts raise exceptions
> which, unless handled by the testing framework properly, cause
> termination. This is at the heart of the problem. For data-driven
> testing some form of loop is required. The loop must not terminate if
> all the tests are to run. pytest.mark.parametrize does the right thing,
> as do normal loops and Errorf. D assert does the wrong thing.

Nothing says that you have to use assert in a unit test ;)
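
For instance, you can collect the failures yourself, Errorf style, and 
only assert once at the end. A quick sketch, where "expect" is a 
made-up helper:

import std.conv : text;

unittest
{
    string[] failures;

    // Hypothetical Errorf-style check: record the failure and carry on.
    void expect(bool condition, string message)
    {
        if (!condition)
            failures ~= message;
    }

    foreach (i; [1, 2, 3, 4])
        expect(i % 2 == 1, text("expected odd, got ", i));

    // A single assert at the very end, after every value was checked.
    assert(failures.length == 0, text(failures));
}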

I'm not sure what your data looks like or what you're actually testing. 
But when I've needed to test multiple values, it was either a data 
structure, in which case I could do one assert for the whole structure, 
or I used multiple tests.
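
Something like this (a trivial sketch, the data is made up):

import std.algorithm : map;
import std.array : array;

unittest
{
    auto actual = [1, 2, 3].map!(x => x * 2).array;

    // One assert covering the whole result, instead of one per element.
    assert(actual == [2, 4, 6]);
}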

> I think this is the evidence that proves that the current D testing
> framework is in need of work to make it better than it is currently.

Absolutely, the built-in support is almost completely broken.

> If a stacktrace is needed the testing framework is inadequate.

I guess it depends on how you write your tests. If you only test a 
single function which doesn't call anything else, that will work. But as 
soon as the function you're testing calls other functions, a stack trace 
is really needed.

What do you do when you get a test failure because some 
exception/assertion is thrown deep inside some code you have never seen 
before, and you have no idea how the execution got there?

> dspecs

I'm not sure if you're referring to my "framework" [1] or this one [2]. 
But neither of them catches any exceptions; they behave just like the 
standard test runner. But I would like to implement a custom runner that 
catches assertions and continues with the next tests.
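
Roughly along these lines. This is a completely untested sketch; it 
catches per module, so it continues with the remaining modules' tests 
rather than with each individual unittest block:

import core.exception : AssertError;
import core.runtime : Runtime;
import std.stdio : writeln;

shared static this()
{
    // Swap in a module unit tester that catches assertion errors and
    // moves on to the next module's tests instead of aborting the run.
    Runtime.moduleUnitTester = function bool()
    {
        bool anyFailed = false;

        foreach (m; ModuleInfo)
        {
            if (m is null)
                continue;

            auto fp = m.unitTest;
            if (fp is null)
                continue;

            try
                fp();
            catch (AssertError e)
            {
                anyFailed = true;
                writeln("Failure in ", m.name, ": ", e.msg);
            }
        }

        // Returning false makes the runtime report the run as failed.
        return !anyFailed;
    };
}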

> and specd are

This one seems to only catch "MatchException". So if any other exception 
is thrown, including an assert error, it will behave the same as the 
standard test runner.

[1] https://github.com/jacob-carlborg/dspec
[2] https://github.com/youxkei/dspecs

-- 
/Jacob Carlborg

