Named unittests

Andrei Alexandrescu SeeWebsiteForEmail at erdani.com
Fri May 17 21:56:45 UTC 2019


On 5/17/19 2:17 PM, H. S. Teoh wrote:
> On Fri, May 17, 2019 at 05:42:16PM +0000, KnightMare via Digitalmars-d wrote:
>> On Monday, 30 March 2015 at 21:52:35 UTC, Andrei Alexandrescu wrote:
>>> we need named unittests
>>
>> 4 years have passed. what the status of subj?
> 
> Use a UDA to name your unittest, then write a custom unittest runner to
> invoke it by name.
> 
> There are already several alternative unittest runners on
> code.dlang.org, such as unit-threaded.
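Teoh's suggestion can be made concrete with today's language features. Below is a minimal sketch: `__traits(getUnitTests)`, `__traits(getAttributes)`, and string UDAs are real D features, while `runNamedTests` and its output format are made up for illustration. Compile with `-unittest` (note the default runner will also run the tests once before `main`; a full replacement would suppress it via `core.runtime.Runtime.moduleUnitTester`).

```d
import core.exception : AssertError;
import std.stdio : writefln;

@("Is this a pigeon?") unittest
{
    assert(1 + 1 == 2);
}

// Runs every unittest in `mod`, reporting each under its string UDA name
// (if any) and continuing past failures. Returns the failure count.
size_t runNamedTests(alias mod)()
{
    size_t failures;
    foreach (test; __traits(getUnitTests, mod))
    {
        string name = "(unnamed)";
        foreach (attr; __traits(getAttributes, test))
        {
            static if (is(typeof(attr) == string))
                name = attr;
        }
        try
        {
            test();
            writefln("passed unittest \"%s\"", name);
        }
        catch (AssertError e)
        {
            ++failures;
            writefln("%s(%s): failed unittest \"%s\"", e.file, e.line, name);
        }
    }
    return failures;
}

void main()
{
    runNamedTests!(mixin(__MODULE__))();
}
```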

It would be great if the default test runner printed, in case of failure, 
the file and line of the failing unittest. If there are string 
UDAs associated with the unittest, those should be printed too. 
Something like:

#line 42
@("Is this a pigeon?") unittest
{
     assert(0);
}

would print out something like:

program.d(42): failed unittest "Is this a pigeon?"
program.d(44): Error: core.exception.AssertError: Assertion failure

Currently, we print:

core.exception.AssertError at onlineapp.d(103): Assertion failure
----------------
??:? _d_assertp [0x5648d4103839]
onlineapp.d:103 _Dmain [0x5648d410252a]

which is terrible, and gratuitously so. Because the file/line 
information is formatted differently from compile-time errors, the 
variety of tools people already use (IDEs, emacs/vim modes, editor 
plug-ins) cannot jump straight to the offending line. They should be 
able to, because what are unittests if not a natural extrapolation of 
compile-time diagnostics? We should - must! - market unittests 
consistently like that: really an extension of what the compiler can 
check. (All that nonsense with running unittests and then the 
application should go too, though there are scripts relying on that.)

Not to mention the whole thing with stopping all unittesting once one 
unittest fails. Does the compiler stop at the first error? No? Then 
running tests should not, either.
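Continuing past a failure is in fact expressible with today's runtime hooks. A sketch, assuming druntime's real `Runtime.moduleUnitTester` hook and `ModuleInfo` iteration; `runAllModuleTests` is a hypothetical name, and the granularity is per module (a failed assert still skips the rest of that module's unittests):

```d
import core.exception : AssertError;
import core.runtime : Runtime;
import std.stdio : writefln;

// Runs each module's unittests, catching assertion failures so that one
// failing module does not stop the others. Returns true if all passed.
bool runAllModuleTests()
{
    size_t failed;
    foreach (m; ModuleInfo)
    {
        if (m is null)
            continue;
        if (auto fp = m.unitTest)
        {
            try
                fp();
            catch (AssertError e)
            {
                ++failed;
                // Compiler-error format, so editors can jump to the line:
                writefln("%s(%s): Error: failed unittest", e.file, e.line);
            }
        }
    }
    return failed == 0;
}

version (unittest) shared static this()
{
    // Installed before druntime runs the tests; returning true lets main run.
    Runtime.moduleUnitTester = &runAllModuleTests;
}

void main() {}
```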

Again: the entire unittest workflow should be designed, handled, and 
marketed as an extension of the semantic checking process. The fact that 
it's done after code generation is a minor detail.

This matter bubbles up with some frequency to the top of our community's 
consciousness. Yet there's always something that prevents it from 
getting fixed. Like that curse in Eastern European mythology about the 
builders who work on a church all day, only for the walls to fall at night.

People went off and created their own test runners, which is very nice, 
but there's a word to be said for choosing good defaults.

