Helper unit testing functions in Phobos (possible std.unittests)
spir
denis.spir at gmail.com
Sun Nov 7 03:22:15 PST 2010
On Sat, 06 Nov 2010 19:45:32 +0000
Adam Burton <adz21c at gmail.com> wrote:
> bearophile wrote:
>
> > spir:
> >
> >> Jonathan M Davis:
> >>
> >> > I believe strongly that a unit test block which has a failure should
> >> > end execution. For many such tests, continuing would be utterly
> >> > pointless, since each successive test relies on the last.
> >>
> >> I don't understand. I can have one dozen test cases for each of one dozen
> >> funcs. All 144 tests are independent. I prefer the possibility to see
> >> all test errors in one go, if any. Anyway, there may be a flag
> >> STOP_AT_FIRST_TEST_ERROR (or the opposite).
> >
> > If you look at all the unittests used in the real world (D is not one of
> > them yet), they give you statistics, they tell you how many tests have
> > failed. So unittesting goes on when there is an error. A single unittest
> > may be stopped if there is a failure inside it. This is how D behaves now,
> > and I think it's correct.
> >
> > Bye,
> > bearophile
> Depends how you are defining test. Do you mean an individual test on a unit
> or tests among multiple units? Personally I prefer to design my tests on a
> single unit to all be independent of each other (so any common data and
> types are reset at the start of each test). By having multiple tests fail I
> can often find the fault common among all of them which usually allows me to
> fix an issue faster. If all the tests on the unit depended on each other I'd
> only be able to rely on the first test that failed, which gives me less
> information to go on.
Exactly. I completely share this point of view.
But there is more to it for me: unit tests are the runtime equivalent of a compiler's checks, only far less reliable, except in trivial cases. We want the compiler to keep going after the first error, when possible, so that it gives us enough relevant information to correct several errors at once. The same applies to unit tests.
In other words, I consider a unit test a tool for monitoring my code. By default, my tests are verbose: they write out the outcome even when they pass; a switch allows making them silent.
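For instance, a tiny non-aborting check helper built on this idea could look roughly like the sketch below; the names check and silentTests are mine, just for illustration, not an existing Phobos API:

import std.stdio;

bool silentTests = false;   // switch to true to suppress output for passing checks

// Report a comparison failure, but keep running instead of aborting like assert.
void check(T, U)(T actual, U expected, string label = "",
                 string file = __FILE__, size_t line = __LINE__) {
    if (actual != expected)
        writefln("FAIL %s(%s) %s: expected %s, got %s",
                 file, line, label, expected, actual);
    else if (!silentTests)
        writefln("pass %s(%s) %s: %s", file, line, label, actual);
}

unittest {
    check(2 + 2, 4, "addition");
    check("ab" ~ "cd", "abcd", "concatenation");
}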
From this point of view, asserts are too low-level, but sophisticated unittest engines are far too complicated: they are rather a barrier than a help. I tried to find the proper level of practicality & generality, as explained in a previous post; but it is hard, because various kinds of application seem to require different approaches, so I often end up writing ad hoc testing tools. Here is an example, from a PEG parsing library I just started to write yesterday (*):
class Pattern {
    // ...
    void test(in string text, bool failure = false, string silent = "") {
        /+ Test this pattern on text.
           Note: use the constants FAILURE & SILENT as arguments.
           (Did not find a way yet to have 2 optional bool params.)
           * the expected outcome may be either
             success (--> node) or failure (--> exception)
           * in case of a wrong outcome, a TestError exception is thrown
           * if !silent, the outcome is written on the console
           * if silent, the outcome is written only in case of failure
           Uses writeTestOutcome.
        +/
        string result;              // waiting for the Node type
        // perform the test
        try {
            result = this.match(text);
        } catch (MatchError e) {
            result = "*failure*";
        }
        // check the outcome --> possibly report a test error
        if ((failure && result != "*failure*") ||
            (!failure && result == "*failure*")) {
            writeTestOutcome(this, text, result);
            throw new TestError();
        }
        // write the outcome if not silent
        if (silent != SILENT) {
            writeTestOutcome(this, text, result);
        }
    }
}
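A call site would then look roughly like this; FAILURE and SILENT are the constants mentioned in the doc comment, and their definitions here are only guesses:

enum FAILURE = true;       // expected outcome is a match failure
enum SILENT  = "silent";   // any value != "" silences output for passing tests

unittest {
    auto p = new Pattern(/* some grammar fragment */);
    p.test("text that should match");                   // expect success, verbose
    p.test("text that should not match", FAILURE);      // expect a match failure
    p.test("text that should match", false, SILENT);    // report only on error
}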
Denis
(*) As a means to learn D, so it may well be really naive code. If there is a need for it (I did not find any D PEG tool), I will publish it for the community. I did not get it right at once: I had to write a meta testTest() func!
-- -- -- -- -- -- --
vit esse estrany ☣
spir.wikidot.com