Unit tests, asserts and contradictions in the spec

Jonathan M Davis newsgroup.d at jmdavisprog.com
Fri Feb 8 13:48:19 UTC 2019


On Friday, February 8, 2019 3:04:35 AM MST John Colvin via Digitalmars-d 
wrote:
> On Thursday, 7 February 2019 at 18:06:24 UTC, H. S. Teoh wrote:
> > On Thu, Feb 07, 2019 at 04:49:38PM +0000, John Colvin via
> > Digitalmars-d wrote: [...]
> >
> >> A fork-based unittest runner would solve some problems without
> >> having to restart the process (could be expensive startup) or
> >> have people re-write their tests to use a new type of assert.
> >>
> >> The process is started, static constructors are run setting up
> >> anything needed, the process is then forked & the tests run in
> >> the fork until death from success or assert, somehow
> >> communicating the index of the last successful test to the
> >> runner (let's call that tests[i]). Then if i < test.length - 1
> >> do another fork and start from tests[i + 2] to skip the one
> >> that failed.
> >>
> >> There are probably corner cases where you wouldn't want this
> >> behavior, but I can't think of one off the top of my head.
> >
> > One case is where the unittests depend on the state of the
> > filesystem, e.g., they all write to the same temp file as part
> > of the testing process. I don't recommend this practice,
> > though, for obvious reasons.
>
> Why would this cause a problem? Unless the tests are dependent on
> the *order* they're run in, which is of course madness. (Note
> that I am not suggesting running in parallel and that file
> descriptors would be inherited in the child fork)
>
> Can you sketch out a concrete case?
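The fork-and-resume scheme quoted above could be sketched roughly as
follows. This is illustrative Python using POSIX fork (the post contains
no actual code, and a real D runner would hook into druntime); the child
reports each test's index over a pipe just before running it, so when an
assert kills the child, the parent knows which test failed and forks
again starting after it:

```python
import os

def run_tests(tests):
    """Fork-based runner sketch: run tests in a forked child until one
    fails, then fork again and resume after the failed test."""
    start = 0
    failures = []
    while start < len(tests):
        r, w = os.pipe()
        pid = os.fork()
        if pid == 0:
            # Child: report each test's index just before running it,
            # so the parent knows which one was running if we die.
            os.close(r)
            for i in range(start, len(tests)):
                os.write(w, i.to_bytes(4, "little"))
                tests[i]()  # a failing assert kills only this child
            os._exit(0)
        # Parent: read indices until the child closes its end of the pipe.
        os.close(w)
        data = b""
        while chunk := os.read(r, 4096):
            data += chunk
        os.close(r)
        _, status = os.waitpid(pid, 0)
        if os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0:
            break  # child ran everything from `start` onward successfully
        if not data:
            break  # child died before starting any test; give up
        last = int.from_bytes(data[-4:], "little")  # the test that failed
        failures.append(last)
        start = last + 1  # fork again, skipping the failed test
    return failures
```

Note that state set up before the fork (static constructors, in the D
case) is inherited by every child, but anything a test mutates is lost
when its child exits - which is exactly the corner case discussed below.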

I've worked on projects where the tests built on top of the previous ones
in order to be faster - e.g. for a program operating on a database, each
test adds or removes items, and each test then depends on the previous
ones, because whoever wrote the tests didn't want to recreate everything
for each test. IIRC, the tests for an XML parser at a company that I used
to work for built on one another so that they didn't have to keep reading
or writing the file from scratch. And I'm pretty sure that I've seen
other cases where
global variables were used between tests with the assumption that the
previous tests left them in a particular state, though I can't think of any
other concrete examples at the moment.

In general, I don't think that this is a good way to write tests, but it
_can_ reduce how long your tests take, and I've seen it done in practice.

IIRC, in the past, there was some discussion of running unittest blocks in
parallel, but it was quickly determined that if we were going to do
something like that, we'd need a way either to enforce that certain tests
not be run in parallel or to mark them so that they would be run in
parallel, because the odds were too high that someone out there was writing
tests that required that the unittest blocks be run in order and not in
parallel. Forking the program to run each unittest block does change the
situation somewhat compared to running each unittest in its own thread,
though not as much as it would in many languages, because D defaults to
thread-local storage (which makes each thread more or less the same as a
fork with regard to most of the state in a typical D program - just not
all of it).

- Jonathan M Davis




