Approach to Integration Testing in D
H. S. Teoh
hsteoh at quickfur.ath.cx
Fri Feb 4 17:39:00 UTC 2022
On Fri, Feb 04, 2022 at 12:38:08PM +0000, Vijay Nayar via Digitalmars-d-learn wrote:
> What is your approach to integration testing in D? Do you use
> `unittest` blocks? Do you write stand-alone programs that interact
> with a running version of your program? Is there a library that makes
> certain kinds of testing easier?
Unittests are, by definition, *unit* tests, :-) meaning they are more
suitable for testing individual functions or modules, not really for
integration testing of the entire program.
Though in practice, the line is somewhat blurry, and I have written
unittests that do test functionality across modules at times. The key is
to write your code in a way that's amenable to testing, one principle of
which is to avoid dependency on global state (cf. dependency injection).
For example, a function that uses std.stdio.File could be made
unittest-able by parametrizing `File` as a template parameter, so that a
unittest block can inject a proxy type that performs the test without
actually touching the filesystem (which could cause unwanted
side-effects, esp. if the same file(s) are touched by multiple tests).
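As a sketch of that idea (the function and mock below are invented for illustration, not from the original post): the file type becomes a template parameter, and the unittest substitutes an in-memory mock for `std.stdio.File`.

```d
// Hypothetical example: `writeGreeting` would normally take a
// std.stdio.File, but the file type is a template parameter so a
// unittest can inject a mock that never touches the filesystem.
void writeGreeting(FileT)(ref FileT f, string name)
{
    f.writeln("Hello, ", name, "!");
}

unittest
{
    // Mock that records output in memory.
    static struct MockFile
    {
        string contents;
        void writeln(Args...)(Args args)
        {
            import std.conv : text;
            contents ~= text(args) ~ "\n";
        }
    }

    MockFile mock;
    writeGreeting(mock, "world");
    assert(mock.contents == "Hello, world!\n");
}
```

In production code you instantiate with the real `File`; the unittest instantiates with the mock and asserts on what was written.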
> For example, if I have a D project based on vibe.d, and I have custom
> converters to receive REST API request bodies in different formats
> based on the "Content-Type" HTTP header and other converter for the
> response based on the "Accepts" header, what is the best approach to
> test the entire program end-to-end?
Depending on how you structured your program, templatizing the types it uses could make it unittest-able on a larger scale, e.g. if there were a way to inject your own request/response types into the code. Unittests could then pass in mock request/response types containing test data and exercise the program logic that way.
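A minimal sketch of that injection, assuming a handler written against template parameters rather than vibe.d's concrete types (the handler, field names, and dispatch logic below are invented, not vibe.d's actual API):

```d
// Hypothetical handler: works with whatever request/response types it
// is instantiated with, so a unittest can supply plain structs instead
// of live HTTP objects.
void handleEcho(Req, Res)(Req req, ref Res res)
{
    // Dispatch on Content-Type, as described in the question.
    if (req.header("Content-Type") == "application/json")
        res.write(`{"echo":"` ~ req.payload ~ `"}`);
    else
        res.write(req.payload);
}

unittest
{
    static struct MockRequest
    {
        string[string] headers;
        string payload;
        string header(string name) { return headers.get(name, ""); }
    }

    static struct MockResponse
    {
        string written;
        void write(string s) { written ~= s; }
    }

    auto req = MockRequest(["Content-Type": "application/json"], "hi");
    MockResponse res;
    handleEcho(req, res);
    assert(res.written == `{"echo":"hi"}`);
}
```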
In some cases, however, it may be difficult to do this on a program-wide scale, and an external test driver becomes necessary. In some of my projects I've written helper programs that read a directory of test cases (which specify program options, inputs, expected outputs, etc.), invoke the program, and compare its output against the expected output. This is hooked up to my build script so that the tests run automatically upon each rebuild. Adding a test case is then just a matter of adding some files to the directory and re-running the build.
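Such a driver can be quite small. The sketch below assumes an invented layout (a `testcases/` directory where each test case holds a `cmdline` file with program arguments and an `expected` file with the expected stdout) and an invented program name `./myprogram`:

```d
// Hypothetical test driver: runs the program once per test-case
// directory and diffs actual output against expected output.
import std.array : split;
import std.file : dirEntries, readText, SpanMode;
import std.process : execute;
import std.stdio : writefln;
import std.string : strip;

void main()
{
    foreach (entry; dirEntries("testcases", SpanMode.shallow))
    {
        auto args = readText(entry.name ~ "/cmdline").strip.split;
        auto expected = readText(entry.name ~ "/expected");

        // Run the program under test with the case's arguments.
        auto result = execute(["./myprogram"] ~ args);

        if (result.status == 0 && result.output == expected)
            writefln("PASS %s", entry.name);
        else
            writefln("FAIL %s", entry.name);
    }
}
```

Hooking this into the build means a regression is caught the moment you rebuild, with no extra step to remember.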