Strategies for resolving cyclic dependencies in static ctors

Steven Schveighoffer schveiguy at yahoo.com
Fri Mar 25 05:26:04 PDT 2011


On Thu, 24 Mar 2011 20:38:30 -0400, Graham St Jack  
<Graham.StJack at internode.on.net> wrote:

> On 25/03/11 06:09, Steven Schveighoffer wrote:
>> On Thu, 24 Mar 2011 00:17:03 -0400, Graham St Jack  
>> <Graham.StJack at internode.on.net> wrote:
>>
>>> Regarding unit tests - I have never been a fan of putting unit test  
>>> code into the modules being tested because:
>>> * Doing so introduces stacks of unnecessary imports, and bloats the  
>>> module.
>>
>> As Jonathan says, version(unittest) works.  No need to bloat  
>> unnecessarily.
>
> Agreed. However, all the circularity problems pop up when you compile  
> with -unittest.

This might be true in some cases, yes.  It depends on how much a unit test  
needs to import.
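
To illustrate, here is a minimal sketch of how version(unittest) keeps
test-only imports out of a normal build (the module and names here are
made up for the example):

    module widget;

    // test-only imports are compiled only under -unittest, so they add no
    // dependencies -- and no chance of circular imports -- to a normal build
    version(unittest)
    {
        import std.exception : assertThrown;
    }

    int half(int x)
    {
        if (x % 2 != 0)
            throw new Exception("odd input");
        return x / 2;
    }

    unittest
    {
        assert(half(4) == 2);
        assertThrown!Exception(half(3));
    }

Compiled normally, the module has no dependency on std.exception; compiled
with -unittest, the tests and their import come along.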

>
>>
>>> * Executing the unittests happens during execution rather than during  
>>> the build.
>>
>> Compile-time code execution is not a good idea for unit tests.  It is  
>> always more secure and accurate to execute tests in the environment of  
>> the application, not the compiler.
>
> I didn't say during compilation - the build tool I use executes the test  
> programs automatically.

Your build tool can compile and execute unit tests automatically.
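
For instance, such a build step can be as small as this sketch (the file
names and helper are illustrative, not anyone's actual tool):

    // runtests.d -- compile a module with its unit tests, then run them,
    // failing the build on any error
    import std.process : executeShell;
    import std.stdio : writeln;

    int main()
    {
        // -unittest compiles in the unittest blocks; -main supplies an
        // empty entry point, so the binary just runs the tests and exits
        auto build = executeShell("dmd -unittest -main -ofwidget_test widget.d");
        if (build.status != 0)
        {
            writeln(build.output);
            return build.status;
        }
        auto run = executeShell("./widget_test");
        writeln(run.output);
        return run.status;  // nonzero aborts the build
    }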

>> Besides, this is an implementation detail.  It is easily mitigated.   
>> For example, phobos' unit tests can be run simply by doing:
>>
>> make -f posix.mak unittest
>>
>> and it builds + runs all unit tests.  This can be viewed as part of the  
>> "Build process".
>
> The problem I have with this is that executing the tests requires a  
> "special" build and run which is optional. It is the optional part that  
> is the key problem. In my last workplace, I set up a big test suite that  
> was optional, and by the time we got around to running it, so many tests  
> were broken that it was way too difficult to maintain. In my current  
> workplace, the tests are executed as part of the build process, so you  
> discover regressions ASAP.

It is only as optional as building your separate test programs is.  It all  
depends on how you set up your build script.

phobos could be set up to build and run unit tests when you type make, but  
it isn't, because most people don't need to unit test released code; they  
just want to build it.

>>
>> The whole point of unittests is that if they are not easy to do and  
>> conveniently located, people won't do them.  You may have a really good  
>> system and good coding practices that allows you to implement tests the  
>> way you do.  But I typically will forget to update tests when I'm  
>> updating code.  It's much simpler if I can just add a new line right  
>> where I'm fixing the code.
>
> In practice I find that unit tests are often big and complex, and they  
> deserve to be separate programs in their own right. The main exception  
> to this is low-level libraries (like phobos?).

It depends on the code you are testing.  Unit testing isn't for every  
situation.  For example, if you are testing that a client on one system  
can properly communicate with a server on another, it makes no sense to  
run that as a unit test.

Unit tests are for testing units -- small chunks of a program.  The point  
of unit tests is:

a) you are testing a small piece of a large program, so you can cover that  
small piece more thoroughly.
b) it's much easier to design tests for a small API than it is to design a  
test for a large one.  This is not to say that the test will be small, but  
it will be more straightforward to write.
c) if you test that all the small components of a system work the way they  
are designed, then the entire system should be less likely to fail.

This does not mean that a test of a function or class cannot be complex.

I can give you an example.  It takes little thinking and effort to test a  
math function like sin.  You provide your inputs, and test the outputs.   
It's a simple test.  When was the last time you worried that sin wasn't  
implemented correctly?  If you have a function that uses sin quite a bit,  
you are focused on testing the function, not sin, because you know sin  
works.  So the test of the function that uses sin gets simpler also.
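
A sketch of what I mean (chordLength is made up for the example; it uses
sin internally, and the test exercises the function's own logic while
taking sin on trust):

    import std.math : approxEqual, sin, PI;

    double chordLength(double radius, double angle)
    {
        return 2 * radius * sin(angle / 2);
    }

    unittest
    {
        // a chord spanning pi radians of a unit circle is the diameter
        assert(approxEqual(chordLength(1.0, PI), 2.0));
        // 2 * 2 * sin(pi/6) == 4 * 0.5
        assert(approxEqual(chordLength(2.0, PI / 3), 2.0));
    }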

>>> * Much easier to manage inter-module dependencies.
>>
>> Not sure what you mean here.
>
> I mean that the tests typically have to import way more modules than the  
> code under test, and separating them is a key step in eliminating  
> circular imports.

This can be true, but it also may be an indication that your unit test is  
over-testing.  You should be focused on testing the code in the module,  
not importing other modules.

> As for the time tests take, an important advantage of my approach is  
> that the test programs only execute if their test-passed file is out of  
> date. This means that in a typical build, very few (often 0 or 1) tests  
> have to be run, and doing so usually adds way less than a second to the  
> build time. After every single build (even in release mode), you know  
> for sure that all the tests pass, and it doesn't cost you any time or  
> effort.

This can be an advantage time-wise.  It depends on the situation.   
dcollections builds in less than a second, but the unit-test build takes  
about 20 seconds (due to a compiler design issue).  However, running the  
unit tests is quite fast.

Note that phobos unit tests are built separately (there is not one giant  
unit-test build; each file is unit tested separately), so it is still  
possible to do this with unit tests.
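
For example, a single module's tests can be built and run in isolation  
with something like:

dmd -unittest -main -run somefile.d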

> The difference in approach is basically this:
>
> With unittest, tests and production code are in the same files, and are  
> either built together and run together (too slow); or built separately  
> and run separately (optional testing).

Or built side-by-side and unit tests are run automatically by the build  
tool.

> With my approach, tests and production code are in different files,  
> built at the same time and run separately. The build system also  
> automatically runs them if their results-file is out of date (mandatory  
> testing).

Unit tests can be built at the same time as your production code, and run  
by the build tool.  You have obviously spent a lot of time creating a  
system where your tests only build when necessary.  I believe unit tests  
could also be built this way if you spent the time to get it working.
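
As a sketch of the idea (the helper below is hypothetical, not your
tool's actual logic): run a test binary only when its stamp file is
missing or older than the binary, and rewrite the stamp on success.

    import std.file : exists, timeLastModified, write;
    import std.process : executeShell;

    // run testExe only if stampFile is missing or out of date
    bool runIfStale(string testExe, string stampFile)
    {
        if (stampFile.exists &&
            timeLastModified(stampFile) >= timeLastModified(testExe))
            return true;  // results are current; skip the run

        if (executeShell(testExe).status != 0)
            return false;  // failed; the stamp stays stale, so it reruns

        write(stampFile, "passed\n");  // record the pass
        return true;
    }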

> Both approaches are good in that unit testing happens, which is very  
> important. What I like about my approach is that the tests get run  
> automatically when needed, so regressions are discovered immediately (if  
> the tests are good enough). I guess you could describe the difference as  
> automatic incremental testing versus manually-initiated batch testing.

Again, the manual part can be scripted, as can any manual running of a  
program.

A major difference, I would say, is that using unittest tends to run all  
the unit tests for your application at once (which I would actually  
recommend), whereas your method only tests the things you have deemed to  
need testing.  I think unittests can be done that way too, but it takes  
effort to work out the dependencies.

I would point out that the "separate test programs" approach takes a lot  
of planning and design to get it to work the way you want.  As you pointed  
out from previous experience, it's very easy to *not* set it up to run  
automatically.

With D unit tests, I think the setup for full unit testing is rather  
simple, which is a bonus.  But that doesn't mean it's to everyone's taste  
or right for every test.

-Steve

