Parallel execution of unittests

Steven Schveighoffer via Digitalmars-d digitalmars-d at puremagic.com
Thu May 1 07:42:54 PDT 2014


On Thu, 01 May 2014 00:49:53 -0400, Jonathan M Davis via Digitalmars-d  
<digitalmars-d at puremagic.com> wrote:

> On Wed, 30 Apr 2014 20:33:06 -0400
> Steven Schveighoffer via Digitalmars-d <digitalmars-d at puremagic.com>
> wrote:
>
>> On Wed, 30 Apr 2014 13:50:10 -0400, Jonathan M Davis via
>> Digitalmars-d <digitalmars-d at puremagic.com> wrote:
>>
>> > On Wed, 30 Apr 2014 08:59:42 -0700
>> > Andrei Alexandrescu via Digitalmars-d <digitalmars-d at puremagic.com>
>> > wrote:
>> >
>> >> On 4/30/14, 8:54 AM, bearophile wrote:
>> >> > Andrei Alexandrescu:
>> >> >
>> >> >> A coworker mentioned the idea that unittests could be run in
>> >> >> parallel
>> >> >
>> >> > In D we have strong purity to make it safer to run code in
>> >> > parallel:
>> >> >
>> >> > pure unittest {}
>> >>
>> >> This doesn't follow. All unittests should be executable
>> >> concurrently. -- Andrei
>> >>
>> >
>> > In general, I agree. In reality, there are times when having state
>> > across unit tests makes sense - especially when there's expensive
>> > setup required for the tests.
>>
>> int a;
>> unittest
>> {
>>     // set up a;
>> }
>>
>> unittest
>> {
>>     // use a;
>> }
>>
>> ==>
>>
>> unittest
>> {
>>     int a;
>>     {
>>        // set up a;
>>     }
>>     {
>>        // use a;
>>     }
>> }
>>
>> It makes no sense to do it the first way; you are not gaining
>> anything.
>
> It can make sense to do it the first way when it's more like
>
> LargeDocumentOrDatabase foo;
>
> unittest
> {
>     // set up foo;
> }
>
> unittest
> {
>    // test something using foo
> }
>
> unittest
> {
>   // do other tests using foo which then take advantage of changes made
>   // by the previous test rather than doing all of those changes to
>   // foo in order to set up this test
> }
>
> In general, I agree that tests shouldn't be done that way, and I
> don't think that I've ever done it personally, but I've seen it done,
> and for stuff that requires a fair bit of initialization, it can save
> time to have each test build on the state of the last. But even if we
> all agree that that sort of testing is a horrible idea, the language
> supports it right now, and automatically parallelizing unit tests will
> break any code that does that.

I recommend optimizing using a function.

i.e.:

version(unittest)
{
   LargeDocumentOrDatabase foo;

   auto getFoo() {/* check to see if foo is set up, return it*/}
}
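
That sketch could be fleshed out along these lines (the `Fixture` type, its `ready` flag, and the setup body are all hypothetical stand-ins for whatever expensive state is being shared):

```d
version(unittest)
{
    // Hypothetical stand-in for LargeDocumentOrDatabase.
    struct Fixture
    {
        bool ready;
        // ... expensive state ...
    }

    Fixture fixture;

    // Build the expensive state on first use, so no test depends on
    // another test having run before it.
    ref Fixture getFixture()
    {
        if (!fixture.ready)
        {
            // expensive setup would go here
            fixture.ready = true;
        }
        return fixture;
    }
}

unittest
{
    assert(getFixture().ready);  // first caller triggers the setup
}

unittest
{
    assert(getFixture().ready);  // later callers reuse the same state
}
```

Note that once tests actually run in parallel, the first-use check itself becomes a race; the getter would then need a mutex or some other one-time-initialization guard.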

I understand what you are saying. I think the largest problem with  
parallelizing unit tests is that people haven't been careful to make sure  
that's possible. Now they should, or face the consequences.

The point I was making, however, is that within a module, you can choose  
whether you want parallel or serial unit tests. If you want parallel,  
separate them into multiple unittest blocks. If you want serial, put them  
in one.

For the super-rare cases where tests need to run serially, put them in one
block. It's not hard.
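
In other words, the grouping convention might look like this (the test bodies are placeholders):

```d
// Independent checks: each in its own block, so a parallel runner
// is free to execute them concurrently.
unittest { assert(2 + 2 == 4); }
unittest { assert("abc".length == 3); }

// Dependent steps: kept together in one block, so they always run
// serially and in order, whatever the runner does.
unittest
{
    int[] data;
    data ~= 1;               // "set up" step
    data ~= 2;               // builds on the previous step
    assert(data == [1, 2]);  // checks the accumulated state
}
```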

>> > Honestly, the idea of running unit tests in parallel makes me very
>> > nervous. In general, across modules, I'd expect it to work, but
>> > there will be occasional cases where it will break.
>>
>> Then you didn't write your unit tests correctly. True unit tests,
>> anyway.
>>
>> In fact, the very quality that makes unit tests so valuable (that
>> they are independent of other code) is ruined by sharing state across
>> tests. If you are going to share state, it really is one unit test.
>
> All it takes is that tests in two separate modules which have separate
> functionality access the file system or sockets or some other system
> resource, and they could end up breaking due to the fact that the other
> test is messing with the same resource. I'd expect that to be a
> relatively rare case, but it _can_ happen, so simply parallelizing
> tests across modules does risk test failures that would not have
> occurred otherwise.

Right, and with the knowledge that unit tests are being run in parallel,  
one can trivially change their design to fix the problem.

I agree that my assumptions didn't cover what you were thinking of. I  
wasn't considering shared system resources. But even those cases aren't  
too difficult to figure out.

I do think there should be a way to mark a unit test as "don't parallelize  
this".
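
One conceivable shape for such a marker, assuming a runner that can see attributes on unittest blocks (the `Serial` attribute is entirely hypothetical; nothing in druntime recognizes it today):

```d
// Hypothetical marker attribute -- no such UDA exists in druntime.
struct Serial {}

@Serial unittest
{
    // touches process-global state (e.g. the local time zone), so a
    // parallel-aware runner would execute it with no other test running
}

unittest
{
    // unmarked: eligible for parallel execution
    assert(1 + 1 == 2);
}
```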

>> > Across the unittest
>> > blocks in a single module, I'd be _very_ worried about breakage.
>> > There is nothing whatsoever in the language which guarantees that
>> > running them in parallel will work or even makes sense. All that
>> > protects us is the convention that unit tests are usually
>> > independent of each other, and in my experience, it's common enough
>> > that they're not independent that I think that blindly enabling
>> > parallelization of unit tests across a single module is definitely
>> > a bad idea.
>>
>> I think that if we add the assumption, the resulting fallout would be
>> easy to fix.
>>
>> Note that we can't require unit tests to be pure -- non-pure
>> functions need testing too :)
>
> Sure, they need testing. Just don't test them in parallel, because
> they're not guaranteed to work in parallel. That guarantee _does_ hold
> for pure functions, because they don't access global, mutable state.
> So, we can safely parallelize a unittest block that is pure, but we
> _can't_ safely parallelize one that isn't - not in a guaranteed way.

A function may be impure, yet still run in a pure way: in practice it may  
touch no state shared with anything else.
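
For instance, a function can be impure as far as the type system is concerned while touching no state shared with any other test (`sumViaCHeap` is a contrived illustration):

```d
import core.stdc.stdlib : free, malloc;

// Not annotated (or inferrable as) pure, because malloc/free are
// impure -- yet it touches no state visible to any other test.
int sumViaCHeap(int a, int b)
{
    auto p = cast(int*) malloc(int.sizeof);
    scope(exit) free(p);
    *p = a + b;
    return *p;
}

unittest
{
    // Impure in the type system, but safe to run alongside any
    // other test.
    assert(sumViaCHeap(2, 3) == 5);
}
```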

>
>> I can imagine that even if you could only parallelize 90% of unit
>> tests, that would be an effective optimization for a large project.
>> In such a case, the rare (and I mean rare to the point of I can't
>> think of a single use-case) need to deny parallelization could be
>> marked.
>
> std.file's unit tests would break immediately. It wouldn't surprise me
> if std.socket's unit tests broke. std.datetime's unit tests would
> probably break on Posix systems, because some of them temporarily set
> the local time zone - which sets it for the whole program, not just the
> current thread (those tests aren't done on Windows, because Windows only
> lets you set it for the whole OS, not just the program). Any tests which
> aren't pure risk breakage due to changes in whatever global, mutable
> state they're accessing.

It depends on what the impure function is doing. Anything that's using the  
same temporary file should not be, that's easy. Anything that's using the  
same socket port should not be, that's easy. Anything that requires using  
the local time zone should be done in a single unit test. Most everything  
in std.datetime should use a defined time zone instead of local time.

Unit tests should be as decoupled from system/global state as possible.

> I would strongly argue that automatically parallelizing any unittest
> block which isn't pure is a bad idea, because it's not guaranteed to
> work, and it _will_ result in bugs in at least some cases. If we make it
> so that unittest blocks have their purity inferred (and allow you to
> mark them as pure to enforce that they be pure if you want to require
> it), then any unittest blocks which can safely be parallelized will be
> known, and the test runner could then parallelize those unittest
> functions and then _not_ parallelize the ones that it can't guarantee
> are going to work in parallel.

It would be at least a start, and I agree it would be an easy way to add  
some level of parallelism without breaking any existing tests. But I fear  
that there are many things that would be excluded incorrectly.

Take for example, std.datetime. The constructor for SysTime has this line  
in it:

_timezone = tz is null ? LocalTime() : tz;

All unit tests that pass in a specific tz (such as UTC) could be pure  
calls. But because of that line, they can't be!
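
Reduced to a self-contained sketch (the types are simplified stand-ins; the real `LocalTime()` reads process-global time-zone state, which is what makes it impure):

```d
class TimeZone {}

// Stand-in for std.datetime's LocalTime(): it reads process-global
// state (the local time-zone setting), so it cannot be pure.
__gshared TimeZone localSingleton;
TimeZone LocalTime()
{
    if (localSingleton is null)
        localSingleton = new TimeZone;
    return localSingleton;
}

struct SysTime
{
    TimeZone _timezone;

    this(TimeZone tz)
    {
        // The compiler sees a possible call to the impure LocalTime(),
        // so it cannot infer this constructor as pure -- even for
        // callers that always pass a specific time zone.
        _timezone = tz is null ? LocalTime() : tz;
    }
}

unittest
{
    auto t = SysTime(new TimeZone);  // passes a zone, yet still impure
    assert(t._timezone !is null);
}
```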

> So, then we get safe unittest parallelization without having to insist
> that folks write their unit tests in a particular way or that they do or
> don't do particular things in a unit test. And maybe we can add
> some sort of UDA to tell the test runner that an impure test can be
> safely parallelized, but automatically parallelizing impure unittest
> functions would be akin to automatically treating @system functions as
> @safe just because we thought that only @safe code should be used in
> this particular context.

As I pointed out to Andrei, there is an enhancement, with a pull request  
even, that would potentially add the ability to detect UDAs on unit tests,  
letting us name or mark them however we want so the runtime can take  
appropriate action.

https://issues.dlang.org/show_bug.cgi?id=10023

-Steve
