RTest, a random testing framework

Bruce Adams tortoise_74 at yeah.who.co.uk
Thu Jul 24 01:26:23 PDT 2008


On Tue, 22 Jul 2008 22:01:37 +0100, dsimcha <dsimcha at yahoo.com> wrote:

> I disagree.  Random testing can be a great way to find subtle bugs in
> relatively complex algorithms that have a simpler but less efficient
> equivalent.  For example, let's say you're trying to write a
> super-efficient implementation of a hash table with lots of little
> speed hacks that could hide subtle bugs in something that's only
> called a relatively small percentage of the time to begin with, like
> collision resolution.  Then, let's say that this bug only shows up
> under some relatively specific combination of inputs.  An easy way to
> be reasonably sure that you don't have these kinds of subtle bugs
> would be to also implement an associative array as a linear search
> just for testing.  This is trivial to implement, so unlike your
> uber-optimized hash table, if it looks right it probably is.  In any
> event, it's even less likely to be wrong in the same way as your hash
> table.  Then generate a ton of random data and put it in both your
> hash table and your linear search and make sure it all reads back
> properly.  If the bug is subtle enough, or if you don't think of it,
> it may just be near impossible to manually generate enough test cases
> to find it.
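
Spelled out in D, that idea looks roughly like the sketch below. This is
only a rough illustration, not anyone's actual code: the built-in
associative array stands in for the hand-optimized hash table, and
LinearMap, the seed, and the counts are all made up for the example.

import std.random : Random, uniform;

// A deliberately naive associative store built on linear search.
// Slow, but simple enough that "if it looks right it probably is".
struct LinearMap
{
    int[] keys;
    int[] vals;

    void put(int k, int v)
    {
        foreach (i, key; keys)
            if (key == k) { vals[i] = v; return; }
        keys ~= k;
        vals ~= v;
    }

    bool get(int k, out int v) const
    {
        foreach (i, key; keys)
            if (key == k) { v = vals[i]; return true; }
        return false;
    }
}

unittest
{
    // The built-in AA stands in here for the hand-optimized hash
    // table under test; substitute your own type in practice.
    int[int] fast;
    LinearMap slow;
    auto rng = Random(42);

    foreach (_; 0 .. 100_000)
    {
        // A narrow key range forces lots of overwrites and, in a
        // real hash table, lots of collisions.
        immutable k = uniform(0, 512, rng);
        immutable v = uniform(0, 1_000_000, rng);
        fast[k] = v;
        slow.put(k, v);
    }

    // Both implementations must agree on every key.
    foreach (k; slow.keys)
    {
        int expected;
        assert(slow.get(k, expected));
        assert(k in fast && fast[k] == expected);
    }
}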

I agree with the strategy of using a slow version to test a fast version
of an algorithm. I often use it myself. I would still be less keen on
throwing random numbers at it. Rather, I would try to write interfaces
that expose the bit where you're being clever, in this case maybe the
collision resolution doohickey. I try to test things at the lowest level
possible first. Really test the units; the other unit tests then become
more like integration tests, mainly there to check that the logic of
calling the simpler cases is correct.
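
For instance, if the probe step is factored out behind its own little
function, its invariants can be tested directly, with no random data at
all. A rough sketch follows; nextSlot and the triangular probing scheme
are just placeholders for whatever the real table actually does.

// Hypothetical probe-step helper, factored out of the hash table so
// the clever bit can be unit-tested on its own.
size_t nextSlot(size_t slot, size_t attempt, size_t capacity)
{
    // slot_i = home + i*(i+1)/2 (mod capacity), computed incrementally;
    // this visits every slot exactly once when capacity is a power of
    // two.
    return (slot + attempt) & (capacity - 1);
}

unittest
{
    enum capacity = 16;   // must be a power of two for this scheme
    bool[capacity] visited;
    size_t slot = 3;      // arbitrary home bucket

    foreach (attempt; 0 .. capacity)
    {
        slot = nextSlot(slot, attempt, capacity);
        visited[slot] = true;
    }

    // If any slot is unreachable, some collisions can never resolve.
    foreach (hit; visited)
        assert(hit);
}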


