RTest, a random testing framework

BCS ao at pathlink.com
Tue Jul 22 14:13:31 PDT 2008


Reply to dsimcha,

> I disagree. 

I'm not sure you do, as I'm not sure what you're disagreeing with. 

All I was saying is that most (not all) errors are edge cases, so spend more 
of your time (but not all of it) plugging away there.

If 90% of the errors can be found in 3% of the domain, I'd rather spend 90% 
of my time in that 3%.
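That kind of bias can be built directly into the random generator: draw from a small set of boundary values most of the time, and from the full domain the rest of the time. A rough sketch in Python (the edge-case set and the 90% weight are illustrative assumptions, not anything from a real framework):

```python
import random

# Hypothetical set of boundary values where integer bugs tend to hide.
EDGE_CASES = [0, 1, -1, 2**31 - 1, -2**31, 2**32 - 1]

def biased_int(rng, edge_prob=0.9):
    """Return a random int, spending ~90% of draws on the edge-case set."""
    if rng.random() < edge_prob:
        return rng.choice(EDGE_CASES)
    # The other ~10% still covers the whole 32-bit signed range.
    return rng.randint(-2**31, 2**31 - 1)

rng = random.Random(42)
samples = [biased_int(rng) for _ in range(10_000)]
edge_fraction = sum(s in EDGE_CASES for s in samples) / len(samples)
```

With 10,000 draws, `edge_fraction` lands very close to the 0.9 weight, so roughly 90% of the testing effort goes into the tiny slice of the domain where most failures live.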

Aside from that, I have no issues with your assertions.

> Random testing can be a great way to find subtle bugs in
> relatively complex algorithms that have a simpler but less efficient
> equivalent.  For example, let's say you're trying to write a
> super-efficient implementation of a hash table with lots of little
> speed hacks that could hide subtle bugs in something that's only
> called a relatively small percentage of the time to begin with, like
> collision resolution.  Then, let's say that this bug only shows up
> under some relatively specific combination of inputs.  An easy way to
> be reasonably sure that you don't have these kinds of subtle bugs
> would be to also implement an associative array as a linear search
> just for testing.  This is trivial to implement, so unlike your
> uber-optimized hash table, if it looks right it probably is.  In any
> event, it's even less likely to be wrong in the same way as your hash
> table.  Then generate a ton of random data and put it in both your
> hash table and your linear search and make sure it all reads back
> properly.  If the bug is subtle enough, or if you don't think of it,
> it may just be near impossible to manually generate enough test cases
> to find it.
> 
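The cross-check dsimcha describes can be sketched like this (Python for brevity; the names and the trivial reference implementation are illustrative, not from any actual framework): keep a linear-search associative array as the obviously-correct oracle, drive it and the optimized table with the same random operations, and assert they always agree.

```python
import random

class LinearMap:
    """Trivially correct associative array: linear search over (key, value) pairs."""
    def __init__(self):
        self.pairs = []

    def put(self, k, v):
        for i, (pk, _) in enumerate(self.pairs):
            if pk == k:
                self.pairs[i] = (k, v)  # overwrite existing key
                return
        self.pairs.append((k, v))

    def get(self, k):
        for pk, pv in self.pairs:
            if pk == k:
                return pv
        return None  # absent key

def differential_test(make_fast, trials=1000, seed=1234):
    """Feed identical random ops to the fast table and the oracle, compare reads."""
    rng = random.Random(seed)  # seeded so a failure is reproducible
    fast, ref = make_fast(), LinearMap()
    for _ in range(trials):
        # A small key space forces plenty of collisions and overwrites.
        k, v = rng.randrange(50), rng.randrange(1000)
        fast[k] = v
        ref.put(k, v)
        probe = rng.randrange(50)
        assert fast.get(probe) == ref.get(probe), f"mismatch at key {probe}"
    return True

# Stand-in for the uber-optimized hash table under test: Python's dict.
differential_test(dict)
```

The oracle is slow but so simple that "if it looks right it probably is", and it is very unlikely to be wrong in the same way as the optimized table, which is what makes the comparison meaningful.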





More information about the Digitalmars-d mailing list