Do everything in Java…

H. S. Teoh via Digitalmars-d digitalmars-d at puremagic.com
Fri Dec 5 08:36:50 PST 2014


On Fri, Dec 05, 2014 at 03:55:22PM +0000, Chris via Digitalmars-d wrote:
> On Friday, 5 December 2014 at 15:44:35 UTC, Wyatt wrote:
> >On Friday, 5 December 2014 at 14:53:43 UTC, Chris wrote:
> >>
> >>As I said, I'm not against unit tests and I use them where they make
> >>sense (difficult output, not breaking existing tested code). But I
> >>often don't bother with them when they tell me what I already know.
> >>
> >>assert(addNumbers(1,1) == 2);
> >>
> >>I've found myself in the position where unit tests give me a false
> >>sense of security.

This is an example of a poor unittest. Well, maybe *one* such case isn't
a bad idea to stick in a unittest block somewhere (to make sure things
haven't broken *outright* -- though you'd notice that via other channels
pretty quickly!). But otherwise this is akin to writing a unittest that
computes the square root of a number in order to test a function that
computes the square root of a number. Either the result is already
blindingly obvious and you're just wasting time, or the unittest is so
complex that it proves nothing (it could be repeating exactly the same
bugs as the code itself!).
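
In code, that tautology might look something like this (using
std.math.sqrt to stand in for the function under test):

// Anti-pattern: the "expected" value is derived the same way the
// implementation derives it, so any shared bug goes unnoticed.
unittest
{
    import std.math : sqrt;
    double expected = sqrt(17.0);   // re-computes the answer under test
    assert(sqrt(17.0) == expected); // tautology; proves nothing
}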

No, a better way to write a unittest is to approach it from the user's
(i.e., caller's) POV. Given this function (as a black box), what kind of
behaviour do I expect from it? What if I give it unusual arguments, will
it still give the correct result? It's well-known that most bugs happen
on boundary conditions, not in the general output (which is usually easy
to get right the first time). So, unittests should mainly focus on
boundary and exceptional cases. For example, in testing a sqrt function,
I wouldn't waste time testing sqrt(16) or sqrt(65536) -- at the most,
I'd do just one such case and move on. But most of the testing should be
on the exceptional cases, e.g., what happens with sqrt(17) if the
function returns an int? That's one case. What about sqrt(1)? sqrt(0)?
What happens if you hand it a negative number?
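
A minimal sketch of what I mean, with a hypothetical isqrt (truncated
integer square root) standing in as the function under test:

import std.exception : assertThrown;
import std.math : sqrt;

// Hypothetical function under test: truncated integer square root,
// rejecting negative input.
int isqrt(int n)
{
    if (n < 0) throw new Exception("isqrt: negative input");
    return cast(int) sqrt(cast(double) n);
}

// Boundary-focused tests: one "obvious" case, then the edges.
unittest
{
    assert(isqrt(16) == 4);   // one general case is plenty
    assert(isqrt(17) == 4);   // non-perfect square truncates
    assert(isqrt(1) == 1);    // boundary
    assert(isqrt(0) == 0);    // boundary
    assertThrown(isqrt(-1));  // negative input must be rejected
}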


> >Sure, you need to test the obvious things,
> 
> Everywhere? For each function? It may be desirable but hard to
> maintain.  Also, unit tests break when you change the behavior of a
> function, then you have to redesign the unit test for this particular
> function. I prefer unit tests for bigger chunks.

Usually, I don't even bother unittesting a function unless it's generic
enough that I know it won't drastically change over time. It's
when I start factoring out code into generic form that I really start
working on the unittests. When I'm still in the experimental /
exploratory stage, I'd throw in some tests to catch boundary conditions,
but I wouldn't spend too much time on that. Most of the unittests should
be aimed at preserving certain guarantees -- e.g., math functions should
obey certain identities even around boundary values, API functions
should always behave according to what external users would expect,
etc. But for internal functions that are subject to a lot of change, I
wouldn't do much more than stick in a few cases that I know
might be problematic (usually while writing the code itself). Any cases
not caught by this will be caught at the API boundary when something
starts failing API guarantees.
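
For instance, an identity-based test of the kind I mean might look like
this (a sketch, using std.math.sqrt as the guinea pig):

import std.math : isClose, sqrt;

// Guarantee-preserving test: sqrt(x)^2 should recover x for any
// non-negative x, including values near the edges of double's range.
unittest
{
    foreach (x; [0.0, 1.0, 2.0, 1e-300, 1e300, double.min_normal])
        assert(isClose(sqrt(x) * sqrt(x), x));
}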

Besides these, I'd add a unittest for each bug I fix -- for regression
control.
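
Something like this, say (reusing the isqrt sketch above; the issue
number is made up for illustration):

// Regression control: pin down the exact input that triggered a
// fixed bug, so it can't silently come back.
unittest
{
    // Hypothetical issue 1234: isqrt used to overflow internally on
    // large inputs.
    assert(isqrt(int.max) == 46340); // 46340^2 <= int.max < 46341^2
}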

I'm not afraid of outright deleting unittests if the associated function
has been basically gutted and rewritten from scratch and said unittests
are more concerned with implementation details. The ones concerned with
overall behaviour would be kept. This is another reason it's better to
put the unittest effort at the API level than on overly
white-box-dependent parts, since the latter are subject to frequent
revision.


> >but I find the real gains come from being able to verify the
> >behaviour of edge cases and pathological input; and, critically,
> >ensuring that that behaviour doesn't change as you refactor.  (My day
> >job involves writing and maintaining legacy network libraries and
> >parsers in pure C.  D's clean and easy unit tests would be a godsend
> >for me.)
> >
> >-Wyatt
> 
> True, true. Unfortunately, the edge cases are usually spotted when
> using the software, not in unit tests. They can be included later, but
> new pathological input keeps coming up (especially if you write for
> third party software).

I guess it depends on the kind of application you write, but when
writing unittests I tend to focus on the ways the code could break,
rather than on how it might work. Sure, you won't be able to come up
with *all* the cases, and unittests certainly don't guarantee 100%
bug-free code, but generally you do catch the most frequent ones, which
saves time dealing with the whole cycle of customer reports, bug-fix
change orders, QA testing, etc. The bugs that weren't caught early will
eventually be found in the field, and they get added to the growing
body of unittests to control future regressions.


> Now don't get me wrong, I wouldn't want to miss unit tests in D, but I
> use them more carefully now, not everywhere.

As with all things, I'm skeptical of blindly applying some methodology
even when it's not applicable or of questionable benefit. So while I
definitely highly recommend D unittests, I wouldn't go so far as to
mandate that, for example, every function must have at least 3 test
cases or something like that. While I do use unittests a lot in D
because I find them helpful, I'm skeptical of going all-out TDD.
Everything in real life is context- and situation-dependent, and such
overly zealous rule application usually results in wasted effort for
only marginal benefit.


T

-- 
Тише едешь, дальше будешь. (Russian proverb: the slower you go, the farther you'll get.)

