Everyone who writes safety critical software should read this

H. S. Teoh hsteoh at quickfur.ath.cx
Wed Oct 30 20:25:57 PDT 2013


On Thu, Oct 31, 2013 at 02:17:59AM +0100, deadalnix wrote:
> On Wednesday, 30 October 2013 at 19:25:45 UTC, H. S. Teoh wrote:
> >"This piece of code is so trivial, and so obviously, blatantly
> >correct, that it serves as its own proof of correctness." (Later...)
> >"What do you *mean* the unit tests are failing?!"
> >
> 
> I have quite a lot of horror stories about this kind of code :D These
> days I don't try to argue with people who come to me with this; I
> simply write a test. Usually you don't need to get very far: absurdly
> high volume, malformed input, constrained memory, run the thing in a
> thread and kill the thread in the middle, etc.

A frighteningly high percentage of regular code already fails for
trivial boundary conditions (like pass in an empty list, or NULL, or
empty string, etc.), not even getting to unusual input or stress tests.
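A minimal D sketch of how this happens (the `average` function is a
hypothetical example, not from any real codebase): the code looks
blatantly correct, yet the empty-input boundary case breaks it.

```d
// "Obviously correct" average... until the boundary case shows up.
double average(const double[] xs)
{
    double sum = 0;
    foreach (x; xs)
        sum += x;
    return sum / xs.length;   // divides by zero for an empty slice
}

unittest
{
    import std.math : isNaN;
    assert(average([1.0, 2.0, 3.0]) == 2.0);
    assert(average([]).isNaN);  // empty input yields NaN, not a number
}
```

The happy-path test passes on the first try; it's the trivial empty
input that exposes the problem, which is exactly why writing the test
beats arguing.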


> Hopefully, it is much less common for me now to have to do so.
> 
> A programming school in France, which is well known for having
> uncommon practices (but forms great people in the end), runs every
> program submitted by its students in an environment with 8KB of RAM.
> The program is not expected to do its job, but it is expected to at
> least fail properly.

Ha. I should go to that school and write programs that don't need more
than 8KB of RAM to work. :) I used to pride myself on programs that
require the absolute minimum of resources to work. (Unfortunately, I
can't speak well of the quality of the code though! :P)
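"Failing properly" under tight memory mostly means checking your
allocations instead of assuming they succeed. A minimal D sketch,
using the C allocator (the `loadBuffer` name is made up for
illustration):

```d
import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : free, malloc;

// Returns null on allocation failure instead of crashing, so the
// caller can report the error and shut down cleanly.
ubyte[] loadBuffer(size_t n)
{
    auto p = cast(ubyte*) malloc(n);
    if (p is null)
    {
        fprintf(stderr, "out of memory requesting %zu bytes\n", n);
        return null;   // clean failure path
    }
    return p[0 .. n];
}
```

(With D's GC allocator, `new` throws an `OutOfMemoryError` instead;
either way, the point is that the failure path exists and is
deliberate.)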


> >Most software companies have bug trackers,
> 
> I used to work at a company with a culture strongly opposed to the
> use of such a tool, for reasons I still do not understand. At some
> point I simply told people that bugs didn't exist if they weren't in
> the bug tracker.

Wow. No bug tracker?? That's just insane. How do they keep track of
anything?? At my current job, we actually use the bug tracker not just
for actual bugs but for tracking project discussions (via bug notes that
serve as good reference later when we need to review why a particular
decision was made).


> >For automated testing to be practical, of course, requires that the
> >system be designed to be tested in that way in the first place --
> >which unfortunately very few programmers have been trained to do.
> >"Whaddya mean, make my code modular and independently testable? I've
> >a deadline by 12am tonight, and I don't have time for that! Just
> >hardcode the data into the global variables and get the product out
> >the door before the midnight bell strikes; who cares if this thing is
> >testable, as long as the customer thinks it looks like it works!"
> >
> 
> My experience tells me that this pays off in a matter of days. Days,
> as in less than a week. Doing the hacky stuff feels faster, but
> measurement says otherwise.

Days? It pays off in *minutes* IME. When I first started using unittest
blocks in D, the quality of my code improved *instantly*. Nasty bugs
(caused by careless mistakes) were caught immediately rather than the
next day after ad hoc manual testing (that also misses 15 other bugs
that automated testing would've caught).
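For the uninitiated, a D unittest block sits right next to the code it
tests and is compiled in with the -unittest flag (e.g.
`dmd -unittest -main -run file.d` runs the tests before main). A
minimal sketch, with a hypothetical `countWords` function:

```d
// The unittest block lives in the same module as the function, so
// updating the code and updating the tests happen in the same file.
size_t countWords(string s)
{
    import std.array : split;
    return s.split.length;   // splits on whitespace
}

unittest
{
    assert(countWords("one two three") == 3);
    assert(countWords("") == 0);       // boundary: empty string
    assert(countWords("   ") == 0);    // boundary: whitespace only
}
```

Because the tests run on every -unittest build, a careless change to
`countWords` fails immediately, not the next day.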

This is the point I was trying to get at: manual testing is tedious
and error-prone, because humans are no good at repetitive processes.
It's too boring, and the boredom tempts us to take shortcuts, so we
skip retesting critical bits of code that may just have acquired bugs
since the last code change. But you *need* repetitive testing to
ensure the new code didn't break the old, so some kind of unittesting
framework is mandatory. Otherwise tons of bugs get introduced silently
and bite you at the most inopportune time (like when a major customer
just deployed it in their production environment).

D's unittests may have their warts, but the fact that they are (1)
written in D, which encourages copious and *up-to-date* tests, and
(2) run automatically when compiling with -unittest (which I'd
recommend making a default flag during development), singlehandedly
addresses the major points of automated testing already. I've seen
codebases where
unittests were in a pariah class of "run it if you dare, don't pay
attention to the failures 'cos we think they're irrelevant, 'cos the
test cases are outdated", or "that's QA's job, it's not our department".
Totally defeats the purpose. Tests should be (1) automatically run *by
default* during development, and (2) kept up-to-date.

Point (2) is especially hard when the unittesting framework isn't built
into the language, because nobody wants to shift gears to write tests
when they could be "more productive" cranking out code (or at least,
that's the perception). The result is that the tests are outdated, and
the programmers stop paying attention to failing tests just like they
ignore compiler warnings.

D does it right for both points, even if people complain about issues
with selective testing, etc.


T

-- 
The fact that anyone still uses AOL shows that even the presence of options doesn't stop some people from picking the pessimal one. - Mike Ellis

