Effective testing is vital in software development for ensuring a reliable product that meets its requirements and is fit for purpose. A common technique when testing at the component level is to use a set of test cases that provides a high level of statement and decision coverage - and it's easy to think this is adequate (in fact, it's often used as an exit criterion for that stage of testing). But is it really enough?
It's relatively simple (although not always easy) to achieve high coverage: 80% or more in most cases. But one has to be a little careful about the way this information is used - after all, managers love nothing better than metrics they can report to the Board which paint a clear, simple (some would say simplistic) picture of progress. And what could be clearer than high coverage statistics?
The problem is that even 100% coverage does not mean a component is "fully tested", because some defects only show up in very specific scenarios. In the following example, just two test cases - one taking the true branch of the decision and one the false - provide 100% statement and decision coverage of the code snippet:
if (x > 1) { x = x / y; }
Testing this with x=2 and y=1 (for the true branch) and, say, x=1 and y=1 (for the false branch) will work just fine. Nevertheless, there's a potentially nasty bug lurking inside the if statement - a "division by zero" error that won't be exposed unless one of your test cases happens to use y=0 together with x>1!
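To make this concrete, here's a minimal sketch that wraps the snippet in a method so it can be exercised (the divide() wrapper, the class name and the test values are invented for illustration, not taken from the original code):

```java
// A minimal sketch of the snippet wrapped in a method so it can be exercised;
// divide() and CoverageExample are illustrative names only.
public class CoverageExample {

    static int divide(int x, int y) {
        if (x > 1) {      // the decision
            x = x / y;    // the only statement; throws ArithmeticException when y == 0
        }
        return x;
    }

    public static void main(String[] args) {
        // These two cases achieve 100% statement and decision coverage...
        System.out.println(divide(2, 1));  // true branch: prints 2
        System.out.println(divide(1, 1));  // false branch: prints 1

        // ...yet only an input the coverage metric never asked for exposes the bug:
        System.out.println(divide(2, 0));  // throws ArithmeticException: / by zero
    }
}
```

Run as written, the first two calls pass happily; only the third, which no coverage figure ever demanded, blows up.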
Granted, this is a trivial and somewhat contrived example, but it illustrates the point, and in fact these kinds of bugs are quite prevalent in newly-written code. The only way to expose them is to examine the code and deliberately test with a range of values that cover these kinds of conditions.
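As a rough sketch of what that deliberate probing might look like in practice (the candidate values below come from simple boundary analysis and error guessing, and are my own choices rather than the post's):

```java
// A rough illustration of "testing with a range of values": probe the same
// illustrative divide() logic with boundary and error-guessing inputs,
// not just coverage-friendly ones.
public class BoundaryProbe {

    static int divide(int x, int y) {
        if (x > 1) { x = x / y; }
        return x;
    }

    public static void main(String[] args) {
        int[][] candidates = {
            {2, 1},   // nominal: branch taken, safe divisor
            {1, 1},   // boundary: x == 1, branch not taken
            {2, 0},   // zero divisor - the case coverage alone never asks for
            {2, -1},  // negative divisor
        };
        for (int[] c : candidates) {
            try {
                System.out.println("divide(" + c[0] + "," + c[1] + ") = " + divide(c[0], c[1]));
            } catch (ArithmeticException e) {
                System.out.println("divide(" + c[0] + "," + c[1] + ") threw " + e);
            }
        }
    }
}
```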
So beware of being lulled into a false sense of security by high coverage figures, and whatever you do, make sure you share this information with your manager too!