Is 100% test coverage a BAD thing?
I’m a huuuge advocate of TDD and high test coverage, and I’ll often go to great lengths to achieve it, but is 100% really such a good thing?
I recently heard Tim Lister talking about risk in software projects and the CMM, the Capability Maturity Model (PowerPoint slides).
The ‘ultimate’ level of the CMM ensures that everything is documented, everything goes through a rigorous procedure, blah blah blah. Amusingly, Tim pointed out that no CEO in their right mind would ever want their organization to be like that, because it would not be managing risk effectively. You only need this extra stuff when you actually need this extra stuff. If there’s little risk, then all that added process brings a lot of cost with no real value: you’re just pissing away money.
This also applies to test coverage. There will always be untested parts of your system, but when increasing coverage you have to balance the cost against the value.
With test coverage, you get the value of higher-quality software that’s easier to change, but it follows the law of diminishing returns. The effort required to get from 99% to 100% is huge… couldn’t that effort be spent on something more valuable, like adding business functionality or simplifying the system?
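To make that concrete, here’s a quick sketch (a made-up example, not code from any particular project). The lines a test suite reaches last are usually awkward error-handling paths like the retry fallback below, where faking the failure conditions takes far more test machinery than the line is worth:

```python
import time
import urllib.request
import urllib.error


def fetch_with_retry(url, attempts=3, delay=1.0):
    """Fetch a URL, retrying transient network failures a few times."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.URLError:
            if attempt == attempts - 1:
                # Covering this line means simulating a network that fails
                # three times in a row -- doable with mocks, but a lot of
                # test scaffolding for one statement. This is where the
                # cost of the last few percent of coverage tends to hide.
                raise
            time.sleep(delay)
```

The happy path and the first failure case come cheap; it’s corners like that final raise that make 100% so expensive.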
Personally, I’m most comfortable with coverage in the 80-90% region, but your mileage may vary.
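If you want to hold a build to that sort of floor instead of chasing 100%, most coverage tools can fail the build below a threshold. A minimal sketch, assuming Python and coverage.py (the tool and the 85% figure are just for illustration):

```python
import sys

import coverage

# Hypothetical CI gate: fail the build if overall coverage drops below
# an 85% floor, rather than demanding every last line is exercised.
cov = coverage.Coverage()
cov.load()            # read the .coverage data produced by the test run
total = cov.report()  # prints the report and returns the total percentage
if total < 85:
    sys.exit(f"Coverage {total:.1f}% is below the 85% floor")
```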