Is 100% test coverage a BAD thing?

I’m a huuuge advocate of TDD and high test coverage, and I will often go to great lengths to ensure this, but is 100% such a good thing?

I recently heard Tim Lister talking about risk in software projects and the CMM (PowerPoint slides).

The ‘ultimate’ level of the CMM ensures that everything is documented, everything goes through a rigorous procedure, blah blah blah. Amusingly, Tim pointed out that no CEO in their right mind would ever want their organization to be like that, as they would not be effectively managing risk. You only need this extra stuff when you actually need this extra stuff. If there’s little risk, then this added process adds a lot of cost with no real value – you’re just pissing away money.

This also applies to test coverage. There are always going to be untested parts of your system, but when increasing the coverage you have to balance the cost against the value.

With test coverage, you get the value of higher-quality software that’s easier to change, but it follows the law of diminishing returns. The effort required to get from 99% to 100% is huge… couldn’t that effort be spent on something more valuable, like adding business functionality or simplifying the system?

Personally, I’m most comfortable with coverage in the 80-90% region, but your mileage may vary.

Comments (7)
  1. More to the point: 100% test coverage doesn’t mean you’ve covered everything. In particular, you haven’t covered the requirements that are not yet in the code. :)

    This was Bob Glass’ “Fact 33” in “Facts & Fallacies”.

  2. Nice blog… I agree.

    So, has someone mentioned this to Aslak? Maybe Guantanamo needs a threshold :)

  3. +1 Robert – I’ve had trouble explaining that to management before. As for testing, I’ve certainly come across code which, because of its design, is very hard to unit test. Rather than go to the hassle of creating large (perhaps even unclear and unmaintainable) unit tests, we added functional tests by way of a compromise.

  4. Managers, when they learn of code coverage, usually assume that 90% or higher is realistic. It’s a great goal, but I’ve worked on large, complex projects (such as a J2EE app server) that might have reached about 80% code coverage through a massive effort and thousands of tests (unit, integration and system scope) from internal and external sources (such as the CTS – the Sun TCK for J2EE app servers).

    None of the projects I’ve worked on (all enterprise-sized middleware and app servers) ever got out of the 70-78% range across the board. There are always some areas tested to 100% and others that might never get beyond 40%, at least in automated testing. It takes a lot of discipline, good coding, good test writing and skillful automation (to save time and allow the aforementioned activities to flourish) to get good coverage — all requiring a lot of skill and hard work. Tools help, but I’ve yet to find a tool (such as JTest claims to be) that can replace the human effort involved in writing skillful tests that get meaningful coverage.

    That’s the other thing — when management or some QA folks get “metrics happy” about coverage numbers, you often see a push for more coverage. Unless the effort is closely coordinated and managed, that leads to a lot of contrived and redundant tests meant simply to hit code and push the number up, which takes time away from more useful tests closer to the main path of the user experience.

  5. I’m fascinated ;-)

    When you write a piece of code do you stop and consciously ask yourself whether or not to test it? What are the criteria for that decision? Do the criteria ever vary?

    Which 80-90% of your application are you comfortable having test coverage for?
    Do you find that the “uncovered” part of an application is similar across different projects? Or do your business customers get to decide which part of the application not to test? Do they even know which part of the application is untested?!

  6. If you write your tests _before_ you write the methods that implement them, 100% coverage is a lot easier. I’ve yet to see an example where you couldn’t unit test a method, as long as you have a decent testing framework and can use mock objects. The basic rule of thumb is that if you can’t figure out how to test it, you _really_ need to refactor it until you can (see the first sketch after the comments). Just remember, you have no way to know that the code you think isn’t important enough to have unit tests today won’t be used in an extremely important part of the application in the future. If that happens, lack of test coverage (long after the original developers have left the project) will make maintenance a nightmare.

  7. Personally, I agree with the law of diminishing returns – and ultimately you have to be pragmatic(TM).

    However, on a recent (albeit small) project I worked on, at the end of every iteration (every 2 weeks) we had an “anal hour”. The anal hour was the last one or two hours before release, where we would look at such anal things as Clover reports, Checkstyle reports, IntelliJ code inspections, etc.

    The surprising thing was how many little bugs we flushed out with the seemingly “pointless” tests. “Oh, we are catching that exception there, instead of here – that’s not what we want…” (It was not always the pointless test itself that flushed it out – sometimes it was the change we had to make in order to write the test.)
    Or sometimes: “ahh, that code is dead now – we forgot to delete it”.

    At one or two hours per iteration, it couldn’t really be argued to be a big expenditure of effort. However, with this constant level of minor attention applied from the beginning, our unit test coverage sat at 98+%.

    (Ultimately though, % coverage is meaningless on its own, because you can write tests that exercise 100% of the code and assert nothing… it’s the Red, i.e. the UNtested stuff, that’s interesting – see the second sketch after the comments.)

    -Nick
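
A couple of quick sketches to make the comments above concrete. First, comment 6’s “refactor it until you can test it” rule of thumb: this is a minimal JUnit 4-style example (the class and interface names are made up for illustration, and JUnit 4 on the classpath is assumed) where a hard-to-control dependency is pulled out behind a small interface so the test can inject a hand-rolled stub.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical example: a price calculator that originally read the tax rate
// from a static/global source, making it awkward to unit test. Extracting the
// dependency behind a small interface gives the test a seam to inject a stub.
public class PriceCalculatorTest {

    /** The seam: anything that can supply a tax rate. */
    interface TaxRateProvider {
        double currentRate();
    }

    /** Production code depends on the interface, not on a global lookup. */
    static class PriceCalculator {
        private final TaxRateProvider rates;

        PriceCalculator(TaxRateProvider rates) {
            this.rates = rates;
        }

        double gross(double net) {
            return net * (1 + rates.currentRate());
        }
    }

    @Test
    public void addsTaxFromTheInjectedProvider() {
        // Hand-rolled stub standing in for the real, hard-to-control source.
        TaxRateProvider fixedRate = new TaxRateProvider() {
            public double currentRate() {
                return 0.20;
            }
        };
        assertEquals(12.0, new PriceCalculator(fixedRate).gross(10.0), 0.0001);
    }
}
```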
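Second, Nick’s point that a coverage number alone proves nothing: both tests below execute every line of the (made-up) reverse() helper, so a line-coverage tool such as Clover would report it as fully covered either way, but only the second test can ever fail if the implementation breaks.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CoverageVersusAssertionsTest {

    // A made-up helper under test.
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }

    @Test
    public void looksFullyCoveredButProvesNothing() {
        reverse("abc"); // every line executed, nothing checked
    }

    @Test
    public void actuallyVerifiesTheBehaviour() {
        assertEquals("cba", reverse("abc"));
    }
}
```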
