Posts Tagged ‘ Testing ’

NMock article in MSDN Magazine

An article about NMock has appeared in the October issue of MSDN Magazine. Chuffed :)

Mock Objects to the Rescue! Test Your .NET Code with NMock

Unit Testing Asynchronous Code

I try to avoid writing unit tests that spawn threads. They’re awkward to write and brittle, and I’d rather extract the controlling thread and let the test act as the controller. However, there are times when it’s unavoidable.

Here’s an example. PriceFinder is a class that retrieves a price for a symbol asynchronously, returning immediately. Some time later it receives a response and performs a callback on a handler. I’ve left out the implementation of how it actually does this (maybe a web service, a database call or a JMS message).

public class PriceFinder {
 public void findPrice(String symbol, PriceHandler handler) { ... }
}

public interface PriceHandler {
 void foundPrice(Price price);
}

To test this, the test case can supply a handler that records what is passed to it. The test then waits for a specified time and asserts that the correct result was received. If the asynchronous code does not complete within that time, the test fails (suggesting the code is either running very slowly or is never going to complete).

public class PriceFinderTest extends TestCase {

 private PriceFinder finder = new PriceFinder();
 private Price receivedPrice;

 public void testRetrievesAValidPrice() throws Exception {
  finder.findPrice("MSFT", new PriceHandler() {
   public void foundPrice(Price price) {
    receivedPrice = price;
   }
  });

  // Smelly!
  Thread.sleep(2000); // Wait two seconds.
  assertNotNull("Expected a price", receivedPrice);
 }

}

However, this sucks: if loads of tests use Thread.sleep(), your test suite will slow right down.

A less time-consuming way is to use wait() and notify() on a lock object. The handler notifies the lock and the test waits for that notification. In case the notification never happens, a timeout is passed to wait().

public class PriceFinderTest extends TestCase {

 private PriceFinder finder = new PriceFinder();
 private Price receivedPrice;
 private Object lock = new Object();

 public void testRetrievesAValidPrice() throws Exception {
  finder.findPrice("MSFT", new PriceHandler() {
   public void foundPrice(Price price) {
    receivedPrice = price;
    synchronized(lock) {
     lock.notify();
    }
   }
  });

  synchronized(lock) {
   // Wait two seconds or until the monitor has been notified.
   // But there's still a problem...
   lock.wait(2000);
  }
  assertNotNull("Expected a price", receivedPrice);
 }

}

This optimistic approach results in fast-running tests when all is well. If the PriceFinder is behaving well, the test will not wait any longer than the PriceFinder takes to complete its work. If PriceFinder has a bug and never calls the handler, the test will fail after at most two seconds.

However, there’s still a subtle issue. If the PriceFinder is really fast, it may call notify() before the test starts wait()ing. The notification is lost, so the test will still pass, but only after waiting for the full timeout.

This is where threading and synchronization get messy and beyond me. Doug Lea has a nice little class called Latch in his concurrency library (available in JDK 1.5 as CountDownLatch). A latch starts out closed and can be released only once; once released, it stays released, so a wait that begins after the release returns immediately and the lost-notification problem goes away.

public class PriceFinderTest extends TestCase {

 private PriceFinder finder = new PriceFinder();
 private Price receivedPrice;
 private Latch latch = new Latch();

 public void testRetrievesAValidPrice() throws Exception {
  finder.findPrice("MSFT", new PriceHandler() {
   public void foundPrice(Price price) {
    receivedPrice = price;
    latch.release();
   }
  });

   // Wait until the latch is released or a timeout occurs.
   latch.attempt(2000);
  assertNotNull("Expected a price", receivedPrice);
 }

}

That’ll do it.
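On JDK 1.5 the same pattern works out of the box with java.util.concurrent.CountDownLatch. Here’s a minimal standalone sketch, with a worker thread standing in for the asynchronous PriceFinder (the price value is made up for illustration):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class CountDownLatchDemo {

 static volatile String receivedPrice;

 public static void main(String[] args) throws InterruptedException {
  final CountDownLatch latch = new CountDownLatch(1);

  // Stand-in for the asynchronous PriceFinder: a worker thread
  // "finds" a price, then releases the latch.
  new Thread(new Runnable() {
   public void run() {
    receivedPrice = "27.01"; // hypothetical price
    latch.countDown();       // once released, the latch stays released
   }
  }).start();

  // Wait at most two seconds; returns true if the latch was released in time.
  // If countDown() already happened, this returns immediately - no lost
  // notification, no race.
  boolean completed = latch.await(2, TimeUnit.SECONDS);
  System.out.println(completed + " " + receivedPrice);
 }
}
```

Because countDown() establishes a happens-before relationship with a successful await(), the waiting thread is guaranteed to see the price the worker wrote.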

OT2004 : Personal Practices Map

In the evening, everyone had a chance to run a Birds of a Feather session. I thought I’d give it a go and repeat the session from XtC last year.

It was a chance to talk to other developers about the little things we do that help guide development.

It was a calm and enlightening session. I felt I learned a lot from the attendees: techniques I would probably never have come across unless I’d actually paired with them.

I look forward to doing this again at ADC.

Romilly generated a topic map from the results.

OT2004 : Mock Objects: Driving Top-Down Development

Nat and I were first up with our talk on Mock Objects. Yes, we are still harping on about them :).

Here’s what we covered:

* OO concepts: an application is a web of collaborating objects, each providing a distinct responsibility and taking on multiple roles to provide services to other objects.
* How the process of using mock objects complements TDD to drive out the design of these responsibilities and roles.
* How our original use of mocks to test system boundaries (databases, web apps, GUIs, external libraries) turned out to be a bad approach, and the success we started achieving when we inverted this to mock only types we can change.
* How the process of using mocks very quickly points out key abstractions in your system, differences in responsibility between objects, and the services objects require.
* Clearing up misconceptions about mocks, including: not using them at system boundaries and what they actually are (not stubs, not recorders, no behaviour).
* Our first public demonstration of the new JMock API to walk through a single iteration of the process.
* Usage patterns.

Feedback from: James Robertson @ Cincom Smalltalk and Mike Platt @ Microsoft.

Making JUnit friendlier to AgileDox

Here’s a little trick to make JUnit display your test results in a similar way to AgileDox.

Override TestCase.getName() in your unit test…

public String getName() {
 return super.getName().substring(4).replaceAll("([A-Z])", " $1").toLowerCase();
}

… and your test runner results are transformed from this …

… to this …

To make this easier, stick it in an abstract TestCase and inherit from that instead.
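To see what the override actually does, here’s a standalone sketch of the same transformation (the class and method names are mine, purely for illustration):

```java
public class TestNamePrettifier {

 // Same transformation as the getName() override above: strip the
 // "test" prefix, insert a space before each capital, then lowercase.
 static String prettify(String methodName) {
  return methodName.substring(4).replaceAll("([A-Z])", " $1").toLowerCase();
 }

 public static void main(String[] args) {
  // Note the leading space: the regexp also matches the first capital.
  System.out.println("[" + prettify("testRetrievesAValidPrice") + "]");
  // prints "[ retrieves a valid price]"
 }
}
```

If the leading space bothers you, tack a trim() onto the end of the expression.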

Tutorial: Using mock objects to drive top-down development

Tim Mackinnon, Nat Pryce, Steve Freeman and I are presenting a session on how mock objects can be used to drive code from the top down (requirements first, infrastructure last) to produce high-quality, clean and decoupled object designs that allow for business change.

Come see us at:
* XPDay Benelux – Fri 21st Nov 2003, Breda, Netherlands
* XPDay London – Tue 2nd Dec 2003, London, UK.
* OT2004 – Tue 30 Mar 2004, Cambridge, UK.

Excerpt:

Mock objects are usually regarded as a programming technique that merely supports existing methods of unit testing. But this does not exploit the full potential of mock objects. Fundamentally, mock objects enable an iterative, top-down development process that drives the creation of well designed object-oriented software.

This tutorial will demonstrate the mock object development process in action. We will show how using mock objects to guide your design results in a more effective form of test driven development and more flexible code; how mock objects allow you to concentrate more on end-user requirements than on infrastructure; and how the objects in the resultant code are small and loosely coupled, with well-defined responsibilities.

Includes:
* Brief introduction to mock objects and the dynamic mock API.
* The mock object design process explained.
* Top down vs. bottom up design.
* What to mock. And what not.

More…

Test Driven Development is not about testing

Dan writes:

“Writing the test before you write the code focuses the mind – and the development process – on delivering only what is absolutely necessary. In the large, this means that the system you develop does exactly what it needs to do and no more. This in turn means that it is easy to modify to make it do more things in the future as they are driven out by more tests.”

Read the rest here.