Archive for the ‘ Software ’ Category

OT2004 : Towards EJB 3.0

Despite the title, this session, led by Scott Crawford from the EJB expert group, turned out to be one of my favourites.

I’m a bit rusty on EJB. I was big into EJB for a few years and then one day I just stopped using it. I spent some time before the session reminding myself of the way of the EJB. Actually, EJB ain’t that bad.

This session was an opportunity to have our say about problems we see with EJB. Something we were clear on doing was focussing on the problems and consequences rather than “wouldn’t it be nice if…” solutions. Between us we came up with our top thirty gripes. In fact there were many more we could have come up with but didn’t have time.

From there we went through each of them and voted on whether we recognised the problem and, if so, whether it was deemed important. This narrowed it down to ten. Each of us then represented an issue close to our hearts and defended it in a debate with the goal to narrow it down to three.

The problem I represented was that EJBs only expose one coarse grained business interface to the outside world (ignoring the home/local/remote thing). This coarse grained API exposes methods to clients (including other EJBs) that are not necessarily relevant to them. Objects (and EJBs) mean different things to different clients, and by depending on the interface of the entire implementation, rather than on a specific fine grained role, you end up with higher coupling between components in the system and reduced flexibility. Most other APIs let you get this right, but EJB prevents it.
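To make this concrete, here's a rough (non-EJB) Java sketch of the two styles; all the names are made up for illustration. A client that only needs to move money around is forced to depend on the whole coarse grained interface, when a narrow role is all it really wants:

interface AccountOperations {
    // Coarse grained: every client sees every method, relevant or not.
    void credit(String accountId, long amount);
    void debit(String accountId, long amount);
    void closeAccount(String accountId);
    void runMonthEndBatch();
}

// Fine grained roles: each client depends only on the role it actually uses,
// so coupling drops and implementations can change more freely.
interface MoneyMover {
    void credit(String accountId, long amount);
    void debit(String accountId, long amount);
}

interface AccountAdministration {
    void closeAccount(String accountId);
    void runMonthEndBatch();
}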

Needless to say, I never made it into the top three. I had no chance against:

* Testability.
* Transparent persistence for entity beans.
* Death to RemoteException.

And to be fair, they are much more important :).

Some of the other things that made it into the top list (off the top of my head):

* More flexible O/R mapping, relations and finders.
* Breaking the 70’s paradigm of separating state and behaviour into entity and session beans. Whatever happened to OO?
* Simpler deployment.
* Declarative error handling.
* Lazy loading / pageable iterators in the specification.

Anyway, the results of this are to be fed back to the EJB group and I really hope they can make a difference. Time will tell.

Kudos to Scott for hosting a good session. Everyone came out feeling good about themselves. And well done to Robin Roos for being on best behaviour and not mentioning JDO once :). It was also reassuring to see the developers from BEA violently agreeing with all of our gripes – they too are victims!

OT2004 : Generic Mechanisms in Software Engineering

This workshop, hosted by Simon Davies and chums from Microsoft (with a bit of Bruce Anderson), was a thinking exercise about the pros and cons of some of the language features of C# (and conveniently also Java), including generic types, attributes and dynamic code generation.

We were given some sample scenarios for which we had to contrast approaches using these features in isolation and in combination, taking into account ease of development, runtime performance, flexibility, maintainability and simplicity:
* Creating a strongly typed list.
* Serializing objects to XML (a subject I’m familiar with).
* CRUD persistence of domain objects to a database.

This was quite thought provoking. While it was very easy to see the advantages of using generics to implement a strongly typed list, experimenting with all the different approaches was a fun exercise. Code generation may be less simple but offers better runtime performance.
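For the strongly typed list scenario, the generics approach boils down to something like this (a minimal Java 1.5 style sketch; the workshop used the C# equivalent):

import java.util.ArrayList;
import java.util.List;

class TypedListExample {

    static class Person {
        final String name;
        Person(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        // The compiler only lets Person instances in, and no cast is needed on the way out.
        List<Person> people = new ArrayList<Person>();
        people.add(new Person("Joe"));
        // people.add("not a person"); // would not compile
        Person first = people.get(0);
        System.out.println(first.name);
    }
}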

It was fun brainstorming ideas with developers from different backgrounds and seeing which approaches appealed to them.

I was also shown the anonymous delegate feature in the next version of C#. Mmmm closures…

// usage
int totalAge = 0;
peopleFinder.All(delegate(Person person) {
    totalAge += person.Age;
});

// implementation
class PersonFinder {

    // The callback type: takes a Person, returns nothing.
    public delegate void PersonDelegate(Person person);

    public void All(PersonDelegate action) {
        // some magic iteration code:
        Person person = ...;
        action(person);
    }

}

Now for a statically typed language, that’s a lot cleaner than I was expecting.

This is something I’m really excited about as it’s the style of development I use all the time in Java, without all the syntactic cruft of anonymous inner classes. And I’m sure it’ll appeal to all the Ruby/Smalltalkers out there.

Something else I learned is that, unlike Java 1.5, generic type definitions are available at runtime.
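To illustrate the Java 1.5 side of that contrast: type parameters are erased at compile time, so two differently parameterised lists share the same runtime class (in C# 2.0 each constructed type is distinct, so the equivalent comparison is false). A small sketch:

import java.util.ArrayList;
import java.util.List;

class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List<Integer> numbers = new ArrayList<Integer>();
        // Java erases the type parameters, so both lists have exactly the same class.
        System.out.println(strings.getClass() == numbers.getClass()); // prints true
    }
}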

Plenty more info in the C# 2.0 specification.

OT2004 : Mock Objects: Driving Top-Down Development

Nat and I were first up with our talk on Mock Objects. Yes, we are still harping on about them :).

Here’s what we covered:

* OO concepts: an application is a web of collaborating objects, each providing a distinct responsibility and taking on multiple roles to provide services to other objects.
* How the process of using mock objects complements TDD to drive out the design of these responsibilities and roles.
* How our original usage of mocks for testing system boundaries (databases, web-apps, GUIs, external libraries) turned out to be a bad approach, and the success we started achieving when we inverted this to only mock types we can change.
* How the process of using mocks very quickly points out key abstractions in your system, differences in responsibilities between objects, and the services objects require.
* Clearing up misconceptions about mocks, including: not using them at system boundaries and what they actually are (not stubs, not recorders, no behaviour).
* Our first public demonstration of the new JMock API to walk through a single iteration of the process.
* Usage patterns.

Feedback from: James Robertson @ Cincom Smalltalk and Mike Platt @ Microsoft.

OT 2004

So here I am, about to start the final day of OT2004. I overslept this morning and missed my chance to join the main conference room for today’s keynote. Joshua Bloch is in there now talking about how to design a good API and why it matters. I’d like to have heard that. Instead I’m taking advantage of the wireless network to blog.

It’s been an awesome conference. Besides going to many great sessions, I’ve had the chance to hang out with awesome people, talk tech and play with code.

How to do Dynamic Proxies in C#

Background: A dynamic proxy dynamically generates a class at runtime that conforms to a particular interface, proxying all invocations to a single ‘generic’ method.

Earlier, Stellsmi asked if it’s possible to do this in .NET (it’s a standard part of Java). Seeing as it’s the second time I’ve talked about it in as many days, I reckon it’s worth blogging…
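For reference, the Java version is built into the JDK (java.lang.reflect.Proxy). A minimal sketch, mirroring the C# example further down:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

class JavaDynamicProxyExample {

    interface Foo {
        String doStuff(int n);
    }

    public static void main(String[] args) {
        // Every call made on the proxy is routed to this single generic method.
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] methodArgs) {
                return "hello from " + method.getName();
            }
        };
        Foo foo = (Foo) Proxy.newProxyInstance(
                Foo.class.getClassLoader(), new Class[] { Foo.class }, handler);
        System.out.println(foo.doStuff(2)); // prints "hello from doStuff"
    }
}

That’s the Java side; in .NET it takes a bit more work.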

As far as I know, there are two ways to do this:

* Using the magical RealProxy class that monkeys with the context (whatever that means). This requires that the object being proxied extends ContextBoundObject.
* Using Reflection.Emit to generate a new class at runtime that overrides/implements the necessary methods and dispatches invocations to a generic handler. This can implement an interface or override any virtual method.

The first approach is pretty trivial, but it means you can only proxy objects that extend ContextBoundObject. Not ideal, as this pulls a lot of stuff into your class that you don’t necessarily need and prevents you from inheriting from anything else.

The second approach is more suitable and less intrusive, but not pretty to write as it involves emitting low-level IL op-codes. However, I did this already in NMock, and at GeekNight Steve and Jon lovingly decoupled the ClassGenerator from the core of NMock, so you can create generic dynamic proxies. So now it’s easy to create a proxy.

h3. Example

Here’s an interface you want to create a dynamic proxy for:

interface IFoo {
    string DoStuff(int n);
}

Note:

* You can use one invocation handler to handle all method calls in the interface. The example above only has one method for clarity.
* This doesn’t have to be an interface. It could be a class, so long as the methods you want to handle are all marked virtual (sigh).
* You can do the same thing for properties as well as methods.

To create the proxy, you need to create an implementation of IInvocationHandler. This is called any time a method is invoked on the proxy.

class MyHandler : IInvocationHandler {
    public object Invoke(string methodName, params object[] args) {
        return "hello from " + methodName;
    }
}

Notes:

* The name of the method being called and parameters passed in are passed to this handler.
* Whatever is returned by the Invoke() method is returned by the proxy.
* If the method is of type void, just return null.

Finally, you need to generate the proxy itself so you can actually use the damn thing:

ClassGenerator generator = new ClassGenerator(
    typeof(IFoo), new MyHandler());
IFoo fooProxy = (IFoo)generator.Generate();

string result = fooProxy.DoStuff(2); // returns "hello from DoStuff"

Ermm and that’s it! Use your dynamic proxy like it’s a real class.

ClassGenerator is part of the NMock library. Use it for AOP style interceptors, decorators, stubbing, mocking :), whatever.

Brain rumblings… Hmm… maybe that should be a delegate instead… maybe I should revisit some stuff…

Making JUnit friendlier to AgileDox

Here’s a little trick to make JUnit display your test results in a similar way to AgileDox.

Override TestCase.getName() in your unit test…

public String getName() {
    // Strip the "test" prefix and put a space before each capital letter,
    // so testCanParseNumbers is displayed as "can parse numbers".
    return super.getName().substring(4).replaceAll("([A-Z])", " $1").toLowerCase();
}

… and the test names in your test runner results are transformed from raw method names like testCanParseNumbers into readable phrases like “can parse numbers”.

To make this easier, stick it in an abstract TestCase and inherit from that instead.
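Something like this, for example (a minimal sketch; AgileDoxTestCase is just a made-up name):

import junit.framework.TestCase;

// Inherit from this instead of TestCase directly.
// Assumes all test method names start with the usual "test" prefix.
public abstract class AgileDoxTestCase extends TestCase {

    public String getName() {
        return super.getName().substring(4).replaceAll("([A-Z])", " $1").toLowerCase();
    }
}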

Inversion of Control and Dependency Injector Pattern

Martin Fowler has published a new article:

bq. In the Java community there’s been a rush of lightweight containers that help to assemble components from different projects into a cohesive application. Underlying these containers is a common pattern to how they perform the wiring, a concept they refer under the very generic name of “Inversion of Control”. In this article I dig into how this pattern works, give it the more specific name of “Dependency Injector”, and contrast it with the Service Locator alternative. The choice between them is less important than the principle of separating configuration from use.

Read more
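If the distinction is new to you, here’s a rough Java sketch of the two styles the article contrasts (MovieLister/MovieFinder are borrowed from the article; the rest is made up). With a service locator the component reaches out and asks a well-known registry for its dependency; with dependency injection whoever assembles the application hands the dependency in.

interface MovieFinder {
    java.util.List findAll();
}

class DummyMovieFinder implements MovieFinder {
    public java.util.List findAll() { return new java.util.ArrayList(); }
}

// Service Locator style: the lister looks up its own dependency.
class ServiceLocator {
    static MovieFinder movieFinder() { return new DummyMovieFinder(); }
}

class LocatorMovieLister {
    private final MovieFinder finder = ServiceLocator.movieFinder();
}

// Dependency Injection style: the dependency is handed in, which also makes
// it trivial for a test to pass in a mock.
class InjectedMovieLister {
    private final MovieFinder finder;

    InjectedMovieLister(MovieFinder finder) {
        this.finder = finder;
    }
}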

What’s a YAUL? Or a YAUC?

Hani has a very valid post about util libraries.

Libraries that just contain a bunch of unfocussed utils rarely provide a benefit to anyone other than the original authors.

When developers are looking into using a third party library, it is for a specific reason. A library that has a specific focus may meet this requirement, whereas an all-purpose util library rarely will (and even if it does, it is unlikely that it will be found).

Much of the code within such a library may serve little purpose outside the context of the application it was originally built for, but not necessarily all of it.

By identifying the code within it that has a defined focus, you can extract that out into a library with a specific purpose.

Example

Whilst developing many parts of OpenSymphony (and our own bespoke applications that use OpenSymphony), we noticed there were many odd little methods that we found useful, which led us to conclude that other people would also find them useful. We slapped them together into a util library and OSCore was born.

This library lacked a clear responsibility and in the end became a dumping ground for code that didn’t seem to belong anywhere else. As a consequence, it wasn’t successful. No one is going to choose to use a library that does this, that, a bit of this and has that really useful method over there.

Eventually we learned this, and the one part of the library that had a clearly defined use (PropertySet) was split out into its own project. Today, PropertySet is a well used library, whereas the rest of OSCore has faded into the background.

Incidentally, this is a lesson we learned early on in OpenSymphony, and the result has been many quality components since, many of which I rely on today.

YAUCs

As well as libraries, this also applies to individual classes. A class named CheeseUtil expresses very little about its focus other than that it’s got something to do with cheese. As a result, util classes often grow fairly big and lack clear design (in fact, they often end up as a bunch of public static methods).

In this case a util class can be refactored into many smaller classes, each of which can flourish on its own, with its own design.
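As a hypothetical before-and-after sketch (all the cheese names are made up):

interface Cheese {
    int ageInDays();
}

// Before: a grab-bag of public static methods whose only common theme is "cheese".
class CheeseUtil {
    static boolean isRipe(Cheese cheese) {
        return cheese.ageInDays() > 30;
    }
    static String describeSmell(Cheese cheese) {
        return cheese.ageInDays() > 60 ? "pungent" : "mild";
    }
}

// After: each responsibility becomes a small class that can grow its own design
// (state, configuration, polymorphism) rather than staying a static method.
class RipenessChecker {
    private final int minimumAgeInDays;

    RipenessChecker(int minimumAgeInDays) {
        this.minimumAgeInDays = minimumAgeInDays;
    }

    boolean isRipe(Cheese cheese) {
        return cheese.ageInDays() >= minimumAgeInDays;
    }
}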

YAUC Example

In OSCore there’s a class called XMLUtils.

From the JavaDoc:

XMLUtils is a bunch of quick access utility methods to common XML operations.

These include:

* Parsing XML stream into org.w3c.dom.Document.
* Creating blank Documents.
* Serializing (pretty-printing) Document back to XML stream.
* Extracting nodes using XPath expressions.
* Cloning nodes.
* Performing XSL transformations.

Authors: Joe Walnes, Hani Suleiman :)

Unless someone had specifically looked at the JavaDoc or methods of every single class in the system, there would be little chance of knowing that.

Breaking this class up into many smaller classes, errm, objects, such as XMLParserFacade, XMLPrettyPrinter, NodeCloner, XPathEvaluator and XSLTransformer, expresses the intent more clearly and makes each piece much more likely to be found.

(Hani and I have since learned from these mistakes).

h4. Conclusion

Small and clearly defined responsibilities for libraries and individual classes results in improved reuse as they have an increased chance of meeting part of a developer’s requirement.

When simplicity backfires

I put a lot of effort into refactoring a larger, more complicated library into the lean and simple Squiggle you see today.

Annoyingly…

bq. Freshmeat tries to avoid listing projects which fall below a certain level of size and/or complexity, and yours is unfortunately a bit too simple for our application index. Your contribution has been respectfully declined.

Easily build complicated SELECT statements with Squiggle

After serving me loyally for four years, I’ve finally got around to open-sourcing Squiggle – a small Java library for dynamically building complicated SQL SELECT statements.

Sometimes (not often these days) you just need to get your hands dirty and write a beastly SELECT statement. Maybe a persistence layer is deemed overkill for your application, or maybe a persistence layer is struggling with the type of query you want to do. There are times when writing some SQL is the right thing to do.

Here’s the blurb from the website:


Squiggle does one thing and only one thing. It generates SELECT statements based on criteria you give it. Its sweet spot is applications that need to build up complicated queries with criteria that change at runtime. Ordinarily it can be quite painful to figure out how to build this string. Squiggle takes much of this pain away.

The code for Squiggle is intentionally clean and simple. Rather than provide support for every thing you could ever do with SQL, it provides support for the most common situations and allows you to easily modify the source to suit your needs.

Features

* Concise and intuitive API.
* Simple code, so easy to customize.
* No dependencies on classes outside of JDK1.2.
* Small, lightweight, fast.
* Generates clean SQL that is very human readable.
* Supports joins and sub-selects.

Here’s a very simple example:

Table people = new Table("people");

SelectQuery select = new SelectQuery(people);

select.addColumn(people, "firstname");
select.addColumn(people, "lastname");

select.addOrder(people, "age", Order.DESCENDING);

System.out.println(select);

Which produces:

SELECT
    people.firstname ,
    people.lastname
FROM
    people
ORDER BY
    people.age DESC
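And because the query is just an object built up with method calls, the shape of the statement can be decided at runtime. A quick sketch using only the calls shown above (the search-form scenario and the method itself are made up):

// Hypothetical: the caller decides at runtime which columns and sort order to use.
SelectQuery buildPeopleQuery(String sortColumn, boolean includeAge) {
    Table people = new Table("people");
    SelectQuery select = new SelectQuery(people);

    select.addColumn(people, "firstname");
    select.addColumn(people, "lastname");
    if (includeAge) {
        select.addColumn(people, "age");
    }

    select.addOrder(people, sortColumn, Order.DESCENDING);
    return select;
}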

Go check out the website and two minute tutorial.