Monday, November 30, 2009

Services, clients, chickens and eggs

I find it remarkable how some teams manage to invent new and exciting ways to undermine the benefits of agile processes and principles. The latest I have come across is what I have started to call the SOA chicken-and-egg dilemma: how do you develop a service and an associated client?

Here is the wrong (but sadly all too common) way to approach this: mock the client and build the service. Then mock the service and build the client. Then integrate. Each step is presented to the Product Owner as “done” separately. Worse still, some projects use separate teams to develop each side.

See the problem? It’s mini-waterfall: make the service, make the client, make it work. Nothing can be used until you have developed both sides in isolation and integrated them. Each side is based on BUFD (big up-front design) assumptions about what it should do, not on what the system needs to do from a user’s perspective.

OK, so how would I handle this? Slice the functionality, not the components. Pick a nice, easy user-oriented function involving the client-service combo, and make it work end-to-end. Then pick the next piece of functionality and make it work, thickening the slice. Iterate until sufficient. By all means use mocks to make life easier, but don't mislead yourself and your customer into thinking that if it works with the mock it’s ready to ship. Only ever demo as complete using a real client and service.
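One thin vertical slice might look like the sketch below: a real (if tiny) service and a real client for it, exercised together end-to-end rather than against mocks. The "greeting" feature and all the names here (`GreetingHandler`, `fetch_greeting`) are invented for illustration, not taken from any particular project.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class GreetingHandler(BaseHTTPRequestHandler):
    """The service side of the slice: one real, shippable feature."""
    def do_GET(self):
        body = json.dumps({"greeting": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep demo output quiet

def fetch_greeting(base_url):
    """The client side of the slice: it talks to the real service, not a mock."""
    with urlopen(f"{base_url}/greeting") as resp:
        return json.loads(resp.read())["greeting"]

# End-to-end check: start the real service, drive it with the real client.
server = HTTPServer(("127.0.0.1", 0), GreetingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = fetch_greeting(f"http://127.0.0.1:{server.server_address[1]}")
server.shutdown()
print(result)
```

The point is not the code itself but the shape of the work: the next feature thickens this same slice on both sides at once, and "done" always means the real client talking to the real service.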

By approaching the problem in this way you get all the usual benefits: the ability to ship whatever functionality has been implemented so far, fast identification of design problems, and the ability to stop when there is just enough functionality.

The principle underlying this approach is simple. Done is DONE. No smoke. No mirrors. No excuses. If it can’t ship it ain’t finished.

Tuesday, November 03, 2009

Government IT failures - could we do better?

Once again the UK Government has a failed IT project. Failed as in almost three times over budget (an approximate overspend of a whopping £456 million), 3 years late (and still not delivered) and not satisfying fundamental requirements.

The chairman of the committee commented, "There was not even a minimum level of competence in the planning and execution of this project." Ouch!

So what went wrong? From the Public Accounts Committee report it appears that some of the key issues were:

  • Underestimating technical complexity
  • Underestimating the need to standardise ways of working to avoid excessive customisation
  • Poor planning
  • Poor financial monitoring
  • Poor supplier management
  • Too little control over changes

Costs and progress were not monitored for 3 years?!

Sound familiar? It does to me - it smells like a typical "throw the requirements over the wall and hope" waterfall failure pattern.

But would a more agile, flexible approach have helped here?

Continuous delivery of requested features that were truly "Done" and ready to ship would have highlighted early that the project was floundering. Iteratively delivering real features enables open, honest and irrefutable reporting, including useful data like estimated end dates and cash burn - thereby providing genuine financial monitoring.
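The arithmetic behind that kind of reporting is trivial once real features are being delivered each iteration. A hypothetical sketch, with all the numbers invented for illustration: given features completed and cash spent per iteration, velocity and burn rate fall straight out, and from them a projected end point and total cost.

```python
# Invented example data: "Done" features shipped and cash spent per iteration.
features_done_per_iteration = [3, 4, 3, 4]
cost_per_iteration = [50_000, 55_000, 52_000, 53_000]  # in pounds
total_features_wanted = 40

# Velocity and burn rate come straight from delivered, measurable work.
velocity = sum(features_done_per_iteration) / len(features_done_per_iteration)
burn_rate = sum(cost_per_iteration) / len(cost_per_iteration)

# Project forward: how many iterations and how much cash to finish?
features_remaining = total_features_wanted - sum(features_done_per_iteration)
iterations_left = features_remaining / velocity
projected_total_cost = sum(cost_per_iteration) + iterations_left * burn_rate

print(f"Velocity: {velocity:.1f} features/iteration")
print(f"Iterations remaining: {iterations_left:.1f}")
print(f"Projected total cost: £{projected_total_cost:,.0f}")
```

No team could go dark for three years with even this crude a report produced every iteration - the trend line exposes trouble within a few cycles.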

Highlighting a failing project early allows the right conversations to take place, leading to corrective action before the situation is critical. It is highly likely that problems with work practices, excessive customisation and technical difficulties would have been identified and fixed early.

Finally, controlling changes. By making the resistance to change as low as possible and applying a customer-focussed "what is important" criterion, the system would have had its most valuable features implemented first. This ensures the true nature of the project reveals itself very quickly: if it is technically complex, that is discovered early and either mitigated (maybe the difficult feature is not that important after all?), or the project can be shelved or cancelled completely.

So in my view, applying some common sense could have helped this project deliver or, at the very least, not burn £456 million in a failure.

There are two more points I would make. I don't know who was to blame, nor do I care. But I can see two fundamental behaviours that compounded a bad situation:

  • The Government abdicated its responsibility as Product Owner. Allowing a team to go dark for 3 years, especially when they have effectively a blank cheque, is simply ridiculous.
  • The Supplier abdicated its responsibility as development team. Even if the Product Owner is not engaging fully, I believe that a Team has a professional responsibility to inform them of progress, good or bad.

I see these failure modes all the time. But it does not have to be like this - there are a growing number of genuinely professional companies, teams and developers who could improve on this performance if only they were given the chance.