Thursday, February 25, 2010

“We’ll paint it red and call it a Ferrari”

We have all seen them. Aspiring boy racers who have taken an old banger and put on the trimmings of a sports car - spoilers, big noisy exhausts and so on. But it doesn’t change what is under the disguise - an old vehicle, nearing the end of its useful life, one that can still limp along and get the owner from A to B. Eventually. If you have an old, rusty Ford Fiesta, you can’t get away with simply painting it red and calling it a Ferrari. It is still a Fiesta.

Yet it still appears that some parts of the software industry genuinely believe that they can get away with this mentality. Let me say this one more time (with feeling) - you cannot simply take a broken software development process, put some agile tools and ceremonies around it and expect it to perform like the real thing. Or to put it another way, no matter how many spoilers and exhausts you bolt on, they can never have a significant impact on the performance of something that already has the aerodynamics of a brick and the power of an asthmatic hamster....

To change a Fiesta into a Ferrari you need to change what is at the very heart - the engine. And you need to upgrade what that engine is attached to - the chassis, brakes, bodywork, oil. It’s the same with software. To turn your clapped-out system into a high-performance, delivery-focussed process you need to change the people by educating, empowering and inspiring them - they’re your development engine, and many of them will already be up to the new, high-performance job if given the chance. Then change anything that stops them performing - existing processes, bureaucracy, corporate inertia. Make nothing sacred - if it’s in the way, remove it. Now add some performance oil - make the resistance to further change as low as possible.

Any vehicle needs a skilled, motivated, passionate driver at the wheel who knows where he wants to go. This is your Product Owner. Put someone like this at the wheel and you just might have your race winning formula.

One final thought to consider. At least with software development you stand a chance of changing your old banger to a Ferrari.

Monday, February 22, 2010

Repeat after me....

"What are we trying to do?"

"Where is the failing test?"

Repeat this mantra all the while you are developing. When you pick up a story, when you start a new slice, when you rotate onto a story, when you simply feel bogged down in detail and need enlightenment.

These questions leave you with nowhere to run to, nowhere to hide. And certainly no excuses.
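The mantra can be made concrete in code. Here is a minimal, hypothetical sketch (the story - formatting a price for display - and the function name are invented for illustration): the failing test is written first, then just enough code to make it pass.

```python
# "What are we trying to do?" - show a price in pence as pounds and pence.
# "Where is the failing test?" - right here, written before the code.

def format_price(pence):
    # Just enough implementation to make the tests below pass.
    return "£{}.{:02d}".format(pence // 100, pence % 100)

def test_price_is_formatted_as_pounds_and_pence():
    assert format_price(1234) == "£12.34"

def test_small_amounts_keep_two_decimal_places():
    assert format_price(5) == "£0.05"
```

If you cannot point at a test like this, the two questions have already given you your answer: stop and write one.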

Wednesday, February 03, 2010

Starting with Kanban Q&A

OK, I admit it. I have played on the Dark Side, and can confirm they have some damned good cookies. Yes, I introduced kanban to the Scrum process being used by my customer.....

I know there are many people interested in this technique, so here are some key questions (and answers!) that you will come across during rollout. It's not exhaustive, but hopefully it will provide a practical starting point from which you can experiment. I have assumed some knowledge of what kanban is - limited work in progress, no timeboxes, and so on.

Firstly, some background. The project I was coaching consisted of three cooperating but independent teams, each of 4-6 devs and 1 QA. A single Business Analyst acted as Customer for the project - a Product Owner by proxy. The project used fairly standard Scrum techniques to manage progress - stories on cards, on a board, estimated using planning poker. The board consisted of the swim lanes Not Started, In Progress, QA and Done. Standard stuff.

However they were suffering from problems associated with the abstract nature of story points - are they really days or fluffy bunnies? And how many Gummi Bears make a Mars Bar? So I was looking for a way to simplify planning further but without losing the ability to plan ahead.

So what happened when we rolled out a kanban system into the first team and limited their work in progress? They had to address certain key questions:

Q. "How many columns should we have?"

The first team initially stuck with the old trusty "Not Started, In Dev, QA, Done" pattern. But they put a work in progress (WIP) limit across both In Dev and QA columns. Later teams simply didn't bother differentiating, and simply have an "In Progress" column, forcing developers to actually talk to QAs when a card is finished (a radical idea at the time!).

Q. "What are our WIP limits?"

There was some interesting discussion around this, especially since all the teams were starting to actively encourage pair programming. They quickly realised that the WIP limit can be used to enforce pairing, but also allow a degree of solo development.

Remembering we have a team of 6 developers, they initially went for a limit of 3 on 'Not Started' and 4 on 'In Dev/QA'. Considering the latter first, a limit of 4 allows 2 developers to work independently without pairing; the rest are forced to pair up.

The team quickly found that the 3-card 'Not Started' WIP limit was too small. It ran out too quickly, and it did not allow enough focus on what comes next (e.g. stories that naturally group together). It currently stands at 6, which definitely seems better without changing the cycle time too much.

The team is operating with no slack in their In Dev/QA queue. It's always full. This appears to be working so far, but there have been no "urgent, do now" stories yet.
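The rule the magnets enforce is simple enough to sketch in a few lines of code. This is a hypothetical illustration (the class and column names are mine, not the team's): a column refuses a pull once its WIP limit is reached, just as a card cannot enter the physical board without a free magnet.

```python
# Hypothetical sketch of WIP enforcement: a column refuses a pull once
# its limit is reached, mirroring the team's physical magnets.

class Column:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.cards = []

    def pull(self, card):
        # A card may only enter if a "magnet" (WIP slot) is free.
        if len(self.cards) >= self.wip_limit:
            return False  # blocked - go finish something instead
        self.cards.append(card)
        return True

not_started = Column("Not Started", wip_limit=6)
in_dev_qa = Column("In Dev/QA", wip_limit=4)

for story in ["S1", "S2", "S3", "S4", "S5"]:
    in_dev_qa.pull(story)
# The fifth pull is refused: the column is full, so the team must
# finish (or pair on) existing work rather than start more.
```

The important behaviour is the `False` branch: the system gives you no legitimate way to start new work while the column is full.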

Q. "How do we stop blowing our WIP limit?"

A problem quickly arose from the particular flavour of storycard discipline I teach. I ask people to start with a simple story, and write decisions and agreed requirements on the back to capture them as they are discovered - hence it's often useful to take it with you when working on the card. An avatar is used as the placeholder on the board to show what's missing.

It quickly became apparent that stories were being sucked into empty WIP spaces because the card was on someone's desk, even though an avatar was there; the WIP space wasn't "real". This meant the agreed WIP limit was being exceeded. This was the case even though the team only had enough magnets for the limit (magnets stay on the board, but were being reused).

They solved this initially by having 'placeholder' cards - generic cards that were swapped with real story cards. This carried on until the team had the discipline to only pull in cards when the magnets were in an "available" area on the board. (Photos to follow!)

Q. "How do we track our cycle time?"

A. Cycle time is the time from a story being pulled out of the backlog to be done to actually reaching "Done" - a critical measurement for planning. The solution for tracking this was elegantly simple - the Customer has a date stamp. When a card gets pulled in to 'Not Started', it gets stamped. When the Customer sees the story has been finished to his satisfaction, it gets stamped. No-one else is allowed to use the stamp, so the Customer has to see the story demonstrated. Simple.
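Turning the stamps into numbers is trivial, which is rather the point. A minimal sketch (the dates are invented examples, not project data): each finished card carries two stamped dates, and the difference is its cycle time.

```python
from datetime import date

# Hypothetical sketch: the Customer's two date stamps are all the data
# needed to measure cycle time (pulled into 'Not Started' -> accepted as Done).

def cycle_time_days(pulled, accepted):
    return (accepted - pulled).days

# Example stamps for three finished cards (invented dates).
stamps = [
    (date(2010, 2, 1), date(2010, 2, 4)),
    (date(2010, 2, 3), date(2010, 2, 5)),
    (date(2010, 2, 4), date(2010, 2, 8)),
]

times = [cycle_time_days(pulled, accepted) for pulled, accepted in stamps]
average = sum(times) / len(times)  # average cycle time in days
```

Two rubber stamps and a subtraction - no tracking tool required.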

Q. "I can't line up cards that follow a natural order"

This was a Big Issue for some of the team. Limiting the incoming work had an interesting effect - it forced people to talk to each other more! Especially the developer-customer conversations. Rigidly limiting the number of cards in 'Not Started' encouraged the Customer and Developers to work out what really needed to be there. An unexpected but valuable effect.

Q. "What about blocked stories?"

A. Many of the stories in the project are dependent on 3rd party software. So if that software is not working, the story cannot progress to "Done". It is blocked. There is no point in a blocked story blocking a WIP slot, so the team introduced the idea of a "Sin Bin" - stories that are blocked beyond the team's control. Stories in the Bin are monitored for progress, and when they become unblocked they are re-added to the backlog for prioritisation as normal. The Sin Bin also provided a useful visual indication of just how much work was being blocked by the 3rd party.
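The Sin Bin mechanic can be sketched as a pair of moves between lists. This is a hypothetical illustration (story names and function names are mine): blocking a card frees its WIP slot, and unblocking returns the card to the backlog rather than straight to the board.

```python
# Hypothetical sketch: a blocked card leaves its WIP slot for the Sin Bin,
# freeing the slot; once unblocked it returns to the backlog for
# re-prioritisation like any other story.

in_progress = ["S1", "S2", "S3"]
sin_bin = []
backlog = ["S4", "S5"]

def block(card):
    in_progress.remove(card)
    sin_bin.append(card)  # the WIP slot is now free for other work

def unblock(card):
    sin_bin.remove(card)
    backlog.append(card)  # re-prioritised as normal, not jumped back in

block("S2")    # 3rd party software broke; S2 goes to the Sin Bin
unblock("S2")  # fix arrived; S2 rejoins the backlog at the Planning Game
```

The size of `sin_bin` at any moment is the visual indicator mentioned above: how much work the 3rd party is currently holding up.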

Q. "How do we show burndown/progress?"

This is the one issue that I am not yet happy with. How to estimate a finish date for a given amount of work.

On the one hand, there is the cycle time. No problem there. A story card start-to-finish is taking 3 days or so. But the problem is the extensive backlog. There is no guarantee that the stories are correctly sized, so either the team needs to go through every single story to make sure it is roughly the right size (or is an 'epic' equivalent to 2, or 3, or however many real stories), or to increase the size of a story to a "Minimum Marketable Feature" and track those.

I admit I don't like the idea of MMFs - in this context they would be too big a slice through the system and remove the benefits of iterative development. Currently the teams are taking the first approach - estimating epics as stories.
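For what it's worth, the arithmetic behind card-counting forecasts is straightforward, provided you accept the big assumption that stories are roughly the same size. A hypothetical sketch (the numbers and function name are invented): recent throughput plus the backlog count gives a finish estimate, no story points required.

```python
import math

# Hypothetical sketch of card-counting forecasting: if stories are
# roughly the same size, throughput (cards finished per week) and the
# backlog count give a finish estimate - no story points required.

def weeks_to_finish(backlog_size, finished_recently, over_n_weeks):
    throughput = finished_recently / over_n_weeks  # cards per week
    return math.ceil(backlog_size / throughput)

# 40 cards remaining, 12 cards finished over the last 4 weeks:
estimate = weeks_to_finish(40, 12, 4)
```

The fragility is all in the inputs, not the formula: one mis-sized epic in the backlog skews the count, which is exactly why the teams are estimating epics as multiple stories first.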

Another planning issue involves the wider business. Like many companies, they are addicted to Gantt charts and are nervous about not having a definite statement that "this story will take x days". Story points and burndown graphs showing the real situation were not always well received because they didn't fit the "plan". Switching to counting cards is hitting similar problems, and is seen by many as an oversimplification.

Q. "We don't have timeboxes any more, so do we still need regular demonstrations? Retrospectives?"

A. YES! The teams demonstrate their progress to a wider audience of stakeholders every week. Everything demonstrated is truly "Done" - it is ready to ship. No smoke and mirrors, no mock layers, and everything running on the real product. This approach has been hugely successful in motivating the team, and ensuring the stakeholders know exactly where the issues are. By using a weekly heartbeat like this, any problems and decisions can be identified and addressed quickly.

Teams are still retrospecting every week too. However the next logical step would be to change this to ad hoc kaizen events as problems are found, and maybe holding a larger, formal retrospective less frequently.

Planning Games are held when required - i.e. when the 'Not Started' column is nearly empty. And because they only need to pick a maximum of 6 cards they tend to be brief.

So overall, how has the change been received? The main feedback from the people actively involved with the project has been almost entirely positive. Everyone feels it is a more natural way to work.

From a coach's point of view, the change to "Scrumban" was far easier than I imagined. Why did I wait so long?

Don't just interview new developers. Audition them!

It's such a common mistake. A company claims to be "agile" (whatever that is) yet keeps the old, stale accoutrements of waterfall interviews. The developer candidate walks in, answers random questions on what can be extremely wide and deep subject matter, and is then judged on their ability without ever actually demonstrating that they can do the job. Sometimes the successful candidate really can fit in with the team, sometimes not. But most importantly, good, well-suited people will inevitably be dismissed as unsuitable simply because they learn the principles and not the detail.

Being a "good" software developer is no longer about memorising a spec. The ability to regurgitate arbitrary sections of a language specification parrot fashion on request, although previously useful, has been de-emphasised by the invention of radical new ways to store information. Like books. And the internet. All you need are the basic frameworks, objects and principles; i.e. you know the bare bones, but look up the detail.

Nor is software development about how you react as an individual to unexpected coding puzzles while denied even the most basic of tools to help you - if the first reaction is "I wouldn't normally solve it like this..." (or worse, "There's a smarter way but you're not letting me use it") then the entire exercise is likely to be pointless. I have lost count of the number of companies who, over the years, have handed me a pen and paper, or a whiteboard marker, and expected me to write a technically correct solution to an abstract problem. No real world example, so it's not real world coding.

Agile teams are about people and the way they interact. It is all about being able to work smart. It is about being able to write good code collaboratively, helped by tests. The recruitment process needs to put more emphasis on interactions, initiative, and ability to process unfamiliar information - the wetware aspect of development. So what if you cannot remember the exact interface to QuadCurve2D? As long as you know it exists and understand the design patterns it is based on you can look up the details.

So how to identify the right people for the job if traditional interview techniques don't work?

Simple. Audition them. Get the candidate actually doing something related to the job they have applied for. Do they look for the right things from the start? ("Where's the continuous integration server?", "Where are your tests?"). Get them down and dirty in the codebase, see their reaction. Do they get lost in the detail, or do they start to tease out the structure with tests? Code with them. Cooperate with them. Can you work with them? Does their coding behaviour fit team culture? How fast do they learn? Are they a craftsman or hacker?

You could even say you are unit testing the developer.

Auditions do require a little more effort, but in my experience they take some of the guesswork out of the traditional interview process when trying to find good people for the team.

Tuesday, February 02, 2010

Sign up for change

The Independent has recently published an article highlighting the state of the UK Government's IT projects. Let's just say that it's not a success story:

...the total cost of Labour's 10 most notorious IT failures is equivalent to more than half of the budget for Britain's schools last year. Parliament's spending watchdog has described the projects as "fundamentally flawed" and blamed ministers for "stupendous incompetence" in managing them.

Think about it. That's £26 billion of taxpayers' money down the drain because of "fundamentally flawed" projects and "stupendous incompetence". Putting this into context, university funding is being cut by a mere £449 million next year - so we are cutting investment in our future, while frittering away £26 billion on avoidable failures. Epic Fail.

I have already suggested that these failures are likely to have been entirely avoidable. So have others. But there needs to be a fundamental change in the way that Government IT projects are run.

Rob Bowley has started a petition to Number 10 to try and raise the profile of this problem with the Government. It reads:

We the undersigned are tired of hearing how much of our money is being blown on failed government IT projects.

We believe that this is mainly due to the nature in which these projects are firstly procured and then delivered - that of demanding and committing to all the requirements for enormous projects up front, consigning them to failure before they have even begun.

The private sector has mostly moved away from this failed model to an incremental approach, which allows for changes in understanding and requirements and enormously reduces the chances of failure. It's about time our government did too.

We ask the Prime Minister to demand a review of the current approach and look at adopting a more incremental and agile approach to Government IT projects.

Whether it makes a difference only time will tell, but I urge anyone who cares about public sector IT development to sign it. Any journey starts with a first step, and this might be it.