Tuesday, December 21, 2010

Broken Windows effect proved?

There has been a lot written about the “Broken Windows” effect - the way that even good people will begin to do bad things if everything around them is bad. In areas that look badly maintained, people tend to drop more litter. Where people are obviously flouting the rules, others are more likely to bend them too. And in software, when you are working on bad code you are more likely to write more bad code.

I first stumbled across the idea in an interview with the Pragmatic Programmers, but the evidence given there is fairly anecdotal. So I was pleased to find something more definite. And it does make for interesting reading. The effect is indeed real, and significant: the number of people breaking social norms when they see that others have done something similar more than doubles. It's the 'no-one will notice another little bit of mess' effect.

The study quoted in the article reinforces the need to maintain a high level of hygiene in your software. Some typical pointers:
  • Keep classes small, compact and easy to read. Keep honest by using peer review (pair programming, formal review etc).
  • Keep your code syntactically green. Don't tolerate compiler warnings unless absolutely unavoidable, and even then make the exceptions explicit (eg by using annotations). All modern IDEs can apply rules that warn of potentially error-prone constructs and styles.
  • If you are serious about unit testing, unit test everything. If you need exceptions (mutators?) then make sure everyone knows what they are and agrees with the decision, then get everyone to police it. Reinforce the social norm.
  • Keep your tests green. No excuses. If it breaks, fix it or roll back quickly (I have heard of one team using an SVN hook on their gateway build; if that fails, the changes are immediately and automatically reverted to the last successful build).
  • Make sure your processes (QA, deployment etc) are slick and clean. If they're not, iterate them until they are. Anything that is difficult to do will encourage quick, dirty hacks and workarounds "until we get around to sorting things out".
  • Actively drive out all forms of technical debt. Or as one team succinctly put it, “Don’t step over the poo”. Don’t let tech debt accumulate and become acceptable otherwise it will fester and grow. Technical debt is the software equivalent of broken windows.
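The "fix it or roll back quickly" rule above can even be automated, as that SVN-hook team did. Here is a purely hypothetical sketch of the decision such a hook makes - the function name, revision handling and the wiring to a real build server are all my assumptions, not that team's actual setup. (`svn merge -c -REV .` is Subversion's standard way to reverse-merge a single committed revision in a working copy.)

```python
# Hypothetical "revert on red" helper - an illustrative sketch, not a real hook.

def revert_command(build_passed: bool, failed_revision: int):
    """Return the svn command that would undo a revision, or None on a green build."""
    if build_passed:
        return None
    # Reverse-merge the offending revision; a real hook would then commit this.
    return ["svn", "merge", "-c", f"-{failed_revision}", "."]

print(revert_command(True, 1234))   # → None
print(revert_command(False, 1234))  # → ['svn', 'merge', '-c', '-1234', '.']
```

A real gateway hook would run something like this after the build reports its status, then commit the reverse merge so the trunk is back to the last green state.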
I could go on, but you get the idea. The golden rule is:
Don’t live with broken windows. If you see one, fix it!

Friday, October 01, 2010

Stop Press! It's official. You're better off promoting people randomly

Those fine people at the Improbable Research Institute have awarded the 2010 Ig Nobel Prize for Management to Alessandro Pluchino, Andrea Rapisarda, and Cesare Garofalo of the University of Catania, Italy, for demonstrating mathematically that organizations would become more efficient if they promoted people at random.
 
So that's it. Maths supports what many people strongly suspected to be true for years. The Peter Principle is real. We can replace all those pointless appraisal systems with a random number generator.

Read their paper here.

Monday, September 13, 2010

Too easy to pair? Then automate it!

"We don't pair all the time, some things are so simple they don't need it". I have lost count of the number of times I have heard this.

It's undoubtedly a seductive idea: something done repeatedly that is so easy that one person can do it, without error, guaranteed. But it is in fact a very subtle process smell, one so subtle that it's easy to overlook or justify away.

If something is so simple that an individual can do it without error, guaranteed, it is a prime candidate for automation. Take some time and care (and pair) to free up developers so they can focus on what they're best at - things that require creative thought. Don't waste any more time on the dull and scriptable than you need to.


What's in a name?


Everything, young padawan!




Thursday, July 01, 2010

The "Dev Complete" Fallacy

Has your manager ever walked over to you and asked when the story you're working on will be "dev complete"? Or have you ever overheard someone ask in a planning meeting "When will the product be dev complete?". Yes, I'm sure it sounds familiar. It's also one of my least favourite managementspeak phrases because it is nonsensical.


Let's stop for a moment and consider what's being asked. What is "development complete"?


Many managers are stuck in a waterfall, sequential way of thinking. Requirements are thrown over the Chinese wall to developers, who throw them over the wall to QA and then, somehow, magically, working software appears out of the end. To them "dev complete" usually means ready to pass to QA. However the tacit implication is that it is not expected to come back again… that, well, development is complete… The problem is that unless it's been tested, who can know whether this is the case? And if there is a problem, then more development is required, so it's not complete.


So here's my alternative definition for this bizarre phrase:

"Dev Complete" means Done. As in Complete. Finished. Tested. Ready to ship.

Any other definition is simply "Dev incomplete".


Wednesday, May 26, 2010

Drop the "Certified" tag, guys!

Scrum Alliance has now launched its "Certified Scrum Developer" qualification. It's a seductive idea - hire people with this certification and they will have been pre-bootstrapped into an agile way of thinking and working, just as with the Certified Scrum Master. So why do I think it is such a bad idea?


It's taken a while to work it out, but I think I finally understand why I (and some others) don't like it. It's wordology. Pure and simple. "Certificate" and "Certification" carry a whole bunch of mental baggage.
Consider these two phrases:

"I attended a 5 day Object Oriented Design course"
"I am a Certified Object Oriented Designer"

I am referring to the exact same thing - way back in the Dark Ages I did indeed go on a week long course in OOAD. They gave me a certificate at the end, presumably because I turned up and didn't snore too loudly. Yet the first phrase carries far less authority than the second.

And here lies the problem with Scrum so-called "Certification". It suggests that you are an "expert", or at least have attained a level of competence over and above "I have stayed awake through a 2 day course". On the flip side, it undoubtedly does make for a better sales pitch and allows people to make a ton of hard cash issuing the certifications. Then there are the courses to certify the trainers, and to certify the trainer trainers, not to mention the yearly subscription to keep your certification....

The problem could be fixed with the stroke of a pen. Renaming Certified Scrum Master to the "Introduction to Becoming A Scrum Master Course" would be a great start. While we're at it, rename Certified Scrum Developer as the "Introduction to Agile Development in Scrum Course". You get the idea. Reserve "Certified" for anything that gets assessed properly, where you show that you have indeed mastered the skills.

Needless to say I won't be rushing out to become Certified just for the paper certificate. I'll let my CV show that I'm post-certification standard. If a company believes that attending the course is more important than actually being able to do it then I probably don't want to work there.

Thursday, February 25, 2010

“We’ll paint it red and call it a Ferrari”

We have all seen them. Aspiring boy racers who have taken an old banger and bolted on the trimmings of a sports car - spoilers, big noisy exhausts etc. But it doesn't change what is under the disguise - an old vehicle, nearing the end of its useful life, that can still limp along and get the owner from A to B. Eventually. If you have an old, rusty Ford Fiesta, you can't just paint it red and call it a Ferrari. It is still a Fiesta.

Yet it still appears that some parts of the software industry genuinely believe that they can get away with this mentality. Let me say this one more time (with feeling) - you cannot simply take a broken software development process, put some agile tools and ceremonies around it and expect it to perform like the real thing. Or to put it another way, no matter how many spoilers and exhausts you bolt on, they can never have a significant impact on the performance of something that already has the aerodynamics of a brick and the power of an asthmatic hamster....

To change a Fiesta into a Ferrari you need to change what is at the very heart - the engine. And you need to upgrade what that engine is attached to - the chassis, brakes, bodywork, oil. It’s the same with software. To turn your clapped out system into a high performance delivery-focussed process you need to change the people by educating, empowering and inspiring them - they’re your development engine, and many of them will already be up to the new, high performance job if given the chance. Then change anything that stops them performing - existing processes, bureaucracy, corporate inertia. Make nothing sacred - if it’s in the way, remove it. Now add some performance oil - make the resistance to further change as low as possible.

Any vehicle needs a skilled, motivated, passionate driver at the wheel who knows where he wants to go. This is your Product Owner. Put someone like this at the wheel and you just might have your race winning formula.

One final thought to consider. At least with software development you stand a chance of changing your old banger to a Ferrari.

Monday, February 22, 2010

Repeat after me....

"What are we trying to do?"

"Where is the failing test?"


Repeat this mantra all the while you are developing. When you pick up a story, when you start a new slice, when you rotate onto a story, when you simply feel bogged down in detail and need enlightenment.

These questions leave you with nowhere to run to, nowhere to hide. And certainly no excuses.

Wednesday, February 03, 2010

Starting with Kanban Q&A

OK, I admit it. I have played on the Dark Side, and can confirm they have some damned good cookies. Yes, I introduced kanban to the Scrum process being used by my customer.....

I know there are many people interested in this technique, so here are some key questions (and answers!) that you will come across during rollout. It's not exhaustive, but hopefully it will provide a practical starting point from which you can experiment. I have assumed some knowledge of what kanban is - limited work in progress, not timeboxed etc etc.

Firstly, some background. The project I was coaching consisted of three cooperating but independent teams, each with 4-6 devs and 1 QA. There was just one Business Analyst acting as Customer for the project - a Product Owner by proxy. The project used fairly standard Scrum techniques to manage progress - stories on cards, on a board, estimated using planning poker. The board consisted of the swim lanes Not Started, In Progress, QA and Done. Standard stuff.

However they were suffering from problems associated with the abstract nature of story points - are they really days or fluffy bunnies? And how many Gummi Bears make a Mars Bar? So I was looking for a way to simplify planning further but without losing the ability to plan ahead.

So what happened when we rolled out a kanban system into the first team and limited their work in progress? They had to address certain key questions:


Q. "How many columns should we have?"

A.
The first team initially stuck with the trusty old "Not Started, In Dev, QA, Done" pattern, but put a work in progress (WIP) limit across both the In Dev and QA columns. Later teams didn't bother differentiating and simply had an "In Progress" column, forcing developers to actually talk to QAs when a card was finished (a radical idea at the time!).


Q. "What are our WIP limits?"

A.
There was some interesting discussion around this, especially since all the teams were starting to actively encourage pair programming. They quickly realised that the WIP limit can be used to enforce pairing, but also allow a degree of solo development.

Remembering we had a team of 6 developers, they initially went for a limit of 3 on 'Not Started' and 4 on 'In Dev/QA'. Considering the latter first, a limit of 4 allows 2 developers to work independently without pairing; the rest are forced to pair up.
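That arithmetic generalises. If every WIP slot holds either one solo developer or one pair, and everyone on the team must be occupied, the maximum number of solo developers works out at 2 × WIP limit − team size. This is just an illustration of the reasoning, not a formula the team used:

```python
def max_solo_devs(team_size: int, wip_limit: int) -> int:
    """Max developers who can work solo when each WIP slot holds either
    one solo dev or one pair, and every developer must be occupied."""
    return max(0, min(team_size, 2 * wip_limit - team_size))

print(max_solo_devs(6, 4))  # → 2: two solos plus two pairs fill the four slots
print(max_solo_devs(6, 3))  # → 0: three slots force three pairs
```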

The team quickly found that the 3 card 'Not Started' WIP limit was too small. It was running out too quickly, and it did not allow enough focus on what comes next (eg stories that naturally group together). It currently stands at 6, which definitely seems to be better without changing the cycle time too much.

The team is operating with no slack in their In Dev/QA queue. It's always full. This appears to be working so far, but there have been no "urgent, do now" stories yet.


Q. "How do we stop blowing our WIP limit?"

A.
A problem quickly arose from the particular flavour of storycard discipline I teach. I ask people to start with a simple story, and write decisions and agreed requirements on the back to capture them as they are discovered - hence it's often useful to take it with you when working on the card. An avatar is used as the placeholder on the board to show what's missing.

It quickly became apparent that stories were being sucked into WIP spaces that looked empty because the card was off on someone's desk, even though an avatar marked the space; the WIP space wasn't "real". This meant the agreed WIP limit was being exceeded, even though the team only had enough magnets for the limit (the magnets stay on the board, but were being reused).

They solved this initially by having 'placeholder' cards - generic cards that were swapped with real story cards. This carried on until the team had the discipline to only pull in cards when the magnets were in an "available" area on the board. (Photos to follow!)


Q. "How do we track our cycle time?"

A. Cycle time is the time from a story being pulled out of the backlog to be done to actually reaching "Done" - a critical measurement for planning. The solution for tracking this was elegantly simple - the Customer has a date stamp. When a card gets pulled in to 'Not Started', it gets stamped. When the Customer sees the story has been finished to his satisfaction, it gets stamped. No-one else is allowed to use the stamp, so the Customer has to see the story demonstrated. Simple.
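Those two stamps are all you need to compute the number. A minimal sketch (the dates are made up for illustration):

```python
from datetime import date

def cycle_time_days(pulled: date, accepted: date) -> int:
    """Days between the 'pulled into Not Started' stamp and the Customer's 'Done' stamp."""
    return (accepted - pulled).days

print(cycle_time_days(date(2010, 2, 1), date(2010, 2, 4)))  # → 3
```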


Q. "I can't line up cards that follow a natural order"

A.
This was a Big Issue for some of the team. Limiting the incoming work had an interesting effect - it forced people to talk to each other more! Especially the developer-customer conversations. Rigidly limiting the number of cards in 'Not Started' encouraged the Customer and Developers to work out what really needed to be there. An unexpected but valuable effect.


Q. "What about blocked stories?"

A. Many of the stories in the project are dependent on 3rd party software. So if that software is not working, the story cannot progress to "Done". It is blocked. There is no point in a blocked story blocking a WIP slot, so the team introduced the idea of a "Sin Bin" - stories that are blocked beyond the team's control. Stories in the Bin are monitored for progress, and when they become unblocked they are re-added to the backlog for prioritisation as normal. The Sin Bin also provided a useful visual indication of just how much work was being blocked by the 3rd party.

Q. "How do we show burndown/progress?"

A.
This is the one issue that I am not yet happy with. How to estimate a finish date for a given amount of work.

On the one hand, there is the cycle time. No problem there. A story card start-to-finish is taking 3 days or so. But the problem is the extensive backlog. There is no guarantee that the stories are correctly sized, so either the team needs to go through every single story to make sure it is roughly the right size (or is an 'epic' equivalent to 2, or 3, or however many real stories), or increase the size of a story to a "Minimum Marketable Feature" and track those.

I admit I don't like the idea of MMFs - in this context they would be too big a slice through the system and remove the benefits of iterative development. Currently the teams are taking the first approach - estimating epics as stories.
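One rough forecasting approach for a right-sized backlog - my back-of-envelope sketch, not something the teams actually adopted - is Little's Law: throughput ≈ WIP limit ÷ average cycle time, so the duration is backlog size ÷ throughput. The numbers below are illustrative only:

```python
def forecast_days(backlog_size: int, avg_cycle_time_days: float, wip_limit: int) -> float:
    """Little's Law rearranged: throughput = WIP / cycle time,
    so duration = backlog / throughput = backlog * cycle time / WIP."""
    return backlog_size * avg_cycle_time_days / wip_limit

# 60 right-sized stories, ~3 day cycle time, 4 cards in flight:
print(forecast_days(60, 3, 4))  # → 45.0 days
```

It is only as good as the assumption that stories are roughly the same size, which is exactly the problem described above.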

Another issue around planning is the external business. Like many companies, they are addicted to Gantt charts and are nervous about not having a definite statement that "this story will take x days". Story points and burndown graphs showing the real situation were not always well received because they didn't fit the "plan". Switching to counting cards is having similar problems, and is seen by many as an oversimplification.


Q. "We don't have timeboxes any more, so do we still need regular demonstrations? Retrospectives?"

A. YES! The teams demonstrate their progress to a wider audience of stakeholders every week. Everything demonstrated is truly "Done" - it is ready to ship. No smoke and mirrors, no mock layers, and everything running on the real product. This approach has been hugely successful in motivating the team, and ensuring the stakeholders know exactly where the issues are. By using a weekly heartbeat like this, any problems and decisions can be identified and addressed quickly.

Teams are still retrospecting every week too. However the next logical step would be to change this to ad hoc kaizen events as problems are found, and maybe holding a larger, formal retrospective less frequently.

Planning Games are held when required - i.e. when the 'Not Started' column is nearly empty. And because they only need to pick a maximum of 6 cards they tend to be brief.


So overall, how has the change been received? The main feedback from the people actively involved with the project has been almost entirely positive. Everyone feels it is a more natural way to work.

From a coach's point of view, the change to "Scrumban" was far easier than I imagined. Why did I wait so long?

Don't just interview new developers. Audition them!

It's such a common mistake. A company claims to be "agile" (whatever that is) yet keeps the old, stale accoutrements of waterfall interviews. The developer candidate walks in, answers random questions drawn from extremely wide and deep subject matter, and is then judged on their ability without ever actually demonstrating that they can do the job. Sometimes the successful candidate really can fit in with the team, sometimes not. But most importantly, good, well-suited people will inevitably be dismissed as unsuitable simply because they learn the principles and not the detail.

Being a "good" software developer is no longer about memorising a spec. The ability to regurgitate arbitrary sections of a language specification parrot fashion on request, although previously useful, has been de-emphasised by the invention of radical new ways to store information. Like books. And the internet. All you need are the basic frameworks, objects and principles; i.e. you know the bare bones, but look up the detail.

Nor is software development about how you react as an individual to unexpected coding puzzles while denied even the most basic of tools to help you - if the first reaction is "I wouldn't normally solve it like this..." (or worse, "There's a smarter way but you're not letting me use it") then the entire exercise is likely to be pointless. I have lost count of the number of companies who, over the years, have handed me a pen and paper, or a whiteboard marker, and expected me to write a technically correct solution to an abstract problem. No real world example, so it's not real world coding.

Agile teams are about people and the way they interact. It is all about being able to work smart. It is about being able to write good code collaboratively, helped by tests. The recruitment process needs to put more emphasis on interactions, initiative, and ability to process unfamiliar information - the wetware aspect of development. So what if you cannot remember the exact interface to QuadCurve2D? As long as you know it exists and understand the design patterns it is based on you can look up the details.

So how to identify the right people for the job if traditional interview techniques don't work?

Simple. Audition them. Get the candidate actually doing something related to the job they have applied for. Do they look for the right things from the start? ("Where's the continuous integration server?", "Where are your tests?"). Get them down and dirty in the codebase, see their reaction. Do they get lost in the detail, or do they start to tease out the structure with tests? Code with them. Cooperate with them. Can you work with them? Does their coding behaviour fit team culture? How fast do they learn? Are they a craftsman or hacker?

You could even say you are unit testing the developer.

Auditions do require a little more effort, but in my experience they take some of the guesswork out of the traditional interview process when trying to find good people for the team.

Tuesday, February 02, 2010

Sign up for change

The Independent has recently published an article highlighting the state of the UK Government's IT projects. Let's just say that it's not a success story:

...the total cost of Labour's 10 most notorious IT failures is equivalent to more than half of the budget for Britain's schools last year. Parliament's spending watchdog has described the projects as "fundamentally flawed" and blamed ministers for "stupendous incompetence" in managing them.

Think about it. That's £26 billion of taxpayers' money down the drain because of "fundamentally flawed" projects and "stupendous incompetence". Putting this into context, university funding is being cut by a mere £449 million next year - so we are cutting investment in our future, while frittering away £26 billion on avoidable failures. Epic Fail.

I have already suggested that these failures are likely to have been entirely avoidable. So have others. But there needs to be a fundamental change in the way that Government IT projects are run.

Rob Bowley has started a petition to Number 10 to try and raise the profile of this problem with the Government. It reads:

We the undersigned are tired of hearing how much of our money is being blown on failed government IT projects.

We believe that this is mainly due to the nature in which these projects are firstly procured and then delivered - that of demanding and committing to all the requirements for enormous projects up front, consigning them to failure before they have even begun.

The private sector has mostly moved away from this failed model to an incremental approach, which allows for changes in understanding and requirements and enormously reduces the chances of failure. It's about time our government did too.

We ask the Prime Minister to demand a review of the current approach and look at adopting a more incremental and agile approach to Government IT projects

Whether it makes a difference only time will tell, but I urge anyone who cares about public sector IT development to sign it. Any journey starts with a first step, and this might be it.

Friday, January 22, 2010

Japanese Proverb

Just read an article containing this Japanese proverb:

"If you don't know what to do, take a step forward"

It's been a long time since I have read anything that sums up iterative improvement quite so well.

Thursday, January 21, 2010

Using agile tools takes away some of your intuition

It's no secret, I really don't like using tools to run agile projects. More than that, in my opinion giving an inexperienced agile team an "agile tool" is like giving a toddler a chainsaw - it's going to end badly. Give me index cards, pens and a whiteboard anyday.

Allan Kelly has beaten me to blogging about tools - and hit upon an interesting anecdote from Jack Kilby, inventor of the integrated circuit. Kilby believed that the replacement of the slide rule by the calculator took something away from engineering: intuition. Using a tool (the calculator) distanced the engineer from needing to know what he was doing (the calculation). As someone who has used both calculator and slide rule, I tend to agree.

And here lies the problem with agile tools.

Using complex tools takes away a basal intuition about what you are trying to do. You lose that indefinable "connection" with the product. You might even say you lose the craftsmanship. Another illustration might be the difference between bland, machined furniture and furniture hand crafted by skilled cabinet makers - building 'by hand' copes effortlessly with the small, unexpected imperfections that rigid machine programs cannot, resulting in a more polished product. That's not to say the cabinet makers don't use the occasional power drill or sander - but they understand when it is appropriate.

So stick with cards and pens at first, until you understand when using tools (and which one!) is appropriate. Keep your intuition intact. Your product will be better for it.

Tuesday, January 19, 2010

Have tea with Energized Work

Those nice people at Energized Work have extended an offer for an informal chat - all for the cost of a cup of tea (or coffee, in Gus' case, although I've heard he is quite partial to green tea).

So if you would like to see a presentation, or brown bag, or simply want to chat with the 2009 Gordon Pask Award winners and see what makes them tick, then drop them a line.

One thing I can guarantee - you will find the meeting challenging and productive.