Tuesday, November 22, 2011


Anyone who reads my articles will already be familiar with the quote I use at the top of each page: "It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change." For me, this cuts to the very heart of agile and lean techniques. Being successful is not about being powerful, or even knowledgeable in the traditional sense, but about being adaptable and smart.

Way back in June an anonymous poster pointed out that the quote was misattributed to Charles Darwin. To my shame, I sat on the comment until I had the time and patience to research the correct answer.

I can now confirm that you were absolutely correct, Mr Anonymous. It was misattributed. The quote most likely comes from the writings of Leon C. Megginson, Professor of Management and Marketing at Louisiana State University at Baton Rouge. It is a paraphrase of Darwin.

I won't bore everyone with details, but here are links to the corroborating articles.

One thing Darwin didn't say: the source for a misquotation
Survival of the Pithiest

Quiet here, isn't it?

I've been fairly quiet recently for various reasons, but rumours of my online demise have been grossly exaggerated. The brief hiatus has given me time to think about several new articles that are now in the pipeline, so watch this space.

Wednesday, July 06, 2011

Blogger Profile Change

All change!

I am in the process of migrating this blog to another account. For anyone following via my Blogger profile, I am now over here.

It's not personal

Oh dear. It would appear that some people reading this blog and Twitter feed believe that I am writing specifically about them. Naturally, this makes them a little sensitive to the criticisms that I sometimes level at the industry and the comments/observations I make.

Let me reassure everyone that the articles I write generally don't relate to any one client, or incident, or person, or project. There really would be little point in writing about anything that is only specific to one project; frankly, it would be deathly dull, and irrelevant to everyone else. Even in the rare case when there is a particularly interesting success or failure story, I ensure that names are changed, and generally wait a while before publishing, to anonymise the problem and protect the guilty. And remember, I have had people I have never worked with accuse me (albeit lightheartedly) of spying on their project, so some observations must be common across multiple projects.

Sometimes I publish idle thoughts. Sometimes (hopefully) insightful. Other times complete rubbish, the ramblings of a madman.

So where do these snippets come from? Certainly some of the material comes from personal experience, both past, and to a small degree, present. General observation of the industry provides a huge amount of material, with certain themes and problems coming up time and again (and often from way back too). Some comes from discussions with fellow developers and coaches working in other companies, a throwaway comment triggering anything from an idle thought to a complete theory. Some more comes from talking to managers, a very different but related discipline. Yet more from talking to people outside our small, mad world of software development. Then there are the books, films, documentaries, seminars... Everything influencing everything else, trying to compete for space in a grand unified theory of software development, the universe and everything.

So maybe, just maybe, it's not about you.


If you do start thinking that anything I write sounds like it is directly referencing your project. Or your company. Or even you personally. Stop and think. Why is it resonating with you? Why is it sounding familiar? Why are you embarrassed/angry at seeing something so familiar in print?

Then ask yourself the two most important questions of all:
"Are we doing something stupid here?"

"If we are, how do we stop doing it?"
Remember, it's not personal. It's about working smarter.

Tuesday, July 05, 2011

A blast from the past

Once upon a time, in a galaxy far, far away, I spent several happy years working for the legendary Acorn Computers and their near mythical set-top box subsidiary, Online Media. We did some seriously cool, groundbreaking stuff there, including being (arguably) the first to stream video over internet protocols. All the computers ran on RiscOS, a homegrown, lightweight, adaptable operating system for ARM processors.

Alas this time came to an end, and Acorn disappeared, but their legacy lived on. RiscOS carried on, supported by various companies.

I had mostly forgotten about it until now. An old colleague recently mentioned that in 2006 a group of ex-Acornites started a new company, RiscOS Open Limited (ROOL), dedicated to maintaining the operating system and taking it forward. The OS is still available, and supported. So do take a look, and if you have an interest in RiscOS then do consider helping them out.

Wednesday, June 01, 2011

The hidden danger of accepting broken windows

By now we should all be aware of the "broken window effect", not least because there is now concrete evidence that it exists. As a reminder, the "broken window effect" is that subtle, damaging change in attitude that causes perfectly normal, law-abiding, sane people to break social norms if they see others doing the same thing. People are more likely to litter an untidy, graffitied street than a clean one. They are more likely to damage a car that has already been damaged. And they are more likely to write badly structured code and cut corners in a codebase if there is evidence of others doing the same thing.

But I think there is an insidious, darker side to this. Accepting broken windows for too long desensitises and deskills your team.

What happens is this. The bad behaviours that come from being surrounded by broken windows - broken builds, releasing untested code, mend-and-make-do workstations and so on - become the new social norms. People get used to them, become disinclined to fix them, and the broken state becomes the accepted reality: "It's the way we do things here". But if these problems are now the accepted status quo, then the next thing that breaks lands on top of the existing problems. Since the culture has become one of acceptance, the new problem doesn't get fixed either. And so on - the system degrades to the next level. And the next. The cycle spirals downwards.

Remember, these are sane, normal people being sucked into this. Suddenly there is a real danger that a perfectly good software team get so used to breaking the rules and living with the broken windows that they forget what the civilised rules are. They effectively forget what "good development" is. If you don't use the good practice patterns - green builds, continuous integration, TDD and so on - you do forget them. Your team actually begins to deskill, which in turn speeds up the descent into failure....

Fortunately the process is reversible, but depending on how far it's gone and how much technical debt you have accumulated before accepting the situation, fixing it can be painful. So just bite the bullet, and fix those broken windows as soon as you see them. Demand better!

Friday, May 27, 2011

A bit of a Spring Clean

As regulars will have noted I'm having a bit of a spring clean of the blog layout - new title, template etc.

Feedback and reports of weirdness welcome - contact me here.

A note on electronic agile tools

It’s always a bit of a shock when you get unexpected feedback, especially from someone who you have known for a while. It happened to me quite recently with a conversation that went something like this:

Friend: “You’re dead against agile tools, aren’t you?” 
Me (confused): “Erm...not exactly....” 
Friend: “But didn’t you persuade several teams in <major co> to throw out <agile tool vendor> the other year?” 
Me: “Not me. The team simply decided that the tools were not providing value to the way they worked...”

However, thinking back, I can see how some people have formed the misconception that I think agile tools are Satan’s Spawn, and should be eradicated at every opportunity in favour of a simple pen, 5x3 cards and at a push an Excel spreadsheet. Reality, as always, is a little different....

My view is that many teams prematurely select an “agile tool” before they know what they need. Teams (and departments, and companies) are like kids in a sweetshop when it comes to new toys, and want to buy the glossiest, most feature-rich application. This is mostly because they want the mythical Silver Bullet that will allow them to “do Agile” painlessly and with minimal disruption.

So what happens when this shiny piece of software gets installed? After all, that nice salesman said it will kick-start your agility...

Feature Frenzy. That’s what happens. You can imagine the thought processes:
“We must use every single thing in this software package. We’re paying a pretty penny for the licensing, after all. And it’s there for a reason, right? Oh look, there is an ‘Estimate’ field on the Story entry page, and it says ‘days’. So let’s ask developers to estimate in real days (after all, that’s what MS Project does).
“Wait a minute...we can put in Actual Time as well. Wouldn’t it be great if we knew how well we were estimating, and could feed that back so we get better and better until we know exactly when a project will be delivered.
“Oooh...even better. It gives us ‘Percentage Complete’, so we can tell how far from finished Stories are! Cool!”
...then when things start to slip from these “Estimate” fields...
“But you said that would take 1 day, and it’s taken 5 days... AND it’s been at 90% Complete for 4 of those days. That makes you a Bad Person.”
This scenario is based on real behaviours I have seen more than once in teams who have asked me for help, and I have no doubt that I shall see them again. You get the idea - the process and behaviours begin to map to the tool, not the other way around. Or to put it another way, the tail is wagging the dog. The team didn’t know what they wanted, so they used the tool as their process model.

I suspect this is why some may think I am anti-tools (or manically pro-pen-and-paper?). To put the record straight, I am anti-inappropriate-tools. I am against prematurely selecting a tool before allowing your process to bed in. All agile and lean teams end up doing things slightly differently - estimation, swim lane mapping of the current process and so on. The most efficient application of these proven techniques relies on flexibility. And yes, I have helped some teams to realise how some electronic tools were hindering them rather than helping.

So select your electronic tools to match your mature process, or at the very least ensure that they are flexible enough to cope with anything you throw at them (products with this level of flexibility are starting to appear).

Also, never adopt any feature in an existing tool until you see a concrete, real-world and evidence-based reason that you need it.

If you are not using many of the facilities in a particular tool you have purchased, save your cash and look for a cheaper (free?) version that does less.

Finally, if you are using an electronic tool, make it visible in the same way as a whiteboard would be or you will lose most of the benefits - bite the bullet and invest in a touch-screen TV/monitor for each team at each location it is used. If this is too expensive, then I see a trip to a stationer's for pens and cards in your future....

Monday, March 14, 2011

Updated: I can haz agile certification?

Via various convoluted routes:
Update: the link is now cobwebbed, but if you did the original then please get in touch!

Update 2: It's back! FTW!

Update 3: the link has disappeared again. Boo!

Tuesday, February 08, 2011

Hudson is fired, Jenkins hired!

Following a fairly public spat and veiled threats over trademarks by Oracle, the Hudson CI server has been renamed to Jenkins.

OK, technically Jenkins is a branch of the Hudson code, so it's more like Hudkins or Jenson. Oracle (sorry, "the Java community") will continue to develop the Hudson trunk, however the branch appears to have the blessing of the core Jenkins development team and the community (the vote to fork was virtually unanimous). If it looks like a duck & quacks like a duck...then the parent might just start resembling a dodo.

I'd like to wish the Jenkins team good luck. I've used Hudson to great effect many times, and look forward to the future improvements with the new project butler.

Monday, February 07, 2011

LSSC 2011

I'll be attending the Lean Software & Systems Conference in Long Beach, California, 3-6 May. I'm not in the States very often any more, so I intend to make the most of it. Come over and introduce yourself!

Sunday, February 06, 2011

UPDATED: Keep behavioural driven naming out of unit testing

Something is happening in the Test First world. There appears to be growing use of "Behaviour Driven Development" (BDD). That is, the test names in test classes are becoming sentences describing what the class should do, e.g. testFailsForDuplicateCustomers(). In my opinion this trend is damaging the effectiveness of Test First Design in the field.

Don't get me wrong - I think the fundamental idea behind BDD is sound. But I am finding that this kind of naming approach for unit tests is seriously detracting from the "code is the documentation" ideal. If you have a class with method "X", you should expect a test method "testX" that defines the method's contract. If the method's contract is more complex, you might expect to see a set of methods like "testX_NullInput" etc. This approach documents the class pretty damned well in my experience.
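To make the convention concrete, here is a minimal sketch of method-mapped test naming. The CustomerRegistry class and its add() method are invented for illustration, and a plain assertion helper stands in for a real framework such as JUnit:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical class under test: a registry that rejects nulls and duplicates.
class CustomerRegistry {
    private final Set<String> names = new HashSet<>();

    // Adds a customer; returns false for a duplicate, rejects null input.
    boolean add(String name) {
        if (name == null) {
            throw new IllegalArgumentException("name must not be null");
        }
        return names.add(name);
    }
}

public class CustomerRegistryTest {
    // testX: defines the basic contract of add()
    static void testAdd() {
        check(new CustomerRegistry().add("Alice"));
    }

    // testX_Condition: add() returns false for a duplicate
    static void testAdd_Duplicate() {
        CustomerRegistry registry = new CustomerRegistry();
        registry.add("Alice");
        check(!registry.add("Alice"));
    }

    // testX_Condition: add() rejects null input
    static void testAdd_NullInput() {
        try {
            new CustomerRegistry().add(null);
            check(false); // should never get here
        } catch (IllegalArgumentException expected) {
            // contract satisfied
        }
    }

    static void check(boolean condition) {
        if (!condition) throw new AssertionError("test failed");
    }

    public static void main(String[] args) {
        testAdd();
        testAdd_Duplicate();
        testAdd_NullInput();
    }
}
```

Scanning the test names alone tells you which method each one pins down, and under what condition.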

Now imagine the same class being tested by something like "testFailsIfCustomerIsNull". What fails? What input? Have I tested method "Y"? But method "Z" shouldn't fail if Customer is null! You end up doing some serious analysis to work out what is going on, which fundamentally defeats the object of the tests replacing design documentation. Using BDD test naming here is actively obfuscating how the system works at a class level, not clarifying.
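For illustration, a deliberately ambiguous sketch: a behaviour-sentence name over a hypothetical class with two methods that both take a customer name. Both the class and its methods are invented here:

```java
// Hypothetical service: two methods take a customer name, but only one
// of them actually fails for null.
class AccountService {
    void open(String customer) {
        if (customer == null) {
            throw new IllegalArgumentException("cannot open account for null customer");
        }
    }

    void close(String customer) {
        // close() tolerates null: closing an unknown account is a no-op
    }
}

public class AccountServiceTest {
    // Which method fails? open()? close()? Both? The name alone can't tell
    // you - and close() in fact does NOT fail for null.
    static void testFailsIfCustomerIsNull() {
        try {
            new AccountService().open(null);
        } catch (IllegalArgumentException expected) {
            return; // behaviour confirmed
        }
        throw new AssertionError("expected failure for null customer");
    }

    public static void main(String[] args) {
        testFailsIfCustomerIsNull();
    }
}
```

The test passes, but you have to read its body to discover that it only documents open(), leaving close()'s contract unstated.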

(Note - nothing that I am suggesting excuses obscure method naming. A method should do what it says and say what it does, with the detailed clarification of the contract being defined in the tests. All I am saying is that there should be a simple, obvious mapping between methods and the tests that verify their behaviours)

Where BDD comes into its own is when defining interactions. Integration tests that check the interaction of closely cooperating classes benefit hugely from descriptive naming. User acceptance tests also reap huge rewards from the approach, since the tests are described in business language that allows a Customer to understand what is being tested. Yes, use BDD naming in these cases, because it does help.
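By contrast, here is a sketch of behaviour-style naming at the acceptance level, where the test name reads as a business sentence a Customer could follow. The Basket class and the scenario are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical domain class: a shopping basket with a running total.
class Basket {
    private final List<Double> prices = new ArrayList<>();

    void addItem(double price) {
        prices.add(price);
    }

    double total() {
        double sum = 0.0;
        for (double p : prices) sum += p;
        return sum;
    }
}

public class CheckoutAcceptanceTest {
    // The name describes observable behaviour in the Customer's language,
    // not the mechanics of any one method.
    static void customerSeesRunningTotalAsItemsAreAdded() {
        Basket basket = new Basket();
        basket.addItem(2.50);
        basket.addItem(1.25);
        if (Math.abs(basket.total() - 3.75) > 1e-9) {
            throw new AssertionError("running total wrong");
        }
    }

    public static void main(String[] args) {
        customerSeesRunningTotalAsItemsAreAdded();
    }
}
```

At this level the sentence-style name is a feature, not a bug: a non-developer can read the test list and understand what the system promises.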

Just keep it away from Unit Tests!

UPDATE: I wrote this article back in 2007. It's now February 2011, and a lot has happened. Since then those fine chaps Nat Pryce and Steve Freeman have written their excellent Growing Object-Oriented Software, Guided by Tests, showing how it should be done. Jason Gorman's observations on two distinct TDD styles also filled in some gaps I had missed. Also, the whole "Software Craftsmanship" movement has moved understanding forward a huge amount by daring to ask questions and clarify.

And then there was the clincher, the tool that stuck everything together for me, giving the penny a clear path to drop with a loud Clang! The IntelliJ TestDox plugin. Suddenly the BDD approach to unit tests makes much more sense, to me at least, so I am happy to admit that my views expressed here have now changed. Or at the very least, softened a huge amount. I now tend to use behaviours rather than specifics to define class level behaviour.

Which just goes to show what can happen if you keep an open mind and try different approaches.