Friday, February 09, 2018

How not to be an Igor

I recently wrote a somewhat tongue-in-cheek piece with a serious message: for agile - well, for any software development project, actually - to succeed, the business needs to be fully and intelligently engaged. Iterative delivery processes appear particularly sensitive to this (arguably, they just show it up sooner, but that’s another discussion).

So, how do you make sure that your software team(s) successfully deliver what’s needed? A very good question - and one with no definitive answer. But here are a few hints that should give you a better than average chance of success.

First, and perhaps most importantly, recognise that “The Business” generally does not really understand the engineering discipline of software development. They might have an appreciation of it (at least, I hope they do!), but they don’t have skin in that particular game. They don’t cut code daily, so they don’t experience the trials and tribulations of producing the end product. So they don’t generally care how many tests you have, or that you are using the latest whizzy ReacgularEmberAshSproutCabbageCarrotCore framework, nor do they care whether you are using continuous integration, containers, microservices or Tupperware. The business is usually only interested in one thing - the end game: working, reliable software in front of the customer. And if they cannot see that, they will be tempted to fall back on illusion-of-progress antipatterns as a proxy measure of progress, performance and value - for example, Number of Stories Delivered, Velocity Points Delivered, Accuracy of Estimates and so on. (Hint: this is a Bad Thing(tm). None of these measures are productive. Quite the opposite - they are illusory.)

Equally, “The Dev Team” doesn’t always understand the business. Since software development is a skilled, complex, creative endeavour, the team’s focus needs to be on programming tools, techniques, patterns and paradigms. As a result, in the same way that the business is disconnected from development pressures, developers can miss important factors unrelated to their immediate engineering discipline - for example, time-to-market pressures, customer needs, visibility of real progress and so on.

I believe this situation is fine and good. It is rare to find teams so cross-skilled that developers and business representatives are interchangeable. Both disciplines require significant specialist personal investment of time and energy. But as a result, neither side can exist in isolation. Neither side is better than the other. Effective product delivery is a joint responsibility. It has to be a symbiotic collaboration of the two sides, which requires communication and mutual respect. Frequent, effective, two-way, respectful communication. If this breaks down, then trust is lost, the illusory control mechanisms begin to appear, and both sides start an unproductive game of “blame whack-a-mole”. The business demands a meaningless progress metric, e.g. velocity points delivered. The dev team gets metaphorically beaten up for “not delivering enough points”. The dev team learns how to game the metric to avoid the pain. The software still doesn’t get delivered. The business demands another meaningless progress metric, which gets gamed… and so the proverbial blame-moles get whacked.

So the first, most important rule of “Not being an Igor” is for Business and Development to communicate with each other openly, honestly, and with respect for each others’ needs and skillset. The rest follows on from here.

Once there is a meaningful, respectful dialogue between Business and Development, both can openly and honestly discuss, challenge and align goals. For example, if the business genuinely needs something working in time for, say, a key conference, then the thorny subject of how it can be delivered can be discussed. The development team knows its subject, so its members are the best people to ask. Equally, if there is a budgetary consideration, then perhaps the business side are the best people to ask. Both sides have valuable input. However, both sides need to be alert for imaginary constraints. The needs of both sides can be challenged (respectfully!) and discussed to see if they are real or based on flawed information or opinion. E.g. “What happens if we miss that date? Does the company go bankrupt?”, “Do we want this product so badly that budget isn’t really an issue? When does cost become a real issue?”.

There are facilitation techniques that can be used to ensure these discussions are effective and productive rather than damaging. This is a rich seam of material for other articles at some indeterminate time in the future - do get in touch if you need help with this.

Now the goals are aligned, actual development of software to satisfy them can start. More importantly, an ongoing, meaningful discussion of progress can begin. To be meaningful, progress needs to be directly measurable, not abstract (story point burn is definitely out). The only measure is demonstrably working software - that is, software you can give to the business and say, “This is what it does now, compared to what it did last time”. No smoke, no mirrors, no voodoo SQL statements showing magic has happened. Plain old warts’n’all User Experience.
From visible progress, more meaningful discussion can take place. Does the software work? Does it look like we can deliver within budget/time? What are the unforeseen (unforeseeable!) problems stopping us? Are we likely to succeed? Is it worth continuing for another week/fortnight/month? The odds are that by making small, mutually agreed corrections early and frequently during delivery, the product becomes much more likely to succeed - and if not, significant budget, effort and stress are saved by recognising this early.

Following these simple pieces of advice does not guarantee success. The supermarkets are sold out of silver bullets, and the world supply of magic project beans has had an unfortunate run-in with some weed killer. However, I do guarantee that by simply appreciating the skills others bring to the project, and having honest, sensible discussions with each other, you will be less Igor-like, and your project will stand a far better chance of succeeding.

Wednesday, November 08, 2017

Careful with that brain, Igor!


So - you have invested in transforming to an agile development process. Your teams have honed their processes until they run like clockwork, and are resilient to changing needs. Software is being produced cleanly and efficiently with minimal fuss.

But there is still something missing. The software simply doesn’t do what you expect. The customers don’t like it, and won’t use it. It has annoying quirks and foibles that put people off. It simply isn’t right. What’s going on?

Congratulations! You have discovered the Achilles Heel of all software development. It has to have an undamaged brain. A well disciplined agile team will deliver whatever its business brain asks of it. The team is only as good as its inputs and subsequent feedback on its deliveries.



Or to put it another way, so-called ‘agile’, and all other software delivery processes for that matter, rely on sound business thinking. Make Abby Normal your customer/Product Owner, and you will be in serious trouble.

At its heart, agility is about delivering a product iteratively: delivering a small slice of value for validation, and getting feedback on whether it is right, allowing corrective action as lessons are learned. This means that for software delivery to succeed, business - however you define it; customer/product owner/whatever - must engage fully. This includes providing a definitive product vision to the team, prioritising what needs to be delivered next, and making decisions based on the results of each iteration. Throwing away work is perfectly valid, and can be beneficial. Break this feedback control loop and it will not work, no matter how good the team is.

By treating projects as a series of small, iterative gambles, the business becomes far more likely to succeed in delivering what it intends. The business, as well as the team providing the raw development talent, needs to be agile in its thinking. If it’s not, then you run the risk of putting an abnormal brain into a seven-and-a-half-foot-long, fifty-four-inch-wide gorilla....

Don’t be an Igor.

Monday, May 08, 2017

Of Goals, Backlogs and Sprints


Jean-François Millet (II) 014
As an experienced developer/coach, once in a while you get asked a truly fantastic question by a client who has actually done her homework, and you're absolutely stumped by it. I love it when this happens since it is a fantastic opportunity to not only learn, but also to teach the client how to solve similar questions themselves next time.

This particular team had been attempting to implement Scrum for some time, with a degree of success. But one thing in particular was bugging them: how to control stories in-sprint. The advice they had been given (and the way I understand Scrum to work) was that sprints are immutable. During the planning game, the team sets the sprint backlog, populating it with the subset of backlog stories that the team believes it can fit into the timebox. That’s it - only a major event causes that to change, in which case the sprint is terminated, replanned and restarted.

So when the Scrum Master asked about the 2016 edition of the Scrum Guide, and why it seemed to contradict that view, I was somewhat surprised. Here it is, in black and white:
As new work is required, the Development Team adds it to the Sprint Backlog. As work is performed or completed, the estimated remaining work is updated. When elements of the plan are deemed unnecessary, they are removed. Only the Development Team can change its Sprint Backlog during a Sprint. The Sprint Backlog is a highly visible, real-time picture of the work that the Development Team plans to accomplish during the Sprint, and it belongs solely to the Development Team.

Woah! Wut? What madness is this? The Scrum Manual contradicting Scrum as I understood it for years? Having battled with seemingly random, moving requirements during my early professional years I can relate to why processes such as Scrum limit the amount of churn. But this paragraph seems to encourage it.

What to do? What could I do to help? I did what any sane person would do in a situation like this and asked some People Who Know - other coaches in my network - and got three different answers back. Hmmm… obviously some room for interpretation here.

So, having scratched my head for a while, I realised I had missed some nuances. Here’s a fourth point of view, hopefully pulling in the salient points from the other three.... The documentation is not giving out an inconsistent message at all. But it is subtle, and open to (bad) interpretation if you don’t pay attention.

Looking at the wording above in isolation was the problem. Reading it in context, a previous paragraph says,
The Sprint Backlog is a forecast by the Development Team about what functionality will be in the next Increment and the work needed to deliver that functionality into a “Done” Increment.
(my emphasis)

So what is being added to/dropped from the sprint by the team mid-sprint is the work needed to achieve the functionality. It is the functionality (defined in User Stories) that is fixed (“At the end of the sprint, the software will do x, y and z that it didn’t do last week”). The tasks to get there (build a database, define a service, etc.) can vary - which is unsurprising.

I believe one reason I missed this subtlety is that I generally recommend teams do not split stories (“functionality”) into tasks, since in my experience it encourages not finishing - all the easy things get done, but the software still doesn’t work! I have also found it encourages micro-management, and the use of the severely damaging “Predicted time/Actual time” fields in reporting tools like Jira. Sometimes splitting things out is useful, but usually less so. Most teams don’t miss this micro-planning step, and are very successful without it.

I think this explains the whole mutable in-sprint work backlog pretty well. Now, what about the Sprint Goal mentioned in the title? It is another much-abused Scrum artefact. Most (all?) teams I have worked with have had trouble with it, and in some cases just shoehorned it in retrospectively after selecting what to put into the sprint, “because Scrum says so” (<sigh>). Looking deeper into how Scrum sees the Sprint Backlog also clarifies the purpose of the Goal.

The Scrum Manual doesn’t have any explicit definition of the Sprint Goal that I can see - but the Goal appears to be there to focus the team on answering the important question, “What is the most important thing this software product needs to do next?”. And this is what links it to Features/Stories.

The Sprint Goal helps define what Stories to play next, what Features to implement next.
This means defining the Sprint Goal needs to come before defining the sprint backlog, not at the end as a retrofitted afterthought. Indeed, this is exactly what the manual says:
Sprint Planning, Topic One: What can be done this Sprint?
...
The Product Owner discusses the objective that the Sprint should achieve and the Product Backlog items that, if completed in the Sprint, would achieve the Sprint Goal.
So the only logical order for the Planning Game has to be:
  1. Define the Sprint Goal (the Product Owner should already know this, or have a pretty good idea of what it should be)
  2. Pick the Sprint Backlog stories that will satisfy the Goal
  3. (Optionally!) break the stories into tasks (“work needed”)
I hope that clears up some confusion out there.

Sunday, November 06, 2016

Feature Branching - Taking the ‘Continuous’ Out Of Continuous Integration

XKCD 1597

Feature branch workflows appear to be the flavour of the month for software development right now. At its simplest, features are developed in isolation on their own branch, then merged back to the trunk. More complicated variants exist, most notably Git Flow.

Git is undoubtedly a powerful versioning tool. It makes branching easy and reliable, and it is truly distributed. These capabilities open up a whole range of possibilities and workflows to support different ways of working. But, as usual, the sharper the tool, the easier it is to cut yourself if you don't understand what you are doing.

My problem is this. Feature branching is a very effective way of working when you have a large, distributed team. Code is not integrated with the rest of the codebase without a lot of checking. Pull requests ensure changes - often done by developers who are in other countries, and often part of a loose-knit open source development team - are reviewed by a core team before being integrated, keeping the main product clean and working. Which is great.


Except it completely undermines the core principle of continuous integration! 


Let’s be clear here. If you are not combining every single code change into a single trunk branch at least once every day, you are definitely not continuously integrating.

Constant merging of everything as you go is the key enabler for continuous integration. It is the ’secret sauce’ that provides the main benefit of the technique. The clue is in the name. Every1 commit triggers a build and test sequence that checks whether the code has regressed. And the only truly effective way to do that is to keep everyone developing on a single main branch. If something really, really needs a separate branch (the exception, not the rule), then its lifecycle needs to be kept short; ideally, less than 1 day. The longer code stays away from the main branch, unintegrated, the more likely mistakes will go unnoticed.
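To make that "exception, not the rule" concrete, here is a hypothetical sketch in plain git commands. The branch name, trunk name (`main`) and commit message are placeholders; the only point being illustrated is that the branch is created, merged back and deleted well within the day, so trunk stays the single point of integration.

```shell
# Hypothetical short-lived branch; all names are placeholders.
git checkout -b tiny-feature        # branch in the morning...
# ...a few hours of work, with the tests run locally...
git add -A
git commit -m "Add the tiny feature"

git checkout main
git pull --rebase                   # pick up everyone else's work first
git merge tiny-feature              # ...merged back well within a day
git branch -d tiny-feature          # the branch does not outlive the day
git push                            # the push triggers a CI build-and-test run
```

If the branch cannot be finished and merged that quickly, that is usually a sign the change itself is too big, not that the branch needs to live longer.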

Having to branch for each feature inevitably means that integration with the rest of the code only happens when the branch is finally merged into the main development branch. Often this is days, even weeks later, and includes many different, separate commits. This is perilously close to the waterfall practice of late integration, and so needs to be avoided unless other factors dictate otherwise - for example, the team is a distributed and loose-knit group of volunteers. If you are working in a small, collocated team, there is absolutely no reason to adopt these regressive patterns.



1 Well, almost every...

Wednesday, June 01, 2016

The Importance of Being Able to Build Locally

A little while back I wrote an article describing how to do continuous integration. But I left out one important step that happens before any code even enters the CI pipeline. A step so important, so fundamental, so obvious that you don’t realise it is there until it is missing.

You must be able to build and test your software locally, on your developer machine. And by “test” I don’t just mean unit test level, but acceptance tests as well.

I said it was obvious. But once in a while I do stumble across a project where this rule has been missed, and it leads to a world of unnecessary pain and discomfort for the team. So what happens when you cannot easily build and test anywhere apart from the pipeline?

Without being able to run tests locally, the development team is effectively coding blind. They cannot know whether the code they are writing is correct. Depending on where the dysfunction is - compile, unit test or acceptance test stage - the code checked in may or may not break the pipeline build. The Dirty Harry Checkin (“Feeling lucky, punk?”). So the pipeline is likely to break. A lot. Good pipeline discipline means broken builds are fixed quickly, or removed, so that other team members can check in code. But here lies the rub - since there is no local feedback, any fix is unlikely to be identified quickly - how can it be when every single change to fix it has to run through the CI pipeline first? The inevitable result - slow development.

Let’s look a little closer at what is going on. Whenever I see this anti-pattern, it is usually the acceptance tests that cannot be run [1] - they are generally difficult to set up, and/or too slow to run quickly, and/or deploying new code to test against them is too slow. Let’s apply this to a typical ATDD development cycle. We should all know what this looks like:


Standard stuff - Write a failing acceptance test, TDD until it passes, repeat.

Now, let’s drop this development pattern into a system where the developers cannot run their acceptance tests locally and has a slow deployment. This happens:
The only way to check whether the acceptance criteria are complete - i.e. the feature is Done - is to push the changes into the CI pipeline. Which takes time. In this time no-one else can deploy anything (or at least shouldn’t, if we don’t want complete chaos). Getting feedback on the state of the build becomes glacially slow, which means fixing problems becomes equally delayed. So if, say, the feedback cycle takes 30 minutes, and you have mistakes in the acceptance criteria (remember, you cannot test locally, so you don’t really know whether the software works as expected), every single mistake could cost 30 minutes, plus development time!

So how does being able to build locally fix this? Simple - if you can build and test locally, you know that changes being introduced into the CI pipeline most likely work.  Even if things are a bit slow. Also, instead of having a single channel to check the state of the build, every single development machine becomes a potential test system, so no waiting around for the pipeline to clear - just run it locally, and grab a coffee if needed, or even switch to your pair’s workstation if things are really that slow (clue: if things are that slow, spend the down time fixing it!).

I shall add one unexpected side effect of being able to build locally - it can be used as a poor man’s continuous integration pipeline. So if you have trouble commissioning servers to support a real pipeline (sadly there are still companies where this is the case - you know who you are!), then with sufficient team discipline it is possible to use local builds as the main verification and validation mechanism. Simply make it a team norm to integrate and run all tests locally before checking in. It does introduce other risks, but it gives a team a fighting chance.

[1] If unit level tests have this problem, there is an even bigger problem. Trust me on this.



Tuesday, May 31, 2016

Twitter

I don’t know if anyone’s noticed, but I’m not very active on Twitter any more.

No, it’s not the annoying “While you were away” pseudo-window that is keeping me away.

Nor is it the Facebook-esque ‘heart’ button (WTF?).

Nor is it the endless supply of advertising bots and idiots that seem to frequent the service (I have most of them filtered out)  

It’s not even the regular misaddressed tweets meant for the Thirsty Bear Brewing Co in San Francisco (fine fellows, and purveyors of fine beery comestibles that they are!)

Nope. None of the above. It’s the distraction.

On 5th November 2008, at 0525 (allegedly - I suspect there’s some sort of timezone shenanigans going on there…those who know me realise I am not a morning person) I wrote my first Tweet: "Trying out this Twitter thing….”)

Since that date I’ve got involved in all kinds of interesting discussions, mostly around software development, and often with the Great and the Good of the agile community - for which I am extremely grateful since I consider myself to be more "average chaotic neutral", with some habits that (mostly!) keep me from the Dark Side 😃.  Has it been useful? Yes. Has it been fun? Hell, yeah! Did it take far too much of my time? Oh yes!

The immediacy of Twitter meant that I was being dragged into endless conversations. Discussions that had more in common with IRC realtime chat than simply idle Post-It notes on a wall somewhere. They needed immediate attention. Which meant that I was wasting a huge amount of time. I could give up at any time…but…just …one…more…reply… 



So I took a bit of a hiatus, and found out that the world didn’t end. The sky didn’t fall in. I could still find things out. I could still have discussions. But I had time. I had found that Twitter was actually getting in the way of what I want to do - work with good people to deliver cool stuff.


I haven’t given it up completely, but I won’t be using Twitter anywhere near as much as I used to. My last serious tweet was in October 2015, and I have not really missed the interaction. I will be keeping an eye on it to see if I can improve the ROI - suggestions gratefully received, but best use the comments on this post rather than Twitter....

Thursday, December 31, 2015

Getting rid of those annoying ._ files

It all started so innocuously. All I was asked to do was put some photos onto a USB stick that could be played on a TV in my local pub. Easy. JPEGs duly loaded onto the drive, plugged in...and found to be incompatible. Weird.

So I checked the JPEG file format against the specification. I changed sizes, I changed encoding type.  Still not playing. Really annoying, and not a little embarrassing!

Then I noticed - the TV was trying to display files with names starting with '._'. Where the hell did they come from? And the penny dropped. OSX creates all kinds of special files to work with non-HFS file systems - like the USB drive's FAT32 format.

These files are part of the AppleDouble format, and don't show up if you reveal hidden files in OSX (see this excellent article on how to show/hide the hidden files). So to get rid of them, you can either use a Windows or linux machine, or use the dot_clean command in OSX.
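For the record, here is a hypothetical sketch of both options; the volume paths are placeholders for wherever your USB stick actually mounts.

```shell
# On OSX, dot_clean merges the AppleDouble '._' files back into their parents:
dot_clean /Volumes/MY_USB           # 'MY_USB' is a placeholder volume name

# On Linux (or anywhere with find), you can simply delete them outright:
find /media/my_usb -name '._*' -type f -delete
```

The `find` variant is also handy run against a directory tree before copying it to the stick in the first place.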

Hopefully this will save someone some time tracking down this problem.