Wednesday, December 11, 2013

It may take two to tango, but it takes four to CI...

One of the most common conversations that I have with clients is about continuous integration (CI) - the discipline of integrating completed work as often as possible, usually on every code check-in, to verify that the project is still on track (i.e. "it works").

So how many environments do you need in order to get real benefit from CI? Unfortunately the answer is a fuzzy "it depends...", but I always recommend at least four for safety and effectiveness. The good news is that if you do it right, downstream environments are relatively cheap to add as the need arises.

[Diagram: the pipeline - Build & unit test -> Acceptance test -> Manual QA -> UAT/Pre-Production -> Production]
(I have lost track of how many times I have drawn this diagram in various forms for clients, both in formal training and informal conversations. I am quite glad that I have finally managed to capture it onto one A4 sheet!)

So what have we got here?

Build & unit test
This is the initial stage, normally triggered by a developer checking code into the version control system. It checks out the code in its entirety and builds it using the standard project build script (Ant, Maven, Gradle, make(!)... take your pick!) so that it does the same thing each time. Compilation failures are caught here. Checked-in code compiles at all times.

This phase then runs the unit tests. By unit test I mean any test that is not end-to-end - anything that does not require a specific deployment. This also includes code inspection checks, if used. If a test fails, the build automatically fails. Checked-in code passes all unit tests at all times.
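
To make that concrete, here is a minimal sketch of the kind of test that belongs in this stage - plain JUnit, no server, no database, no deployment. The class under test and its discount rule are invented purely for illustration:

  import org.junit.Test;
  import static org.junit.Assert.assertEquals;

  // A pure unit test: it needs no deployment, so it can run on every
  // single check-in, in milliseconds.
  public class PriceCalculatorTest {

      // A deliberately trivial class under test, inlined to keep the
      // sketch self-contained. The 10%-off-at-100 rule is an assumption.
      static class PriceCalculator {
          double priceWithDiscount(double price) {
              return price >= 100.00 ? price * 0.9 : price;
          }
      }

      @Test
      public void appliesTenPercentDiscountAtThreshold() {
          assertEquals(90.00, new PriceCalculator().priceWithDiscount(100.00), 0.001);
      }

      @Test
      public void leavesSmallOrdersAlone() {
          assertEquals(99.99, new PriceCalculator().priceWithDiscount(99.99), 0.001);
      }
  }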

Finally, after the code is successfully compiled and unit tested, the build packages the code into something that will be deployed. This target is the product, and will be used downstream. No tweaks, no rebuild later - this is it! (The so-called "build once" principle, because you, well, build once and reuse it.) Every single check-in creates what is effectively a Release Candidate[1], and yes, we will be building lots of these. Store it until we know whether it will be used or not.

Acceptance test
Once we have a Release Candidate, we can begin to check that it works. The target we have created is automatically deployed to a test environment, and acceptance tests are automatically run against it. These tests are similar to traditional manual test scripts, but use modern automated testing frameworks - Cucumber, Fit and the like. If the tests fail, the Release Candidate fails. Checked-in code passes all acceptance tests at all times. These are regression tests: they automatically check that nothing that previously worked has broken.
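
For a flavour of what these automated acceptance tests look like, here is a sketch using Cucumber's Java bindings. The package names are those of the Cucumber-JVM releases of the time and may differ in yours; the scenario and the TestCustomer driver are invented for illustration:

  // The Gherkin scenario, from the .feature file:
  //
  //   Scenario: Returning customer sees their saved basket
  //     Given a customer with a saved basket
  //     When they log in
  //     Then their basket is restored

  import cucumber.api.java.en.Given;
  import cucumber.api.java.en.Then;
  import cucumber.api.java.en.When;
  import static org.junit.Assert.assertTrue;

  public class BasketSteps {

      // Stand-in driver: a real one would exercise the *deployed* Release
      // Candidate through its UI or API. Inlined here so the sketch compiles.
      static class TestCustomer {
          private boolean loggedIn;
          void createWithSavedBasket() { /* seed test data in the environment */ }
          void logIn()                 { loggedIn = true; }
          boolean basketIsVisible()    { return loggedIn; }
      }

      private final TestCustomer customer = new TestCustomer();

      @Given("^a customer with a saved basket$")
      public void aCustomerWithASavedBasket() { customer.createWithSavedBasket(); }

      @When("^they log in$")
      public void theyLogIn() { customer.logIn(); }

      @Then("^their basket is restored$")
      public void theirBasketIsRestored() { assertTrue(customer.basketIsVisible()); }
  }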

After passing this stage, the automated deployments stop. Any further progress is triggered manually in order to maintain release control (remember, we are talking Continuous Integration here, not Continuous Deployment).

Manual Quality Assurance
The Manual QA environment is a place where real people can verify and validate the current software. This is primarily for exploratory testing - trying to find faults that are not covered by the automated regression tests. If problems are found, new tests are added to the regression suite and the test hole is plugged. However, since this is a manual stage, the version being tested needs to be manually controlled - the system must not update the version without the QA team's knowledge and consent.

The current successful Release Candidate is promoted to the QA phase, usually with the click of a button, which triggers the same deployment scripts as the Acceptance Test phase.

UAT/Pre-Production
Here we have a chance to check the developing software against a production-like environment, to ensure that it really does work. The same rules apply as for the QA environment: deployment of new software to UAT is manually triggered to ensure consistency, and the same deployment scripts are used, only changing the target system details.
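
As a sketch of what "only changing the target system details" can mean in practice, imagine a single deployment entry point driven by a per-environment properties file. Everything here - file names, property keys, paths - is invented for illustration:

  import java.io.FileInputStream;
  import java.util.Properties;

  // One deployment routine for every environment. Only the target
  // details vary; the steps themselves are identical each time.
  public class Deploy {
      public static void main(String[] args) throws Exception {
          String environment = args[0]; // "acceptance", "qa", "uat" or "production"
          String artefact    = args[1]; // the Release Candidate, built once upstream

          // Hypothetical per-environment config: host, user, and so on.
          Properties target = new Properties();
          target.load(new FileInputStream(environment + ".properties"));
          String server = target.getProperty("user") + "@" + target.getProperty("host");

          // The same steps, every time, everywhere - only 'server' changes.
          new ProcessBuilder("scp", artefact, server + ":/opt/app/")
                  .inheritIO().start().waitFor();
          new ProcessBuilder("ssh", server, "/opt/app/restart.sh")
                  .inheritIO().start().waitFor();
      }
  }

The same routine runs unchanged for every promotion; only the properties file differs.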

Production
And finally we reach the Production servers. We go live, comfortable in the knowledge that the product passes all of its tests and that the deployment scripts have been successfully run at least three times beforehand (usually more). Things can still go wrong, but if we have been diligent then the risks are small.


This is the simplest form of CI, and I count four deployed environments in the pipeline, plus somewhere to run the CI software and version control.

As I said, this is the bare minimum; skimping on this number will significantly increase your risk of failure. However, it is highly likely that there are other, more complex tests that need to be done - for example, load testing against a production-like environment. You might also want feedback from a select group before going live, or need to run tests in parallel due to time constraints (you want fast feedback, remember!). These extra needs require extra environments, but they are easy to add if we have invested the time and effort to make the initial four stages work correctly.

[1] There is a slight caveat to that statement. Multiple check-ins that arrive while the artefact is being built are generally grouped together for the next build. So keep the build time short. There are tools that create one artefact per check-in, but it is a case of diminishing returns.

Tuesday, December 03, 2013

There is no such thing as "Pragmatic BDD"

I practise, teach and coach test-driven techniques. Test Driven Development (i.e. using unit tests), Behaviour Driven Development - I use them all on an almost daily basis, and I find them useful because they help me do my main job of delivering software. Yet once in a while I am asked whether I could be more "pragmatic" and less "purist" about these techniques. I always find this an odd request, especially when I have been brought in as the subject expert to teach a company, or as a lead developer with a solid grounding in these subjects. But maybe this is not obvious to some, especially those who have never used the techniques correctly - or at all. This article looks a bit deeper into the request, and at why I am totally nonplussed by the very idea.

Firstly, what do TDD and BDD (arguably the "pure" versions of them) usually look like? To me they look like closely related cousins, which makes this brief summary simple. Both require you to define - normally in collaboration with the business, or with other developers - the requirement you are going to deliver next, whether in the large (a behaviour of the system) or in the small (a class/object contract). You could describe them both as "red bar" techniques: you write a failing test before you proceed. Once you have this failing test, the next step is to make it pass. Once you satisfy the test, you are done[1]. Seemples.

Something like this:
  1. Define the requirement
  2. Write a test that proves the requirement is not there yet
  3. Write code until the requirement is there
  4. Goto 1.
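
To make the loop concrete, here is what a single pass might look like in Java with JUnit - a toy example, with names of my own invention:

  import org.junit.Test;
  import static org.junit.Assert.assertEquals;

  public class RomanNumeralsTest {

      // Step 1: the requirement - convert 1 to "I".
      // Step 2: this test fails (red bar) until toRoman exists and works.
      @Test
      public void convertsOneToI() {
          assertEquals("I", toRoman(1));
      }

      // Step 3: write just enough code to make the test pass (green bar).
      static String toRoman(int number) {
          return "I"; // deliberately naive - the next requirement forces it to grow
      }

      // Step 4: goto 1 - the next requirement (2 -> "II") starts the cycle again.
  }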

Bearing this in mind, let's think about what pragmatic TDD/BDD might be. Well, we can't really drop step 3, can we? Else we would never produce anything. Equally, we can't really drop step 1 in some form or another, else we would genuinely be like the infinite monkeys with typewriters desperately trying to reproduce Hamlet. That leaves the iteration and testing. 

Removing iteration from the process leaves you with what is generally accepted to be a waterfall process: you define everything up front. This can work - as long as nothing changes, and the requirement is sufficiently trivial that nothing can be missed. A big ask, in my opinion (one backed by rather a lot of first-hand experience), but others' opinions may differ.

But usually, asking for a "pragmatic" approach is about massively reducing, or removing completely, the need to write tests first before doing anything. If you are not used to front-loading your project to fail fast, and you don't expect to see anything meaningful until towards the end, then working test-first can feel like running through treacle at first - you don't get that warm, fuzzy illusion of progress that not having to deliver anything meaningful gives you. And the temptation is to rip the heart out of the approach - the very thing that makes it so effective.

Digging deeper, what does this "pragmatic testing" really mean? Test what you feel like? Test what you have time for? Leave the rest to trust? Test it all later, at the end? Or, to put it another way: "Where do you want your bugs, Mr Customer?" This is sounding like the Code Complete Fallacy in another form.

Rather than look at why we should do less test-first development, let's turn the question around and speculate on why we would not test first. Some of the reasons I have come across over the millennia:

  • We cannot test it/We don't know how to test it
  • We don't have time to test it
  • We will test it at the end
  • We write perfect code so it won't need testing/will only need minimal testing that will pass first time
  • We have always done it like this
  • We make our money from change requests that fix bugs[2]

Not good, eh? None of these have anything to do with applying TDD/BDD pragmatically. However, they do say a lot about quality, professionalism, even honesty - about whether we are developing good or bad code. These techniques are about baking quality into the product, rather than attempting the impossible task of inspecting it in later. Gutting the process while keeping the name is a very bad idea (not to mention nonsensical).

And here we have the heart of why I do not recommend compromising. Test driven techniques, coupled with an iterative approach, force you to demonstrate and practice your professional skills every day, day in day out. They validate the assumptions in your design and architecture as you go, feeding back information to help you understand your weaknesses and improve. Taking away this fast feedback loop but claiming it is still there is at best misguided, possibly disastrous, at worst dishonest.

Face facts: if you aren't doing B/TDD right, you are not doing it at all. I believe it is that clear cut. Be courageous - recognise this, and stop hiding behind the cowardly "pragmatist" excuse. Recognise what you are doing, and how it helps - or hinders - you. Then take the next step towards getting it right.

So next time you think you are being too purist, take a step back and think about what you mean, and what you are actually suggesting. I suspect you will be surprised.



[1] Yes, yes, I have left out the important Refactoring step defined in TDD. Some (myself included) argue that it is also in BDD. But work with me here…
[2] Long, long ago, in a galaxy far, far away, I worked with a consultancy where I was told that… I didn't stay long.

Saturday, November 16, 2013

Some Words on having "Skin in the Game"

"It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat."P

Theodore Roosevelt, Paris, 23 April 1910

excerpt from the speech "Citizenship In A Republic"

Tuesday, November 12, 2013

Chasing Rabbits

(Image: by Hironori Akutagawa [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], via Wikimedia Commons)
Old Russian Proverb:

If you chase two rabbits you will not catch either one

Thursday, May 16, 2013

Don't shave that yak!

[Image: creator unknown]
(If anyone would like to claim this as their work, I am more than happy to recognise the fact - credit where credit's due. Just drop me a line.)

Monday, March 18, 2013

Estimation is dead! Long live estimates!

Currently it seems trendy to say that your team no longer estimates, or that estimating work is pointless (pointless? Geddit? Oh, never mind...). Hell, even I have said similar things on Twitter.

So estimation is finally dead. Hurrah! No more picking meaningless numbers out of thin air when you really have no idea how long something will take. Finally we are rid of a process that is demonstrably flawed.

Except...is that a twitch I see from the lifeless corpse? I do believe it is!

There will always be times when it is genuinely useful to answer questions such as "Will we be able to show our new product at the April trade show?" or "Which quarter do we need to start ramping up our marketing for the new feature?". All of these require estimates - not necessarily to-the-day accurate estimates, but estimates nonetheless.

Even teams who claim not to estimate are finding it useful to make a judgement call - dare we call it an estimate? - on the size of a requirement they are about to pull into play. It might be to see whether it can be done quickly enough to satisfy a service level agreement, or to make sure that it is about the same size as other stories that have been played, in order to reduce variance and improve flow. Either way, it is still an estimate.

So estimating has taken a beating - and rightly so, having been abused for so long. But what is emerging is a better, leaner form of estimating, supported by real live feedback, that is far more fit for purpose.



Wednesday, January 02, 2013

An achievement!

Hurrah! I have earned another piece of paper (with distinction, no less :-) )! This time it is for completing the excellent Functional Programming with Scala course run by Martin Odersky on Coursera. I have touched on functional programming before, back in the days when I used to write XSLT for a mashup engine, but this course really opened my eyes to the underlying principles. Not to mention forcing me to reassess how test-driven techniques apply to this new style of programming.

Now the journey really begins!