It may take two to tango, but it takes four to CI...

One of the most common conversations I have with clients is about continuous integration (CI) - the discipline of pulling together the team's work as often as possible, usually after every code check-in, to verify that the project is still on track (i.e. "it works").

So how many environments do you need in order to get real benefit from CI? Unfortunately the answer is a fuzzy "It depends...". But I always recommend at least 4 for safety/effectiveness. The good news is that if you do it right, downstream environments are relatively cheap to add as the need arises.




(I have lost track of how many times I have drawn this diagram in various forms for clients, both in formal training and informal conversations. I am quite glad that I have finally managed to capture it onto one A4 sheet!)

So what have we got here?

Build & unit test
This is the initial stage, normally triggered by a developer checking code into the version control system. It checks out the code in its entirety and builds it using the standard project build script (Ant, Maven, Gradle, make(!)... take your pick) so that it does the same thing each time. Compilation failures are caught here. Checked-in code compiles at all times.

This phase then runs the unit tests. By "unit test" I mean any test that is not end-to-end, i.e. one that does not require a specific deployment. This stage also includes code inspection checks, if used. If a test fails, the build automatically fails. Checked-in code passes all unit tests at all times.
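
To make this concrete, here is a rough sketch of what this stage might look like as a small pipeline script. It is only illustrative: the repository URL is a placeholder and I have assumed a Maven project, so substitute whatever your standard project build script happens to be.

```python
#!/usr/bin/env python3
"""Illustrative sketch of a build & unit test stage (not a real pipeline)."""
import subprocess
import sys

def run(cmd, cwd):
    """Run a command and fail the whole build immediately if it fails."""
    if subprocess.run(cmd, cwd=cwd).returncode != 0:
        print(f"BUILD FAILED: {' '.join(cmd)}")
        sys.exit(1)

# A full, clean checkout so the build does the same thing every time
# (the repository URL is a placeholder)
run(["git", "clone", "https://example.com/project.git", "workspace"], cwd=".")

# Compile and run the unit tests using the project's standard build
# script; any compilation or test failure fails the build
run(["mvn", "clean", "test"], cwd="workspace")
```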

Finally, after the code has successfully compiled and passed its unit tests, the build packages the code into something that will be deployed. This target is the product, and will be used downstream. No tweaks, no rebuild later, this is it! (The so-called "build once" principle, because you, well, build once and reuse it.) Every single check-in creates what is effectively a Release Candidate[1], and yes, we will be building lots of these. Store it until we know whether it will be used or not.
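
A similar sketch for the packaging step. The artefact store location and naming scheme are invented for illustration; the point is simply that the file produced here is the one that gets deployed everywhere downstream.

```python
#!/usr/bin/env python3
"""Illustrative "build once" step: package the Release Candidate and store it."""
import shutil
import subprocess
from pathlib import Path

def package_release_candidate(build_number: str) -> Path:
    """Package the already-tested code and keep the artefact for downstream stages."""
    # Package without re-running the tests; they passed in the previous step
    subprocess.run(["mvn", "package", "-DskipTests"], cwd="workspace", check=True)

    artefact = Path("workspace/target/project.jar")   # produced by the build above
    store = Path("/var/ci/artefacts")                 # hypothetical artefact store
    store.mkdir(parents=True, exist_ok=True)

    # This exact file is the Release Candidate - no tweaks, no rebuilds later
    destination = store / f"project-rc-{build_number}.jar"
    shutil.copy2(artefact, destination)
    return destination
```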

Acceptance test
Once we have a Release Candidate, we can begin to check that it works. The target we have created is automatically deployed to a test environment, and acceptance tests are automatically run against it. These tests are similar to traditional manual test scripts, but are written with modern automated testing frameworks - Cucumber, Fit, etc. If the tests fail, the Release Candidate fails. Checked-in code passes all acceptance tests at all times. These are regression tests: they automatically check that nothing that previously worked has broken.
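
Wired together, this stage might look something like the sketch below. The deploy.sh script and the Maven profile name are assumptions standing in for whatever deployment mechanism and test runner you actually use.

```python
#!/usr/bin/env python3
"""Illustrative acceptance test stage; deploy.sh and profile names are assumptions."""
import subprocess
import sys

artefact = sys.argv[1]   # path to the stored Release Candidate from the build stage

try:
    # Deploy the exact artefact produced by the build stage - no rebuild
    subprocess.run(["./deploy.sh", "acceptance", artefact], check=True)
    # Run the automated acceptance (regression) tests against that deployment
    subprocess.run(["mvn", "verify", "-Pacceptance-tests"], cwd="workspace", check=True)
except subprocess.CalledProcessError as failure:
    print(f"Release Candidate rejected: {failure}")
    sys.exit(1)

print("Release Candidate passed acceptance tests; eligible for manual promotion")
```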

After passing this stage, the automated deployments stop. Any further progress is triggered manually in order to maintain release control (remember, we are talking Continuous Integration here, not Continuous Deployment).

Manual Quality Assurance
The Manual QA environment is a place where real people can verify and validate the current software. This is primarily for exploratory testing, trying to find faults that are not covered by the automated regression tests. If problems are found, new tests are added to the regression suite and the hole in the test coverage is plugged. However, since this is a manual stage, the version being tested needs to be manually controlled - the system must not update the deployed version without the QA team's knowledge and consent.

The current successful Release Candidate is promoted to the QA environment, usually with the click of a button, which triggers the same deployment scripts as the Acceptance Test phase.
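
That button can be as simple as a manually invoked script that reuses the same deployment step with a different target. A sketch, with the script name and environment names purely hypothetical:

```python
#!/usr/bin/env python3
"""Illustrative manual promotion step; deploy.sh and environment names are assumptions."""
import subprocess
import sys

def promote(artefact: str, environment: str) -> None:
    """Deploy a stored Release Candidate to the named environment with the same script."""
    subprocess.run(["./deploy.sh", environment, artefact], check=True)

if __name__ == "__main__":
    # Run by a person, never automatically, e.g.:
    #   python promote.py /var/ci/artefacts/project-rc-123.jar qa
    promote(sys.argv[1], sys.argv[2])
```

Exactly the same call with "uat" or "production" as the target is all that changes further down the pipeline, which is what makes the deployment so well rehearsed by the time it reaches Production.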

UAT/Pre-Production
Here we have a chance to check the developing software against a production-like environment to ensure that it really does work. The same rules apply as for the QA environment: deployment of new software to UAT is manually triggered to maintain control, and the same deployment scripts are used, with only the target system details changing.

Production
And finally we reach the Production servers. We go live, comfortable in the knowledge that the product passes all of its tests and that the deployment scripts have been successfully run at least three times beforehand (usually more). Things can still go wrong, but if we have been diligent then the risks are small.


This is the simplest form of CI, and I count 4 deployed environments in the pipeline, plus somewhere to run the CI software and version control. 

As I said, this is the bare minimum. Skimping on this number will significantly increase your risk of failure. However, it is highly likely that there are other, more complex tests that need to be done - for example, load testing against a production-like environment. You might also want feedback from a select group before going live, or need to run tests in parallel due to time constraints (you want fast feedback, remember!). These extra needs require extra environments. But they are easy to add if we have invested the time and effort to make the initial 4 stages work correctly.
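
As an example of how cheap the extras become, running two of those additional suites in parallel against separate environments is just a matter of fanning out the same deploy-and-test step. A rough sketch, with the script, environment names and Maven profiles all assumed:

```python
#!/usr/bin/env python3
"""Illustrative parallel test stages; script, environments and profiles are assumptions."""
import subprocess
from concurrent.futures import ThreadPoolExecutor

def deploy_and_test(environment, test_command, artefact):
    """Deploy the Release Candidate to one environment and run one suite against it."""
    subprocess.run(["./deploy.sh", environment, artefact], check=True)
    return subprocess.run(test_command, cwd="workspace").returncode == 0

artefact = "/var/ci/artefacts/project-rc-123.jar"   # hypothetical stored Release Candidate
with ThreadPoolExecutor() as pool:
    acceptance = pool.submit(deploy_and_test, "acceptance",
                             ["mvn", "verify", "-Pacceptance-tests"], artefact)
    load = pool.submit(deploy_and_test, "load-test",
                       ["mvn", "verify", "-Pload-tests"], artefact)

if not (acceptance.result() and load.result()):
    raise SystemExit("Release Candidate rejected")
```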

[1] There is a slight caveat to that statement. Multiple check-ins that arrive while the artefact is being built are generally grouped together for the next build. So keep the build time short. There are tools that create one artefact per check-in, but it is a case of diminishing returns.
