Shift your build left!
I get it. Modern build pipelines pack in a huge amount of functionality. They are far more than simple script runners waiting for a trigger - just look at GitHub Actions, or what you can script these days in GitLab. So it is ever more tempting to let critical functionality drift right, into your pipeline. Builds, publishing scripts, testing etc. can all drift right. But don't let it happen! As with all late actions in software, by the time a problem shows up in the pipeline it is too late. Pipelines are shared resources, so your jobs can be queued behind everyone else's. The runners are often slow, and take time to spin up. And perhaps most importantly, pipeline scripting languages are designed to lock you into a specific vendor. What happens when your pipeline vendor goes out of business? Or price gouges, raising prices to eye-watering levels? Or suffers downtime (it happens - remember how many commercial systems turned out to depend on AWS when it went down)? How quickly can you lift-and-shift those complicated vendor-specific scripts to a different vendor? What hidden business risk are you accepting?
Additionally, how can developers validate their work before checking in if the build-and-test cycle is delegated to the pipeline? (Clue: they can't.) That slows down the feedback loop significantly, making teams less effective. It is a death spiral for AI-assisted development - the data shows the development cycle needs to be fast and effective to gain any benefit from AI at all, and not being able to quickly test changes locally undermines this.
So here's the message: shift your build left. Hard left. Be able to run it on your developer machines. Here's a checklist (with a concrete sketch after it):
- Can check out the repository trunk/deployment branch, then build and test successfully on a local developer machine by running the build script.
- Can build the deployable artifact locally by running the build script.
- Can publish the artifact to any downstream repositories and environments from a local machine, given the right credentials, by running the build script.
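To make that concrete, here is a minimal sketch of what such a build script could look like as a shell script (a Gradle or make equivalent works just as well). Everything in it - the name build.sh, the registry address, the gradlew tasks, the command names - is an illustrative assumption, not a prescription:

```sh
#!/usr/bin/env sh
# build.sh - hypothetical single entry point for local builds AND the pipeline.
# All names here (registry, tasks, targets) are illustrative assumptions.
set -eu

REGISTRY="${REGISTRY:-registry.example.com/myteam}"   # assumed artifact registry
VERSION="$(git rev-parse --short HEAD)"               # version from the commit

case "${1:-build}" in
  check)    # formatting and static checks
    ./gradlew spotlessCheck ;;
  build)    # compile and run the test suite - what a developer runs before pushing
    ./gradlew build ;;
  package)  # produce the deployable artifact
    docker build -t "$REGISTRY/myapp:$VERSION" . ;;
  publish)  # push the artifact; needs credentials (the pipeline's, or BTG locally)
    docker push "$REGISTRY/myapp:$VERSION" ;;
  *)
    echo "usage: $0 {check|build|package|publish}" >&2; exit 2 ;;
esac
```

The important property is that there is exactly one entry point, checked into the repository; developer machines and the pipeline alike just invoke it.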
That last point needs some comment. Development lifecycle governance often demands that production deployments require the correct access rights (rightly so - you probably don't want just anyone to be able to deploy from anywhere). Auditability and traceability are the key concepts here. The pipeline can hold access rights that developers don't, forcing deployments through a controlled (pipelined!) process. But what if the pipeline is broken for any reason? If the pipeline delegates to scripts that are also available locally, then a developer can request controlled, audited BTG ('Break The Glass') access to obtain the necessary credentials, and deploy the build from a local machine using the exact same process the pipeline uses, just triggered manually. A simple and effective disaster recovery mechanism. (Note that the exact details of how to provide secure governance are a subject in their own right, and beyond what I want to discuss here - get in touch if you want to talk about it.)
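As a sketch of what that BTG deploy might look like in practice - the credential mechanism is deliberately hand-waved here, and the variable names are hypothetical; assume your BTG process has issued short-lived, audited credentials:

```sh
# Hypothetical break-the-glass deploy from a developer machine.
# Assumes the audited BTG request has issued short-lived registry credentials.
export REGISTRY_USER="<btg-issued-user>"
export REGISTRY_TOKEN="<btg-issued-token>"

echo "$REGISTRY_TOKEN" | docker login registry.example.com \
    -u "$REGISTRY_USER" --password-stdin
./build.sh publish   # the exact same publish step the pipeline runs
```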
So what is a pipeline good for?
First and foremost, it is a quality control gate. It enforces a controlled, auditable process for building, testing, packaging and deploying an artifact. It does this by following a build script that is part of the repository. Gradle, make, sbt... choose your build technology. The exact same script is used for local builds and pipelined publishing, but without a special access request only the pipeline has the credentials to publish to protected environments (e.g. production).
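As an example, a GitHub Actions workflow honouring this discipline might look like the sketch below. The workflow structure is standard GitHub Actions; the script name and secret names are assumptions carried over from the build.sh sketch above:

```yaml
# Hypothetical workflow: the pipeline supplies credentials and auditability,
# but the build logic itself lives in the repository's own build script.
name: golden-build
on:
  push:
    branches: [main]
jobs:
  build-and-publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh check
      - run: ./build.sh build      # identical to what a developer runs locally
      - run: ./build.sh package
      - run: ./build.sh publish    # only the pipeline holds these credentials
        env:
          REGISTRY_USER: ${{ secrets.REGISTRY_USER }}
          REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
```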
By enforcing an audited process, it guarantees that the tests are run and pass. Tests not clean and green? Pipeline fails. Code does not conform to the agreed formatting rules? Fail. Problematic structures in the code? Fail. All automated. But as with all quality control gates, a failure here means it is already too late - which is why we insist on the build working locally, and why the pipeline uses the exact same build process. The aim is for the development team to never break the build. When a team masters this fundamental skill, its ability to deliver skyrockets.
Which brings me to the final feature of a well-defined pipeline. It provides a master build that trumps all others - I sometimes call it the "golden build". It gives you the definitive current state of the quality of your software. The pipeline is the team's andon, indicating when there is a quality problem: if it breaks and lights up the andon, the team needs to stop everything else and fix it (rolling back bad commits is a fix, remember...).
All of this pipeline discipline (quality control, a common build process, the andon, keeping the pipeline clean and green), coupled with shifting the build left and keeping it fast (ten minutes or less - time to go to the kitchen and make a cup of tea, no more), provides a huge boost to team productivity.
===
Interested in knowing more? Want some help and guidance improving your continuous delivery pipelines? Or getting your tests more streamlined? Get in touch and let's chat.