Continuous delivery (CD) is the practice of using automation to release software in short iterations.
What is continuous delivery?
Continuous delivery is an approach in which teams release quality products frequently and predictably, from the source code repository to production, in an automated fashion.
Some organizations release products manually, handing them off from one team to the next, as illustrated in the diagram below. Typically, developers sit at the left end of this spectrum and operations personnel at the receiving end. Every hand-off introduces delays that lead to frustrated teams and dissatisfied customers. The product eventually goes live through a tedious and error-prone process that delays revenue generation.
Figure 1: Manual release of products to customers
Now, check out the continuous delivery pipeline below. It illustrates how developers write code on their laptops and commit changes to a source code repository, like Bitbucket. By code, we mean the system under test, the tests, and the infrastructure used to deploy and maintain the system. Bitbucket Pipelines can ship the product from test to staging to production, and help customers get their hands on those shiny new features.
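As a concrete sketch of such a pipeline, a minimal bitbucket-pipelines.yml might look like the following. The step scripts, deployment names, and branch name are illustrative assumptions, not a definitive configuration:

```yaml
# Hypothetical bitbucket-pipelines.yml: commit -> build/test -> staging -> production
pipelines:
  branches:
    main:
      - step:
          name: Build and test
          script:
            - ./build.sh        # build the system under test (assumed script)
            - ./run-tests.sh    # run the automated test suite (assumed script)
      - step:
          name: Deploy to staging
          deployment: staging
          script:
            - ./deploy.sh staging     # assumed deployment script
      - step:
          name: Deploy to production
          deployment: production
          script:
            - ./deploy.sh production  # assumed deployment script
```

Each push to the branch walks the change through the same ordered steps, which is what gives the release process its predictability.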
Figure 2: Continuous delivery pipeline doing automated releases
How does continuous delivery work?
A continuous delivery pipeline may have a manual gate right before production. A manual gate requires human intervention, and some scenarios in your organization may genuinely call for one, though others are questionable. One legitimate scenario lets the business team make a last-minute release decision: the engineering team keeps a shippable version of the product ready after every sprint, and the business team makes the final call to release it to all customers, to a cross-section of the population, or perhaps to people in a certain geographic location.
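In Bitbucket Pipelines, for example, such a gate can be expressed by marking the production step as manually triggered; the pipeline pauses at that step until someone approves it. The step name and script below are illustrative assumptions:

```yaml
# Fragment of a hypothetical pipeline: the last step is a manual gate
- step:
    name: Deploy to production
    deployment: production
    trigger: manual           # pipeline pauses here until a human releases it
    script:
      - ./deploy.sh production   # assumed deployment script
```

Everything before the gate still runs automatically, so a shippable build is always waiting when the business decides to release.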
The architecture of the product that flows through the pipeline is a key factor in the anatomy of the continuous delivery pipeline. A highly coupled product architecture produces a complicated pipeline graph in which various pipelines get entangled before eventually making it to production.
The product architecture also influences the different phases of the pipeline and what artifacts are produced in each phase. The pipeline first builds components - the smallest distributable and testable units of the product. For example, a library built by the pipeline can be termed a component. This is the component phase.
Loosely coupled components make up subsystems - the smallest deployable and runnable units. For example, a server is a subsystem. A microservice running in a container is also an example of a subsystem. This is the subsystem phase. As opposed to components, subsystems can be stood up and tested.
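To make the subsystem idea concrete, here is a hypothetical Compose file standing up one microservice, together with its database, so it can be run and tested independently of the rest of the system. The service name, build context, and environment variable are assumptions for illustration:

```yaml
# Hypothetical docker-compose.yml: one subsystem (a microservice plus its database)
# that can be stood up and tested on its own
services:
  orders-service:        # assumed service name
    build: ./orders      # assumed build context
    ports:
      - "8080:8080"
    environment:
      - DB_URL=postgres://db:5432/orders   # assumed configuration
    depends_on:
      - db
  db:
    image: postgres:16
```

Because the subsystem carries its own dependencies, the subsystem phase can run functional, performance, and security tests against it without assembling the whole product.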
Next, the pipeline can be taught to assemble a system from loosely coupled subsystems in cases where the entire system must be released as a whole. This is the system phase.
We recommend against this composition, in which subsystems are assembled into a system. Figure 3 illustrates it.
Figure 3: Subsystems assembled into a system
This all-or-none approach makes the fastest subsystem go at the speed of the slowest one. “The chain is only as strong as its weakest link” is the cliché we use to warn teams who fall prey to this architectural pattern.
Once validated, the assembled system is then promoted to production without any further modification, in the final phase, called the production phase.
Note that these phases are logical rather than physical, created only to break a large problem into smaller sub-problems. You may have fewer phases or more, depending on your architecture and requirements.
Speed without quality is useless to our customers. Continuous testing is a technique in which automated tests are integrated with the software delivery pipeline to validate every change that flows through it. Tests execute in each phase of the pipeline to validate artifacts produced in that phase. Unit tests and static code analysis validate components in the component phase. Functional, performance, and security tests validate subsystems in the subsystem phase. Integration, performance, and security tests validate systems in the system phase. Finally, smoke tests validate the product in the production phase.
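As a minimal sketch of component-phase validation, here is a unit test for a hypothetical library function; the function and its behavior are assumptions for illustration, not part of any particular product:

```python
import unittest


def normalize_price(cents: int) -> str:
    """Hypothetical component under test: format a price in cents as dollars."""
    if cents < 0:
        raise ValueError("price cannot be negative")
    return f"${cents // 100}.{cents % 100:02d}"


class TestNormalizePrice(unittest.TestCase):
    """Component-phase checks: run on every commit, before any deployment."""

    def test_formats_dollars_and_cents(self):
        self.assertEqual(normalize_price(1999), "$19.99")

    def test_pads_small_amounts(self):
        self.assertEqual(normalize_price(5), "$0.05")

    def test_rejects_negative_prices(self):
        with self.assertRaises(ValueError):
            normalize_price(-1)


if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

Because these tests need no running environment, they sit naturally in the earliest, fastest phase of the pipeline; failures stop a bad component before it ever reaches a subsystem.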
Automated tests integrate with the pipeline
A monolithic product architecture, or a “Big Ball of Mud,” can result in a “Big Ball of Tests.” We recommend investing in microservices so that independently deployable artifacts can flow through pipelines without needing a highly integrated environment for certification. Independently deployable artifacts also keep faster teams from getting bogged down by slower ones.
Value of continuous delivery
The software delivery pipeline is a product in its own right and should be a priority for businesses; otherwise, you should not send revenue-generating products through it. Continuous delivery adds value in three ways: it improves the velocity, productivity, and sustainability of software development teams.
Velocity means responsible speed and not suicidal speed. Pipelines are meant to ship quality products to customers. Unless teams are disciplined, pipelines can shoot faulty code to production, only faster! Automated software delivery pipelines help organizations respond to market changes better.
A spike in productivity results when tedious tasks, like submitting a change request for every change that goes to production, are performed by pipelines instead of humans. This lets scrum teams focus on products that wow the world instead of draining their energy on logistics. And that can make team members happier, more engaged in their work, and likelier to stay on the team.
Sustainability is key for all businesses, not just tech. “Software is eating the world” is no longer true — software has already consumed the world! Every company at the end of the day, whether in healthcare, finance, retail, or some other domain, uses technology to differentiate and outmaneuver the competition. Automation helps reduce or eliminate error-prone, repetitive manual tasks, positioning the business to innovate better and faster to meet its customers' needs.
Who should do continuous delivery and when?
When is a good time to adopt continuous delivery? It was yesterday.
Teams need a single prioritized backlog where:
- Continuous delivery has been embraced, instead of being relegated to the background
- The acceptance criteria of user stories explicitly call for automated software delivery approaches rather than manual ones
- The sprint Definition of Done (DoD) prevents teams from finishing sprints in which the product was shipped manually
Continuous delivery is the right thing to do and occasionally requires champions to jumpstart the transformation. Eventually, when designed right, continuous delivery pipelines pay for themselves.
So, who is involved?
Some organizations assign inexperienced people to design and implement continuous delivery pipelines, only to learn the hard way that deep intricacies are involved. Appointing junior members sends the wrong signal to teams, implying that continuous delivery is a low priority. We strongly recommend putting a senior architect in charge, one with a deep appreciation for both technology and the business.
Beyond continuous delivery
Wherever you are in your journey of continuous everything (integration, testing, delivery, deployment, analytics, and so on), it is neither a checklist nor a destination; continuous improvement is at its heart.
Sooner or later, everyone in the organization gets a call when continuous delivery pipelines are being constructed: executives, engineering, product management, governance, risk, compliance, InfoSec, operations, legal, and more. Pipelines gnaw through silos and break down walls. All of us are part of this transformation, one way or another, and continuous everyone is the new normal!