Business value of continuous delivery
Follow the overall trend in the software industry: increasing the ability to react.
Why continuous delivery?
What emotions does the word "release" trigger in you? Relief? Elation? A fist-pumping sense of accomplishment? If your team is still living with manual testing to prepare for releases and manual or semi-scripted deploys to perform them, your feelings may be closer to "dread" and "blinding rage."
That's why software development is moving toward the continuous delivery (CD) model. In the continuous paradigm, quality products are released to customers frequently and predictably, which reduces the ceremony and the risk around releasing. If you rely on your pipelines daily, you will notice (and resolve!) their deficiencies much quicker than if they run once every few weeks or months. In other words, you reduce difficulty by increasing the frequency of your product releases.
The emphasis on continuous integration, continuous testing, constant monitoring, and pipeline analytics all point toward an overall trend in the software industry - increasing your ability to respond to market changes. Make no mistake - CD is not the exclusive domain of "unicorn" companies and tech darlings. Every team - from the humblest start-up to the stodgiest enterprise - can and should practice continuous delivery.
This article takes a look at the business value of making this switch - the work that lies ahead, and the benefits of shipping software through CD pipelines.
Top business benefits of continuous delivery
Continuous delivery improves velocity, productivity, and sustainability of software development teams. Let’s see how.
First off, velocity
Automated software delivery pipelines help organizations respond to market changes faster. Speed matters because it reduces the shelf time of new features. With a short time to market, organizations have a better chance of outmaneuvering their competition and staying in business.
Remember that speed by itself is not a success metric. Without quality, speed is useless. There is no value in having continuous delivery pipelines shoot erroneous code into production at high speed.
So, in the world of continuous delivery, velocity means responsible speed, not reckless speed.
Productivity translates to happiness, and happy teams are more engaged.
Productivity increases when tedious and repetitive tasks, like filling out a bug report for every defect discovered, are performed by pipelines instead of humans. This lets teams focus on vision while pipelines handle the execution. And who doesn’t want to delegate the heavy lifting to tools?
Teams investigate issues reported by their pipelines, and once they commit a fix, the pipelines run again to validate whether the problem was resolved and whether new problems were inadvertently introduced.
Businesses aim to win marathons, not just sprints. Getting ahead of the pack takes grit; staying ahead of it consistently is even harder, and takes discipline and rigor. Working hard 24/7 leads to premature burnout. Instead, work smart and delegate the repetitive work to machines, which, by the way, don’t need coffee breaks and don’t talk back!
Every organization, tech company or not, uses technology to differentiate. Automated pipelines reduce manual labor and lead to savings over time, since people are more expensive than tools. The steep upfront investment can give inexperienced leadership pause; however, well-designed pipelines position organizations to innovate better and faster to meet their customers' needs. CD also gives the business more flexibility in how it delivers features and fixes. Specific sets of features can be released to specific customers, or to a subset of customers, to ensure they function and scale as designed. Features can be developed and tested, but left dormant in the product, baking across multiple releases. Your marketing department wants that "big splash" at the yearly industry convention? With continuous delivery, it's not only possible, it's a trivial request.
Top challenges of continuous delivery
While we firmly believe continuous delivery is the right thing to do, designing and building resilient continuous delivery pipelines can be challenging. Because CD requires a large overhaul of technical processes, operational culture, and organizational thinking, getting started can feel like a tall hurdle. The fact that it also demands a hefty investment in a company's software delivery infrastructure, which may have been neglected for years, can make it an even more bitter pill to swallow.
Organizations face many problems along the way, and the following three are the most common pitfalls - budget, people, and priority.
Budget: Is yours too low?
Building continuous delivery pipelines consumes your best people. This is not a side project whose cost can be swept under the carpet. I have always been surprised at how some organizations start out by allocating junior members and cutting corners on purchasing modern tools. At some point, they course-correct and assign their senior architects to invest in architecture decoupling and resilient continuous delivery pipelines.
Don’t lowball. Based on your vision, set aside an appropriate amount of funds to make sure execution is uninterrupted. Deliver a continuous delivery pipeline MVP (minimum viable product) and then scale it throughout your organization.
People: Do you have forward-thinkers?
Even when you have budget, at the end of the day, execution could be a people problem.
Teams should fearlessly automate themselves out of their jobs, and move on to new projects. If you have people who are apprehensive of automated agents performing tasks they were otherwise doing manually, you are housing the wrong people.
If you feel stalled, shift gears. Jumpstart with the help of experienced champions who will see you through the initial hump. They know how to give your team a car when all it has asked for is a faster horse! People, after all, are your greatest asset, so train them to do the right thing. Make it easy to do the right thing and hard to do the wrong thing, and you will be pleasantly surprised at the outcome.
Priority: Process vs product
“Let’s stop the line and build continuous delivery pipelines!” said no product owner ever.
In their defense, they are focused on getting ahead of the competition with new features that wow the world. At the same time, you know you have a problem if, in every sprint planning session, pipelines are weighed against product features and traded off.
In some product backlogs, pipelines (if they appear at all) hang on for dear life near the bottom. Near-sighted leadership classifies pipeline work as an expense rather than an investment that will stand the teams in good stead. They remain in denial about the long-term damage they create, and sadly, sometimes get away with it.
Continuous delivery metrics
OLTP (online transaction processing) and OLAP (online analytical processing) are two well known techniques in the industry. Both concepts apply to continuous delivery pipelines and help generate insights that steer organizations in the right direction. Let’s study how.
Pipelines see hordes of data
Imagine a typical day in a software development team’s life. The team commits a feature that business prioritized, commits tests for that feature, and integrates deployments with the continuous delivery pipeline, so that every change deploys automatically. The team realizes that the application got sluggish after adding this new feature and commits a fix for the performance issue. The team also adds performance tests to make sure that bad response times would get caught before promoting the application from test to staging.
Think of each of those commits as a transaction. And that’s how software development teams roll - one transaction after another - till a product is born that wows the world. And then off they go again. Multiply these transactions across all engineers and teams in your organization, and you have hordes of transactional data at your disposal.
This is a good segue into the next section on pipeline analytics and how to make the most of that transactional data.
Let’s analyze the pipeline’s transactional data
Could we analyze the transactional data seen by pipelines to extract nuggets of organizational insights? Sure, we can!
As with all transactional data, the sheer volume prevents us from making immediate sense of it. That’s why we should aggregate it and perform analytics to glean insights into our organization. Analytics help us see the forest for the trees. Here are three examples of how we improved our practices through pipeline analytics and insights.
In our first example, we learned that out of the hundreds of deployments that happen every week, application A’s deployment failures were three times as high as application B’s. This discovery led us to investigate application A’s design choices around environment stability and configuration management. We learned that the team deployed to flaky virtual machines in their data center, while application B was containerized. We prioritized an investment in immutable infrastructure and checked back in a month to make sure the return on that investment was good. Sure enough, it was. What can be measured can be fixed.
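As a minimal sketch of that kind of comparative analysis, the snippet below computes per-application deployment failure rates. It assumes deployment events can be exported from the pipeline as simple records; the field names and sample data are illustrative, not tied to any particular CI/CD tool.

```python
from collections import Counter

# Each record is one deployment event from the pipeline's transactional log.
# The field names and sample values are hypothetical.
deployments = [
    {"app": "A", "status": "failed"},
    {"app": "A", "status": "succeeded"},
    {"app": "A", "status": "failed"},
    {"app": "B", "status": "succeeded"},
    {"app": "B", "status": "failed"},
    {"app": "B", "status": "succeeded"},
]

def failure_rate_by_app(events):
    """Return {app: fraction of that app's deployments that failed}."""
    totals, failures = Counter(), Counter()
    for event in events:
        totals[event["app"]] += 1
        if event["status"] == "failed":
            failures[event["app"]] += 1
    return {app: failures[app] / totals[app] for app in totals}

print(failure_rate_by_app(deployments))  # application A fails twice as often as B
```

Once the rates are aggregated like this, outliers such as application A become obvious at a glance, which is exactly the signal that triggered the immutable-infrastructure investigation.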
The second example came when we learned that application B’s static code analysis errors had steadily trended upward over the last few quarters.
This could mean that the team behind application B needed to be (re)trained to write better code. We also discovered that the static code analyzers reported false positives, flagging coding violations where there were none. So, we upgraded their analyzer to a well-known, industry-standard tool, which reduced the false positives to some extent. We then organized a coding workshop where we discussed and resolved the legitimate static analysis errors. By the end of it, the team was humming along again.
Our third example stems from an interesting insight: application A’s unit tests had lower code coverage than those of applications B and C, and yet application A had the fewest production issues in the last year. Writing unit tests and measuring code coverage are fine; overdoing the exercise is unproductive for the team and useless for the customers. Lesson learned.
Key Performance Indicators (KPIs) for business value
We cannot rely on opinions when it comes to steering the organization in the right direction. We need to make data-driven decisions.
Many a time, we have seen individual departments define their own success metrics. It’s a good thing for departments to understand what success looks like for them, as long as those metrics tie into the organization’s goals.
Organizational KPIs vs. departmental KPIs
For example, a few organizations have developers own the test environment, QA own staging, and Ops own production. Instead of getting buried under code coverage reports for unit tests that run with builds, developers should step back and look at the big picture across all environments - whether they own them or not. A comparative view might track:
Pipeline failures due to Unit Tests
Pipeline failures due to Functional Tests
Pipeline failures due to Performance Tests
Pipeline failures due to Security Vulnerabilities
You might discover that the percentage of failures due to performance tests is high, and it could be caused by incorrect performance benchmarks or sluggish code. Or, a comparative analysis might show that pipelines fail on functional tests the most. The root cause could be real bugs in the product, or it could be buggy test code, inaccurate test data, incorrect test configuration, misunderstandings between product and engineering, and the like.
Further digging could reveal that incorrect test configurations are rampant, and you could prioritize fixing those issues to fix frequent functional test failures.
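A sketch of that kind of breakdown, assuming each pipeline failure is logged with a cause label; the labels and sample data here are hypothetical, mirroring the KPI categories above.

```python
from collections import Counter

# Hypothetical pipeline failure log: one cause label per failed run.
failures = [
    "functional_tests", "unit_tests", "functional_tests",
    "performance_tests", "functional_tests", "security_vulnerabilities",
]

def failure_breakdown(causes):
    """Return each failure cause as a percentage of all pipeline failures."""
    counts = Counter(causes)
    total = sum(counts.values())
    # most_common() sorts causes by frequency, highest first.
    return {cause: 100 * n / total for cause, n in counts.most_common()}

for cause, pct in failure_breakdown(failures).items():
    print(f"{cause}: {pct:.0f}%")
```

In this sample, functional test failures dominate the breakdown, which is the cue to dig into whether the culprit is product bugs, test code, test data, or test configuration.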
Business KPIs vs. technical KPIs
Another popular KPI that organizations like to keep an eye on is the value delivered in a sprint. A common bad practice is to record the number of releases, which by themselves don’t add value. You could move bits from point A to point B without moving the needle for your business, and that’s not good enough. Some organizations measure the number of tests freshly added in that sprint or the total number of tests executed, and those do not reflect business results either, just engineering effort. The value delivered in a sprint has to be relevant to business.
A couple of examples of business KPIs could be the number of customer acquisitions in the last quarter and the number of advertisement clicks in the last month. Pipelines do not directly influence these business metrics. The only reason we try to map business KPIs to technical KPIs is to understand the relationship of technical craftsmanship to business goals.
Business Value per Sprint
Not # of releases
Not # of tests executed
Business KPIs when mapped to pipelines also help us calculate the RoI (return on investment) of pipelines. Leadership teams use these metrics to understand possible areas of improvement and to plan for budget.
Embark on your CD journey
Don’t waste time debating whether continuous delivery is right for you, or whether continuous integration is enough. Now is the time to find out. If you shy away now, and later your business runs out of steam, you will have only yourself to blame. When you embark on this journey, it will open up an avenue of opportunities for your team to learn and continuously improve! And if you factor in the employees who won't burn out due to "midnight oil" releases, and people who get to work on making their own worlds better instead of propping up a building that's constantly falling apart, the value becomes obvious. Don’t you agree?
On a related note
Practical continuous deployment
Curious about some of our experiences with continuous delivery and deployment at Atlassian? Read on.
Put it into practice
Continuous delivery from code to customer
Bamboo is a continuous integration and delivery tool that ties automated builds, tests and releases together in a single workflow.