
Sometimes in software development, you see a big discrepancy between how long it takes a developer to make a change and how long it takes that change to reach production. The simplest patch can take weeks to ship to customers even though it took only a few minutes to write and test locally. This is often because deployment is seen as risky, requiring a lot of care and preparation to execute.

That's a problem when you're practicing agile: you can't have a truly fast feedback loop with your customers while you're still doing big releases with weeks or months' worth of changes in them. It takes a long time before your customers can try a new feature, and when bugs occur, it's hard to identify which change is the root cause.

Continuous delivery is a practice that fixes these issues by making tracking and deploying software trivial. The goal is to ship changes to your customers early and often, multiple times a day if possible, to minimize the risk of releasing and to give your developers feedback as soon as possible.

Continuous integration as a foundation

To get started on your continuous delivery (CD) journey, you will need to adopt continuous integration (CI) first to make sure that all changes pushed back to the main repository are tested and ready to be deployed. If you automate your deployment to production without a good CI process in place, you will just ship bugs to your customers faster. You may take the pain out of the deployment process, but without proper testing you will still run a high risk of breaking production.

Continuous integration is not a silver bullet. It's inevitable that some issues will occur during development, but CI will give you confidence in the quality of your code and make sure that the core features of your application are working before you release new changes to your customers.

So get a good CI culture in place before you move on to continuous delivery – in this guide, we will assume that your CI workflow is already in place (see our guide on how to get started with CI here) and that your next step is to set up your continuous delivery pipeline.

Building your continuous delivery pipeline

The continuous delivery pipeline is a set of steps your code changes will go through to make their way to production. It includes building and testing the application as part of the CI process and extends it with the ability to deploy to and test staging and production environments. Achieving continuous delivery can be summed up in two statements:

Your code is always ready to be deployed to production.
Releasing changes is as simple as clicking a button.

We'll cover the essentials of building your CD pipeline so that it ends up looking like the following.

[Screenshot: CD pipeline]

Get a staging environment to act as a buffer to production

"Worked on my machine... it's Ops' problem now!"

Even with an extensive suite of tests, you might see production breaking inexplicably while everything worked fine on your local machine. Quite often it is caused by a difference in configuration between your local environment and your production environment, but many other factors can come into play.

You can mitigate this issue by having another environment, a staging environment, that you deploy to before releasing to your customers. It doesn't need to support the same scale as your production environment but it should be as close as possible to the production configuration to ensure that the software is tested in similar conditions.

A staging environment will act as a buffer, and your release workflow should be similar to the following:

  1. A developer makes changes and tests them locally to verify that everything works.
  2. The developer pushes their changes to the main repository where CI tests are run automatically against their commits.
  3. If the build is green, the changes are released to the staging environment.
  4. Acceptance tests are run against the staging environment to make sure nothing is broken.
  5. Changes are now ready to be deployed to production.

If you are using a container technology such as Docker, you can simply reuse the same images for your staging environment that you use for production. Some platforms also make this easy by managing the underlying configuration of your infrastructure for you. The bottom line is that you want to avoid discrepancies between the two environments so that you don't get infrastructure-related bugs in production.
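As an illustration, here is a minimal sketch of promoting one Docker image through both environments instead of rebuilding it for each; registry.example.com and myapp are placeholder names, and $BITBUCKET_COMMIT is the commit hash that Bitbucket Pipelines exposes during a build.

# Build the image once, tagged with the commit that produced it.
docker build -t registry.example.com/myapp:"$BITBUCKET_COMMIT" .
docker push registry.example.com/myapp:"$BITBUCKET_COMMIT"

# Staging and production both pull this exact image, so the artifact
# you tested on staging is the artifact you release to customers.
docker pull registry.example.com/myapp:"$BITBUCKET_COMMIT"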

The other advantage of having a staging environment is that it allows your QA team and product owners to verify that the software works as intended before it is released to customers, without requiring a special deployment or access to a developer's machine.

Automate your deployment via scripts

Once you have your production and staging environments set up, the next step is to start automating your deployments. The goal is to get to the point where your application can be released without human intervention once the deployment script is launched.

You can divide deployment automation into two phases to simplify the work. In the first phase, you create deployment scripts that contain all the instructions required to release changes to your staging and production environments, and you still launch deployments from your local machine using these scripts. In the second phase, you plug the deployment scripts into the service or server that you use for CI and run them as part of your continuous delivery pipeline.

There are many ways to deploy software, but there are common rules that you should observe as you write your deployment scripts (a script sketch follows the list):

  • Version your deployment scripts with your code. That way you will be able to audit changes to your deployment configuration easily if necessary.
  • Do not store passwords in your script. Instead, use environment variables that can be set before launching the deployment script.
  • Use SSH keys when possible to access your deployment servers. They let you connect to your servers without providing a password and resist brute-force attacks.
  • Make sure that the build tools involved in your pipeline do not prompt for user input. Use a non-interactive mode or provide an option to automatically assume yes when installing dependencies.
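To make these rules concrete, here is a minimal sketch of what a deploy-staging.sh could look like for an application released over SSH; the host, path, and service name are hypothetical, and it assumes passwordless sudo is configured on the server.

#!/usr/bin/env bash
# deploy-staging.sh - versioned with the code, configured via environment variables.
set -euo pipefail

# Fail early if the deployment targets are not set; never hard-code secrets here.
: "${DEPLOY_USER:?Set DEPLOY_USER before deploying}"
: "${DEPLOY_HOST:?Set DEPLOY_HOST before deploying}"

# BatchMode forces SSH key authentication and fails instead of prompting.
ssh -o BatchMode=yes "$DEPLOY_USER@$DEPLOY_HOST" <<'REMOTE'
  set -e
  cd /srv/myapp                  # hypothetical application path
  git pull --ff-only
  npm ci                         # installs dependencies without prompting
  sudo systemctl restart myapp   # hypothetical service; assumes passwordless sudo
REMOTE

Deploying then becomes a single, repeatable command, for example DEPLOY_USER=deploy DEPLOY_HOST=staging.example.com ./deploy-staging.sh.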

When you reach the point where you can perform a deployment with a single command line, do not move straight to the next phase. Instead, try to deploy several times from your local machine to make sure that they're working as expected with different types of changes (data migration, upgrade, new dependencies).

This is also a good time to write smoke tests that will ensure the newly deployed environment is not broken. A smoke test is a simple test that verifies your application is up and running. As much as possible, do not hit a static page. Try to test a part of your application that requires all the underlying services to work (database, 3rd-party APIs, etc.).
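Here is a minimal sketch of such a smoke test; it assumes your application exposes a /health endpoint that only returns HTTP 200 when the database and downstream services respond, which you would adapt to your own app.

#!/usr/bin/env bash
# smoke-test.sh - verifies a freshly deployed environment is actually alive.
set -euo pipefail

BASE_URL="${1:?Usage: smoke-test.sh <base-url>}"

# /health is a hypothetical endpoint that exercises the database and 3rd-party APIs.
status=$(curl --silent --output /dev/null --write-out '%{http_code}' "$BASE_URL/health")
if [ "$status" -ne 200 ]; then
  echo "Smoke test failed: $BASE_URL/health returned HTTP $status" >&2
  exit 1
fi
echo "Smoke test passed."

Call it at the end of your deployment script, for example ./smoke-test.sh https://staging.example.com, so a broken deployment fails loudly instead of going unnoticed.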

Once you're happy with the deployment scripts and smoke tests, it should be straightforward to turn your CI workflow into a continuous delivery pipeline. But before that, we need to talk about managing changes to your data structure and making sure they're properly executed as part of your deployment.

Make data structure changes part of your deployment process

If a database backs your application, there's a high chance that you will have to update the data structure as part of a deployment. It could be adding a column, changing an index, creating or deleting an entire table, etc. These operations need to run at the same time the code is deployed to keep the application working, and you'll need to handle rollbacks if production breaks after a deployment.

No matter what method you use to manage your data, start by backing up the database before every new deployment. That way, you can always restore the previous state of your application if something goes wrong during a release.

To make things easier, it's best to use a database migration tool that manages data structure changes as code. Some frameworks like Laravel, Django, or Rails come with their migration tools bundled, and you can simply call a command to execute migrations as part of your deployment script. In most cases, the same tools can also roll back changes to the data structure, but be wary of relying on that too much, as there will be times when the only way to roll back is to restore from a backup.
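For example, a deployment script could handle backups and migrations with a few lines like the following; this sketch assumes a PostgreSQL database reachable through a DATABASE_URL environment variable and a Rails-style migration command, so substitute your own database client and framework.

# Hypothetical excerpt from a deployment script.

# 1. Back up the database before touching the schema, so a failed
#    release can always be restored.
pg_dump "$DATABASE_URL" > "backup-$(date +%Y%m%d%H%M%S).sql"

# 2. Run pending migrations as part of the deployment.
bundle exec rails db:migrate

# 3. Prefer the migration tool's rollback when it is available:
#      bundle exec rails db:rollback
#    and restore from the backup when it is not:
#      psql "$DATABASE_URL" < backup-<timestamp>.sql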

Set up your deployment pipeline

If you can now trigger deployments and database changes from your local machine via a single command, with no other human intervention needed, then you are ready to create your continuous delivery pipeline. Assuming you're practicing continuous integration, you should already be using a CI server like Bamboo or Jenkins, or a cloud CI service like Bitbucket Pipelines.

You can now go back to your CI configuration and extend it to run your deployment to staging automatically when you get a green build. Below is an example of a Bitbucket Pipelines configuration that executes a deployment to staging when the tests complete successfully. In this example, the script deploy-staging.sh is checked into the repository and contains all the instructions required to run the deployment.

pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - npm install
          - npm test
          - ./deploy-staging.sh
 

When you get to that stage, try pushing several commits to your main branch, including database changes, and make sure your staging environment gets updated accordingly. When you're satisfied with your workflow, commit your production release script to the repository and add a manual release step to your pipeline. With Bitbucket Pipelines, for instance, you can either configure a custom pipeline that can be executed from the UI, or use branches and pull requests to trigger the release to production.
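For instance, here is a sketch of the earlier configuration extended with a custom pipeline; deploy-production.sh stands for the release script you committed, and pipelines declared under custom run only when triggered manually from the Bitbucket UI.

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
          - ./deploy-staging.sh
  custom:
    deploy-to-production: # Run it manually from the Bitbucket UI when you're ready to release.
      - step:
          script:
            - ./deploy-production.sh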

At the end of this stage, you should have a continuous delivery pipeline where all changes published to the main branch are deployed automatically to staging, and you can release to your customers via the UI. There shouldn't be a need for your team to log onto a server, perform database updates manually, or install dependencies. 

Continuous delivery will only be as good as your release cadence

We understand that business constraints may prevent you from releasing every green build to production, but be careful not to wait too long between releases. A continuous delivery pipeline will only produce benefits if you ship to production early and often.

If your staging environment is updated multiple times a day but you only ship to production at the end of the month, you are still exposed to the risks of big bang releases. It will be harder to understand where bugs are coming from, your team will still be scared of pushing small changes, and you will still have a long feedback loop with your customers. You will have made deployment easier, but not necessarily taken all the fear out of releasing to production.

We recommend deploying as often as possible to avoid these issues and make your team more productive. And when you get confident enough in your release process, you'll be able to go one step further and adopt continuous deployment – where every change will go straight to production as soon as it passes all the automated tests!
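In Bitbucket Pipelines, that last step could be as simple as moving the production deployment into the automatic pipeline for your main branch; this is a sketch assuming main is the name of your main branch and deploy-production.sh is the release script from earlier.

pipelines:
  branches:
    main:
      - step:
          script:
            - npm install
            - npm test
            - ./deploy-production.sh # every green build on main goes straight to production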

Posted by Sten Pittet
