More and more teams are advancing from continuous integration into some flavor of continuous delivery, and the rise of provisioning and deploy tools makes it easier than ever to get started. While every team’s delivery pipeline will look a little different, there are some things that apply across the board. I sat down with Andrew Phillips of XebiaLabs, maker of Deployit and the Deployit plugin for Bamboo, to talk about these universal truths. As VP of Product Management, Andrew spends a lot of time thinking about how to automate deployments and bring teams into this exciting new phase of software development.
When starting to build a continuous delivery pipeline, one of the first steps is to define the complete package that will be deployed to each environment. Obviously, the compiled application is one piece of that package. What are some other components to consider?
I’m glad you asked that question! Frequently we speak to users who believe they are already doing continuous deployment because “we push our EARs and WARs to our appservers”. But when you ask about updates to configuration files or web content, changes to the server configuration, creation and modification of resources such as queues or datasources, handling of database changes, execution of post-deployment smoke tests and so on, the realization quickly sets in that there’s almost always more to your deployment package than just the application binaries.
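To make that concrete, here is a minimal sketch of what a “complete” deployment package might capture, beyond the binary itself. All of the class, field, and file names below are illustrative assumptions for the example, not any particular tool’s package format:

```python
# Illustrative sketch: a deployment package is more than the application binary.
# Names and structure here are assumptions, not a real tool's schema.
from dataclasses import dataclass, field

@dataclass
class DeploymentPackage:
    application: str                                  # e.g. the EAR/WAR file
    config_files: list = field(default_factory=list)  # properties, XML, etc.
    resources: list = field(default_factory=list)     # queues, datasources, ...
    db_scripts: list = field(default_factory=list)    # schema/data changes
    smoke_tests: list = field(default_factory=list)   # post-deployment checks

    def is_complete(self):
        """A package containing only the binary is usually not the whole story."""
        extras = (self.config_files + self.resources +
                  self.db_scripts + self.smoke_tests)
        return bool(extras)

pkg = DeploymentPackage(
    application="petclinic-1.2.ear",
    config_files=["jdbc.properties"],
    resources=["queue:orders", "datasource:petclinicDS"],
    db_scripts=["V1.2__add_orders_table.sql"],
    smoke_tests=["check_http_200.sh"],
)
print(pkg.is_complete())  # True: this package carries more than the binary
```

Treating the package as a single versioned unit like this means every environment receives the same set of artifacts, not just the same binary.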
A continuous delivery pipeline can pull in those peripheral assets from any network location, including from your source control management system. What do you see as the advantages of storing them in SCM, vs. on a file server?
To me, you’d want all the “peripheral assets” in your deployment packages in source control for pretty much the same reasons that apply to your source code, which you probably don’t keep on a file server either. I should add, though, that I’m talking about “primary” rather than “derived” assets here: you keep your source code and not the EARs and WARs in SCM because it should always be possible to regenerate the binaries from the source.
In the same way, if the configuration files or database scripts in your deployment packages are generated from other tools (e.g. Excel or a schema editor or something like that) I’d say they’re fine on a file server – it’s the generator that should be in SCM. If the configuration files and SQL scripts are “primary” resources that are manually constructed then you certainly want them to be under version control.
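The rule of thumb above can be expressed very simply. In this sketch the `derived` flag is an assumption for illustration – in practice “derived” means the asset can be regenerated from something else that is itself in SCM:

```python
# Primary (hand-written) assets belong in SCM; derived (regenerable) assets
# can live on a file server, as long as their generator is in SCM.
def belongs_in_scm(asset):
    # 'derived' is an illustrative flag for this sketch
    return not asset.get("derived", False)

assets = [
    {"name": "application source code", "derived": False},
    {"name": "petclinic.ear", "derived": True},            # built from source
    {"name": "hand-written rollback.sql", "derived": False},
    {"name": "schema.sql (from schema editor)", "derived": True},
]
print([a["name"] for a in assets if belongs_in_scm(a)])
```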
How about a bill of materials for provisioning the servers that comprise your environments (OS, Database, libraries, etc)? Where does that fit into the picture?
I’ll have to throw the Consultant’s Answer at that one: it depends 😉 Certainly in today’s dynamic IT environments where we all want to accelerate time to market it is essential to have an actionable description, manifest or “bill of materials”, as you put it, for your environments. You will need to be able to spin up and scale environments very rapidly, and automated environment provisioning is key to achieving this.
The big question is who owns and actions your environment manifest. If you’re looking to deliver virtual appliances the environment and application are essentially part of the same release package, so here the manifest would be part of your build and release process which may output one or multiple virtual images or something like a Vagrant file. In a DevOps scenario with teams dedicated to the management of all elements of a business service you may be looking at such an approach.
If you’re planning on adopting a more PaaS-like interface, where a release consists mainly of the sort of application components discussed previously, and where the target middleware environment is essentially a “black box” that takes care of scaling, monitoring, failover etc., the environment manifest basically describes the building blocks of your PaaS (or, in more traditional terms, middleware environment). In this case, ownership and delivery of the manifest would lie with the platform owners, who may be an external organisation running a cloud-based PaaS or your internal middleware or infrastructure team.
It’s worth pointing out the differences in lifecycle here: in the “virtual application” case, the environment is linked to the application; in the PaaS scenario they are independent of each other.
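Whoever owns it, an actionable environment manifest is ultimately just structured data that tooling can validate and act on. Here is a hedged sketch – the keys, values, and required set are assumptions for the example, not any provisioning tool’s real schema:

```python
# Illustrative environment "bill of materials" as plain data.
ENV_MANIFEST = {
    "os": "RHEL 6.2",
    "middleware": "JBoss AS 7.1",
    "database": "PostgreSQL 9.1",
    "libraries": ["openssl-1.0", "libaio"],
}

REQUIRED = {"os", "middleware", "database"}

def validate_manifest(manifest):
    """Return the required building blocks the manifest is missing."""
    return sorted(REQUIRED - manifest.keys())

print(validate_manifest(ENV_MANIFEST))        # []
print(validate_manifest({"os": "RHEL 6.2"}))  # ['database', 'middleware']
```

Because the manifest is data rather than a run-book, the same description can drive automated provisioning whether the environment ships inside a virtual appliance or is built by the platform team.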
Automating a deploy run-book is a big task. Any tips for how to approach it?
Well, I think my truly honest answer would be: “Don’t. Find someone who can own that problem for you.” Writing and especially maintaining deployment scripts, workflows or run-books for anything beyond trivial applications is a tedious and time-consuming activity that diverts your developers and middleware experts away from core business tasks.
If you’re considering PaaS, you’re automatically outsourcing this problem. What we see, though, is that the constraints current PaaS offerings place on the application mean that this option is feasible pretty much only for greenfield projects.
In a situation where you need to manage the deployment of existing applications, look for a deployment automation solution that offers as much out-of-the-box deployment logic and content as possible. Integrations with your build and continuous integration tooling will further reduce the amount of work left to you. [Editor’s note: check out the Deployit plugin for Bamboo!]
Of course, we know that in the real world there will always be a couple of applications or scenarios that out-of-the-box content does not cover. It’s obviously important that your chosen solution can easily be extended where necessary.
One “bonus point” is that the frequency with which you have to extend your solution serves as a good metric for the degree to which your deployments are non-standardized. Looking at the momentum behind PaaS, standardized packaging and deployment are obviously things to be aiming for.
Speaking of standardization, is it critical to have all your environments standardized on the same OS, caching scheme, load balancer, etc.? Or can deployment tools usually handle different configs between environments?
Diverse configurations tend to increase the complexity not just of deployments but of your infrastructure management as a whole, so I’d say standardization is certainly not a bad thing to be aiming for. What’s interesting is that, whereas the idea of having a production-like environment for development was pretty much pure fantasy until recently (and will continue to be so for plenty of environments), virtualization, IaaS and PaaS services are now making this possible for some.
Still, I think it will continue to be absolutely essential that your deployment tooling supports differences between environments. Not just for the many situations in which development and test are still quite different from production – in a cloud scenario you will most likely be auto-scaling your environments to meet demand, so they are unlikely to be the same.
What we’re seeing even now is that handling this kind of dynamic variability can add quite a substantial layer of complexity and/or maintenance to deployment and provisioning scripts and workflows, so it’s certainly something to bear in mind when you plan your continuous delivery implementations!
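One common way deployment tooling absorbs differences between environments is to keep placeholders in the packaged artifacts and resolve them from a per-environment dictionary at deploy time. The sketch below is a minimal illustration of that idea; the dictionary names, keys, and template are all assumptions for the example:

```python
# Minimal sketch of per-environment placeholder resolution:
# artifacts carry ${...} placeholders, each environment supplies the values.
import string

ENV_DICTIONARIES = {
    "test": {"db_host": "testdb01", "threads": "5"},
    "prod": {"db_host": "proddb-cluster", "threads": "50"},
}

TEMPLATE = "jdbc.url=jdbc:postgresql://${db_host}/app\npool.size=${threads}"

def render(template, environment):
    """Substitute placeholders from the given environment's dictionary."""
    return string.Template(template).substitute(ENV_DICTIONARIES[environment])

print(render(TEMPLATE, "prod"))
# jdbc.url=jdbc:postgresql://proddb-cluster/app
# pool.size=50
```

The same deployment package is then promoted unchanged through the pipeline; only the dictionaries differ per environment, which is exactly what keeps auto-scaled or otherwise divergent environments manageable.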
Do you recommend storing automated deploy scripts along with the source code and incorporating it into the CI scheme, or keeping them separate?
You certainly want to make sure your deployment scripts are kept in a central, version-controlled repository: so not on a deployer’s laptop or even in an unversionable job, build or workflow definition.
Whether you choose to store these scripts alongside your source code depends on who in your organisation is responsible for developing and maintaining them. This discussion is similar to the environment manifest question earlier: if developers are responsible for deployment, they should store the scripts alongside the code and include them in the deliverable – much as RPMs or MSIs bundle their installation logic with the payload.
If you are taking a more PaaS-like approach – and given many developers’ lack of deployment expertise this is what we are seeing more frequently – the deployment logic will be part of your platform source or configuration and owned and versioned with it.
Let’s talk tools. A CI server like Bamboo is pretty essential, and I’m guessing you’d recommend a deploy tool like Deployit as well. What do you see as the primary advantages of that?
Let me put it this way: the people we speak to are benefiting greatly from continuous integration but are realizing that to truly be able to deliver business value more quickly they need to tackle the next hurdle, which is rolling the continuously integrated packages out to your target environments and transitioning them through the deployment pipeline. Early adopters of these ideas had to come up with complex scripting triggered by continuous integration builds to achieve this. As we discussed earlier, this kind of home-grown automation is resource-intensive to develop and, moreover, requires continual maintenance (a type of continuous activity that you definitely do not want ;-)).
So I would certainly suggest that you hand off this recurring effort to a deployment automation solution that integrates with your continuous integration server, such as Bamboo. Of course, we would like you to include a market-leading platform like XebiaLabs’ Deployit amongst the alternatives you consider!
Any other tooling you recommend?
We’re seeing a lot of activity around on-demand environment provisioning, especially in operations but increasingly as a self-service function provided to development teams. We see a lot of use of Puppet and VMware’s offerings but increasingly also virtual machine creators such as Vagrant (which runs on top of VirtualBox). Especially for development and test environments, we also see usage of public clouds such as EC2 and Rackspace.
Obviously, in the light of these trends it makes sense to check if your deployment automation solutions can integrate with such tooling to, for example, automatically become aware of new environments to which applications can be deployed. You may even want to deploy the application tier automatically once a new environment has been provisioned.
In terms of improving the efficiency of ALM using your existing tooling we’d also suggest you investigate possible integrations between your deployment process and change management or ticketing systems such as JIRA. For instance, we see Deployit users automatically verifying change tickets as part of the release conditions for a deployment, or updating the status of a ticket with the result, logs etc. of the current deployment.
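A ticket-gated release of the kind described above boils down to a simple check before the deployment proceeds. This is a hedged sketch with stubbed ticket data; a real integration would query the issue tracker’s API, and the status names here are illustrative:

```python
# Illustrative sketch of gating a deployment on change tickets.
# Ticket data is stubbed; statuses and keys are assumptions for the example.
APPROVED_STATUSES = {"Approved", "Ready for Release"}

def tickets_blocking_deployment(tickets):
    """Return the tickets that are not yet in an approved status."""
    return [key for key, status in tickets.items()
            if status not in APPROVED_STATUSES]

tickets = {"OPS-101": "Approved", "OPS-102": "In Review"}
blockers = tickets_blocking_deployment(tickets)
print(blockers)  # ['OPS-102'] -- the deployment should wait for this ticket
```

The reverse flow works the same way in principle: once the deployment finishes, the pipeline can post the result and logs back to the relevant tickets.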
Phew, that’s a ton of useful information to digest. For a steady flow of continuous delivery and deploy automation goodness, follow @XebiaLabs on Twitter. Thanks so much for stopping by the Atlassian blog-iverse, Andrew. Come back soon!