This is the fourth in our five-part series from guest blogger J. Paul Reed—build engineer, automation enthusiast, and host of The Ship Show podcast.

Any discussion of the transition toward continuous delivery of your software would be incomplete if it neglected the cultural aspects of such a deep change. In many conversations about the larger concept of DevOps (of which CD is often a part), much time is spent bandying about the question of tools versus culture: how each is relevant, where both are applicable, what is influenced more effectively by which, and how to tell whether they are successful. As such, it’s time to take a look at some of the cultural issues that can present challenges and should, at the very least, be considered when helping your organization get on board the continuous delivery train.

Engineering support staff are people too

Perhaps the most fundamental change in cultural thinking for a software development organization trying to move to continuous delivery is the introspection, discussion, and hopeful dismantling of the idea that the sole creator of company value is the developer. Many may balk at the claim that this is even an issue, but as someone who’s spent over a decade as part of the “support staff” in various software development organizations, I suggest checking in with your release engineers, QA teams, and operations staff: you may be surprised by some of the stories they will tell about how their work and technical opinions are addressed–and in some cases, how they’ve personally been treated over their tenure. (The quintessential example, of course: “We need this release to be done tonight; we’re aware it’s Friday before a holiday weekend, but the developers got all of their work done, so it’s your job to test it and get it shipped, no matter the date.”)

To be fair, it’s easy to see how the industry came to this state of affairs: after all, it is developers who write the code and create the features that customers buy. Customers don’t buy “quality,” at least not directly. They won’t pay extra for a state-of-the-art “release pipeline.” And thus, from a business perspective, these concerns are often assumed to “exist by default,” as in the case of QA, or chronically under-invested in, as in the case of release engineering and operations. These roles have traditionally been considered a “cost of doing business” and as such, are accounted for (and thus thought of) as cost-centers, not money makers worthy of further investment.

But there’s the flaw in that line of reasoning: as more software moves online, capable developers and compelling features matter less. In fact, they don’t matter at all if the site or service is down. Similarly, as businesses clamor to move the bits out ever-faster to level-up their ability to compete, the fancy “next gen” architectures your developers have created are entirely irrelevant if the release pipeline is inconsistent or broken half the time. Or if the bits you’re shipping are riddled with bugs.

Unpacking this problem to address it is not that hard, but it does take effort on everyone’s part. Fundamentally, the shift must be made from siloed thinking about “which teams are responsible for what” toward a more holistic model. As a part of this, the organization should re-examine the value-creation chain as it flows from team to team, and eventually out the door. In so doing, you begin to realize that developers are not the sole creators of value, and that release/QA/ops staff are not detractors of value. Therefore, the work QA, operations, and release teams do is just as important as the features developers write. And if that is true, then it becomes obvious what a losing proposition it is to deemphasize QA’s concerns about quality issues, ignore the release team’s desire to prioritize automation over burning humans out with endless manual processes, or continually cannibalize operations teams’ budgets.

Because this notion that the “developer is king” and the only relevant opinions in engineering discussions must come from development is so pervasive and historically ingrained in the industry, addressing it is not an easy task. Change, especially cultural change, is difficult. But if the central artifact of your continuous delivery transformation is a working CD pipeline–where developers’ commits go in, then something happens to test them (QA), then magical packaging happens (release engineering), then they get pushed out to an environment (operations)–and these commits are to flow smoothly through that pipeline, then it becomes obvious that no single part of the pipeline can reasonably be considered less important than another. Organizations that build their CD pipelines and yet omit resolution of this fundamental (cultural!) issue will have a front-row seat to failure, as that pipeline begins to spring leaks and becomes clogged with half-baked commits, broken builds, and un-deployable artifacts. Implementing continuous delivery can be very revealing in that regard, which makes it a good teacher, but can also make it difficult for organizations to deal with.
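The stage-by-stage flow described above can be sketched as a single gated script, where a commit advances only if the previous team’s stage succeeded. This is purely an illustrative sketch: the commented stage commands (run_tests.sh, package.sh, deploy.sh) are hypothetical placeholders, not real tools, and a real pipeline would be driven by a CI/CD server rather than one shell function.

```shell
# Illustrative sketch only: each echo stands in for a real stage owned
# by a different team; the commented commands are hypothetical.
run_pipeline() {
  set -e  # if any stage fails, the commit stops here instead of leaking downstream

  echo "QA: testing commit $1"          # e.g. ./run_tests.sh "$1"
  echo "releng: packaging artifact"     # e.g. ./package.sh "$1" artifact.tar.gz
  echo "ops: deploying to staging"      # e.g. ./deploy.sh artifact.tar.gz staging
  echo "pipeline ok"
}

run_pipeline "abc123"  # hypothetical commit id
```

The point of the `set -e` gate is the cultural point in miniature: every stage is load-bearing, and a failure in QA, release engineering, or operations stops the whole flow just as surely as a compile error would.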

Think globally, act locally

This shift in thinking may all seem like a tall order (perhaps even a non-starter!), but some small steps can be taken that will pave the way for helping everyone adapt. One of the easiest is to emphasize the sharing of information between teams, and to put into place actual procedures that foster (and perhaps, at first, force) this sharing. Some readers may have worked in organizations where information was hoarded, to be used later as currency or even a weapon. I’ve certainly worked at those places! Directly confronting this organizational anti-pattern can be a good first step. The venerable Scrum pillars of sprint planning, the daily stand-up, and the retrospective are great examples of tools that can be used to tackle this. They are made even more effective when they include even a single member from the release engineering or operations teams; this small change can promote the “de-currency-ing” of information by making it available to all.

Likewise, the movement to publicize status and reporting using whiteboards, tracking tools, flat-screen TVs on office walls, or other “information radiators,” can help eliminate the ability to “weaponize” information. When infrastructure performance metrics, sprint task boards, continuous integration status, and application error reports are available for everyone to view and analyze, this not only fosters collaboration, it begins to chip away at silos–a hallmark of the DevOps movement. Taking action on the information can be crowdsourced by the entire team.

Another way to foster collaboration is to pair people from disparate teams on a single related task. The arguments for, and productivity gains measured by, the pair programming movement serve as proof that this method can be successful. When adapting the strategy to a cross-functional team dynamic, though, you don’t just want two engineers working on the same piece of code. Instead, it can make more sense for a developer to pair with the operations team writing infrastructure cookbooks for deploying the application, or for a QA engineer to pair with a developer on unit tests while the developer pairs with the QA engineer on integration test automation. Both arrangements increase each party’s understanding of the other’s world.

Yet another step in this direction: the practice of shared pager duty. The idea is often misunderstood as “making developers responsible for all operations tasks,” as if they’re now pulling double duty. In some environments it is implemented this way, and can (understandably) create frustration for all. A better approach is to treat it more like police officers riding along with firefighters or air traffic controllers getting a pilot’s license: spending a rotation in the other’s shoes can provide more insight into your own work and its role in the larger system than a thousand sprint retrospectives. In the best implementations, pager duty is shared among different roles: every pager duty shift includes a developer who is on the hook to aid in responding to production issues, along with the operations or release engineering staff. Fighting fires together is often the experience most successful at (often informally!) “breaking down silos.”

The intersection of continuous delivery and DevOps

John E. Vincent, an operations engineer at Dell who’s been heavily involved in the DevOps Days events and the DevOps community, was once asked “What is DevOps?” He answered succinctly (pardon the language): “DevOps means giving a shit about your job enough to not pass the buck. DevOps means giving a shit about your job enough to want to learn all the parts and not just your little world.”

An environment where it’s acceptable to pass the buck implies that there are classes of employees, the lower of which is there for cleaning up the messes and doing the work the higher class doesn’t want to do (but, they often claim, could just as well do if they really wanted to). If the benefits of DevOps are to be fully realized, we must change this notion that the engineers supporting developers aren’t as valuable. Similarly, if we are to foster an environment that promotes learning about “worlds” other than our own, we must actively create these opportunities for cross-team sharing.

This is a series on continuous delivery. So you may be wondering why, in an article on the topic of what culture fosters continuous delivery, DevOps has such a prominent role on the stage. The answer is simple: you can have a DevOps-focused culture without doing continuous delivery–and many organizations, in fact, do. But if you venture out to implement continuous delivery without a solid understanding of the values a DevOps culture espouses, your journey begins with much extra burden, and is likely to find itself doomed.


Editor’s note: There’s more to CD than great tooling. But great tooling doesn’t hurt! Check out Atlassian Bamboo–the first world-class continuous delivery tool on the market.
