Agile software development at Atlassian
h4. Short release cycles are great.
Shipping a new feature release of your software every few months – instead of every few years – is great for several reasons:
* Short release cycles help avoid feature creep because there is always a deadline looming, and these deadlines help to maintain a sense of urgency in developers and stakeholders alike;
* Short release cycles help you avoid the mad rush of things that “absolutely must be in _this_ release”: The _next_ release will never be too far away, so you’ll have better chances of convincing stakeholders that non-critical features can be delayed;
* Short release cycles force you to fix bugs regularly – every developer would obviously love to spend all their time working on cool new features instead of fixing bugs, but if you ship frequently you can’t procrastinate forever. Stuff just has to work;
* Since the release day is the best day to step back and see the big picture (and the competition), short release cycles help you to adapt your strategy to reality much faster;
* Last but not least, seeing progress frequently is great for team morale.
 
h4. But you can make many mistakes in those few months too.
Unfortunately, even within a short release cycle of just three months, there is plenty of time to introduce new _problems_ too. A typical team could probably design three totally unusable new features and introduce five unexpected performance bottlenecks along the way, on top of normal bugs. So three months can be a pretty long time as well, and usually you will only realise these problems around the launch date! Do you really want to wait this long?
Admittedly, there are some agile risk-mitigation strategies: the continuous integration server will catch quite a few bugs and help the developers fix them, automated performance tests let you discover many subtle problems really early, and the QA team will work endless hours to iron out usability issues too. But let’s face it: even the best QA and usability experts are just _simulating_ user behaviour. The real test comes when _actual_ users work with your software, in ways you maybe didn’t consider, and when your software interacts with other production software. The earlier you can get this data, the better.
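To make the performance-test idea concrete, here is a minimal sketch of such an automated regression check. The baseline figure, the tolerance, and the {{render_dashboard}} operation are all hypothetical placeholders – this is not Atlassian’s actual harness, just an illustration of the principle:
{code:python}
# Minimal sketch of an automated performance regression check.
# BASELINE_SECONDS, TOLERANCE and render_dashboard() are hypothetical
# placeholders, not a real performance harness.
import time

BASELINE_SECONDS = 0.10  # median runtime recorded for the previous release
TOLERANCE = 1.25         # fail if we are more than 25% slower than baseline

def render_dashboard():
    """Placeholder for the operation under test."""
    time.sleep(0.05)

def time_once(operation):
    start = time.perf_counter()
    operation()
    return time.perf_counter() - start

def median_runtime(operation, runs=11):
    # Use the median of several runs to smooth out scheduling noise.
    samples = sorted(time_once(operation) for _ in range(runs))
    return samples[len(samples) // 2]

if __name__ == "__main__":
    observed = median_runtime(render_dashboard)
    limit = BASELINE_SECONDS * TOLERANCE
    if observed > limit:
        raise SystemExit(f"Performance regression: {observed:.3f}s > limit {limit:.3f}s")
    print(f"OK: median runtime {observed:.3f}s is within tolerance")
{code}
Run on every build, a check like this turns a subtle slowdown into a loud, early failure instead of a launch-day surprise.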
h2. How we do it at Atlassian
h4. Enter Dogfooding
Dogfooding means nothing more than using your own products. It is a great principle because by using your own software, you share your customers’ pain. If your product is crap and you use it every day, you will have a strong incentive to make it better. You will make features _work_ well, not just look good. You will fix bugs _before_ you add even more features. No matter what software you are developing, make sure to use it yourself in one way or another.
According to Wikipedia, the term was coined in an actual dog food advertisement and later popularised by Microsoft. Even if this remains a myth, the term is very catchy. Eat what you sell! Use the software you ship! Stick your own finger into a table-saw to demo the shutdown mechanism! Well, you don’t have to go to this extreme to become a good dogfooder. Just deploy your software onto a system that you have reason to use yourself. You may need to get creative and bend some company rules, but few things can’t be done when the stakes are high. Are you developing yet another Twitter clone? Then shut down all Twitter access for employees from within the company. Are you coding a new wiki? Throw out Confluence! Are you testing dull online-banking software? Try moving actual money onto a staging installation to encourage employees to spend that money under realistic conditions. There will be pushback in any of these examples, but it is usually worth the effort to push it through.
h4. Mini-releases to dogfooding servers
So, rolling out your final product to yourself is great. But just shipping your final release to your own company doesn’t make your product better _in the short term_. If the final release is crap, then it won’t be just your customers who are mad at you, but your coworkers and boss too! On the one hand you do want their feedback from dogfooding, but on the other hand you don’t want to upset them by putting a dysfunctional final release in front of them. The solution is pretty simple: ship to your own company servers *all the time*. Not just once per quarter, but every two or three weeks!
At first, that idea might sound even worse, because now your boss might get angry at you every two weeks instead of every three months. But the good news is that you get a chance to improve the product right away, once he or she has stopped yelling at you. Bug fixing and improving unusable features bubble up in priority almost immediately after a botched dogfooding release. Additional features are only worked on once the existing features are good enough. Automatically, the next dogfooding deployment causes much less frustration. And the next will be even better, until you fall into a rhythm of always shipping decent dogfood releases. It just happens!
h4. Feedback from staff about mini-releases
On top of improving quality, frequent internal releases are great for gathering regular feedback. Inside Atlassian we upgrade our Confluence servers every two weeks, and we even create release notes for each release, including screenshots and explanations. Creating these release notes just takes an hour or two, but the feedback from our internal staff is amazing. Having seen the release notes and the actual mini-release, every staff member knows what changed and what to comment on (if they care). The development team and the product managers get free feedback from actual users, all the time. So building and deploying regularly is a very worthwhile investment for the development team.
Here is an excerpt of one recent mini-release (we call them milestones). You can view all our milestone release notes online too, because we even publish them to our development community:
!milestone_release.png!
And here is an excerpt of the *feedback* we got from various parts of the company. Bill works in Marketing, Joshua in Sales, David in IT, and Matt in Development. These are just 6 comments out of 26 we got, and that is just one single milestone release being discussed. We get this kind of feedback all the time, and get even more by chatting in the hallway or through IM or Jira issues!
!internal_customer_feedback.png!
While QA is really important in helping find bugs and unusable features early, only real users (and, as in this case, customer representatives) give you the whole picture. If we didn’t ship internal Confluence releases all the time, we would get this kind of information far too late to feed it back into the upcoming release.
h2. Trying this at home
We are extremely happy with our dogfooding process in the Confluence team, and we are rolling it out step by step to our other product teams. However, before you get too excited, have a look at some risks and common-sense prerequisites.
h4. Risks
As mentioned before, dogfooding does carry certain risks. Despite almost two years of fortnightly deployments, we still run into some common problems:
* we sometimes overcommit during a two-week iteration and ship features to our internal systems that should have remained on a developer’s machine for another two weeks, because they are just not useful yet;
* while we do have a great QA team, sometimes we just don’t listen closely enough and ship more bugs to our internal audience than we should;
* sometimes we find so many bugs after shipping internally that we can’t get them all ironed out before the next milestone, and some of our internal users get annoyed.
But then again, we do find problems while dogfooding that we would not find even with three times as many testers. Occasionally marketing or sales employees do get a bit mad at Confluence developers for introducing a new bug on our intranet server. But everyone agrees that bugs they stumble over at least won’t make it to customers. That’s reassuring for marketing and sales employees too.
And a completely non-technical risk: even though we try, we don’t use all the features internally in the same way our customers do. We must always ensure we don’t get lulled into a false sense of security. It is necessary that the software works for us, but it isn’t sufficient. Having paranoid, evil testers remains as important as ever.
h4. Criteria for dogfooding deployments
Since you _will_ run into some problems, it makes sense to define a couple of simple criteria for deploying to your internal dogfooding system. You have to strike a balance between stability and dogfooding value: if you focus too much on experimentation you will get lots of early feedback, but it will be mainly negative; if you focus too much on stability, you will avoid negative feedback about bugs, but the feedback will arrive so late in the release cycle that you may no longer be able to act on it. For Confluence we tend to use these criteria (a sketch of such a release gate follows the list):
* all automated tests (there are thousands!) on our Bamboo servers must pass for the target configuration (if we deploy to Tomcat, then the Tomcat tests must pass);
* the automated performance test must not show a significant slowdown;
* deployment to an exact replica staging system has to work, and manual smoke testing of the top use-cases must not show any problems on that staging system;
* QA must have checked the new features and found no showstoppers, and the number of known bugs must be acceptable. *We don’t aim for zero bugs in mini-releases though!* We want to know early whether we are on the right track with a feature. A few bugs more or less don’t matter as long as end users can see the general idea of a new feature. Insisting on zero bugs would take a long time, and then the crucial feedback might come way too late.
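For illustration, here is a minimal sketch of what such a release gate could look like if codified as a script. Every check below is a hypothetical stub – a real pipeline would query Bamboo, the performance harness, the staging deployment job, and the issue tracker instead of returning hard-coded values:
{code:python}
# Minimal sketch of a dogfooding release gate codifying the criteria above.
# All four checks are hypothetical stubs; wire them up to your own CI server,
# performance harness, staging deploy job, and bug tracker.

def ci_green(target: str) -> bool:
    return True  # stub: all automated tests pass for the target configuration

def perf_within_tolerance() -> bool:
    return True  # stub: automated performance tests show no significant slowdown

def staging_smoke_tests_ok() -> bool:
    return True  # stub: deploy to a staging replica worked; top use-cases look fine

def qa_signed_off() -> bool:
    return True  # stub: QA found no showstoppers; known-bug count is acceptable

def ready_to_dogfood(target: str = "tomcat") -> bool:
    checks = {
        "CI green for " + target: ci_green(target),
        "performance within tolerance": perf_within_tolerance(),
        "staging smoke tests": staging_smoke_tests_ok(),
        "QA sign-off": qa_signed_off(),
    }
    for name, ok in checks.items():
        print(("PASS" if ok else "FAIL") + ": " + name)
    return all(checks.values())

if __name__ == "__main__":
    raise SystemExit(0 if ready_to_dogfood() else 1)
{code}
Note that the last check deliberately accepts a non-zero bug count – the gate exists to keep genuinely broken builds off the dogfooding server, not to enforce perfection.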
 
h4. Public dogfooding
We also run unreleased versions of Confluence on our publicly visible servers. However, we don’t run our fortnightly releases there. Since the focus of internal dogfooding is to find bugs and usability problems early, we don’t demand a high level of polish from the milestone releases. But our external systems are also used by potential customers, and we don’t want them to think that interim bugs will end up in the final software release. So our external dogfooding is mainly done with patch releases (which don’t add any new functionality) and with beta-quality releases during the last month of a release cycle. We still take a certain risk that bugs can occur, but it is much more controlled. Getting feedback during the last month of a four-month release cycle is pretty late, but it still gives us enough time to react to some user feedback and to real-life problems you wouldn’t encounter on internal systems (e.g. spam, search engine crawlers, anonymous access, attacks from hackers, etc.).
h4. Be pragmatic and customise to your needs
Too often agile projects have gone wrong because people thought they could blindly follow some approach they read about on the net. Don’t be naive! It has taken us a lot of thinking and careful preparation to get to where we are. We wouldn’t be as successful, agile, and fast-paced without our fortnightly dogfooding feedback. But this doesn’t mean one size fits all, or that everyone should throw their entire process overboard overnight to follow our example. Take careful steps and precautions before you try this at home! Backups help. QA helps. Trying it out on non-production systems helps. Discussing it with your team is crucial for success. Clear communication is as important as ever. Don’t try this in the middle of a failing project either; rather, introduce the process after a major release has shipped. And so on. With some common sense though, continuous dogfooding will make you wonder how you ever managed to develop software without it!
