In 2007 I did a series of surprisingly popular blog posts on Atlassian’s Agile Process. As part of our renewed push to share our own agile story, I wanted to post an update to my earlier posts. Last time I enumerated the XP practices and mapped them to Atlassian’s. I won’t be repeating myself (D.R.Y.), so if you’re interested in the origins of Atlassian’s Agile Process, here are parts one, two, three and four.
Today in the Engineering division, each development team still has an agile process defined for and by that team; that is as true now as it was in 2007.
The things that have changed are mostly second-order effects.
The biggest difference in Atlassian’s agile process in the last two years is that it’s no longer just a process for engineering work. Every department is getting agility. Top books on Agile practices are handed around, stacked on common bookshelves, and regularly seen on the tops of busy desks, scruffy yellow sticky notes protruding.
And now I’m even beginning to think that people are reading them.
Another sign of mass agile process adoption was about a year ago when middle management (which was at that time brand-new at Atlassian) started doing stand-ups. After that it seemed as though a critical mass had been reached and the practice of stand-up meetings became ubiquitous. It seems there is a culture of minimalism in meetings, not just at Atlassian, but amongst all smart people who want to get things done.
It’s not just Stand-up Meetings either. User Stories, You Aren’t Gonna Need It, Do The Simplest Thing That Could Possibly Work and Don’t Repeat Yourself, as well as some of the foundational thinking such as kaizen, are all now part of the Atlassian lexicon.
I attribute this adoption to three things:
1. People have seen it work.
2. Leaders like it.
3. This stuff isn’t new-fangled, it fits in with much established wisdom.
Atlassian has grown quickly over its six-year history, both in staff and in the number of offices. Processes have had to scale, and across long distances and time zones. A good example of the growing pains is Jira’s automated test suite. It’s an awesome beast. There are a growing number of unit tests (JUnit), functional tests (JWebUnit) and web browser driver tests (Selenium), and there is now a QA department that reviews and improves all aspects of our quality assurance regime, which includes testing.
Jira supports multiple operating systems, JDKs, application servers, databases, and versions thereof. It also comes in three editions (Standard, Professional and Enterprise) with different feature sets. Each edition comes in several distributions: an EAR/WAR, a Standalone distribution that bundles Tomcat as its application server, and a Windows installer executable. As if that wasn’t enough, it works in multiple versions of different web browsers.
So that’s a lot of combinations to test! We really wish we could automatically test every combination! Currently our full functional test suite takes more than two hours to run. Multiply that by every combination and it would probably take several days of CPU time to complete. Of course Unit Tests run comparatively quickly and their green bar gives fairly good confidence, but if we want to be sure we haven’t broken something in the Jira user interface when we refactor the code, we have to wait at least two hours.
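To make the combinatorial explosion concrete, here is a back-of-the-envelope sketch of the arithmetic. The platform counts below are invented placeholders, not Jira’s actual support matrix; only the two-hour suite duration comes from the text above.

```python
# Illustrative estimate of the test matrix size. The platform counts
# are made-up placeholders, not Jira's real support matrix.
operating_systems = 3
jdks = 2
app_servers = 4
databases = 5
editions = 3

combinations = operating_systems * jdks * app_servers * databases * editions
hours_per_run = 2  # the full functional suite takes roughly two hours

total_hours = combinations * hours_per_run
print(f"{combinations} combinations x {hours_per_run}h = {total_hours}h "
      f"(~{total_hours / 24:.0f} days of CPU time)")
```

Even with these modest guesses the serial cost lands in days of CPU time, which is why running the full matrix on every commit is out of the question.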
Running many thousands of functional tests is what is known as an embarrassingly parallel workload and, accordingly, I am embarrassed to say we have not yet completed the work to properly “parallelify” it.
But now that Bamboo supports agents running in the EC2 cloud, we can crank up the compute resources available for continuous integration and we should be able to bring our functional test latency down under 20 mins without too much trouble. Fingers crossed.
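The core idea behind parallelising an embarrassingly parallel suite is simply to split the tests into balanced buckets, one per agent, so the wall-clock time approaches the longest bucket rather than the serial total. This is a minimal sketch of that idea using a greedy longest-first partition; the test names and durations are hypothetical, and Bamboo’s real agent scheduling works differently.

```python
# A minimal sketch of splitting an embarrassingly parallel test suite
# across build agents. Tests and durations are hypothetical examples.
def partition(tests, num_agents):
    """Greedy longest-first partition: assign each test to the
    currently least-loaded agent to balance wall-clock time."""
    buckets = [[] for _ in range(num_agents)]
    loads = [0.0] * num_agents
    for name, minutes in sorted(tests, key=lambda t: -t[1]):
        i = loads.index(min(loads))  # pick the least-loaded agent
        buckets[i].append(name)
        loads[i] += minutes
    return buckets, max(loads)

tests = [("SearchTest", 30), ("WorkflowTest", 25), ("IssueTest", 20),
         ("AdminTest", 20), ("FilterTest", 15), ("LoginTest", 10)]
buckets, wall_clock = partition(tests, num_agents=3)
print(f"serial: {sum(d for _, d in tests)} min, "
      f"parallel on 3 agents: {wall_clock:.0f} min")
```

In this toy example three agents cut 120 minutes of serial testing to 40 minutes of wall-clock time; with enough EC2-backed agents the same principle should get our two-hour suite under the 20-minute target.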
“Pair programming is the ultimate code review” goes the “agilist” line. I can see this. I can see how it works, and I have felt code-review benefits from pairing. I’ll be blogging about pair programming specifically in the coming weeks, so I won’t exhaust this side of the argument here. However, I have to say that code review is the single most effective quality assurance measure on top of a traditional agile process that I know of.
There are many advantages of code review over pairing as well. Of course, these are not reasons to abandon pairing, because pairing provides much more than code review. If you are using a web-based code review tool like, ahem, oh look here, it seems we have a product I can pimp here… Crucible… the main advantages are the most obvious ones. You can scale up the number of reviewers and they can act from different locations and time zones. You also get a permanent record.
The Jira team is probably an average adopter of code review as a practice; it remains a discretionary thing. This suits us because it lets us always use the Agile-advised dosage: Just Enough.
Virtual Index Cards with GreenHopper
While we really like the simplicity of paper cards for tracking tasks in the small, the electronic version can be better in some respects, and not only for distributed teams. Of course we use Jira for tracking large numbers of bugs and feature requests, but it was not until the GreenHopper plug-in that we felt Jira was fully capable of giving us the immediate feedback and visibility that cards can give a co-located team. It also makes burndown charts, which are increasingly found on big LCD screens around the offices.
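For anyone unfamiliar with burndown charts, the arithmetic behind them is pleasingly simple: plot the work remaining after each day of a sprint. A minimal sketch, with invented figures; GreenHopper derives the real series from Jira issue data.

```python
# A minimal sketch of the arithmetic behind a burndown chart: the work
# remaining after each day of a sprint. Figures are invented examples.
def burndown(total_points, completed_per_day):
    """Return the remaining-work series, starting from the sprint total."""
    remaining = total_points
    series = [remaining]
    for done in completed_per_day:
        remaining -= done
        series.append(remaining)
    return series

# A 40-point sprint worked down over seven days (note the flat day 3).
print(burndown(40, [5, 8, 0, 6, 7, 9, 5]))
```

The value of the chart is that a team glancing at the screen can see at once whether the slope will reach zero by the end of the sprint.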
So that’s it. We like our agile process. It suits Atlassian.