
As you may know, we are getting a little more serious about sharing our agile development stories with the world. Well, a little while back, at our dev team leads’ weekly Agile Book Club meeting, we decided to assess ourselves against Dean Leffingwell’s Agile Enterprise Acid Test. The test consists of 9 questions, grouped into 3 elements – see the full definitions below.
The purpose of the discussion was not to compare teams; it was a subjective self-assessment of the different products’ development processes, to help identify areas for improvement within teams. We also discussed some key characteristics of each team, like team size, codebase age, and something we politely refer to as “Founder involvement”, to see if there was any correlation with Acid Test results (there wasn’t, so I’ve left these out).

The results

[Chart: Atlassian Agile Enterprise Acid Test self-assessment results, December 2008]

The discussion

  • The Confluence team has monthly process meetings which provide a means of “empowerment” and allow them to “reflect & adapt”. Issues from the retrospectives are voted on by the team, and the highest-ranked issues are added to the project backlog (a sketch of this voting flow follows the list). Per (team lead) “smoke tests” the user stories from Adnan (product manager) for completeness and clarity before passing them on to the developers.
  • The JIRA team conducts laptop demos for founders at monthly reviews. Feedback is not as continuous as on other teams, since milestones are less frequent and release cycles are longer. Process champions and tech leads are in place.
  • The Studio team has not been tested on “accountable for results”, as they have not stuffed up a release (yet). The team hasn’t asked for many process changes after “reflecting” (apart from requesting the use of real hours for estimating, rather than story points).
  • Generally the team leads felt frustrated by their limited ability to remove impediments, particularly IT infrastructure issues. Luckily we now have an unsuspecting new IT Manager on board to help out with these.
  • The FishEye/Crucible and Clover teams have had slippages. They are starting to reflect and adapt.
  • It’s difficult to update our internal Crowd instance for dogfooding without disrupting IT. Need to work on a better system for this.
  • The Poland team holds structured retrospectives and follows up. They work well around the remote location issues.
  • UI Design – need to feed this into our agile processes more effectively, and in parallel with the Product Managers’ conceptual phase for the next release.
  • Need to include the San Francisco team next time (they were busy actually working when we did this test).
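
For the curious, the Confluence retrospective flow mentioned above is simple enough to sketch in code. Below is a minimal, hypothetical sketch in Java, assuming issues and votes are collected in memory; the class names, sample issues, and the cut-off of two are invented for illustration and don’t come from any Atlassian tool.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch of the retrospective triage described above:
    // the team votes on issues, and the highest-ranked ones are promoted
    // to the project backlog.
    public class RetroTriage {

        static class RetroIssue {
            final String summary;
            final int votes;

            RetroIssue(String summary, int votes) {
                this.summary = summary;
                this.votes = votes;
            }
        }

        // Return the top-voted issues that should go onto the backlog.
        static List<RetroIssue> topIssues(List<RetroIssue> issues, int cutOff) {
            List<RetroIssue> ranked = new ArrayList<RetroIssue>(issues);
            Collections.sort(ranked, new Comparator<RetroIssue>() {
                public int compare(RetroIssue a, RetroIssue b) {
                    return b.votes - a.votes; // highest votes first
                }
            });
            return ranked.subList(0, Math.min(cutOff, ranked.size()));
        }

        public static void main(String[] args) {
            List<RetroIssue> retro = new ArrayList<RetroIssue>();
            retro.add(new RetroIssue("Flaky functional tests", 5));
            retro.add(new RetroIssue("Stand-ups run too long", 2));
            retro.add(new RetroIssue("Staging environment drift", 4));
            // The two highest-voted issues would go onto the backlog.
            for (RetroIssue issue : topIssues(retro, 2)) {
                System.out.println(issue.summary + " (" + issue.votes + " votes)");
            }
        }
    }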

Dean’s definitions

Variable scope. Fixed quality.
1. Can the teams vary scope? – Does the team have the authority to vary scope even as the release deadline draws closer?
2. Is quality acceptable and fixed? – You can’t go fast building on crappy code. Agile accomplishes little without the requisite code quality, which must be built into the process through TDD, continuous integration, test automation, coding standards and the like [see the test sketch after these definitions]. If you see your teams iterating or sprinting with a primary objective of working down code-level defects, then you are not truly agile.
Incremental Value Delivery
3. Is software delivered incrementally? – If your teams are sprinting but there is no feedback until the final delivery (one form of “waterscrumming”), then you are not achieving agility, as there is no meaningful feedback to drive the solution to an acceptable outcome.
4. Is it valued and evaluated? – Demos are great, but you need real value delivery to assure fitness for intended use and early and continuous ROI. If the incremental code is not actually being used, you are not very likely to get the results you seek.
5. Is feedback continuous? – In addition to customer feedback, product owners, product managers and other stakeholders have a responsibility to continually assess the current result. This is achieved through story-level acceptance and iteration-level demos.
Empowerment and Accountability
6. Are the teams empowered? – Are the teams truly empowered and able to modify and improve their own software processes? Do they self organize and self-manage? Are resources routinely moved to the most critical bottlenecks?
7. Are they accountable for results? – Empowerment and accountability are two sides of the same agile coin. Are the teams delivering reliable, quality code in incremental fashion? Do they commit to iteration and release objectives, subject to responsible scope management?
8. Do they regularly reflect and adapt? – Do the teams adhere to Agile Manifesto Principle #12? – At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
9. Does management eliminate impediments? – Is management also engaged in continuous improvement? Are impediments routinely surfaced, addressed, and eliminated?
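
Since definition 2 hinges on automated tests, here is a minimal sketch of the kind of unit test it implies, written with JUnit 4. The PriorityScorer class and its weighting are hypothetical stand-ins for real production code; the point is that tests like this run on every build, so quality stays fixed while scope varies.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // A minimal sketch of the automated testing behind "fixed quality":
    // tests are written before or alongside the production code (TDD)
    // and run by the continuous integration server on every commit.
    public class PriorityScorerTest {

        // Hypothetical production class under test.
        static class PriorityScorer {
            int score(int votes, int severity) {
                return votes * 2 + severity;
            }
        }

        @Test
        public void higherSeverityRaisesTheScore() {
            PriorityScorer scorer = new PriorityScorer();
            assertEquals(7, scorer.score(2, 3)); // 2 votes, severity 3
            assertEquals(9, scorer.score(2, 5)); // same votes, worse severity
        }
    }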

The follow-up

We came up with a few actions, including more dogfooding of pre-release milestones, and following up more on the issues identified in team retrospectives. In a couple of months we’ll be conducting the Agile Enterprise Acid Test again, to see how we’re progressing. This time we’ll survey developers, product managers, and the Founders (in addition to the team leads), just to check we’re not making stuff up :-).
