Agile software development at Atlassian

Test teams don’t usually choose their organisation’s software development methodology. Hopefully the irony of being in such a situation is not lost on any teams that call themselves “Quality Assurance”. Still, this doesn’t prevent test managers and test teams from adopting practices outside of the decreed methodology to their advantage. In this post I’ll explain how one Agile practice, continuous integration, can benefit test teams working in non-Agile environments, and how they can implement it.

If you’re testing in a large organisation that follows a non-Agile development methodology, co-ordinating large manual testing efforts, especially somewhere that making software is not the core business – I know your pain. I spent many years working in similar environments. The requirements are never accurate or comprehensive (unfortunately they’re not concise either, thanks to the 20+ page template full of meaningless boilerplate), the design keeps changing (see, it just changed again), of course there is no contingency in the project plan to accommodate these changes (other than you working weekends), so the specs no longer reflect reality (don’t worry, you’re working off the wrong version anyway) and the extra preparation time so graciously provided by development when they missed their deadline (naturally) evaporated in a scramble to accommodate a raft of late changes in scope (social networking…in a data warehouse?) as well as fend off attempts by the development team to sneak poor code into test (“It works fine on our machines!”). At least the stakeholders are staying true to their word by refusing to slip the release date.

Thankfully you’re able to test more than you planned, in less time than was agreed, without the information and resources you were led to expect, because the knowledge that the advertising is paid for, the on-line viral campaigns have started, the marketing collateral is printed, the training has been delivered and the CEO is en route to the press conference keeps you from sleeping at night.

Some testers like to be the hero, working long hours to save the day. That’s misguided, because real heroes don’t take a hiding and then keep coming back for more. They win the fight and ride off with the prize. So if you find yourself in situations anything like the one above, understand that your biggest fight is not with buggy code, poor requirements and bad specs. It isn’t with project managers, developers, business analysts, architects or customers either. It’s with the development process that ensures test teams will always pay for the sins of the previous phases AND makes the customer suffer. How? Because longer days and working weekends can only make up some of the debt. Adding extra staff can cost as much as it gains. Eventually corners get cut. You know what I’m talking about. Tests important enough to think up, write down and schedule get descoped, major bugs get re-classified as minor, acceptance/exit criteria become meaningless and bugs get shipped that shouldn’t have. Who’s the hero now?

Don’t do the best job in a bad situation; first improve the situation.

If you manage or perform functional testing there’s a good chance your test plan describes the situation like this:

[Figure: the planned test schedule – new functionality testing and bug fixing, with regression testing performed last]

Regression testing is usually performed last because:

  1. Bugs are more likely to be found in new untested code, rather than unchanged and previously tested code, so testing of new functionality should be prioritized first.
  2. Bug fixes may themselves cause regressions, so to avoid repeating regression tests, regression testing is scheduled after bug fixes are complete.

This sounds like a common-sense approach, and it would be if it reflected reality. Alas, the real situation looks more like this:

[Figure: the test schedule in reality – phased testing, late changes and repeated rounds of regression testing]

Invariably development finishes late overall, but usually some features are completed on time. This often tempts project managers and test managers to split testing into phases accordingly. Testers switching between phases will cost productivity later, but hopefully not enough to outweigh the benefit of starting some testing as early as possible. Once phase 1 is complete, test teams might be tempted to start regression testing those functional areas not impacted by phase 2. Often this has to be abandoned when late changes arrive, because testers are needed to test the new changes and/or the changes invalidate much of the regression testing done so far. These late changes may also include changes to new functionality that’s already been tested, so this needs to be retested as well before regression testing of the entire release can begin. Even then, critical regressions are still found once regression testing finally gets under way, requiring more fixes and another round of regression testing. Rinse and repeat if you have more phases and more rounds of late changes.

In the end testing takes much longer than planned, which may force short cuts to be taken (descoped tests, bug severities lowered, etc.), test managers get worn out by the constant reprioritizing and testers get fatigued from the constant task switching. But it doesn’t have to be this way. Here’s how…

Step 1: Automate your regression tests

For four reasons:

  1. It increases the amount of time testers have to test new functionality. (That’s manual exploratory testing, which requires intelligence, creativity and adaptability, which in turn requires a human being.)
  2. It increases confidence in the regression test results. (Let’s be honest, humans are poor regression testers. Checking the same thing over and over is tedious. Unlike computers, we get bored, we get distracted, and we let regressions slip past.)
  3. Once automated, your regression tests can be run any time you want. (No more worries about resourcing regression testing: just press “start”.)
  4. It increases testers’ job satisfaction: less context switching, no repetitive testing, more time to find bugs and greater use of mental abilities.

But aren’t automated tests prone to breaking and costly to maintain? They are if they’re poorly designed and testers don’t have the skills to fix them. Record-and-playback test automation tools are easy to use, but that’s because the tests they create have no design. They’re not designed to be efficient, flexible or robust, so it should be no surprise when such tests break, and break often. It is possible to record a test with these tools and build the design in afterwards, but that’s a bit like building a house first and renovating it to fit your design later. Sure, you can knock out a wall here and there, but eventually the structural supports and plumbing are going to get in the way and you’ll be forced to settle for an unsatisfactory mess. Automated tests that are written with good design in mind, however, don’t break unnecessarily, because they’re designed not to.

Writing tests this way requires technical skill, which your team may not have. In that case you’ll need automation specialists to get started, but you don’t want to be reliant upon a few people to automate tests in the long term. Your goal should be to develop the entire test team so that every member has the skill to write well-designed automated tests. That means more automated tests can be written in less time, and it allows the cost of maintaining and fixing automated tests to be shared across the team.
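One common example of “good design” for UI tests is to put each screen’s selectors and user flows behind a single page object, so a change to the UI means one fix rather than a hunt through every recorded script. Here’s a minimal sketch in Python; the driver, class and selector names are all invented for illustration (a real suite would drive a browser through a tool like Selenium instead of the stub below):

```python
# Sketch of the page-object pattern: selectors and flows live in one
# place, so a UI change means one fix, not dozens of broken scripts.

class FakeDriver:
    """Stand-in for a real browser driver; records what it was asked to do."""
    def __init__(self):
        self.filled = {}
        self.clicked = None

    def fill(self, selector, value):
        self.filled[selector] = value

    def click(self, selector):
        self.clicked = selector


class LoginPage:
    """Knows the login screen's selectors and flows, in exactly one place."""
    USERNAME = "#os_username"   # if the UI changes, only these
    PASSWORD = "#os_password"   # three lines need updating,
    SUBMIT = "#login-button"    # not every test that logs in

    def __init__(self, driver):
        self.driver = driver

    def log_in_as(self, username, password):
        self.driver.fill(self.USERNAME, username)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


# Every test reuses the same flow instead of repeating raw clicks:
driver = FakeDriver()
LoginPage(driver).log_in_as("test-manager", "s3cret")
```

A recorded script bakes those selectors into every test; the page object keeps them in one class, which is exactly the kind of structure that makes maintenance cheap enough to share across the team.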

There’s another reason for developing these skills across the entire team. Developer productivity is rapidly accelerating: developers have smarter development environments, better tools, more and more third-party libraries to use and more efficient development processes. They can produce more in less time. Without the ability to automate well, testers won’t be able to keep up.

There’s no shortage of automated test tool providers, including many open-source options (www.opensourcetesting.org provides a comprehensive list). Different tools suit different contexts, so what we use at Atlassian is not necessarily right for you, but if you test Java-based web applications then you should check out the free open-source tools that we rely on: JUnit, JWebUnit, HtmlUnit, HttpUnit and Selenium.

Step 2: Make your automated regression tests fast

How long does it take to run your regression tests? I once worked at a company where manual regression testing took ten testers three weeks to complete. Obviously any regressions we found incurred huge costs in delays and rework. At Atlassian our products aren’t as large or complex as that system, but neither do our regression tests take three weeks to run. Depending on the product, they take between half an hour and three hours to complete. That’s up to 2,000 tests (excluding unit tests, of course). Imagine finding every show-stopping regression bug two hours after being given a release, not two hours before it’s due to be released.

With fast automated regression tests, the previous situation starts looking less time-consuming and complicated:

[Figure: the test schedule with fast automated regression tests]

There are many ways to make your automated regression tests fast, too many and too context-dependent to cover here, but always keep the following in mind:

  1. Don’t test via the GUI unless you are checking the GUI. Rendering screens and web pages takes a lot of time; testing via an API or a headless browser will be dramatically faster.
  2. Divide automated regression tests into sets that can be run in parallel e.g. we have a different set of tests for each web browser allowing us to test them all simultaneously, not sequentially.
  3. Don’t run tests that don’t test what’s changed. Clover’s test optimization feature tracks the code coverage of each test, allowing us to automatically run only those tests which executed code that’s changed since the last regression test.
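Points 2 and 3 above fit together naturally: record which code each test set exercises, select only the sets that touched changed code, and run what’s left in parallel. The Python sketch below uses invented suite names and file lists; in practice a coverage tool like Clover supplies the coverage map and source control supplies the changed files.

```python
from concurrent.futures import ThreadPoolExecutor

# Which source files each test set exercised on its last run
# (a coverage tool tracks this for you; these entries are invented).
coverage = {
    "firefox-suite": {"Login.java", "Search.java"},
    "ie-suite": {"Login.java", "Search.java"},
    "api-suite": {"Billing.java"},
}

# Files changed since the last regression run (from source control).
changed = {"Search.java"}

def run_suite(name):
    # Stand-in for launching the real tests for this set.
    return name, "PASS"

# Skip sets whose covered code didn't change...
selected = [name for name, files in coverage.items() if files & changed]

# ...and run the remaining sets simultaneously, not sequentially.
with ThreadPoolExecutor() as pool:
    results = dict(pool.map(run_suite, selected))
```

Here the billing tests never run, because nothing they cover has changed, and the two browser suites run at the same time rather than one after the other.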

Step 3: Use a Continuous Integration server to run your automated tests

So now that you have your automated regression tests and they run fast, why wait until the end of the test phase to run them? Now you can run them any time. All the time. Fully regression test every release, patch and fix immediately. Ah, but who’s going to start the tests, collect the results, prepare the reports and publish them? If regression tests are being run every day, that’s a lot of administrative work. Unless of course you automate that as well, which you can with continuous integration (CI).

CI is the Agile development practice of continually integrating code changes into the code base, building the software from that code base and testing it. It benefits developers by giving them fast feedback on the changes they’ve made. CI can be performed manually, but it’s more efficient when automated, and the tools that do this are known as continuous integration servers. Compiling code and building software might not be of interest to test teams, but the ability of CI servers to automate the execution of automated tests certainly is. There are many commercial and open source CI servers available; at Atlassian we make and sell our own, Bamboo.

Bamboo is a test manager’s ultimate assistant. It won’t be late for work, get sick or go on holiday, so your automated regression tests always get run when they’re supposed to, and it’ll take care of collecting and reporting your regression test results and status. With all the administrative workload taken care of, there’s no excuse not to run your regression tests all the time, so now our situation looks like this:

[Figure: the test schedule with a CI server running the automated regression tests continuously]

To learn more about Bamboo you can download a free, fully functional 30-day trial version of this popular continuous integration tool. Alternatively take a look at the public Bamboo instance we provide for the Atlassian developer network.
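Conceptually, the loop a CI server automates for a test team is small: notice a new build, run the regression suite against it, and record the results. Here’s a hypothetical sketch of that loop (every name below is invented; a real server such as Bamboo adds scheduling, build history, dashboards and notifications on top):

```python
def run_regression(build):
    # Stand-in for kicking off the automated regression suite.
    return {"build": build, "failures": 0}

def ci_cycle(tested, available, reports):
    """One pass of the loop a CI server runs for you, all day, every day."""
    new_builds = [b for b in available if b not in tested]
    if not new_builds:
        return  # nothing new; a real server just waits for the next change
    for build in new_builds:
        report = run_regression(build)   # "start the tests"
        reports.append(report)           # "collect the results"
        tested.add(build)                # never re-test the same build

tested, reports = set(), []
ci_cycle(tested, ["build-1042"], reports)
ci_cycle(tested, ["build-1042"], reports)  # second pass: nothing re-run
```

The point of the sketch is how little of this needs a human: once the suite is automated and fast, “who starts the tests and files the report?” stops being a staffing question.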

Step 4: Integrate your automated tests into the build process

Once you have a continuous integration server running your automated regression tests, you also have the framework necessary for developers to continually run their builds. There’s a strong case for convincing developers to do this: the infrastructure is already in place, and doing so will deliver these benefits:

  • Regression tests are automatically run during the development phase, not just in test. “All regression tests pass” is now an entry criterion to the testing phase, not an exit criterion.
  • Smoke tests need only cover new functionality, given that existing functionality has already been tested thoroughly.
  • There’s no need to raise bug reports when automated tests fail, because the CI server notifies developers of failures instantly.
  • The true status of development is visible to everyone (not just developers) via the CI server dashboard, all of the time. This means development can’t offload poor code into testing to meet their deadlines when everyone can see the tests are still failing.
  • Any regression tests that need to be updated due to changes in the software will be discovered during the development phase, giving testers plenty of time to amend them before the testing phase starts.
  • You now have a completely integrated automated build and testing framework that can be extended to automate other tasks, such as deploying releases and patches to test environments.

This does require that your development team uses a source control management system and has an automated build process in place. If they are unwilling or unable to put these in place, or to integrate their build process with your continuous integration server, hire a build engineer for a few days or weeks to set it up for you. You don’t even need access to the source code, as long as you can obtain the latest packages of your software.
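The mechanics behind the first benefit above are simple to sketch: the build runs the regression suite and reports a non-zero exit code if anything regressed, so a broken build is rejected before it ever reaches the test team. The function names here are illustrative, not part of any real build tool:

```python
def run_regression_suite():
    # Stand-in for running the full automated regression suite as part
    # of the build; returns the number of failing tests.
    return 0

def build_gate():
    """Turn "all regression tests pass" into an entry criterion."""
    failures = run_regression_suite()
    if failures:
        # A non-zero exit code fails the CI build, so the CI server
        # notifies developers directly; no bug report needed.
        return 1
    return 0  # build may be promoted to the testing phase

exit_code = build_gate()
```

In a real setup the build script would exit with this code, and the CI server would do the rest: flag the build red, notify the developers and keep the failing release out of test.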

Step 5: Reap the rewards

Put this all together, and now not only does your test plan look like this, so does reality:

[Figure: the test plan and reality, finally matching]

You have more time to test new functionality and spend less time reporting failures, and because every change is fully regression tested during both development and testing, the days of finding critical bugs late in the testing phase and blowing out the schedule are over. You’re testing more, in the same amount of time, but with greater visibility and earlier feedback. You might not be holding daily stand-ups, using task cards or working in iterations; your development methodology might not be Agile, but now your testing definitely is.


Andrew Prentice discusses adapting to an agile environment

