
Automated testing is a great way of maintaining quality on a software project by providing quick feedback to developers when things break. The problem is that teams often find themselves with long-running test suites that become a time killer in the iterative development process. If the tests take too long to run, developers are less likely to run the full suite locally before a commit. Instead they commit their changes untested and rely on the Continuous Integration (CI) server to test them. In many teams this means the CI server quickly gets overloaded with changes to test, and developers wait hours to find out they broke the build.
With the release of Clover 2.4 we’ve added a new test optimization feature that can dramatically reduce build times by selectively running only the tests relevant to a particular change. This makes it practical for developers to run the test suite locally prior to a commit. It also means CI server throughput is greatly improved, both of which mean faster feedback to development teams.
On Java projects, it's more likely that the tests are running than the code is compiling (thanks, xkcd)

When too much testing is… probably too much

In many teams it can take far too long for the impact of a code change to be known by the submitting developer. The developer might wait many minutes or even hours before the Continuous Integration server gets to building and testing their change. If instead they’ve run the suite locally, their machine is tied up running tests, leaving the developer expensively idle.
Build breakages can often derail a whole development team, with all work grinding to a halt while the spotlight shines on the developer who introduced the problem as they attempt to fix it.
If a particular change is going to cause one or more tests to fail, the team needs to know about it as fast as possible, and preferably before it is committed.

Two approaches to smarter testing

So much testing effort is wasted because many tests are needlessly run; they do not exercise the code change that prompted the test run. So the first step to improving test times is to run only the tests applicable to the change. In practice this turns out to be a huge win, with test-run times dramatically reduced.
The second approach, used in conjunction with the first or independently, is to prioritize the tests that are run, so as to flush out any failures as quickly as possible. Tests can be prioritized in several ways: by each test's failure history, by running time, or by coverage results.

Clover’s new test optimization

As a code coverage tool, Clover measures per-test code coverage – that is, it measures which tests hit what code. Armed with this information, Clover can determine exactly which tests are applicable to a given source file. Clover uses this information combined with information about which source files have been modified to build a subset of tests applicable to a set of changed source files. This set is then passed to the test runner, along with any tests that failed in the previous build, and any tests that were added since the last build.
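Conceptually, the selection step works like this. The sketch below is purely illustrative – the class, method names, and data shapes are invented for this post and are not Clover's actual API – but it captures the idea of intersecting per-test coverage with the set of changed files:

```java
import java.util.*;

// Illustrative sketch (not Clover's actual API): given a per-test coverage
// map recording which source files each test touched, select the subset of
// tests relevant to a set of changed files.
public class TestSelector {
    // test name -> source files that test hit on the last recorded run
    private final Map<String, Set<String>> coveragePerTest;

    public TestSelector(Map<String, Set<String>> coveragePerTest) {
        this.coveragePerTest = coveragePerTest;
    }

    // Selects: tests whose recorded coverage intersects the changed files,
    // tests with no coverage record (i.e. new since the last snapshot),
    // and tests that failed on the previous run.
    public Set<String> select(Set<String> changedFiles, Collection<String> allTests,
                              Set<String> failedLastRun) {
        Set<String> selected = new LinkedHashSet<>(failedLastRun);
        for (String test : allTests) {
            Set<String> covered = coveragePerTest.get(test);
            if (covered == null || !Collections.disjoint(covered, changedFiles)) {
                selected.add(test);
            }
        }
        return selected;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> cov = new HashMap<>();
        cov.put("FooTest", Set.of("Foo.java"));
        cov.put("BarTest", Set.of("Bar.java"));
        TestSelector s = new TestSelector(cov);
        Set<String> picked = s.select(Set.of("Foo.java"),
                List.of("FooTest", "BarTest", "NewTest"), Set.of());
        System.out.println(picked); // prints [FooTest, NewTest]
    }
}
```

Here only FooTest (it covers the changed file) and NewTest (no coverage record yet) are selected; BarTest is safely skipped.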
The set of tests composed by Clover can also be ordered using a number of strategies:

  • Failfast – Clover runs the tests in order of likelihood of failure, so any failure will happen as fast as possible.
  • Random – Running tests in random order is a good way to flush out inter-test dependencies.
  • Normal – no reordering is performed. Tests are run in the order they were given to the test runner.

Note that Clover will always run tests that are either new to the build or failed on the last run.
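The three strategies above can be sketched roughly as follows. Again, this is an invented illustration rather than Clover's implementation – in particular, the failure-history map is an assumption about how "likelihood of failure" might be tracked:

```java
import java.util.*;

// Illustrative sketch (not Clover's implementation) of the three ordering
// strategies: failfast, random, and normal.
public class TestOrdering {
    public static List<String> order(List<String> tests, String strategy,
                                     Map<String, Integer> recentFailures, long seed) {
        List<String> ordered = new ArrayList<>(tests);
        switch (strategy) {
            case "failfast":
                // Tests with the most recent failures first, so a broken
                // build fails as fast as possible.
                ordered.sort(Comparator.comparingInt(
                        (String t) -> recentFailures.getOrDefault(t, 0)).reversed());
                break;
            case "random":
                // Shuffling flushes out hidden inter-test dependencies.
                Collections.shuffle(ordered, new Random(seed));
                break;
            default:
                // "normal" -- keep the order the runner was given
                break;
        }
        return ordered;
    }

    public static void main(String[] args) {
        List<String> tests = List.of("ATest", "BTest", "CTest");
        Map<String, Integer> fails = Map.of("CTest", 2, "BTest", 1);
        System.out.println(order(tests, "failfast", fails, 0)); // prints [CTest, BTest, ATest]
    }
}
```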

Optimization safeguards

Clover’s test optimization uses per-test code coverage to determine a minimal set of tests to run for a given code change. In some builds, changes with non-local effects or changes to non-source files (e.g. a Spring XML config file) mean that Clover’s selected subset of tests won’t adequately test the change. For this reason we recommend still running the full test suite periodically. Clover has a number of strategies to help with this:

  1. Clover can watch for modifications to specific files or filesets, and trigger a full test run if any of them change.
  2. Clover can trigger a full test run every Nth build.
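The two safeguards amount to a simple decision made before each build. The sketch below is illustrative only – the class and its parameters are invented for this post, not Clover's API:

```java
import java.util.Set;

// Illustrative sketch (not Clover's API) of the two safeguard triggers:
// a full test run when a watched file changes, or every Nth build.
public class OptimizationSafeguard {
    private final Set<String> watchedFiles; // e.g. Spring XML configs
    private final int fullRunEvery;         // force a full run every N builds

    public OptimizationSafeguard(Set<String> watchedFiles, int fullRunEvery) {
        this.watchedFiles = watchedFiles;
        this.fullRunEvery = fullRunEvery;
    }

    // True if this build should run the full suite instead of the optimized subset.
    public boolean requiresFullRun(int buildNumber, Set<String> changedFiles) {
        if (buildNumber % fullRunEvery == 0) {
            return true; // periodic full run
        }
        for (String file : changedFiles) {
            if (watchedFiles.contains(file)) {
                return true; // a watched non-source file changed
            }
        }
        return false;
    }
}
```

With `fullRunEvery` set to 10, build 20 runs the full suite, while build 7 with only Java source changes runs the optimized subset.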

Practical integration for Ant and Maven

We’ve worked hard to make this feature easy to integrate into your existing Ant or Maven2 build. You don’t need a specialized Java environment or a standalone test runner.
Clover’s Ant integration for test optimization is designed to work with the existing <junit> Ant task. The new <clover-optimized-testset> container wraps existing <fileset>s to control which tests are actually run:

<junit ...>
  <batchtest fork="true" todir="${test.results.dir}/results">
    <clover-optimized-testset snapshotfile="${clover.snapshot.file}">
      <fileset dir="src/tests" includes="${test.includes}" excludes="${test.excludes}"/>
    </clover-optimized-testset>
  </batchtest>
</junit>

In Maven2, the optimization feature is enabled via a profile added to the project POM:
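As an illustration, the profile might look something like the following. The plugin coordinates and goal names here are written from memory of the Clover 2.x Maven plugin and should be checked against the Clover documentation for your version:

```xml
<profile>
  <id>clover.optimize</id>
  <build>
    <plugins>
      <plugin>
        <groupId>com.atlassian.maven.plugins</groupId>
        <artifactId>maven-clover2-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>setup</goal>
              <goal>optimize</goal>
              <goal>snapshot</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```

Activating the profile instruments the build, runs only the optimized test subset, and records a fresh coverage snapshot for the next run.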


Some real world results

The FishEye team maintain an automated test suite that takes 20 to 30 minutes to run, owing to some expensive setup and teardown. We enabled Clover’s test optimization feature on the project and measured performance against the normal, non-optimized build. Over the 10-day trial period the FishEye team committed 142 changesets as part of their ongoing development effort. For each changeset, two builds were triggered: a “normal” build, where all tests were executed, and a test-optimized build, where only relevant tests were executed. The following chart shows cumulative times for both builds:
By running only the tests applicable to each particular change, test execution time was reduced by a factor of four – a dramatic reduction. The number of tests run is shown in the following chart:
For this trial we configured Clover to run the full test suite every 10 builds, which explains the regular spikes in the number of tests run under the optimized scenario. This safeguard measure ensures that non-local or non-source changes that don’t feed into Clover’s optimal test subset calculation are still tested.

Some unexpected results

The next chart compares the number of test failures between optimized and normal builds:
Correlation of test failures was very good: the optimized build detected all but one of the failures found by the normal build, in a fraction of the time the normal build took. The missed failure was caused by a change to an XML config file, and it was corrected in a subsequent checkin before the optimization safeguard’s full test run (which would have detected it) kicked in.
Curiously, in several builds some tests failed when run as part of the optimized build, but not when run in the normal test execution. After some investigation, we found several implicit inter-test dependencies – the execution of one test was required to make a subsequent test pass. This code smell is something the FishEye team are now working to remove 🙂

Fail faster: Optimize your tests

Clover 2.4’s new test optimization can dramatically reduce your build times, taking the load off your CI server and making it practical for you to run your automated test suite locally, prior to a commit.
You can download a free 30-day trial of Clover 2.4 and try test optimization on your project today. The Quick Start guides for Ant or Maven 2 will get you up and running fast.

Want to learn more?

Join me for a webinar this week to see Clover in action. Just click on the time below to register:

Tues, Nov 11, 2008 8:00 AM – 9:00 AM PST/16:00 GMT
Tues, Nov 11, 2008 5:00 PM – 6:00 PM PST

Check out all of the details about what’s new in Clover 2.4 including demo videos.

Fresh ideas, announcements, and inspiration for your team, delivered weekly.

Subscribe now