Moving from quality assurance to quality assistance

Ready to embrace a brave new world where developers are also testers? Read on.

If you're intrigued by the quality assistance approach, but don't know how to bring it to your team, we're here to help. The first step is understanding that this may be a seismic shift in the way your entire engineering department runs, and as such, doesn't happen overnight. The second step is helping your team understand why the endeavour is worthwhile. So let's start there.

Benefits of the quality assistance model

As the team moves further towards this model, you should aim to see improvements in these three areas:


Developers who master testing not only increase the efficiency and capacity of their teams, but also write higher-quality software. Atlassian developers understand the expected quality bar for new features, and aim for that standard from the start.


Preventing bugs before they are written reduces the time and effort spent fixing bugs or rewriting code. Teams can ship faster, confident that quality remains high. Atlassian developers decide when their work is ready to ship, instead of enduring dispiriting cycles of code-test-rework-retest.


When testing can be confidently shared with developers, the people who are passionate about quality, such as great testers, can focus on solving the root causes of quality problems in addition to discovering their symptoms. Atlassian QA has shipped a number of successful innovations to our teams, including instant test environments, automatic setup with almost-real-world test data, and continuous process experimentation and improvements.

Implementing quality assistance

Changing an existing team's quality process takes time and effort, because you're asking the whole team to take responsibility for the quality of its output instead of relying on QA as gatekeepers. If team members believe that QA should be catching the bugs, the model breaks down. Every developer on the team needs enough training to know how to test effectively: if developers are the only ones testing a story, a lack of training means production bugs are likely to slip through.

Simply declaring that "developers now do the testing" will increase the risk of quality problems. But don't despair: the techniques below can mitigate that risk. Each can help the team get into the swing of things, but none should be relied upon indefinitely, since they all focus on finding problems after a story has been implemented. Ultimately, the team should focus on prevention rather than detection.

Interim techniques

Blitz Testing

Asking the whole team to join a time-boxed testing session is a fun way to check that everyone is confident in a feature before it ships. Team members can provide feedback from the perspective of a variety of users, and catch bugs and other problems before a release. However, reliance on blitz tests is a sign that a team is not confident in the feature it is shipping. If blitz tests consistently find critical problems, the story-level testing is insufficient; focus on improving that testing rather than running more blitz tests. And if the goal is to ship features faster, a blitz test should not be a mandatory step.

Dogfooding

Running your unreleased features on internal instances is a good way to validate that they are useful and easy to use. It can mean that problems are discovered, reported and fixed before the feature is released to your customers.

However, the Atlassian dogfood systems are critical instances used by hundreds of staff members. We treat them as production-like instances, and see shipping a bug as a failure of the testing process. When a team treats internal deployments as a necessary step to find all the bugs, that's a sign that they're not confident in their own output, and should have more testing stages within the team, rather than inflicting bugs on those dogfood instance users.

Developer on Test (DoT)

Assigning a "Developer on Test" is a good way to introduce developers to the idea of testing, improve their skills, and make it explicit that quality is everybody's responsibility. It prevents the problem of a QA engineer being the bottleneck for the team. However, long-term reliance on DoTing is a sign that a team isn't confident in the testing done by the developer who implements the story. Effectively, somebody is double-checking the testing tasks that should have been done by the original developer, which is inefficient. Eventually, we want to feel confident that the original developer has sufficiently tested for the risks outlined in the testing notes, without needing another person to repeat that.

QA Kickoffs

Pairing with developers encourages them to think of all the risks and edge cases upfront, before they start implementing the story. Implementation choices may be influenced by these requirements, and problems may be prevented earlier on. You can brainstorm together to come up with testing notes, and sort out any ambiguities from the start.

However, reliance on QA kickoffs means that developers need a QA engineer to identify potential problems, rather than being able to spot them themselves. The team still depends on QA (bottleneck alert!), and needs further coaching in order to become fully independent.

Interim QA process

As you transition toward quality assistance, your interim QA process might include a QA kickoff to capture risks in testing notes, development and testing by the same developer, a Developer on Test review, and a blitz test before shipping.

As you become more confident in your team's ability to ship high-quality features to production, you can experiment with removing some of these stages – probably the DoT and blitz test stages – for certain stories. Many Atlassian teams are performing such experiments, and finding that some techniques work well for their team's context, while others are not providing value and can be dropped.

Experimentation is the key word here. A carbon copy of the Atlassian QA process won't work for every team. Be patient, iterate, and know that simply embarking on this journey means you're on the right path.

Posted by Mark Hrynczak in QA
