Setting up a QA team

For the last five months we have been interviewing people for positions on our shiny new QA team.

A bit about our goals

Creating a QA team is a part of our plan to:

  • reduce the number of issues that slip out to our customers
  • reduce the time/effort it takes to maintain and add new features to our products
  • improve the performance of our products

There are many factors that contribute to achieving these goals:

  • are the requirements documents clear and complete?
  • are the design documents sound?
  • is there an adequate number of test cases and do they match the requirements?
  • what other processes and metrics are employed to control the development life-cycle?

Of course, being the engineers that we are, we would like a quantitative measure of quality as a starting point.

A bit of context

Atlassian practices an agile development process. This, among other things, means that testing is the responsibility of the development team. Unit tests, integration tests, acceptance tests etc. are all done by the development team. We have a set of tools and practices to that end.
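To give a feel for the level, here is a minimal sketch of the kind of developer-written unit test this implies, using JUnit. The class under test and all its names are invented for illustration; they are not from our codebase:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical class under test, included only so the example compiles.
    class IssueKeyParser {
        String projectKey(String issueKey) {
            return issueKey.substring(0, issueKey.indexOf('-'));
        }

        int issueNumber(String issueKey) {
            return Integer.parseInt(issueKey.substring(issueKey.indexOf('-') + 1));
        }
    }

    public class IssueKeyParserTest {
        @Test
        public void parsesProjectKeyFromIssueKey() {
            assertEquals("CONF", new IssueKeyParser().projectKey("CONF-1234"));
        }

        @Test
        public void parsesIssueNumberFromIssueKey() {
            assertEquals(1234, new IssueKeyParser().issueNumber("CONF-1234"));
        }
    }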

We have Crucible code reviews done as part of the development process.

All the tests run on our Bamboo servers and we get live feedback after each commit. Clover statistics on those tests are collected and reported by the Bamboo server.

The tests we write run on different combinations of servlet containers and databases on our Bamboo servers. Bamboo also runs our Selenium tests against various browsers.
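For flavour, a browser test of this kind might look something like the sketch below, written against the Selenium WebDriver Java API. The URL, form field names, and credentials are hypothetical placeholders:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    // Illustrative sketch only: the URL and element locators are made up.
    public class LoginSmokeTest {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("http://localhost:8080/login.action");
                driver.findElement(By.name("os_username")).sendKeys("admin");
                driver.findElement(By.name("os_password")).sendKeys("secret");
                driver.findElement(By.name("login")).submit();
                // A real test would assert on the resulting page, not just print.
                System.out.println("Title after login: " + driver.getTitle());
            } finally {
                driver.quit();
            }
        }
    }

The same test can then be pointed at different browsers simply by swapping in a different driver implementation.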

We also have rushboxes: live instances of our products that get upgraded nightly (or, for some of them, hourly) by the Bamboo servers. Commit the code … ‘hey presto’ … go to a URL and you can see the effect of that commit running live in the product. Only if the unit and functional tests pass, of course.

We are about to introduce ‘Blitz tests’, where each product’s rushbox will get hammered by the entire company testing the latest and greatest features. This includes admin, HR, sales, marketing, etc. It will be yet another check before a release goes out.

How do you quantify quality?

Despite the practices I described above, we felt there was a lot more we could do about quality overall.

What we would really like to do is try and quantify the quality of the product. After all, ‘what you can’t measure, you can’t do’ (I am not sure where I heard that, but it has worked well for me so far).

I have spent a good part of the last three months talking to people working as QA engineers, QA team leads, and QA managers. One question I have been asking all of them: how do you measure quality?

The answers I have been getting were all imprecise. The current process seems to come down to:

  1. analysing the functions described in the requirements document;
  2. writing the best test scenarios you can and diligently running all of them, with acceptance criteria set as the ratio of tests that must pass, the severity of the bugs found, risk and impact analysis of those bugs, and so on.

Also, most of the QA people seem to think that developers cannot test the software they write.

The example I have been giving in return: analysing the requirements is all well and good, but developers do not hand you ten neatly packaged functions to test. They hand you, for instance, 300K lines of code. Testing a handful of use cases per function defined by the product management team is one thing; dealing with all that code, which is probably flawed in a lot of places, is quite another. Intuitively, all the QA people I have spoken to feel that most of the code will be touched by their test cases, directly or indirectly. However, we agreed that there is no quantifiable measure of quality in their processes as yet.

So we have been looking for people passionate about measuring quality: people who would take our current tools, add to them, combine them in different ways, and extract some hard numbers about the quality of our code.

On the other hand, we also feel there are so many numbers out there. Code coverage alone can be measured in a dozen or so different ways and still give you a false sense of security:

  • Which ones are worth the trouble?
  • Should we even aim for 100% coverage?
  • How do you quantify reported bugs?
  • How about bug coverage?
  • Code review coverage?
  • How about things like JSPs, Velocity templates, etc.?

Lots of questions and a great opportunity for research.
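To make the question concrete, here is a deliberately naive sketch of the kind of composite number one might start experimenting with. The weights, inputs, and formula are entirely hypothetical, not a metric we have settled on; the point is that a candidate metric can be written down, computed from the tools we already have, and then argued about:

    // Hypothetical composite quality score: the weights and the formula
    // are invented for illustration only.
    public class QualityScore {

        /**
         * @param lineCoverage    fraction of lines executed by tests (0..1)
         * @param branchCoverage  fraction of branches executed by tests (0..1)
         * @param reviewCoverage  fraction of commits that went through review (0..1)
         * @param openBugsPerKloc open bugs per thousand lines of code
         */
        static double score(double lineCoverage, double branchCoverage,
                            double reviewCoverage, double openBugsPerKloc) {
            double coverage = 0.4 * lineCoverage + 0.4 * branchCoverage
                            + 0.2 * reviewCoverage;
            // Penalise by bug density; the decay constant is arbitrary.
            return coverage * Math.exp(-0.1 * openBugsPerKloc);
        }

        public static void main(String[] args) {
            System.out.printf("score = %.3f%n", score(0.85, 0.70, 0.90, 2.5));
        }
    }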

Quality Analysis versus Quality Assurance

The bottom line is that we want to set up a Quality Analysis team. It will not be a big team; the current thinking is one or two QA engineers per development team, mainly researching and advising. This is much more about making developers accountable for the quality of their code than about outsourcing all the manual testing. The responsibilities of the QA team will be:

Quality metrics and acceptance thresholds

The QA team will have to research quality metrics and set goals for the development teams. Based on the features in each iteration, they will have to set acceptance criteria (a sketch follows the list below) in terms of:

  • the above metrics
  • types of tests developers have to write for given features
  • type of environments to write tests for
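In practice this could start as something as simple as a per-iteration gate wired into the build. The sketch below is hypothetical (the metric names and floor values are invented) and simply fails when any measured metric drops below its agreed floor:

    import java.util.Map;

    // Hypothetical acceptance gate: metric names and floors are invented.
    public class AcceptanceGate {

        static void check(Map<String, Double> measured, Map<String, Double> floors) {
            for (Map.Entry<String, Double> floor : floors.entrySet()) {
                double actual = measured.getOrDefault(floor.getKey(), 0.0);
                if (actual < floor.getValue()) {
                    throw new IllegalStateException(floor.getKey() + " is " + actual
                            + ", below the agreed floor of " + floor.getValue());
                }
            }
        }

        public static void main(String[] args) {
            check(Map.of("lineCoverage", 0.82, "reviewCoverage", 0.95),
                  Map.of("lineCoverage", 0.80, "reviewCoverage", 0.90));
            System.out.println("All acceptance thresholds met.");
        }
    }

Anything more sophisticated, such as trends over time or per-module floors, can grow out of the same shape.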

Support data analysis and feedback to the engineering teams

The QA team will have to analyse the types of issues our customers report and advise the development teams: how those issues could have been prevented, which areas to concentrate testing on, how to prioritise testing scope, and so on.

Exploratory testing

This is what any QA engineer worth their salt should be doing frequently anyway, or so I hear.

Blitz test orchestration

As I mentioned above, this type of testing is a company-wide gig. Its main premise is that everyone in the company is different and understands each feature differently; exploiting that to find issues seems like a good idea.

etc.

Much will have to be defined anew in our QA practices. This is an exciting new start for us.

Reading material

Over the course of the last three or four years I have read a few interesting takes on what software quality assurance is. Here is the list of those I found very interesting (no surprises there: these people do it for a living). By no means is this a definitive list, though; I left out a lot more than I should have, and unfortunately I didn’t really keep track of all the literature on the subject. QA is more of a side passion for me.

I have also been following a few blogs on the subject. Some of those that I read on and off:

Test Driven is always a good place to dig for information, resources, tools and tricks for testing.

Comments?

QA practices are a much discussed topic, and there are lots of people who think differently, and quite passionately, about QA. I would love to hear your comments.

Last but not least – we are still looking for keen QA engineers to come and make a difference!
