We hosted the Jira QA webinar for an APAC audience in September, and a surprising number of attendees were from North America and Europe. Because of its popularity, we decided to replay the webinar and host two live Q&A sessions with Michal Kujalowicz, a Jira QA team lead, on December 9th for North American and European audiences.

A total of 569 questions were asked across the two webinars. After reading and comparing them, we found that our original blog post still covered the most popular questions, so we decided to stick with it. Enjoy!

Cheers,
The Jira team

Last week Penny Wyatt, leader of the Jira QA team, hosted a webinar on how the Jira team does QA. Several hundred people attended and a total of 154 questions were asked throughout the presentation. Penny took the time to choose her top five questions and answer them.

Penny’s top five Q&As:

Q: How long does it take to get a developer up to speed on this type of thinking?
A: It’s harder to change the culture of a whole team than it is to transform individuals. It’s taken us five years to get the Jira team to the level of quality mindset it has today, but it doesn’t take each new developer very long to get up to speed. They quickly pick up the mindset from their fellow devs, and they soon pick up the testing skills via pairing and workshops. The hardest part is picking up all the knowledge of risks and the product. This can take years, but we mitigate it through knowledge-sharing in QA kickoffs and QA demos.

Q: Is there still a need for test cases, or are those for regression/automated testing only?
A: Scripted manual test cases don’t come into our strategy at all. If a test is just a ‘check’ – that is, a set of predefined steps and a defined assertion – we find it is more efficient and less error-prone to have it executed by a computer instead of a human. If a test is genuinely a test – requiring critical thinking, freedom to investigate and risk assessment – we find it is better to execute it as part of exploratory testing in order to include that freedom and intelligence in the testing.
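The distinction above is between a "check" (predefined steps, defined assertion) and a genuine test (critical thinking and investigation). A minimal sketch of a check, using a hypothetical `slugify` helper rather than actual Jira code, shows why this kind of verification is better handed to a machine:

```python
# A minimal sketch of a 'check' as described above: a predefined step
# with a fixed assertion that a computer can run on every build.
# slugify is a hypothetical illustrative helper, not actual Jira code.

def slugify(title: str) -> str:
    """Turn an issue title into a URL slug (illustrative only)."""
    return "-".join(title.lower().split())

def check_slugify() -> bool:
    # Predefined input and expected output: no human judgment required,
    # which is why this belongs in automation, not exploratory testing.
    return slugify("Fix Login Bug") == "fix-login-bug"

result = check_slugify()
```

Because the input and the expected output are both fixed in advance, a computer executes this check more reliably and more cheaply than a human ever could; human effort is saved for exploratory testing, where freedom and risk assessment actually matter.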

Q: Developers are typically more expensive than testers. If we use developers as testers, is that not an inefficient use of budget/manpower?
A: Absolutely, using developers as testers to execute a separate testing step is expensive and wasteful of developer time. But having a separate testing step at all – even one executed by testers – is expensive and wasteful of developer time. Every time a story or bug is pushed back from testers to developers, it’s not just a testing cost, it’s a developer cost. By dropping the rejection rate from 100% to 4%, we’ve saved a lot of development time that was being wasted on reworking stories and fixing stupid bugs before release. We’ve saved the time spent on investigating, reporting, triaging, assessing, reproducing, and fixing internally-found bugs. And the code is designed from the ground up in a more testable way because the developers know they’re the ones who will have to do the testing. Our DoTing (developer-on-testing) stage was an intermediate step along the path of pushing quality upstream, so that we could remove the separate testing step entirely. It was a temporary investment that has more than paid off.

Q: We have developers and QA testers in different time zones. Would this model only work in the same time zone? How do you work with remote teams?
A: We’ve done remote quality assistance with teams in Poland and Vietnam, with the QA engineer based in Australia. It’s not as effective as having skilled QA onsite, as a big part of being a good Quality Assistance engineer is building a personal relationship with your devs. A remote QA engineer is easily cut out of the loop, and it’s much harder to gauge the overall culture of the team. However, we were able to successfully run remote QA demos, QA kickoffs, and pairing sessions via video calls – just calling directly from the dev’s machine to the QA’s and sharing the screen.

Q: Are the QA notes on a story-per-story basis or do you build a knowledge base of QA notes? How do you deal with recurring risks?
A: QA notes are on a story-per-story basis, so it’s usually the QA engineers that spot the patterns of recurring risks. This has become harder over the years as our Jira QA team has grown, as each QA engineer doesn’t necessarily know what the others know. Until now, we’ve mitigated this with weekly knowledge-sharing meetings, and wiki pages where we keep track of common or surprising risks. We’re getting to the point where this no longer scales. Right now we’re working on a more structured knowledge base with a database of rules that are run over each commit. So, for example, if it sees you’re using the User object in your Jira code, it would add a comment to the issue saying, “The User object can be null if the current user is anonymous, please make sure you’re handling that correctly”. This will help us get the knowledge out of QA heads and, in the best case scenario, may even replace the need for QA demos and kickoffs. That would be helpful!
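The rule database described above could be sketched as a simple pattern-to-note lookup run over each commit. This is an illustrative assumption about the shape of such a tool, not Atlassian's actual implementation; the `User` object note is the example given in the answer:

```python
# Minimal sketch (an assumption, not Atlassian's actual tooling) of a
# rule database run over each commit: each rule pairs a code pattern
# with a risk note, and matching rules produce review comments.
import re

# Hypothetical rule list; the User-object note is taken from the answer above.
RULES = [
    (r"\bUser\b",
     "The User object can be null if the current user is anonymous, "
     "please make sure you're handling that correctly"),
    (r"\bSimpleDateFormat\b",
     "SimpleDateFormat is not thread-safe; avoid sharing instances"),
]

def review_commit(diff_text: str, rules=RULES) -> list[str]:
    """Return the risk notes whose patterns appear in the commit diff."""
    return [note for pattern, note in rules if re.search(pattern, diff_text)]

# Example: a diff touching the User object triggers the matching note.
comments = review_commit("+ User user = authContext.getLoggedInUser();")
```

Encoding the rules as data rather than code is what lets the knowledge base grow without each QA engineer needing to know what the others know.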

Want to learn more?

The webinar is now available on demand. Register here to watch the 60-minute presentation. Plus, we’re hiring in Sydney! Check out our career website for more information.
