Waterfall project management separates development and testing into two different steps: developers build a feature, then "throw it over the wall" to the quality assurance (QA) team for testing. The QA team writes and executes detailed test plans, files defects against the new work, and painstakingly checks for regressions in existing features that the new work may have caused.
Many teams using these waterfall or other traditional testing models find that as the product grows, the amount of testing grows exponentially, and QA invariably struggles to keep up. Project owners face an unwelcome choice: delay the release, or skimp on testing. (I'll give you one guess as to which option wins 99% of the time.) In the meantime, development has moved on to something else. So not only is technical debt mounting, but addressing each defect requires an expensive context switch between two parts of the code base. Insult, meet injury.
To make matters worse, QA teams are traditionally rewarded according to the number of bugs they find, which puts developers on the defensive. What if there were a better way for both developers and QA to reduce the number of bugs in the code, while also eliminating those painful trade-offs project owners have to make? Wouldn't it make for better software all around?
Moving from traditional to agile testing methods
The goal of agile and DevOps teams is to continuously deliver new features with quality. But traditional testing methodologies simply don't fit an agile or DevOps framework. The pace of development requires a new approach to ensuring quality in every build. At Atlassian, we do agile testing. Take a closer look at our testing approach with Penny Wyatt, Jira Software's senior QA team lead.
Much like compounding credit card debt, technical debt starts with a small amount of pain, but it snowballs quickly and saps the team of critical agility. To combat snowballing technical debt, at Atlassian we empower (nay: expect) our developers to be great champions for quality. We believe that developers bring key skills that help drive quality into the product:
- Developers are great at solving problems with code.
- Developers who write their own tests have more invested in fixing them when they fail.
- Developers who understand the feature's requirements and its testing implications usually write better code.
We believe each user story in the backlog requires both feature code and automated test code. Although some teams assign the developers the feature code while the test team takes on automated testing, we find it's more effective to have a single engineer deliver the complete set.
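As a minimal sketch of what "feature code plus test code from a single engineer" can look like, here is a hypothetical user story (a percentage discount on a cart total, names invented for illustration) where the feature and its automated test ship in the same change set:

```python
# Hypothetical user story: "As a shopper, I can apply a percentage
# discount to my cart total." The same engineer writes the feature
# and the automated test that proves it, and delivers both together.

def apply_discount(total: float, percent: float) -> float:
    """Return the cart total after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("discount percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

def test_apply_discount() -> None:
    # Shipped alongside the feature, not thrown over a wall to QA.
    assert apply_discount(200.0, 10) == 180.0
    assert apply_discount(99.99, 0) == 99.99

test_apply_discount()
```

Because the test lands with the feature, a failure points the original author (who still has the context) at the fix, rather than triggering a costly hand-off.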
Treat bugs in new features differently from regressions in existing features. If a bug surfaces during development, take a moment to understand the mistake, fix it, and move on. If a regression appears (i.e., something worked before but doesn't anymore), it's likely to reappear. Create an automated test to guard against that regression in the future.
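A short sketch of what pinning a regression looks like in practice, using an invented example (the function name and bug scenario are hypothetical, not from a real Atlassian codebase): suppose a refactor broke login for usernames with stray whitespace. Once the fix lands, a test encodes the exact reported behavior so the bug cannot silently return:

```python
# Hypothetical regression: usernames with leading/trailing whitespace
# used to log in fine, then broke after a refactor. After fixing it,
# we pin the behavior with an automated regression test.

def normalize_username(raw: str) -> str:
    """The fix: strip surrounding whitespace and lowercase the name."""
    return raw.strip().lower()

def test_regression_whitespace_username() -> None:
    # Guards against the specific regression reported in the bug ticket.
    assert normalize_username("Alice ") == "alice"
    assert normalize_username("\tBOB\n") == "bob"

test_regression_whitespace_username()
```

The test is deliberately narrow: it documents the reported failure case, so the suite fails loudly if that behavior ever regresses again.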
This model doesn't mean developers work alone. It's important to have QA engineers on the team as well. QA brings an important perspective to the development of a feature, and good QA engineers know where bugs usually hide and can advise developers on probable "gotchas."
The human touch in exploratory testing
On our development teams, QA team members pair with developers in exploratory testing, a valuable practice during development for fending off more serious bugs. Much like code review, we’ve seen testing knowledge transfer across the development team because of this. When developers become better testers, better code is delivered the first time.
But isn't exploratory testing manual testing? Nope. At least not in the same sense as manual regression testing. Exploratory testing is a risk-based, critical-thinking approach to testing that enables the tester to apply their knowledge of risks, implementation details, and customer needs. Knowing these things early in the testing process allows the developer or QA engineer to find issues rapidly and comprehensively, without scripted test cases, detailed test plans, or formal requirements. We find it's much more effective than traditional manual testing, because we can take insights from exploratory testing sessions back to the original code and automated tests. Exploratory testing also teaches us about the experience of using the feature in a way that scripted testing doesn't.
Maintaining quality involves a blend of exploratory and automated testing. As new features are developed, exploratory testing ensures that new code meets a quality bar broader than automated tests can cover: ease of use, pleasing visual design, and the overall usefulness of the feature, in addition to the robust protection against regressions that automated tests provide.
Change can be hard. Really hard.
I'll leave you with a personal anecdote that nicely summarizes my journey with agile testing. I remember managing an engineering team early in my career that had strong resistance to writing automated tests, because "that work was for QA". After several iterations of buggy code and hearing all the reasons why automated testing would slow the team, I put my foot down: all new code had to be proven by automated tests.
After a single iteration, the code started to improve. And the developer who was most adamantly against writing tests turned out to be the one who sprang into action when a test failed! Over the next few iterations the automated tests grew, scaled across browsers, and made our development culture better. Sure, getting a feature out the door took longer. But bugs and rework dropped significantly, saving us huge amounts of time in the end.
Change is rarely easy. But like most things worthwhile, if you can buckle down and create new patterns for yourself, the only question you'll be left with is "Why didn't we do this sooner?!"