When testing even the most basic user-facing functionality, the set of possible test cases is almost unlimited. Let’s say we’re testing a form in JIRA where you set the current user’s display name. If we look at it from a completely black-box perspective, the possible test cases are something like:
Total test cases = (supported browsers) x (supported platforms) x (supported databases) x (JIRA licence types) x (OnDemand/Standalone/War/Windows installer/OnDemand-with-JIRA-only) x (different speeds of entering text) x (keyboard vs mouse submission) x (supported languages)
That’s roughly 162000 test cases so far, and we haven’t even started looking at what text we’re putting in the field.
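To see how quickly the dimensions multiply out, here's a quick sketch. The sizes assigned to each dimension below are illustrative assumptions, not JIRA's actual support matrix; they're just one combination that multiplies out to the ballpark figure above:

```python
from math import prod

# Hypothetical sizes for each test dimension -- illustrative
# assumptions only, not the actual JIRA support matrix.
dimensions = {
    "browsers": 5,
    "platforms": 4,
    "databases": 5,
    "licence types": 3,
    "installation types": 5,   # OnDemand / Standalone / War / Windows / OnDemand-with-JIRA-only
    "text entry speeds": 3,
    "submission methods": 2,   # keyboard vs mouse
    "languages": 18,
}

total = prod(dimensions.values())
print(total)  # 162000
```

Every extra dimension multiplies, rather than adds to, the total, which is why even a one-field form produces a six-figure case count before you consider the field's contents.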
Of course, no-one actually plans and executes 162000 test cases for this kind of basic feature. Someone, somewhere along the line, makes a decision about which test cases are worth executing. Many companies have this decision made by a business-focussed person who has no knowledge of the code. In other companies, testers will have the choice, but will choose at random or by working down the full list of cases until they run out of time. On the other hand, a good tester will decide – consciously or subconsciously – through a process known as equivalence partitioning.
What is equivalence partitioning?
Equivalence partitioning is the process of breaking up potential test cases into “equivalence classes” – that is, batching up test cases into groups, and only executing one case from each group. The idea of each equivalence class is that executing one test in a class gives you the same coverage as executing any other test in that class. Hence, if you’ve executed an example test from each class, you know you’ve covered the whole feature. Any further testing is just wasting time.
Correctly establishing the equivalence classes is the hard part, and requires knowledge of both the bug history for the product and the implementation details. Getting it wrong means you can miss bugs or waste time with worthless tests. But not doing equivalence partitioning at all pretty much guarantees you’ll both miss bugs and waste time.
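To make the mechanics concrete, here's a minimal sketch of partitioning candidate inputs for the display-name field and running one representative per class. The classification rules and sample inputs are made up for illustration; real classes would come from the product's bug history and implementation knowledge, as described above:

```python
# Sketch: partition candidate display-name inputs into equivalence
# classes, then keep a single representative test per class.
# The classification rules here are hypothetical, for illustration.

def classify(name: str) -> str:
    """Assign an input to an equivalence class (made-up rules)."""
    if name == "":
        return "empty"
    if len(name) > 255:
        return "too-long"
    if any(ord(c) > 127 for c in name):
        return "non-ascii"
    if any(c in "<>&\"'" for c in name):
        return "html-special"
    return "plain"

candidates = [
    "", "Alice", "Bob Smith", "a" * 300,
    "Zoë", "名前", "<script>alert(1)</script>",
]

# Group candidates by class; the first member of each class
# becomes its representative, the rest are skipped as redundant.
classes: dict[str, str] = {}
for name in candidates:
    classes.setdefault(classify(name), name)

for cls, representative in sorted(classes.items()):
    print(f"{cls}: run one test with {representative!r}")
```

Seven candidate inputs collapse to five tests here; "Bob Smith" and "名前" are dropped because, under these rules, they can't fail in any way that "Alice" and "Zoë" wouldn't. Get a rule wrong, though, and a whole class of bugs hides behind a skipped input.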
An example of equivalence partitioning in action is in this excerpt from the testing notes (that is, hints for exploratory testing) for bundling the Issue Collector plugin with JIRA:
Key:

* both - please test separately in both behind-the-firewall and OnDemand
* either - please test in either behind-the-firewall or OnDemand
* Check that there's a block for the issue collectors on the project admin summary page. (either)
* Check that the issue collector item is in a sensible place in the admin menu structure. (both)
* Check that the issue collectors admin page is accessible through "gg". (both)
* Check that a project admin (i.e. not a full admin) can configure issue collectors. (either)
* Investigate if there are any potential performance issues around the generation of the activity graphs for each collector. (either)
I’ve described some tests here, but more importantly I’ve made some judgement calls about which tests don’t need to be performed, based on equivalence classes. This same code ships to JIRA behind-the-firewall and JIRA OnDemand. In some scenarios they are equivalent and in some they are not. If the activity graphs cause performance issues in OnDemand, they will cause them behind-the-firewall too, and vice-versa. However, the administration menu structure is quite different in OnDemand and behind-the-firewall, and plugins often work in one and not the other. Hence, for those cases, OnDemand and behind-the-firewall are not equivalent and need to be tested separately.
Is that the full story?
Well, no. Our testing involves a lot more than pre-written manual test cases – in fact, equivalence partitioning is most often an unconscious process running in the back of my mind while performing exploratory testing. While exploring, the classes are constantly updated and expanded, based on what we learn about the feature, what odd behaviour we observe, and what memories of past bugs are triggered. And of course, we’re looking for much more than functionality bugs – usability, performance, security, accessibility – everything that goes into deciding if a feature is lustworthy. However, equivalence partitioning is an important piece of the puzzle.
Tomorrow’s post: Part 2 – how equivalence partitioning may explain odd behaviour from your QA engineers.