
This guest blog post is part of an Atlassian blog series raising awareness about testing innovation within the QA community. You can find the other posts in this series under the QA Innovation tag.

This post is written by Anne-Marie Charrett, a testing coach and trainer with a passion for helping testers discover their testing strengths and become the testers they aspire to be.

“An oracle is a principle or mechanism by which you recognize a problem.”(1)

Oracles can be found everywhere. Take an airline ad that compares a taxi ride to a plane journey: the comparison highlights the unfairness of charging for checked baggage, and the airline then uses this as a selling point.

We use oracles in testing to discover bugs; we can’t test without them. If you test, you’ve used oracles. If you’ve found bugs, you’ve used an oracle.

A popular but limited test oracle is the product requirements. Comparing the product under test to its requirements helps us identify problems with either the product or the requirements. This oracle is limiting, though, because requirements are often incomplete, out of date, or non-existent. Fortunately, there are many other oracles available to us in testing(2).

Even when requirements appear complete, testers often unknowingly use other oracles: a different version of the product, a customer support ticket, or a user manual. If you’ve ever raised a bug that wasn’t covered by the requirements, you’ve used a different oracle.
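To make this concrete, here’s a minimal sketch (in Python, runnable with pytest) of a requirements oracle encoded as an automated check. The shipping-fee requirement and the `shipping_fee` function are hypothetical, invented purely for illustration:

```python
# Hypothetical requirement: "Orders of $50 or more ship free;
# all other orders are charged a flat $5.00 shipping fee."

def shipping_fee(order_total: float) -> float:
    """Stand-in for the (hypothetical) production code under test."""
    return 0.0 if order_total >= 50.0 else 5.0

def test_orders_at_or_over_threshold_ship_free():
    # The oracle is the stated requirement: a mismatch signals a
    # problem with either the product or the requirement itself.
    assert shipping_fee(50.00) == 0.0
    assert shipping_fee(120.00) == 0.0

def test_orders_under_threshold_pay_flat_fee():
    assert shipping_fee(49.99) == 5.0
```

The check is only as good as the requirement behind it: if the requirement is incomplete or out of date, the check keeps passing against the wrong expectation, which is exactly why other oracles matter.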

It’s not always easy to recognize potential problems, and testability goes a long way toward helping you create oracles. Developing your own oracles takes skill, but it’s an essential skill if you want to be a skilled tester.

It’s helpful to have a taxonomy of oracles to call on when needed. Fortunately, testers such as Cem Kaner, James Bach, Michael Bolton and Doug Hoffman(3) have systematically examined oracles and have made this knowledge available for other testers to use.

Bug reports become more credible when you know your oracle. Instead of relying on opinion or intuition, describe a bug in terms of the oracle you used.

Compare:

“The form on this webpage is unfriendly.”

to

“I compared this form to our competitors’. They use fewer steps to complete the process, and in my opinion this makes our page less friendly to use.”

The first bug report relies on opinion alone. The second uses a “consistency with comparable products” oracle to help explain my decision making. This makes for a more persuasive bug report.

Recognizing problems that interest our stakeholders requires their input. That’s easier said than done: stakeholders are often unavailable, they may not know themselves what they want, or it may be impractical to ask them, so as testers we often end up making best-guess assumptions.

Testers have to be brave enough to go beyond what stakeholders see as problems. We’re there to “see” stuff that others don’t. We’re there to question, to discover dilemmas. This means that sometimes we need to use oracles that others either haven’t thought of, or don’t believe are important.

Oracles are slippery beasts, though: they often morph unnoticed into different oracles because the nature of the problem changes. Startups and R&D teams often face this dilemma. Customers are often willing to forgive clunky interfaces or intermittent failures (think of mobile phones in the early ’90s) because any solution is better than no solution. As time passes, customers become more demanding: drop-outs during a mobile call are no longer tolerated, and usability issues become more important.

Oracles and people will always be bound together. It’s impossible to have an oracle without a person evaluating, interpreting and judging.

Not only are oracles intrinsically linked to people, but people can also be oracles. Think of an important customer as a possible oracle; in fact, any stakeholder who matters is an oracle for your testing.

It’s not surprising, then, that oracles often conflict, as the example below shows(4):

“I had a satisfying moment today where I’d hacked one of our test tools to work with a series of campaigns instead of single campaigns, so it could generate tons of test data for the reports. I set it going and didn’t really say much other than ‘Hey, here’s what it looks like with lots of data’.

A designer, a tester and a developer all turned out to have different ideas of why it was wrong or right, based on different oracles. The designer was basing it on his ideas of how customers like to see graphs in general. The developer was basing it on second-level support tickets he sees for existing reports. The tester was basing it on… well, I don’t really know, probably his own opinions and a bunch of other oracles he’d picked up.

So by getting them all together, they had to come to some agreement about what the best oracle was. We agreed on consistency with other reports, and our experience of what has confused users in the past. When oracles conflict it may not be that any are “wrong”, but we still have to pick one in the end, and I guess the oracle becomes that decision.”

– Trish Khoo

Oracles conflict, but that doesn’t invalidate them. Oracles help us recognize problems; they don’t help us solve them. Ultimately, a decision has to be made: solve the problem, ignore it, or try to change it.

It also follows that, because we are human, oracles are fallible. An oracle may be well chosen, yet a tester can still reach a mistaken conclusion. In automation, we can incorrectly conclude that a green light is a good result when, in fact, it may indicate a failure in the test script itself.
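As a hypothetical illustration (the scenario and function names are mine, not the author’s), here is a Python check that shows green no matter what, because a fixture bug makes its assertion vacuous:

```python
# A "green light" that proves nothing: the data source returns no
# records, so the loop body never runs and no assertion executes.

def fetch_completed_orders():
    """Hypothetical stand-in for the system under test. A bug (or a
    broken test fixture) means it returns no data at all."""
    return []

def test_all_completed_orders_have_invoices():
    for order in fetch_completed_orders():  # empty list: zero iterations
        assert order["invoice_id"] is not None
    # The test passes, but the oracle never evaluated anything.
    # A guard makes the emptiness itself a recognizable problem:
    # assert fetch_completed_orders(), "no orders to check!"
```

Here the green light is the very problem a good oracle should help us recognize.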

Oracles are complex and changing; they often contradict one another and fail us. But they are powerful tools in testing. Let’s treat them with the respect they deserve by making an effort to understand them in greater detail.

  1. James Bach & Cem Kaner
  2. http://www.developsense.com/articles/2005-01-TestingWithoutAMap.pdf
  3. http://www.softwarequalitymethods.com/Papers/STQE%20Heuristic.pdf
  4. courtesy of Trish Khoo

An electronic engineer by trade, testing discovered Anne-Marie when she started conformance testing to ETSI standards. She was hooked and has been involved in software testing ever since.

She runs her own company, Testing Times, offering coaching, workshops and consulting with an emphasis on Context Driven Testing. Anne-Marie has strong ties with the software testing community and co-founded the Sydney Testers Meetup. She can be found on Twitter at @charrett, blogs at http://mavericktester.com, and coaches over Skype (id: Charretts).
