Over on LinkedIn, Paul Gerrard has started a "Test Assurance" group, and has asked "How can we assure exploratory testing?"
Lacking in discipline, I found it hard to read every bit of blither posted, but found it easy to respond. In the spirit of reuse, here is my take...
In the small sample of organisations I’ve met that have an assurance group, each has gone about the work differently*. Some want to see that the team isn't wasting its time, some want to see that the organisation isn't developing false confidence, some want to see that the organisation can substantiate its claims, some want to see that set processes are being followed. If you're asked to assure exploratory testing, it's a good idea to find out what you are expected to judge and influence; those expectations may be in opposition to your own.
However, let’s assume that I am being asked, and I'm being asked to assure exploratory testing in an organisation that isn't doing something I find outrageous**, and I'm being asked to set up some sort of assurance without following the rails of a pre-existing culture. We can all dream.
In that case, I would work in a way that expected 'assurance' to independently assess the degree to which information coming out of a process of work could be trusted, and the degree to which the organisation as a whole trusts it. I’d expect assurance to be a sampling activity, with access to anything but without expectations of touching everything.
For exploratory testing, I would hope to:
- Watch individual exploratory testing activities to judge whether execution was skilled (I’d want a wide variety of business and testing skills on display)
- Watch the team to judge whether their exploratory testing work was supported with tools and information (exploratory testing without tools is weak and slow, exploratory testing in the dark is crippled)
- Gauge whether the team had independence of thought, and to what degree that independence was enabled and encouraged by the wider organisation (bias informs me about trust)
- Read some of the output (reports, notes, bugs) and watch some debriefs (if any) to judge how well the team transmits knowledge about its testing activities
- Follow unexpected information to see to what extent it was valued or discounted (exploratory testing finds surprises; is that information useful and treated as such?)
I’d hope to do the following less-ET-specific tasks, too.
- Dig into any points where information could be restricted or censored (e.g. inappropriate sign-off, slow processing, disbelief or denial)
- Observe the use and integration of the team’s information in the wider organisation to judge whether the work was relevant, and accurately understood
- Judge the team’s sense of direction by observing the ways that information found, lessons learned, and feedback from the organisation affect the team’s longer-term choices
I hope that our hypothetical organisation would use these insights to help jiggle whatever needed jiggling, and that once jiggled, the organisation could feel that they could trust the information from the team even more, and that the team would feel even more relevant and valued. Then I'd kick myself for signing the NDA that stopped me writing about it.
* Some have made the system, some have bought it, some have commissioned it, some have bolted it together. None have used exploratory testing as their sole means of testing. And I've never needed to 'sell' exploratory testing to those organisations that are risk-aware enough to have an assurance group.
** like trying to find a way to avoid clear responsibilities in a legally-defensible way, or trying to avoid the truth about the system under test