Thursday, January 05, 2012

Known ways of managing ET #04 - Set Aside Time

tl;dr – scheduling ET changes the game. This is how to cope.


The team decides to budget a fixed amount of time for exploratory testing. Of course, that’s not the end of the story. This post is about what happens next.

First some background and disclosure: This sort of decision has been the trigger for a fair proportion of my client engagements since around 1998*. So I see this more often than I might. Generally someone on the team has eloquently persuaded a budget-holder that the project will find value in time spent exploring**, and I get to work with fresh, enthusiastic and newly empowered testers. So I find the situation delightful. Who wouldn’t? I’m sure these two complementary perspectives colour my experiences, and I expect they have coloured my writing too.

Budgeting a chunk of time to explore what you’ve made is a fine idea. As a management approach, however, it’s a bit hands-off. Sometimes, neither sponsor nor enthusiast has worked out the details of what is expected from whom and by when. More particularly, although there’s often a great sense of adventure, there’s not much consideration about the strategies for coping with inevitable change. Here then are some of those changes, a few related pathologies, and some strategic and tactical tweaks that have worked for me.

Dealing with lots of ideas

There will be a monstrous growth in the number of testing ideas; test teams have no problem coming up with new ideas about what and how to test. The practical problems lie in picking the best, dropping the newly-redundant, classifying and slicing and managing the set. Picture people trying to stuff an octopus*** into a rubber sack. This is a natural part of testing; one of the characteristics of wicked problems is that you can’t pin down the solution set.

As with all monstrous growths, the quantity of test ideas will be limited by external factors. If you’re keeping ideas on sticky notes, you’ll run out of wall space – which is perhaps better than putting everything in an expandable database that no one bothers to read. The most usual limits**** are to do with patience and attention span. When working within the set, the team will learn crucial filtering and throwing away skills, but will also run up against endowment bias; it’s hard to let go of something you own, and harder to let go of something you’ve worked hard to gain. There is likely to be conflict or denial – all the more so if the team has to date been under the consensual hallucination of potential completeness which underpins script-only approaches.

The growth in quantity may not be matched by a growth in diversity or quality of ideas. This is only made worse by a testing guru (sometimes me) who sees it as his or her job to come up with new ideas. A good way to defuse some of this problem is to encourage the team not only to add to the ideas themselves, but to challenge ideas and remove them from play. If you make your influential tester the champion for diversity or quality in the collection, that can help too. I’ve often seen teams hamstrung by an inappropriate attraction to combinatorial collections; given a set of test ideas, someone draws up a table of (say) data type against input, and works through, left to right, top to bottom. Stop this in its tracks; tables and ordered progressions indicate a single real idea, one which is ripe for heavy optimising with a tool. If you can’t automate, decide which combinations are most important to do right now. If you can’t prioritise or optimise, hit the combinations randomly and learn from them as you go.
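The "if you can’t prioritise, randomise" tactic can be sketched in a few lines. The factors and values below are invented for illustration; the point is sampling from the table rather than marching through it left to right:

```python
import itertools
import random

# Hypothetical factors: each combination is one candidate test.
factors = {
    "data_type": ["int", "float", "string", "null"],
    "input": ["form", "api", "import", "clipboard"],
}

all_combos = list(itertools.product(*factors.values()))  # 16 combinations

# No automation, no clear priority: sample randomly rather than
# working the table top-to-bottom, left-to-right.
random.seed(1)  # fixed seed for a reproducible example only
session = random.sample(all_combos, k=5)
for combo in session:
    print(dict(zip(factors, combo)))
```

Each randomly-drawn combination is a prompt to explore and learn from, not a script to confirm.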

I like to constrain the numbers of test ideas in play by saying how much time an idea might need, and keeping the total amount of time needed under some reasonable limit. Although I can get as sucked-in by combinatorials as the next tester, I find that I tend to prefer diversity over depth. I try to temper this bias by paying attention to the agreed strategy and the current context – which means it’s good to have talked about what is valuable to the project. If I find myself pouring out ideas in some consultantish denial-of-service attack on the capabilities of the team, I’ll find a corner and write until I’ve calmed down, then see if my list triggers people to put their own ideas up, rather than fill the wall with my collection.
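Keeping the total time of the ideas in play under a reasonable limit can be as crude as a greedy cut down the team’s ranked list. The ideas and estimates here are invented for illustration:

```python
# Each idea carries a rough time estimate in minutes (all invented),
# listed in the order the team ranked them.
ideas = [
    ("boundary dates", 30),
    ("unicode names", 45),
    ("concurrent saves", 90),
    ("import malformed CSV", 60),
    ("back-button abuse", 20),
]

BUDGET = 120  # minutes available

# Walk the ranked list, keeping whatever still fits in the budget.
in_play, spent = [], 0
for name, minutes in ideas:
    if spent + minutes <= BUDGET:
        in_play.append(name)
        spent += minutes

print(in_play, spent)
```

Ideas that don’t fit aren’t deleted; they wait their turn, and the visible cut-off is itself a prompt to discuss what the project values most.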


Dealing with fast feedback

Exploratory testing makes some feedback much faster, so a commitment to exploratory testing will change the feedback characteristics of the project. If there is a personal connection between the designers, coders and testers to support this feedback, the consequences can be excellent. Problems get fixed quickly, the team can develop a better understanding of what quality means on the project, and testers swiftly find out what they need to supply in a useful bug report. I’ve often seen palpable increases in respect and communication, which leads to trust and an overall greasing of the machinery of a development team.

Teams who have built massive confirmatory automated test suites are used to fast feedback, but of a very different flavour from that provided by exploratory testing. Feedback on novel modes of behaviour and unanticipated failures can deeply challenge a team who thought their design was watertight. I’ve been told that bugs aren’t bugs unless they’re found by a customer, and that any bug without an immediately-obvious fix is a feature. I see both of these reactions as denial, and an indication that I should have prepared the ground better for the kind of feedback I can offer from exploring the product. The situation is made much easier if you are working with a real customer or user, rather than acting as proxy. The cry of pain might also indicate that your testing priorities don’t match the priorities of the people constructing the system – it’s your call whether you need to re-align, or whether you should embrace the difference. I’ve written more about the correspondences and conflicts of exploratory testing on agile teams in Testing in an Agile Environment.

More problematically, some people don’t want feedback when they’re in the throes of making stuff. I’m one; it gets in the way of the fragile extension of the imagination that is at the heart of how I make new things. Some testers are remarkably insensitive to this; others imagine that they need to somehow sweeten the pill. When I have results or questions, I prefer to make it known that I have feedback, and to give that feedback either to a group at a time when everyone is tuned in, or to an individual on invitation. Of course, it’s great to be able to march up to someone’s desk and get an immediate response, but what’s good for you might not be good for your colleague. Decide whether it's the right time to ask before you get between momma bear and her cubs.

As a test team gets closer to its audience, some will imagine that the test team risks losing its independence, and will resist – for instance – exchanging design information or locating the testers a shout away from the coders. Isolating the testers is an obvious but stupid way of encouraging independence of thought. You’ll find more in The Irrational Tester.

Conversely, test teams who have no existing connection with their designers and coders throw their feedback into a void. Swift, accurate and targeted information might seem valuable, but is worthless to the builders if it is delayed by procedure and snowed under by noise. The feedback in this case is primarily for the users (and sometimes the sellers) of the software. It’s crucial to understand your audience.

Some legally-minded people (and some sellers) don’t want information about new failures and will restrict or censor feedback. Some need plausible deniability, and don’t want to even look. If you have this situation as a test manager, messing about with ET won’t fix your problems.


Dealing with decisions and organisational oversight

Groups that are new to ET tend to see a large expansion in the number of problems found before they see a reduction in the number of problems made. More than once, when I’ve been dropped into an agile team as an exploratory tester and customer representative, I’ve had to make the judgement call about whether to horribly disrupt the vendor team’s workflow by filling it with bugs. Clearly, if relations are good, blowing the workflow is bad – even if it is an indication of a crisis ignored. So far, I’ve managed to avoid purposefully blocking the flow. However, although it is a decisive and disruptive step, new exploratory testing groups can fatally disrupt the flow easily, unconsciously and even gleefully (which is nauseating, but happens). When bringing ET into a project, it’s vital that the whole project team is aware of this dire ability. If the workflow is king but the quality poor, decision makers will need to prepare for compromises on all fronts.

Once exploratory testing is chugging along, you hope to reach a point where fewer bugs are being made. I’ve had complaints from metrics people that fewer bugs are being found. This is a fine demonstration of failure demand, and I find it easier to set it as a desired goal at the outset, rather than have to explain it as a side effect. I’ve found it useful to put this in terms of ‘we will not necessarily find more problems, but we will find more useful problems and find them faster’. Similarly, some metrics people count a day when lots of problems have been found as a bad day; it’s easier to help them deal with their pain if you’ve already had a chat about how much worse it would be if all that trouble was revealed later.

A decision to be hands-off can make some managers feel insecure. This feeling may lead them back towards the test team with unexpected needs for oversight and control*****. To avoid this, any team that has been given a degree of autonomy needs voluntarily to help their sponsor feel secure. I find that it helps to make a clear agreement not only about responsibilities, but about material deliverables. For instance: “We will keep a steady pace of exploration, shown by a public counter of time spent. We will display our test ideas and will be able to discuss them at any time with anyone on the project. We will make a visual distinction of those ideas which we have explored, those we are about to do, those we will probably do, those which we have recently eliminated, and those which have recently arrived. All time spent exploratory testing will be documented, and the documentation kept at <link/location>. All bugs logged from exploratory testing will be cross-referenced to their discovery documentation. Where we cannot keep to these commitments, we will make a collected note of the exceptions. We will come to you at the start, end and at regular intervals throughout significant testing efforts to keep you up-to-date. We will come to you with requests for help with obstacles we cannot overcome ourselves and with decisions about changes to budget and staff, but not with decisions about test direction and prioritisation. You will allow time for our regular reports and requests, and will champion our autonomy as set out here. If you are unable to give us the help we ask for, you will let us know swiftly and with reason.”

Budgets change. Sometimes a project wants more exploration, sometimes there’s less available to spend on it. While the test team may have started out with a fixed exploration budget, and may be comfortable cutting or expanding its testing to suit, it may wonder whether it should drive such a change itself and demand more (or less) from its sponsors. To do so is to misunderstand testing as a service – the people to whom one provides a service will be the people who ask for more, who want less, who adjust the balance. Clearly, the test team will be engaged in the negotiation, but I would question the motivation of a test team that prefers its own budgetary decisions over the informed decisions of its customers and sponsors.

Lots of teams seem scared of putting exploratory testing in front of auditors. I’m not sure why; the auditors I’ve met seem to do a lot of exploration in their work, and I’ve always found it helpful to ask the appropriate auditors about what they might expect to see from exploratory testing before we start exploring. If there is, for instance, an unadjustable regulation that stipulates that all tests must be planned, the auditors are not only most likely to know about it, but to be able to give you an indication about what they might accept (e.g. charter + timebox, recorded in the plan post hoc). I understand that session-based testing was developed in part to allow exploratory testing to be audited. If auditors have a place in your organisation, then it’s better to expect them****** than to hide; talk to your auditors and negotiate what they need for assurance. I wrote a note about this here on the blog in June 2011: How to Assure Exploratory Testing.
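A session record that an auditor could follow need carry little more than a charter, a timebox, and cross-references to the bugs raised. The fields and values below are illustrative only, not a standard session-based-testing format:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    charter: str                  # the mission agreed before the timebox
    timebox_minutes: int
    tester: str
    bug_ids: list = field(default_factory=list)  # cross-references to the bug tracker
    notes: str = ""

# One entry in the team's session log (invented example).
log = [
    Session("Explore CSV import with malformed encodings", 60, "pat",
            bug_ids=["BUG-412"], notes="Sample files attached."),
]

# The audit trail: every session has a charter and a timebox,
# so every logged bug traces back to documented exploration.
traceable = all(s.charter and s.timebox_minutes > 0 for s in log)
print(traceable)
```

The point is not the data structure but the negotiation: show your auditors a record like this before you start, and adjust it to what they say they need.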

Crisis reveals underlying truths. I can't recall a project that has identified every necessary task, or given each task the time that was needed or budgeted. Testing, especially when considered to have a potentially-unlimited discovery element, is eminently squashable and so is usually squashed – which tends to reveal uncomfortable truths about how the overall organisation understands testing. If exploratory testing is squashed out of existence when testing as a whole is squashed, your decision makers see ET as a luxury. If exploratory testing takes the whole pie when (but only when) testing is squashed, decision makers see ET as a short cut. Both these positions are pathologies. You might be able to spot them early by indulging in a spot of scenario planning, or you might trust your instinct. I work from the position that testing is a service – mostly optional, generally valuable – which I find both reasonable and benign, but my position could be a pathology in your organisation.

As a team grows into exploration, it will develop a library of tools. By tool, I don’t mean a downloadable executable, but something that arises from the combination of mindless machinery with data, configuration and conscious application by the minds of the team. A chainsaw needs a lumberjack*******. Some tools arise as testers automate their manual, brain-engaged testing – and as the automation takes over, the tool will change the way a tester triggers problems and their ability to observe surprises, not always for the better. Other tools arise because they enable a whole new class of tests, and using the tool even briefly exposes a new perspective with its own easily accessible bugs. A tool armoury is one of the core assets of an exploratory testing team; exploratory testing without tools is weak and slow. As with any library, it needs to be organised to be useful. If I can, I keep a public list of tools, techniques, tricks and test data, perhaps tagged with generally useful areas and knowledgeable users. I encourage people to name and share their approaches. I try to get individuals to pair on tool use, or to hold swift training workshops.
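That public list of tools, tagged with areas and knowledgeable users, need be no more elaborate than a shared table. The tool names, tags and people below are all invented:

```python
# A shared catalogue: tool -> (areas it helps with, who to ask).
tools = {
    "log-grepper":   ({"observability", "server"}, ["sam"]),
    "fuzz-feeder":   ({"input", "robustness"},     ["alex", "pat"]),
    "clock-shifter": ({"dates", "timeouts"},       ["pat"]),
}

def who_knows(area):
    """Find people to pair with for a given area of testing."""
    return sorted({user
                   for _, (areas, users) in tools.items()
                   if area in areas
                   for user in users})

print(who_knows("dates"))
```

A lookup like `who_knows("dates")` points straight at a pairing partner or workshop leader, which is most of what the library is for.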

One of the strengths of session-based testing is the way that it uses brief and frequent retrospectives to move skills through the team. Any exploration has learning at its heart; otherwise discovery builds nothing. Apart from skills in tools and test approaches, a test team needs to build knowledge of the system they are testing. We all know the truism that by the end of a project, the testers know a system better than anyone else. Exploratory test teams also build their awareness of how the system is used (and abused), and have broad connections throughout the users of their system and across the various sponsors and stakeholders. The edges of the team blur as people are seconded in and out; not all exploratory testers will identify themselves as testers. I have occasionally found easy acceptance of exploratory testing way outside the test team, which can give a neatly circular confirmation that the team made a good decision to set time aside for exploration.

In conclusion…

Test teams setting out to explore need to have a conversation with the overall project that acknowledges that this discovery activity cannot be completed, and seeks to find out what the project most values so that the test team can align its work to the overall desires of its customer and the goals of the project. It’s good to have a one-page strategy outlining goals and responsibilities. Internally, the team will need to work out how to make its work visible and trustable, and how to support its exploration and learning. It will need to organise and constantly refine a collection of tools and test ideas, drawing inspiration from diverse sources. As exploration becomes more understood, used and valued, the test team will broaden its skills and blur its edges, bringing its services closer to the coders and the customers.


* Not that anyone in my circles at that time saw – or named – exploratory testing as a distinct thing.
** I’ve mentioned one way that teams arrive at this point in Known ways of managing ET #03 - The Gamble
*** You’re here? Good for you. Welcome, friend. I know that you are interested in details and diversions. As am I. Between us, I don’t mean an octopus. Imagine Cthulhu.
**** because they’re the smallest, and so are arrived at first.
***** which sounds to me like Marshall McLuhan’s reversal effect. You’ll want to read Michael Bolton’s article for more.
****** It’s not like they’re the Spanish Inquisition.
******* and transport infrastructure, logistics plan, pulp mill, petroleum industry...
