Friday, December 23, 2011

Uncommon Ways of Managing ET #03 - Daily News

tl;dr – exploration every single day works wonders

Every day, one person explores for 90 minutes, allocating a further 30 minutes to bug logging, diagnosis and follow-up conversations with others involved in the product. The whole team gathers for a 5 minute briefing on the day’s exploration, where the explorer talks about areas covered, concerns and interesting discoveries. Anything is open for exploration: the configured and working product, its data, a group of requirements, known bugs, user manuals, the production trouble logs.

Some days, knowing the area under exploration, the news would be keenly awaited by all. Some days, there would be nothing new to report. Some days, the approach to the exploration would be far more interesting than the results.

The team might include the briefing in a daily standup. They might decide to be briefed first thing in the morning on what had been found in the previous day. They might choose the next area of exploration at the end of the briefing, allow the explorer to be directed by someone who steers, or give the explorer the initiative. I expect that there would be a big, visible and public compendium of untried ideas.

The time given to exploration is predictable, and should not be seen as a minimum or a maximum, but as a regular activity. The more stable the product, the wider the exploration; the more unstable, the more the collective exploration will reflect the overall assessment of trouble areas. The group learns as a whole, and individuals learn to take a step beyond the group’s expectations. I expect that individuals would look forward to taking their turn as the explorer, and that the team would rarely keep the same explorer from one day to the next. While competitiveness will lead to diversity and excellence, it stands a chance of causing individuals to hide some of their approaches until they become the explorer. Whoever is managing the team will need to take care to temper competition with shared purpose.
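The rota itself is trivial to keep: cycle through the team so that everyone takes a turn and the explorer changes from day to day. A throwaway sketch in Python (the names are invented for illustration):

```python
from itertools import cycle, islice

# Hypothetical team; in practice this is whoever shares the exploration duty.
team = ["Asha", "Bo", "Chen", "Dee"]
rota = cycle(team)

# Each working day, the next person takes the 90-minute exploration slot.
# A five-day week on a four-person team: everyone gets a turn, one goes twice.
next_week = list(islice(rota, 5))
```

A wall chart does the same job, of course; the point is only that the schedule is predictable, so nobody keeps the explorer’s chair two days running.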

The regular and relentless expenditure of a fixed duration, and the expectation of team scrutiny would encourage each explorer to take a concentrated approach in a fruitful (and so potentially novel) direction. This is an approach to exploration (rather than to testing). It works best if interesting parts of the subject of exploration can be reached swiftly – but as easy parts to reach are exhausted, people will think of new things to explore, or construct mechanisms to get them to a new point of exploration. Collectively, their explorations would be more diverse than might be achieved by a single explorer or dedicated scout. If ever the whole team found they had exhausted their stock of new ideas about what to explore, I would use that as a trigger to ask whether we needed more tools, more skills, a more stable set of artefacts to explore, or whether we knew our application really well.

Thursday, December 22, 2011

Known ways of managing ET #03 - The Gamble

tl;dr – exploratory testing can be a gamble. And what's the problem with that?

As I write this, it’s coming up to Christmas*. Christmas is a hard deadline**, but there are clear gift-related requirements from one’s nearest and dearest***. As weeks turn to days and days turn to hours, some people find they have no gifts, yet no appetite either for handing over socks or gift vouchers. So they gamble. They give themselves some time, and head off to hunt for presents. Sometimes, not everyone gets a present, sometimes the presents are junk. Sometimes, they’re inspired. Generally, after spending roughly the allotted time and just a touch more than the allotted budget, the giver has something approaching the right number and selection of gift-ish things. We’re all familiar with a gamble, we know that some are more comfortable with a gamble than others, and we know that others positively delight in the last minute sprint-and-bundle.

Exploratory Testing is in great part about discovery. If you’re looking for real surprises, it’s pretty pointless to say how long**** you think it might take you to find them. It is more rational to set limits on how long you’re going to spend looking. This is a gamble. Your budget isn’t necessarily set to somehow match the value of the stuff you’ll find, but may be rather more influenced by what you have available to let you look. If you’re comfortable with a gamble, you may be comfortable with managing ET by lobbying for a budget from the project, and working to find a great way of spending that budget for the project.

Time for a couple of examples: “We want to spend 40 hours in the first week after delivery looking for trouble” is a gamble. “We’ll need someone for three days to prepare the exploratory environments, data, tools and ideas” is an investment in a known deliverable. These aren’t opposites. You will have goals for both. You can schedule both. Nonetheless, some test managers refuse to take gambles to their project managers.

This is not always because the project manager is genuinely uncomfortable with a gamble. Testing is a Wicked Problem, and no one thanks you for dropping a Wicked Problem into their basket of responsibilities. However, many project managers of my acquaintance are rather good gamblers, instinctively and analytically weighing up risk and return. Their management talents extend well beyond the business of juggling durations and dependencies. If you’re comfortable with a gamble, candid about the realities of ET and convincing about your skills, you may well see the PM’s eyes light up. The game is on. You’ll get some proportion of what you’ve asked for, and you’ll go looking for problems.

Of course, managing ET doesn’t end here. But – crucially – this is where it can start.

* or whatever you call it in your tribe.
** although some years I have good reason to send New Year cards.
*** and from those who are remote and unfamiliar, come to that.
**** I’m using long and budget here to mean the aggregate of time, money, people and so on. I don’t simply mean time or money. And we’re all aware that there isn’t a simple equation to switch back and forth between money and people. P ≠ mc2

Wednesday, December 21, 2011

Uncommon ways of managing ET #02 - Kanban

tl;dr – Can Kanban work for ET?

Kanban is a way of managing inventory* – and by making that management visually clear, of helping workers arrive at improvements in the flow of resources. Kanban doesn’t look like a natural fit with testing: It is rather a stretch to say that test teams make things out of stuff in the way that Toyota makes cars out of steel. More to the point perhaps, testing’s inventory problem is sorting, not storage; it’s easy to find vast numbers of things to test, and simple to think of many ways to test them, but hard to find the right tests right now.

Then there’s ‘Kanban in Software Development’**, which is related, but different***, and describes a way to manage not inventories, but software development work in progress. It’s interesting to read Chris McMahon and Matt Heusser’s take on Kanban.

Either way, Kanban helps to visualise flow and to discover how that flow interacts with the capacity to work. As such, I imagine that it might be a reasonable fit with some of the logistics of managing exploratory testing. Perhaps by giving a snapshot of what the exploratory testers are paying attention to (and what they’re not) it might not only intrigue people across the team but also show up process problems that are ripe for fixing.

This isn’t a note about how to fit in with Kanban-driven software development, but a note about how I would use it as a tool within the test team. Also, I’m doubtful about Kanban as a way of managing work in general, and I wouldn’t use it to manage all the work in a test team. So let me give context to the situation in which I think Kanban could be handy to people managing exploratory testing:
  • You’re already working with charters (from session-based testing or something similar).
  • Your project has budgeted enough effort for ET to make it worthwhile managing the flow of the thing****.
  • You feel the need to visualise and tweak your flow.

Here’s how I would use it:

Set up a chunk of available and visible wall as your Kanban board. Split this up into four columns. A great big fat one on the left for ideas that need to be touched on in the foreseeable***** future. Next right, a skinny column for what you might hope to do today, then a skinnier one that will show the exploratory testing going on right now. Finish up on the far right with a fat one for ideas that don’t need to be considered again in the foreseeable future. You’re going to fill the space with sticky notes. The sticky notes will be moved from left to right. At a glance, anyone in range will be able to see what’s left to do today, what’s being done, what’s been done, and whether testing is going on right now.

Sticky notes start out in the far left column. There will be lots here, probably rather more than the team might be able to test in the available time. Each one will represent a charter that you’re happy to spend a chunk of time on; you’ll write the charter on the note. I think it’s a good idea to put the originator’s name on the charter, so people know who to ask about its history. You’ll also want to represent the time****** you’re giving it, either by note size or with a number.

As a collective, fill up the ‘today’ column with notes. Kanban is a tool to help visualise work, so if you’ve decided that today the team will spend 10 hours exploring but you’ve got 18 hours of charters lined up, it will be obvious that you’ll need to iterate until sanity prevails.

The ‘in progress’ column is to help the team visually manage the flow of work. If you see people as the limiting element of your capacity (ie one person can only do one charter at a time*******), and you’ve got two people testing today, you’ll split the ‘in progress’ column into halves. You’ll give the halves the names of your testers. To give you a sense of today’s capacity, you might even block out chunks so that there is only space for two stickynotes in the column. The two spaces start out empty. When someone starts testing, they move the related stickynote into the ‘in progress’ column.

If that someone was me, I would write my start time and hoped-for end time on the stickynote before I stuck it back onto the Kanban board. Then I would explore, using the charter to direct my testing, within the timebox I had set myself. While testing, I would be very likely to generate more test ideas********. I’d deal with some of these in the session, but some would need to become charters themselves. I’d keep track of these on new stickynotes. When I finished my session, I would move the stickynote out of the space representing me and into the far right hand column, leaving the space empty for a new stickynote.

I would put any new charters on stickynotes into the great mass of notes in the far left hand column. I might even re-jig the work for today, if one of my new charters was more urgent than something else we’d planned. Then, assuming I had more time to explore, I’d move another stickynote from the ‘today’ column into my ‘in progress’ space, and get on with stuff.
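The basic flow – notes moving rightwards through four columns, one charter per tester at a time – can be sketched as a small data model. This is a hypothetical illustration; the column names come from the description above, but the class and method names are my own invention, not anyone’s tool:

```python
# A minimal sketch of the exploratory-testing Kanban board described above.
# Columns hold charters (sticky notes); 'in progress' has one slot per tester.

class Board:
    def __init__(self, testers):
        self.backlog = []      # far left: ideas for the foreseeable future
        self.today = []        # what we hope to get to today
        self.in_progress = {t: None for t in testers}  # one slot per tester
        self.done = []         # far right: not to be considered again

    def add_idea(self, charter):
        self.backlog.append(charter)

    def plan_for_today(self, charter):
        self.backlog.remove(charter)
        self.today.append(charter)

    def start(self, tester, charter):
        # One person can only do one charter at a time.
        if self.in_progress[tester] is not None:
            raise RuntimeError(f"{tester} already has a charter in progress")
        self.today.remove(charter)
        self.in_progress[tester] = charter

    def finish(self, tester):
        # Move the note into 'done', leaving the slot empty for the next one.
        charter = self.in_progress[tester]
        self.in_progress[tester] = None
        self.done.append(charter)
        return charter

    def hours_planned(self, estimates):
        # Sum the time written on today's notes, to compare with capacity.
        return sum(estimates[c] for c in self.today)
```

With this shape, the sanity check on the ‘today’ column falls out naturally: add up the estimates on the planned notes and compare the total against the hours the team has decided to spend.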

That’s the basics. Let’s go one iteration on, and consider some wrinkles.

There’s going to be a surfeit of stickynotes in the left hand column. Too many ideas, and too much to do, is the nature of the testing beast, and I think it’s desirable to show this truth. Having the notes physically available means it’s easy to rearrange them. I suggest that the rearranging happens all the time, by anyone. If activity is dependent on something not-yet-delivered, I’d like to see the stickynotes grouped somehow – perhaps on a sheet that is itself stuck on the board. If some activity is likely to be done soon, I’d like to reposition its stickynotes on the right of the column, ready to jump into the next day’s work. I’d encourage the team to bubble-sort vertically; to adjust pairs of vertically-adjacent notes from time to time so that more important ones rise. I’d like us to explicitly mark off a ‘pit of pointlessness’ at the bottom of the column containing all the stickynotes that represent things we can’t do*********, won’t do or just don’t want to do.
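The vertical bubble-sort amounts to a single pass of pairwise swaps over the column. A sketch, with the caveat that the comparison function – and the idea of scoring importance at all – is a placeholder for whatever the team actually argues about at the board:

```python
def bubble_up(column, more_important):
    """One pass over vertically-adjacent pairs of notes: whenever the lower
    note is more important than the one above it, swap them. Repeated from
    time to time, important charters drift towards the top."""
    for i in range(len(column) - 1):
        if more_important(column[i + 1], column[i]):
            column[i], column[i + 1] = column[i + 1], column[i]
    return column
```

Run occasionally, by anyone passing the board, repeated passes let the important charters rise without anyone ever having to sort the whole column in one sitting.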

Over the day’s work, I want to see the number of stickynotes in the ‘today’ column reduce, and the number in the ‘done’ column increase. I’d like a second pit of pointlessness in the ‘done’ column for any notes representing a session that went bad. I would want to organise the notes in the ‘done’ column so that I could see, day by day, when something was done. You might want a different organisation. We would talk about it, and the board would have brought out a useful discussion.

The capacity element of this Kanban board only really applies in the ‘today’ and the ‘in progress’ columns. I’ve assumed above that the capacity for ‘today’ (or whatever period you use) is in hours, and having one hour represented by a note of given size might help understand how much time is needed, and has been spent. I’m less comfortable with the capacity of the ‘in progress’ column being named individuals – I’m well aware that an empty slot looks like someone’s not working, and I’m also keen that people can explore together and at times of their choosing. I think that I would prefer to work towards capacity in terms of Test Lab resources; clean data, single links to stubbed-out systems, hand-held devices, or whatever causes our primary bottleneck. Again, that’s something for a self-organising team to sort out for itself.

Frankly, I don’t know what to do about activities and time taken logging my bugs and stats and reports, or how to mark a brief debrief. I’ve worked in teams that have included and excluded plenty of activities from their charters, and I’d suggest consistency of approach within a group is more important than the approach itself. Instinct suggests to me that if this board is only for visualising exploratory testing work in progress, I should include time spent doing diagnosis for bug logging and exclude the rest.

No huge surprises here, I hope – but let’s remember that one reason to use Kanban is to optimise away the need for Kanban. In the article referenced in **, Jeffrey Liker was quoted from The Toyota Way: “Kanban is something you strive to get rid of, not to be proud of”. The approach I’ve described above should be seen as a diagnostic tool rather than a solution to a scheduling problem. I expect that in use, one would see plenty of tweaks, not only to the Kanban board and its processes but – more importantly – to the actual work of managing exploratory testing. I hope that you, dear reader, will sort them out in a way that suits your team, and then will share your solutions (and your context) with the rest of us.

I’ve done bits of this from time to time, but not all of it together. If you’re interested in the ideas above, remember Elisabeth Hendrickson’s mantra: “Empirical evidence trumps speculation. Every. Single. Time.”. Some testers are already on this path, but I can’t find references to their experiences. Perhaps readers will furnish those references in the comments. I want to get to hear Adam Geras’s take on the subject, given that he’s not only lived with, but talked about “A Personal Kanban for Exploratory Testers”. I have a memory, which may be made up but feels as if it arrived in the last six weeks, of a series of pictures posted on twitter, with test activities represented as sticky notes that marched left to right across a board. I can’t find a reference to those pictures in any of my notes or bookmarks. If you recognise that as your work or the work of one of your colleagues, I’d love to hear from you, and to discover what you’ve learnt from the real world, and how badly it beats up my imagination.

* Arrived at by Toyota in the 1950s, refined into a cornerstone of the lean movement since, and now a perhaps just-past-trendy meme in the agile community. Here’s Wikipedia on Kanban. As I understand it, Kanban at Toyota describes a system of signalling that a small, local inventory is empty. It is used not only to manage the flow of components, but also to make that flow explicit, and adjustable. One reason to use Kanban is to optimise away the need for Kanban. This appeals to me. I’ve never seen this kind of Kanban in action, but it’s clearly inspired lots of people.
** see Karl Scotland’s Aspects of Kanban for text, David Anderson’s A Kanban System for Software Engineering for video. Here’s a currently poorly-cited article on Wikipedia so you don’t have to take your hands off the mouse. I have seen it in action, but never the same way twice, and you’re better off going to the sources than reading an inevitably compromised footnote like this one.
*** Tokens indicate presence, not absence. It’s about making inventory, not consuming inventory. Capacity and optimisation seem (in practice) to play second fiddle to visualisation and flow.
**** As a rule of thumb, this is more likely to be true if you’re not just concerned with what to test first, but what might be handy to test next. If you’re jamming ET into the gaps around the edges of your existing testing, I wouldn’t bother managing it with Kanban, because there’s no chance it will flow.
***** My horizon for foreseeable is pretty short. The absolute maximum might be the end of the sprint or the date of software release, and it tends to be less than three days. Your team will have a different attachment to the future. As a group, work out what your horizon for “foreseeable” will be and write it large somewhere obvious. Do change it if you need to.
****** Unless you’re working with fixed-length charters.
******* hmm
******** distractions
********* too big, too difficult, too dependent to consider in the foreseeable future. It’s not just the trivial things that are pointless.

Monday, December 19, 2011

Known ways of managing ET #02 - Bug Bash

tl;dr - Bug Bashes are rubbish.

The project gathers people together at an appointed time and place. Everybody splurges on testing for an allotted period, logs some bugs, and stops. If you need examples, see *. It’s a community thing, and there is generally a group hug / doughnut / retrospective before everyone goes back to their day jobs.

I guess one virtue of a bug bash is that it is a concentrated period of work, which may be a good thing in itself. Bug bashes can employ and popularise diversity in a group’s acceptable points of view, which is a plus in my book. You get to meet other people. And maybe a doughnut. But other than these few useful traits, it’s hard to find much that is good.

While the bug hunt room might be a fertile idea-generating ground for small groups in close physical proximity, the format means that many people head into the system for the first time, at the same time. Everyone is testing in parallel for a limited period, so there isn’t much opportunity to learn from each other, to analyse the group’s results, draw conclusions and carry on in a better way. There isn’t much opportunity to appreciate all the different approaches that are being tried, and take a new one – or, indeed, to help the group to explode in variety. There isn’t much opportunity to sling together a swift tool that drastically cuts the manual drudge and finger trouble in later work. And most people, working as novices, will follow a limited gamut of manual paths characterised by learning and exploring the application for the first time**. This could help you predict how your customers*** might interact with the product in the first few hours of use, but it’s not so good for assessing the product in other, more representative ways. A bug bash may be directionless, but that’s not to say that it is diverse.

A bug bash puts strain on available test-environment resources; licenses, batteries for devices, laptops, USB cables, un-damaged data, bandwidth, IP addresses, you name it. I’ve seen a support chap spend days getting the kit together, versioned, charged, addressed, data-filled and working before a push. Even assuming your Test Lab really can support your bashers with hands-on stuff, your back-end and infrastructural resources may not be neatly independent and you’ll end up being stymied by half the group finding the same test-environment bugs. This is great if you’re looking for test-environment or large-group bugs, but again, not so good if you’re looking for a broader or more representative set of interesting issues.

Expect lots of duplicate bugs, as different bug hunters bang into the same low-hanging fruit. Bug bashes often throw up some easily found but hitherto-unseen trouble, but let’s not be unthinkingly self-congratulatory when a third of our crew waste their time investigating, diagnosing and logging the same problem at the same time. More insidiously, duplicate bugs may mean broadly similar paths through the application. If everyone’s doing the same thing, what does that do for coverage? For exploration?

The quality of logged bugs tends to be low, the density high. The brief duration and inevitable peer pressure push the people in the room to value their speed to the first bug, and if they don’t get the early bugs, then, hey, there’s kudos for whoever logs the most. Bugs are logged at speed, in bulk, with generally poor detail and diagnosis. Some managers are happy that their bug bash has resulted in a great wodge of trouble tickets, but remember that a bulge in the bug rate disturbs the workflow of an agile team like a turtle disturbs the digestion of an anaconda. Finally, the hysterical whoop whoop of competition not only breeds false confidence, but can break the spirit of people with their ego tied up in the code and configuration of the product.

A bug hunt allows a team to throw many people at a problem in a short period. It can appear cheap in elapsed time, while substantial in people time. Wrong. Ten people testing for an afternoon might look like a week’s worth of testing, but it is a week’s worth of testing by someone with 3-hour amnesia. I have seen bug hunts used to demonstrate someone in management’s commitment to testing their product. Diverting half the team off their usual path and into testing for a few hours certainly makes a statement, but the statement is that management is committed to theatrical gestures. Grand Guignol**** testing is a titillation, yet I’ve seen it used to substantiate the assertion that a product is fully tested. Frankly, my arse.

When might I use a bug bash? Perhaps if there was a problem reported frequently in beta testing, which was serious and urgent enough to warrant a diverse group’s concentrated attention but not well-reported-enough to act on directly. I might give the reports to a bug bash group, ask them to find out anything about the problem that isn’t already detailed in the reports, and facilitate their communication by sticking a scribe/steering person at a central and visible whiteboard, equipped with a bell. But I’d prefer to use a small team with big kit.

Getting a large group together for a short period is an expensive way of doing rubbish testing. I’d far rather spend the time and money getting the necessary people together and delivering a test environment that is up, running, connected, data-ready and swiftly-rebuildable. Or delivering a diverse and knowable set of data. Or a collection of reasonable (and less reasonable) user scenarios that stand a chance of saying something interesting and meaningful when tried on the product. Or a couple of hours so we know something about each other beyond name, age, height and title. Actually, pretty much anything is better value than a bug bash.

For a while***** it seemed like every other client wanted to throw most of their exploratory eggs into the bug bash basket. I have no idea who kicked off this ludicrous meme, but I’d still like to tweak their nose. Here’s my position:

Managers: of all the usual gambles you can make with your charges, a Bug Bash is one of the dumbest. Get someone to bring you an alternative, and consider it.

Testers: Bug Bashes might look like fun for you, but they suck for the product and the project. Don’t be fooled.

Clear enough?

* I’ve seen Scrum teams devote the whole team for a couple of hours on the 6-8th working day of a 10 day sprint. I’ve seen waterfall teams drag fifty people into the canteen on a Friday afternoon to batter away at a batch of handhelds. I’ve seen test teams commandeer the boardroom for a day at a time straight after they get the code, every time they get the code. Bet you've seen something similar. Enough examples: back to the polemic.
** Ж: “So what did you do?”
Ю: “Well, I tried logging in, and I’d not logged in before, and it went well, so I tried changing my username and resetting my password, using funny characters.”
Ж: <smacks head on desk>
*** Assuming your insiders are good substitutes for customers…
**** A horror show made up of a series of short pieces. Read Grand Guignol in Wikipedia, and think testing. Compare with Soap Opera Testing.
***** 2004-2008, or so

Saturday, December 17, 2011

Uncommon way of managing ET #01 - Scouting

tl;dr – skilled, supported, concentrated exploration

The team makes one person* the dedicated explorer for a period. This person, who we’ll call The Scout, spends all their time exploring. Their job is to find as much interesting stuff as they can. They’re supported (and watched) by others who set up environments, log bugs, keep notes, analyse data, suggest and configure tools. Pay attention: These supporting people are not part-time or less skilled; they’re just as engaged as the scout, but they’re not on point.

There are no sessions or formal session-end debriefs, but the team will want to stop and sit back from time to time and come to some conclusions about what they’ve found. The person (or people) on point are switched around regularly – scouting is fatiguing, and diversity is important. People with different specialities are used as required, and The Scout need not be a tester.

Exploration often has a sense of a frontier, a boundary between the known and unknown. The frontier is fundamental to exploration, and The Scout pushes it ever onwards. We understand, of course, that testing has really wiggly and sometimes discontiguous boundaries, and that the territory behind the boundary may not be well-known, and is likely to change unexpectedly. The team will understand this boundary better than anyone else, and will need to come to an understanding about how much they need to be able to notate and share information about the frontier.

This approach is all about discovery. It’s not cheap, nor is it exhaustive, but it is valuable. The project gambles time in return for information, so The Scout needs to know what the project is interested in. I expect there would be tussles about what The Scout would be exploring, and what they would be looking for. So much the better.

Note: This is an idea. I’ve not worked (quite) like this. Maybe, though, this idea triggers something that you would like to try with your team. Let me know how you get on.

* or one group, but I’ll write in the singular to keep the grammar simple

Friday, December 16, 2011

Known way of managing ET #01 - Stealthily

tl;dr – some people hide their best work from their paymasters

A few people get together to find problems in snatched moments. There's little or no imposed direction, measuring, or task control, and rarely any sense of completeness or coverage. Although the work sometimes gets done with tacit support from one or two individuals in the upper echelons, there is little oversight and it is usually a hidden activity. Testers don’t log time or bugs through the usual channels, and it feels almost like an indulgence, a guilty pleasure.

I've been on a team who hid a pair of stealthy explorers in a corner for a few hours each week. The exploratory testers would tell the rest of the team about the bugs they’d found. Generally those bugs were 're-found' during manual scripted testing to allow them to be logged within the imposed structures of the project – and if no script might find a particular bug, we would assemble one. The customer would not countenance paying for unscripted testing, but was very impressed that we were designing such effective scripts*.

Exploratory testing becomes stealthy typically because those who control and budget for team members' time don't approve of looking for trouble. Look out for rigid 'verification and validation' contracts, consultancy contracts that only allow a small set of explicitly-approved billable activities and legal fears of explicitly-acknowledged defects.

I've seen Exploratory Testing hidden most commonly in teams that focus on user acceptance and regression testing, but I've also seen it in a self-labelled agile team that relied on a (rather sparse) set of low-level confirmatory automated tests. These teams tend to be a mix of self-identified testers and people who may be seconded into the test team or are otherwise keen to avoid the label. However, I’ve even seen stealthy approaches in test teams who were exploring with a degree of management support, but who felt that some of their approaches were beyond what might be accepted.

However nasty such hidden work looks from the outside, it's often rather well supported by individuals within the testing teams. People get the opportunity to work heroically, to subvert management decisions (especially gratifying if those decisions feel irresponsible), and sometimes to have a direct link to someone rather higher up in the tree of project status. The stealthy effort tends to get a geekily-sexy label.

When I meet experienced exploratory testers who have carved themselves a niche in some monolithic institution, they're often proud to be stealthy, and sometimes unhappy to share their approaches. Sometimes their reticence is justified – in groups which insist they don't do ET, acknowledging that you're the ET specialist doesn't necessarily improve your day.

It’s worth mentioning that some companies consciously take a stealthy approach to discovery work so that they have plausible deniability, for instance while finding bugs to get a company out of a contract, or finding bugs that no one who could be legally-liable should know about. Such activity will challenge your ethics. Call time on these if you must – and sometimes, you must – but be aware that you may be shouting at the waves.

* This was in the early nineties. Don’t think I would put up with it now – but don’t think that the practice has ended, either.

Thursday, December 15, 2011

There are Plenty of Ways to Manage Exploratory Testing

tl;dr – lots of different ways to manage ET

The key problem that exploratory testing faces, as a viable discipline, is how it is managed. Of course, there are other well-covered interesting areas - the question of whether to do it at all has been debated to death* amongst us pundits (if not with as much fervour in industry), and if you want to know how to do it, there is a slew of excellent ideas, techniques, disciplines and tricks to choose from**. However, the hairiest problems in actually doing it come from how people organise the work, and how the work and its owning organisation adjust to fit***.

Over the next couple of weeks, I'll post a short series here, snappily entitled "Ten Known Ways to Manage Exploratory Testing" and "Ten Uncommon Ways to Manage Exploratory Testing"****.

Here's a kickoff showing roughly where I'll go:

Ten Known Ways to Manage Exploratory Testing
  • Stealth Job
  • Traditional Retread
  • Off-Piste (Iron Script)
  • Off-Piste (Marshmallow Script)
  • Bug Hunt
  • Set Aside Time
  • Gambling
  • Script-Substitute
  • Session-Based Test Management (James & Jon Bach, me, others)
  • Questioning (Jon Bach)
  • Thread-Based (James and Jon Bach)
  • Touring (James Whittaker and others)
  • Don't bother (thanks to Dave Liebreich for reminding me...)

Ten Uncommon Ways to Manage Exploratory Testing
  • Scouting
  • Kanban
  • Following Lenfle
  • Daily News
  • R&D
  • Testing Guru
  • Video Reports
  • Post-Partum Labelling
  • The Summariser
  • GPS
  • Cloudy
  • The Inquiring Metricator

I'll fill you in on what I***** mean by each of these over the next few weeks. Expect about one a day, in no particular order.

And by the way: I'm posting this because it's good stuff, and you're going to find it useful. I'm posting it now because I've got a course in January that I want you to know about. That's January 25-27, in Oxford. A two-day workshop on exploratory testing techniques, followed by one on managing exploratory testing. Book here.

Note: If you're in Scandinavia, I've got one in Copenhagen on 6-8 March through Morten Hougaard's Pretty Good Testing. Details here.

* And has degenerated as everyone professes to agree with each other's aims while insisting the philosophy's all wrong. Consultants, eh? Welcome to my world.
** Key sources for me****** - both Bachs, Bolton, Carvalho, Edgren, van Eeden, everybody I've ever tested with, Green, Kaner, Hendrickson, Harty, Itkonen, LEWT, me, Richardson, Sabourin, Weinberg, Whittaker. Alphabetical order. A l p h a b e t i c a l. No preference implied. Some are sources for stuff I try not to do...
*** Don't get me wrong – plenty has been written about this, too, over many years. Here's some more.
**** Sorry about the terrible titles. Ten-fer lists wind me up, get me down, piss me off and a whole other bunch of phrasal verbs. But them's the titles. No, I'm not going with The Twelve Days Of Testmas, and yes, obviously there are more than ten of each... I'm not claiming these lists are exhaustive, nor that the items are exclusive. I'll write this up properly once I'm done serialising.
***** You can probably guess some, or most. Sweepstake?
****** Did I miss you out? Apologies. It's a blog posting – half-baked by nature. Email me, and if I forgot you, I'll add you to my list.

Wednesday, December 14, 2011

I've updated my format...

... but not my content.

The old one looked like someone's living room - if that person was living forty years ago. Good riddance.

I understand that lots has changed behind the scenes. Let me know if you spot something that's not what you'd expect.

I've found one fluff so far; the new profile has squashed my headshot.

Monday, December 12, 2011

A couple of hands-on tools

tl;dr – two handy tools

On Sunday, I was watching a film of a fine tester testing. I was keeping track of how his testing differed from mine. I realised that I was looking for the functionality of a couple of tools that I sometimes use, and that he wasn't using at the time.

Both tools are for the Mac, but I imagine that similar tools are available for PCs too. Although neither are testing tools, they do things that are not only convenient, but by being frictionlessly convenient, allow me to observe and trigger behaviour in usefully-different ways.

The first is Mouseposé from Boinx. Mouseposé highlights your on-screen pointer position, and is the close cousin of many tools used by teachers and screencasters to make their actions more obvious.

What makes it useful to me as a tester is that it makes user actions more explicit – not only following the pointer around, but differentiating between one click and double- (and n-) clicks, between right and left button (or however one expresses the peculiar sigil necessary on a trackpad to do the same). It also – and here's the killer – displays the keys you're pressing. These are on the edge of vision, in large letters on a low-screen bezel. They appear very briefly, and captured keys include shift, control, escape, enter and so on. It's great to expose the occasional finger slips that lead to novel behaviour, great for working out what you're doing, and for confirming (or not) that you're doing what you think you're doing. It's especially useful if you, like me, find that your hands don't always do quite what you ask them to do.

If they made a tester version, I'd like it to cope with multi-touch, give me a trail on drag, and ideally have some kind of paper-tape thing to show recent input and save me using a keylogger.

The second is KeyCue from Ergonis. Lean on the command key for more than a few seconds, and KeyCue pops up a bezel with every* currently-available command-key combination. Not just the front application, but all the combos that are currently listening. Different options respond as you change your key combo. It's great for ramping up your expert hands-on-keyboard-flying-user tricks, but more than that, it shows you a whole bunch of potential bug triggers.

Keystroke input can cause unusual behaviour not only because it comes through an alternative route (as it happens, the route is easier to automate, so tends to be covered in developer-side testing/checking) but because it's fast and potentially in conflict with other stuff. Hitting keys in swift succession can expose timing-related bugs (two in Word, one in Excel this morning alone). Hitting keys that have meaning to something other than the thing you're talking to can also get the pizza spinning. I want to try it on a Cyrillic machine and on localised software.

If I had a tester version, I'd prefer that it actually queried the system internals to find out what was listening, rather than simply parsed menus (and possibly-flaky "User-definable custom shortcut descriptions"). But I don't even know if that's possible.

* if you're not in version 6, every is unfortunately more like most

Sunday, December 04, 2011

Something for the Weekend? 006 - Visualisations

tl;dr: visual representations are lovely

Christopher Warnow and onformative have worked together to make a movie which gives a visual dimension (not an explanation) for various sorting algorithms.

I came across Warnow because @cunabula had retweeted a link to his visualisation of Amazon's recommended books, A Thousand Milieus. Warnow uses Amazon's recommendations to find a hundred related books - then shows you them as clusters.

Delightfully, Warnow has made his tool available (not open source, but available*) - so here are visualisations** for books*** about testing that I've been known to recommend:

Hope you've had a lovely weekend.

* data walking and munging is done with Processing, which is dead easy to get a handle on, and the graphing side is basically Gephi. So go on - have a play. Here's your map: download the tool (which brings the Processing library for Gephi), download and install Processing, chuck Francis Li's http library into the right place, fire up the tool script (all 12Kb of it) and check that its behaviour seems reasonable. That's the hard stuff done. Now everything is open to you - your first task is to make the tool search for 30 books, rather than 100. Feel satisfied?
** two visualisations per book? Certainly. Running the tool twice produces diagrams with similar content, but very different layout. Compare and contrast.
*** U.S. Amazon store

Tuesday, November 29, 2011

Exploratory Testing - collected stuff

tl;dr: lots about Exploratory Testing

A colleague asked me if I had written articles or blogs about Exploratory Testing. Why yes, I have. It looks like it's time to share them more prominently.

Exploratory Testing is a crucial element that is often poorly integrated, so when I write about testing, I tend to make reference to Exploratory Testing. However, I don't particularly think of ET as the only interesting game in town, so when I write about it, I hope I put it in a relatively rational, consistent and practical testing landscape. Hence this feels like the first time I have consciously made an inventory of the stuff I've written about ET.

Here goes:

Of my long-form ET stuff, the paper that gets the most citations is 'Adventures in Session-Based Testing', which is about managing ET. I wrote it around ten years ago with the very brilliant Niel van Eeden. It won 'Best Paper' at EuroSTAR and STARWest. I updated it, so it occasionally gets called 'Further Adventures...'

The one I'm most fond of is 'Four Exercises for Teaching Exploratory Testing',* but although it went to the Workshop on Teaching Software Testing 5 back in 2006 and should have been part of the online materials, it vanished instead into online limbo. Astonishingly, when I searched just now, it's finally there - but orphaned from the rest of the site. If you can find a path to it from the rest of the site, I'll give you a hug.

You might already recognise the Black Box machines, which get plenty of attention, and are the single biggest cause of random strangers saying hello at conferences. Occasionally someone on a train (typically to or from Paddington) will say "aren't you... didn't you...", which is odd, but nice**. I know of a dozen or so people and organisations who use them for recruitment purposes – so let's call them my attempt at a balance to the useless five years I spent trying to nudge the ISEB exam marginally closer to fit for purpose.

Agile people tend to have come across Elisabeth Hendrickson's excellent Test Heuristics Cheat Sheet, which has my name on it. I contributed when she and I put our Exploratory Testing classes together for a (very enjoyable) 2-hander that we ran in London and in California.

There are plenty of other papers available on my 'papers' page, and many have related material. A Positive View of Negative Testing has more on techniques, Things Testers Miss is on bug stories, Testing in an Agile Environment covers my experiences of the fit (and friction) of testers - often using ET - on agile projects. In my most recent long-form paper, The Irrational Tester, I appropriated some fashionable ideas from behavioural economics, and I hope not only gave them testing context, but enabled more substantial exploration by drilling back from the pop science to the original research. That one got another 'Best Paper', this time from STAREast.

On that same 'papers' page, you'll find a short series of more conversational short-form things under the heading 'Exploratory Testing Notes'. They go with my Getting a Grip on Exploratory Testing workshop - they're not carefully-checked whitepapers, but nor are they short sharp blog postings. There's yet more that goes with the course, but it changes pretty much every time I do the thing.

As for blog postings, well, you're here - but Blogger is, ironically, a rotten thing for searching.
This is an entry on tools for ET, and this for assurance, and two together that may be of interest.

Reading this, it's obvious (to me) that lots of the other ET-related stuff I've written over the years has slipped away. That's the problem with the internet - ephemera are eternal, but useful stuff gets drowned. I'll fish some out and post them over the next few weeks.

Finally - I run a workshop from time to time called Getting a Grip on Exploratory Testing. It's all hands-on, and is limited to 12 people. I'm running a public class in Oxford on 25-27 January. Lots of friendly testers have twittered about it, and some (who have been on the thing) have recommended it. You'll need to look now, and book quickly, to get the early-bird discount before it ends at the end of the week.

* Two things to note: firstly, the exercises described are software, and available to all. Contact me and I'll send you them - plenty of people do, and they're used all over the world. Secondly, they're deep enough exercises to still be part of my workshop. By all means have a play, but if you're thinking of coming on my workshop, be aware that novelty is important to exploration and you won't get as much from the workshop. That said, I always have alternatives available if someone turns out to be familiar with the exercises.
** Sometimes they just want me to sing a song, which is still odd, kind-of-nice, and tends to mean they're a Bulgarian. Once, someone on a train was both a tester and a Bulgarian. We had lots to talk about. I should call him and arrange lunch.

Thursday, November 10, 2011

Bias: Illusions and corruption

tl;dr we're all nuts

Two recent articles tickled my interest in bias.

In The Observer (How cognitive illusions blind us to reason, an extract from Thinking, Fast and Slow) Daniel Kahneman reminds us that cognitive illusions are stubborn, particularly when one is exercising hard-won, high-level skills. He illustrates this using stock traders, saying that their skill in evaluating the business prospects of a firm is serious work that requires extensive training. "Unfortunately, [this skill] is not sufficient for successful stock trading, where the key question is whether the information about the firm is already incorporated in the price of its stock."

In The Economist, All power tends to corrupt is subheaded "But power without status corrupts absolutely". It describes an experiment in which subjects were asked to select tasks for a colleague to perform. Some of the tasks were demeaning. Before they made their selection, they were given a job they might respect or look down on (the descriptions read a little like fun vs dull testing roles) and a sense of whether they or their colleague had more influence. Those put into the position of having influence but no respect chose significantly more demeaning tasks for their colleague.

Some testers tell me that they do a skilled and difficult job, but don't get much respect. I believe them – and I've found that the articles above have helped me understand my own behaviour a little more clearly.

Monday, November 07, 2011

Byalo Rade - new track from the London Bulgarian Choir

I don't usually do this, but apparently you're interested...

Here's a track from the London Bulgarian Choir's new album, Goro Le Goro. The album will be released on November 26th, with a big gig and party in London. I'll be wearing my furry hat, but you'll need to buy a ticket.

White Rada is sweeping her yard, her slender figure swaying, her arms like pale wings. As she sings, pearls flow from her mouth. ‘Beautiful Rada, my daughter, don’t leave your yard, don’t lift your eyes, don’t give your flower away.'

Sunday, November 06, 2011

Something for the Weekend? 005 - Patterns and evolution

Catherine Young sees plenty in clouds.
(which reminds me: Q: what is the sky? A: all of the above from @gimboland/@posh_somme )

I've had a huge print of this sequence over my desk for years. Every time I actually raise my eyes and look at it, I think of change, excellence, strangeness, practice, talent, choice and stopping. Or something. Hope you've had a lovely weekend. Picasso's Bull suite.

Monday, October 31, 2011

A playful exercise for testers

tl;dr – here's part of a workshop

This exercise, like many exercises used with testers*, encourages people to discover rules** and build models. I last used it in public at Tony Bruce's excellent London Tester Gathering, and continue to use it at corporate clients. However, unlike most of my stuff, it doesn't involve software. Over the years, I've built up a small collection of bits of landscape. My assembly of palm-sized stones generally adorns one of my monitors in the studio, but occasionally has a second life in workshops. The exercise in which I use them is fun and seems valuable, so I thought I'd spread it about.


In a nod of respect to Johanna Rothman***, I call this exercise 'Bring Me a Rock****'.

Here are my instructions:

Set out your stones somewhere where everyone can see and get at them.
Pair up the workshop participants. If you have an odd number of people, pair up with the oddest yourself...
  • Both of you: write down something to identify a rock. Keep it secret.
  • Become ASK and FETCH.
    • ASK says "Bring me a rock"
    • FETCH brings a rock.
    • ASK accepts, or rejects the rock, based on the secret they wrote down earlier.
    • This continues until ASK accepts a rock
  • When ASK accepts a rock, FETCH proposes a model
    • If FETCH is wrong, swap roles.
    • If FETCH is right, ASK writes down new criteria, and FETCH brings another rock
  • carry on...

  • find a new pair if bored
  • observe how you are modelling your partner's models

The exercise is fun, but gets a lot more valuable when the participants talk about it afterwards. My approach is to:
  • discuss the different approaches to test and discovery
  • pay special attention to the way that patterns are set up and broken
  • apply learnings about discovery and patterns to testing
  • avoid too much time spent talking about the different rocks

There's a little fiddle in the instructions that I'd like to draw your attention to. I'm sure you've noticed that the ASK role has a higher status than FETCH. With this in mind, it seems odd that FETCH should stay as FETCH if they correctly predict ASK's model. A playground sense of fairness means that the roles should swap on a 'win'. Do play it this way if you want to. However, I find that swapping on 'lose' introduces interesting, more subversive*****, behaviour. Say hello to the Imp of the Perverse.

If you run an exercise like this, please feel free to change stuff. Also, I would love to hear how the exercise worked for your group. Finally, I'd be grateful if you'd give participants a link to this post.

* I've commonly heard of 'mastermind' being used, but some notable testers use more mutable approaches with lots of rule exploration and discovery. Michael Bolton told me of his new coin meta-game the other day, James Bach and others use cards and dice. If you're a tester interested in games, you'll certainly need to watch Dale Emery and Elisabeth Hendrickson – and try to get along to Elisabeth's Agilistry Studio one of these days.
** There's a whole class of games whose purpose is the discovery of rules - here's Wikipedia's living list of Games with Concealed Rules. You might also want to have a look at Nomic. Nomic's rules aren't concealed, but their evolution is the game. There's a blog post about this around here somewhere...

*** Johanna Rothman's seminal article should be read by all:
**** Some are smooth, some rough, some round, some flat, some solid, some holey. They're all sorts of different colours and sizes. One is incised with the word 'luck'. One is (was) a plum. Take your pick.
***** ie subverting the game, not each other. I find this approach rewards just-about-guessable reasons for picking a rock, so ultimately this helps people in the pair co-operate in the learning, rather than allowing one party to stay in the high-status position just by being wilfully obscure. I'm proud of this wrinkle. You might think it's dumb. Your choice.

Sunday, October 30, 2011

Something for the Weekend? 004 - Games with unknown rules

tl;dr - some games don't have fixed or known rules. Go here or here.

A long time ago, in a wooden house on the shoulder of a snow-covered mountain, I found myself sitting in a circle of people I didn't know at all. We were idly tossing a ball about. The ball was a complex thing with lights and buttons and fresh batteries – and as it moved from person to person, we shared arbitrary rules that were immediately forgotten. We noticed we had the attention of a couple sitting outside the group. They were, it emerged, players in the UK's Go team, and they were simply fascinated.

We continued to throw the ball. Occasionally, another rule would turn up. One of the Go players objected that our rules were inconsistent. Rising to the bait, we gently corrected him by explaining a hitherto unappreciated subtlety to the ruleset. More Go players were drawn towards the circle. As the Go players got down to organising and double-checking, our new rules explored variables beyond buttons and lights; the pause between throws, how hard the chuck, where the target was looking, whose friend they were, whether their name started with a vowel, whether a previous in-game action had temporarily changed their name...

This was, of course, nerd-sniping. But, more interestingly, it was an unexpected kind of game; the kind of game where the rules change. The Go players, on a team trip and consequently pretty much only in contact with other Go players, were more-than-usually locked into a pattern where rules were constant. Believing that all games have fixed rules is an easy habit to fall into.

Nevertheless, games with variable rules aren't unusual – I've spent happy hours playing bar chess, word disassociation, and parroting Mornington Crescent. If you play any games with a five year old, you'll know that rules are (a) important and (b) made up on the spot.

It's tempting to see the world as a game. Some people who explain the world use games correspondingly; as metaphors for the real world. The trouble is that games with fixed, agreed and finite rules are not always a great model. It's all very well to count heads and tails, but we forget that sometimes the coin falls in our tea and the resulting dousing trashes the laptop on which we're attempting to keep score.

The rules of our world are generally local, temporary and inconsistent. Very few rules are universal, very few activities not involving time, energy and accountancy are zero-sum. For me, the joy and value of maths, physics, music, cookery and coding is the discovery of rules; these deeper, emergent, unexpected truths. It's not about playing by the rules, but playing with. So, if you're going to play a game as a metaphor for life, you might consider using a game where the rules are on the unreasonable side of realistic.

All of which is a long and late introduction to: Something for the Weekend? 004 - Games with unknown rules
List of games with mutable rules
List of games with concealed rules


Saturday, September 24, 2011

James Bach moves very fast...

Something for the Weekend? 003

This turned up in Donella Meadows' excellent Thinking in Systems: A Primer*.
I've not seen it before, and thought you'd enjoy it.

A system is a big black box
Of which we can’t unlock the locks,
And all we can find out about
Is what goes in and what comes out.

...there's more, of course. But I'm not convinced that posting the whole thing is a fair (ab)use of copyright. You'll find it on less fussy people's sites. Go fish.

* That's an Amazon affiliate link... trying it out.

Monday, July 11, 2011

7+3 = 11

tl;dr – a boring trivial bug is causing me to procrastinate by writing about it
tool – a spreadsheet to help you choose input data to spot the particular pathology

I love moo cards*.

Here's a snap of a recent bill from moo. Spot the bug.
100 cards £9.17 ... Shipping £2.50 ... VAT £2.33 ... Total £14.01 ... new blog post: priceless

The problem is a known pathology**. It's not uncommon to find that basket calculations are sometimes off by a penny; the calculations are done with precision, and those precise numbers are fiddled to fit with our quantum of currency - the penny. The error fits the fiddle.

In this case, the total including delivery and VAT looks as if it should be precisely £14.004. Expecting this to be £14.00, one might be tempted to speculate that the total has been rounded up by mistake, but two things make me not so sure.

a) I generally see problems related to truncations (which always go down; £14.004 -> £14.00) and normal rounding (£14.005 -> £14.01, but £14.004 -> £14.00).
b) thinking about it, I had a 10% discount on the normal price as a sop for the knock-on effects of a previous bug. Discounts add another layer of complexity.

Let's work the numbers:
  • £10.19 is the normal price.
  • After the 10% discount, that would be £9.171, not £9.17.
  • Add £2.50 delivery to arrive at £11.671.
  • 20% VAT on that is £2.3342.
  • The precise total is £14.0052 - which will be rounded up to £14.01.
  • The VAT component is £2.3342 - which will be rounded down to £2.33.
That seems more plausible.

Is this a rare combination of numbers? I built a spreadsheet to explore, and it is not; 300 prices between 1p and £10 show this behaviour.
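If you'd rather not open a spreadsheet, the same arithmetic is easy to sketch in code. Here's a rough Python equivalent of my model – this is an illustration, not Moo's actual code, and it assumes the penny rounding is half-up, which is what the figures above suggest:

```python
from decimal import Decimal, ROUND_HALF_UP

DISCOUNT = Decimal("0.10")   # 10% discount
DELIVERY = Decimal("2.50")   # delivery charge
VAT_RATE = Decimal("0.20")   # 20% VAT

def penny(x):
    """Round to the nearest penny, halves going up."""
    return x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

def bill(normal_price):
    """Return the printed (item, VAT, total) lines for a one-item basket."""
    item = normal_price * (1 - DISCOUNT)   # may gain a third decimal place
    net = item + DELIVERY
    vat = net * VAT_RATE
    total = net + vat
    return penny(item), penny(vat), penny(total)

# The worked example above: a £10.19 order with the 10% discount
item, vat, total = bill(Decimal("10.19"))
print(item, vat, total)        # 9.17 2.33 14.01
print(item + DELIVERY + vat)   # 14.00 - a penny short of the printed total

# Sweep prices from 1p to £10.00 and count the bills that don't add up
# (my spreadsheet found 300 in this range)
mismatches = sum(
    1
    for n in range(1, 1001)
    for i, v, t in [bill(Decimal(n) / 100)]
    if i + DELIVERY + v != t
)
print(mismatches)
```

Change DISCOUNT, DELIVERY and VAT_RATE to play with the conditions, just as with the spreadsheet.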

All this is in the context of a 10% discount, 20% tax and £2.50 delivery. But my spreadsheet is a model, so I can change the conditions. Playing with it gives me the following empirical understandings:
  • you don't see this problem without a discount;
  • within reasonable ranges, picking alternative discounts doesn't change the incidence much;
  • within reasonable ranges, changing the tax doesn't change the incidence much - I've seen it go down to 200;
  • the range of incidence seems to be 200-300 for 'reasonable' ranges of tax and discount
  • the delivery charge doesn't matter if it's to 2dp (and my model is inaccurate with 3dp)
Constraining myself to a basket with one item, I expect that I can sit down and demonstrate mathematically to my own satisfaction that in order to see a total that rounds up (ie £14.005), and an associated tax that rounds down (ie £2.3342), you need a price with a third decimal place - ie a normal price that has already been adjusted in some way. But that efficiency, while attractive, is a procrastination too far. For now, I'm happy with the general rule of thumb; you only see this problem when at least one thing in your basket can have a price that includes fractions of a penny - but if the potential is there, you'll see it for 20-30% of your possible prices.

Coders: One solution is to do all calculations off-screen to full precision, but produce the totals on the bill from the numbers that actually go on the bill. Another is to round your total to 2dp before calculating tax. Of course this can mean having two containers for very similar information.
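A rough sketch of those fixes, using the numbers above (hypothetical code, not Moo's): round each line to a printable penny value first, then build the later lines from the numbers that will actually appear on the bill.

```python
from decimal import Decimal, ROUND_HALF_UP

def penny(x):
    """Round to the nearest penny, halves going up."""
    return x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Round the discounted price to a printable penny value first...
item = penny(Decimal("10.19") * Decimal("0.90"))   # 9.17
delivery = Decimal("2.50")
# ...then calculate VAT from the numbers that will appear on the bill
vat = penny((item + delivery) * Decimal("0.20"))   # 2.33
total = item + delivery + vat                      # 14.00 - and it adds up
print(item, delivery, vat, total)
```

The penny goes to the taxman rather than to Moo, but the printed lines are now guaranteed to sum to the printed total, which is what my VAT paperwork cares about.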

So far, so fun. For testers.

Frankly, I don't mind paying the extra penny. My problem is what the penny does to my paperwork.

I'm doing my VAT accounts, where I separate the £2.33 from the rest of the total. Moo's fluff on their bill means that stuff that should add up to zero, doesn't. I'll have to fudge the penny, which means introducing a special case. I'll have to be careful, because special cases are where I make accounting mistakes. That's a pain. I hope that you (or Moo) can use that description to advocate a fix for similar bugs.

And I hope that you go out there to find them. Here's the link to that spreadsheet again. I'll use it to generate data to help me reveal this issue***. You may use it and abuse as you wish. Please attribute me if you use it in public. It's got a second page that shows incidence, and a third with instructions, license and known bugs.

*  Those of you who have had a business card from me are charmed by them, too. Moo's custom postcards will lend excellent grooviness to a game I have in mind. I want to make special stickers for a bunch of post-it related activities. Moo have always responded swiftly and sweetly to problems, and  to top it all, they're local.
** I know this pathology, and I look for it when I test. Indeed, I've got an exercise based on something very similar in one of my classes. Some people question the veracity of that exercise; surely no-one really has obvious errors like this any more. Ha.
*** I tried a google docs version, but it runs like a three legged dog on Safari and Firefox. I was so discouraged I didn't bother with Chrome...

Friday, July 08, 2011

Something for the Weekend? 002

Briefly – I'm back in the studio for the weekend* – I'm fascinated by the way that technology enables interactive art. Exploration, discovery and emergent properties are desirable, even crucial qualities of the work.

Here are the sites of two people whose work I find especially interesting:

Brendan Dawes – and you should also try MagneticNorth
Robert Hodgin – who has more at his blog, Flight404

I'm thinking of getting to know Processing, before trying to get to grips with Cinder. Any of you got experience to share?

Enjoy the weekend.

* teaboy, mainly

Wednesday, July 06, 2011

Broken by design

tl;dr – I can't print or save a filled-in form generated by HMRC software

A pet peeve, involving the Taxman and Adobe.

The Taxmen need a form. They'd like it online, and generously supply free software to help me get the numbers in the boxes. I use their software. It produces a form as a .pdf.

It's a dynamically-filled form*, so if I use one of my usual pdf readers, the boxes are devoid of numbers. Only with Adobe's reader can I see my numbers in the boxes. Adobe's reader is desperately slow and buggy, and I need to explicitly allow it to trust this locally-made form in order to see anything meaningful – but that's not my peeve. My peeve starts when I get to a point where the form is useful, and I'm shown a neat purple message:

        You cannot save data typed into this form. Please print your completed form if you would like a copy for your records.

Well, I would. Note that I've not typed any data into the form; it's been generated for me by the Taxman's tool. I go to print the form. I tend to print to .pdf, as I'm swamped with archived paper as it is, and a .pdf is both searchable and findable. A dialog appears, jauntily sporting the following:

        Saving a PDF file when printing is not supported. Instead, choose File > Save.

I consider printing it to paper**, scanning it in, OCR-ing the thing and calling it quits. Just in case, I try File > Save. No one will be surprised to know that I'm told:

        Data typed into this form will not be saved. Adobe Reader can only save a blank copy of this form.

A blank form? I'm sure that's what the taxman intended. The observant will notice that, as happens so often, following the instructions will put me into a self-defeating infinite loop. I've met this before, and that's my peeve.

It's big guns time. I pull out Acrobat 6 Professional. We're into software-that-costs-money territory here, and indeed have plunged straight into that unhappy valley of software-that-I-need-once-in-a-blue-moon-but-buggers-up-my-machine-to-such-an-extent-that-I-wince. Acrobat Professional is, for those of you unacquainted with Adobe's upgrade paths, ongoingly expensive. It also plays nasty with the other children in the sandpit, and doesn't do anything (except this) that I need.

A minute or two later, after it has managed to load, trashed the screen redraw, bunged the CPU to 100% and asked me to upgrade (not on your nelly, you tired, eight-year-old hack, although I admit I have considered it), I try printing again.

        Saving a PDF file when printing is not supported. Instead, choose Save from the File menu.

There's that 'not supported' message again. Acrobat aside, I've not yet met an application that can print, but that can't aim it at a pdf. Perhaps I should set up a .pdf printer - but choosing not to address the bristles on that yak for the moment, I choose Save from the File menu, and - astonishingly - I can.

I suspect that by not supported, Adobe actually means restricted to the paid-for version. I suspect (suspicious tester that I am) that Adobe have done this on purpose. The taxman has chosen to provide me with a tool that throws my data away, unless I pay Adobe for the joy of keeping it. I wonder whether the Taxman intended, condoned, or just didn't notice this behaviour.

Post scriptum***: As it happens, the tool turns out to be a dead end. The unprintable form is for my records only. Once I'm done slapping the desk, I fill in the online form in seconds and I'm done.

* For the initiated, this means that the .pdf (empty, pretty) is accompanied by a .fdf (just the numbers).
** Portable Document Format? My arse. Portable when folded up and shoved in a briefcase.

*** As distinct from PostScript. Print joke. Ah ha ha ha, bonk****.
**** Man laughing his head off.

Friday, July 01, 2011

Something for the Weekend? 001 (zero-padded in hope)

Wil Shipley writes code; I use tools he has had a hand in* at least weekly, more so when I’m onsite**. He seems to be an auteur, involved in all stages of translating ideas into code into cash. He also writes words – copiously, but no longer regularly as far as his blog is concerned. A few years ago, he wrote up a narrative describing his thought processes and discoveries as he worked through a rotten bug. It’s called The Greatest Bug of All, and is packed with meaty goodness.

For those of you who are more visual, those who are interested in variation and multiples, or those fascinated by the anonymous human touch, here is Stephen Wragg’s collection of walking men. Note the specification.


* typically OmniOutliner and OmniGraffle, although Shipley has since moved on to focus on Delicious Library, which I don’t use so actively.
** If I’m allowed to use my own kit…

Thursday, June 30, 2011

How to Assure Exploratory Testing

Over on LinkedIn, Paul Gerrard has started a "Test Assurance" group, and has asked "How can we assure exploratory testing?"

Lacking in discipline, I found it hard to read every bit of blither posted, but found it easy to respond. In the spirit of reuse, here is my take...

In the small sample of organisations that I’ve met who have an assurance group, each has gone about that work differently*. Some want to see that the team isn't wasting its time, some want to see that the organisation isn't developing false confidence, some want to see that the organisation can substantiate its claims, some want to see that set processes are being followed. If you're asked to assure exploratory testing, it's a good idea to find out what you are expected to judge and influence; those expectations may be in opposition to your own.

However, let’s assume that I am being asked, and I'm being asked to assure exploratory testing in an organisation that isn't doing something I find outrageous**, and I'm being asked to set up some sort of assurance without following the rails of a pre-existing culture. We can all dream.

In that case, I would work in a way that expected 'assurance' to independently assess the degree to which information coming out of a process of work could be trusted, and the degree to which the organisation as a whole trusts it. I’d expect assurance to be a sampling activity, with access to anything but without expectations of touching everything.

For exploratory testing, I would hope to:
- Watch individual exploratory testing activities to judge whether execution was skilled (I’d want a wide variety of business and testing skills on display)
- Watch the team to judge whether their exploratory testing work was supported with tools and information (exploratory testing without tools is weak and slow, exploratory testing in the dark is crippled)
- Gauge whether the team had independence of thought, and to what degree that independence was enabled and encouraged by the wider organisation (bias informs me about trust)
- Read some of the output (reports, notes, bugs) and watch some debriefs (if any) to judge how well the team transmits knowledge about its testing activities
- Follow unexpected information to see to what extent it was valued or discounted (exploratory testing finds surprises; is that information useful and treated as such?)

I’d hope to do the following less-ET-specific tasks, too.
- Dig into any points where information could be restricted or censored (i.e. inappropriate sign-off, slow processing, disbelief or denial)
- Observe the use and integration of the team’s information in the wider organisation to judge whether the work was relevant, and accurately understood
- Judge the team’s sense of direction by observing the ways that information found, lessons learned, and feedback from the organisation affect the team’s longer-term choices

I hope that our hypothetical organisation would use these insights to help jiggle whatever needed jiggling, and that once jiggled, the organisation could feel that they could trust the information from the team even more, and that the team would feel even more relevant and valued. Then I'd kick myself for signing the NDA that stopped me writing about it.

* Some have made the system, some have bought it, some have commissioned it, some have bolted it together. None have used exploratory testing as their sole means of testing. And I've never needed to 'sell' exploratory testing to those organisations that are risk-aware enough to have an assurance group.
** like trying to find a way to avoid clear responsibilities in a legally-defensible way, or trying to avoid the truth about the system under test

Tuesday, June 28, 2011

Testing, testing, 1 2 3

Tfft... dffd... is this thing on?*

I'm spending a lot of time, recently, in studios. Some of them not my own. Here are two patterns of behaviour:

A: Silence - "Can you hear me?" - "Yup" - cue playback / recording until finished or otherwise interrupted.
B: Noises (multiple, some questionable) - "What's that?" - "What?" - "That..." - "Oh. Um..." - stop recording, fiddle until the unexpected is understood.

Hello, world.

* Blogger stopped letting me update the server, the server went pear shaped, and I (copiously) lost enthusiasm. None of these things were connected, but their effect in combination produced a long period of silence. Let's see if this is an intermittent burble, or a gradually-increasing stream.