Friday, December 23, 2011

Uncommon Ways of Managing ET #03 - Daily News

tl;dr – exploration every single day works wonders

Every day, one person explores for 90 minutes, allocating a further 30 minutes to bug logging, diagnosis and follow-up conversations with others involved in the product. The whole team gathers for a 5-minute briefing on the day’s exploration, where the explorer talks about areas covered, concerns and interesting discoveries. Anything is open for exploration: the configured and working product, its data, a group of requirements, known bugs, user manuals, the production trouble logs.

Some days, when the area under exploration was known, the news would be keenly awaited by all. Some days, there would be nothing new to report. Some days, the approach to the exploration would be far more interesting than the results.

The team might include the briefing in a daily standup. They might decide to be briefed first thing in the morning on what had been found the previous day. They might choose the next area of exploration at the end of the briefing, allow the explorer to be directed by someone who steers, or give the explorer the initiative. I expect that there would be a big, visible and public compendium of untried ideas.

The time given to exploration is predictable, and should be seen not as a minimum or a maximum, but as a regular activity. The more stable the product, the wider the exploration; the less stable, the more the collective exploration will reflect the overall assessment of trouble areas. The group learns as a whole, and individuals learn to take a step beyond the group’s expectations. I expect that individuals would look forward to taking their turn as the explorer, and that the team would rarely keep the same explorer from one day to the next. While competitiveness will lead to diversity and excellence, it stands a chance of causing individuals to hide some of their approaches until they become the explorer. Whoever is managing the team will need to take care to temper competition with shared purpose.
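If a team wanted to keep the rotation honest, a few lines of code could pick tomorrow's explorer. Here's a minimal sketch in Python; the team names and the least-recently-on-point rule are my illustrative assumptions, not part of the approach:

```python
from collections import deque

def next_explorer(rotation, today=None):
    """Pick the next explorer: least-recently-on-point goes first,
    and nobody explores two days running."""
    candidate = rotation[0]
    if candidate == today and len(rotation) > 1:
        candidate = rotation[1]
    rotation.remove(candidate)
    rotation.append(candidate)  # back of the queue until their turn comes round
    return candidate

team = deque(["Ana", "Bo", "Chen", "Dee"])   # invented names
explorer = None
for day in range(1, 7):
    explorer = next_explorer(team, explorer)
    print(f"Day {day}: {explorer} explores for 90 min, then logs for 30 min")
```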

The regular and relentless expenditure of a fixed duration, and the expectation of team scrutiny would encourage each explorer to take a concentrated approach in a fruitful (and so potentially novel) direction. This is an approach to exploration (rather than to testing). It works best if interesting parts of the subject of exploration can be reached swiftly – but as easy parts to reach are exhausted, people will think of new things to explore, or construct mechanisms to get them to a new point of exploration. Collectively, their explorations would be more diverse than might be achieved by a single explorer or dedicated scout. If ever the whole team found they had exhausted their stock of new ideas about what to explore, I would use that as a trigger to ask whether we needed more tools, more skills, a more stable set of artefacts to explore, or whether we knew our application really well.

Thursday, December 22, 2011

Known ways of managing ET #03 - The Gamble


tl;dr – exploratory testing can be a gamble. And what's the problem with that?

As I write this, it’s coming up to Christmas*. Christmas is a hard deadline**, but there are clear gift-related requirements from one’s nearest and dearest***. As weeks turn to days and days turn to hours, some people find they have no gifts, yet no appetite either for handing over socks or gift vouchers. So they gamble. They give themselves some time, and head off to hunt for presents. Sometimes, not everyone gets a present, sometimes the presents are junk. Sometimes, they’re inspired. Generally, after spending roughly the allotted time and just a touch more than the allotted budget, the giver has something approaching the right number and selection of gift-ish things. We’re all familiar with a gamble, we know that some are more comfortable with a gamble than others, and we know that others positively delight in the last minute sprint-and-bundle.

Exploratory Testing is in great part about discovery. If you’re looking for real surprises, it’s pretty pointless to say how long**** you think it might take you to find them. It is more rational to set limits on how long you’re going to spend looking. This is a gamble. Your budget isn’t necessarily set to somehow match the value of the stuff you’ll find, but may be rather more influenced by what you have available to let you look. If you’re comfortable with a gamble, you may be comfortable with managing ET by lobbying for a budget from the project, and working to find a great way of spending that budget for the project.

Time for a couple of examples: “We want to spend 40 hours in the first week after delivery looking for trouble” is a gamble. “We’ll need someone for three days to prepare the exploratory environments, data, tools and ideas” is an investment in a known deliverable. These aren’t opposites. You will have goals for both. You can schedule both. Nonetheless, some test managers refuse to take gambles to their project managers.

This is not always because the project manager is genuinely uncomfortable with a gamble. Testing is a Wicked Problem, and no one thanks you for dropping a Wicked Problem into their basket of responsibilities. However, many project managers of my acquaintance are rather good gamblers, instinctively and analytically weighing up risk and return. Their management talents extend well beyond the business of juggling durations and dependencies. If you’re comfortable with a gamble, candid about the realities of ET and convincing about your skills, you may well see the PM’s eyes light up. The game is on. You’ll get some proportion of what you’ve asked for, and you’ll go looking for problems.

Of course, managing ET doesn’t end here. But – crucially – this is where it can start.


* or whatever you call it in your tribe.
** although some years I have good reason to send New Year cards.
*** and from those who are remote and unfamiliar, come to that.
**** I’m using long and budget here to mean the aggregate of time, money, people and so on. I don’t simply mean time or money. And we’re all aware that there isn’t a simple equation to switch back and forth between money and people. P ≠ mc²

Wednesday, December 21, 2011

Uncommon ways of managing ET #02 - Kanban

tl;dr – Can Kanban work for ET?

Kanban is a way of managing inventory* – and by making that management visually clear, of helping workers arrive at improvements in the flow of resources. Kanban doesn’t look like a natural fit with testing: It is rather a stretch to say that test teams make things out of stuff in the way that Toyota makes cars out of steel. More to the point perhaps, testing’s inventory problem is sorting, not storage; it’s easy to find vast numbers of things to test, and simple to think of many ways to test them, but hard to find the right tests right now.

Then there’s ‘Kanban in Software Development’**, which is related, but different***, and describes a way to manage not inventories, but software development work in progress. It’s interesting to read Chris McMahon and Matt Heusser’s take on Kanban.

Either way, Kanban helps to visualise flow and to discover how that flow interacts with the capacity to work. As such, I imagine that it might be a reasonable fit with some of the logistics of managing exploratory testing. Perhaps by giving a snapshot of what the exploratory testers are paying attention to (and what they’re not) it might not only intrigue people across the team but also show up process problems that are ripe for fixing.

This isn’t a note about how to fit in with Kanban-driven software development, but a note about how I would use it as a tool within the test team. Also, I’m doubtful about Kanban as a way of managing work in general, and I wouldn’t use it to manage all the work in a test team. So let me give context to the situation in which I think Kanban could be handy to people managing exploratory testing:
  • You’re already working with charters (from session-based testing or something similar).
  • Your project has budgeted enough effort for ET to make it worthwhile managing the flow of the thing****.
  • You feel the need to visualise and tweak your flow.


Here’s how I would use it:

Set up a chunk of available and visible wall as your Kanban board. Split this up into four columns. A great big fat one on the left for ideas that need to be touched on in the foreseeable***** future. Next right, a skinny column for what you might hope to do today, then a skinnier one that will show the exploratory testing going on right now. Finish up on the far right with a fat one for ideas that don’t need to be considered again in the foreseeable future. You’re going to fill the space with sticky notes. The sticky notes will be moved from left to right. At a glance, anyone in range will be able to see what’s left to do today, what’s being done, what’s been done, and whether testing is going on right now.

Sticky notes start out in the far left column. There will be lots here, probably rather more than the team might be able to test in the available time. Each one will represent a charter that you’re happy to spend a chunk of time on; you’ll write the charter on the note. I think it’s a good idea to put the originator’s name on the charter, so people know who to ask about its history. You’ll also want to represent the time****** you’re giving it, either by note size or with a number.

As a collective, fill up the ‘today’ column with notes. Kanban is a tool to help visualise work, so if you’ve decided that today the team will spend 10 hours exploring but you’ve got 18 hours of charters lined up, it will be obvious that you’ll need to iterate until sanity prevails.

The ‘in progress’ column is to help the team visually manage the flow of work. If you see people as the limiting element of your capacity (ie one person can only do one charter at a time*******), and you’ve got two people testing today, you’ll split the ‘in progress’ column into halves. You’ll give the halves the names of your testers. To give you a sense of today’s capacity, you might even block out chunks so that there is only space for two stickynotes in the column. The two spaces start out empty. When someone starts testing, they move the related stickynote into the ‘in progress’ column.

If that someone was me, I would write my start time and hoped-for end time on the stickynote before I stuck it back onto the Kanban board. Then I would explore, using the charter to direct my testing, within the timebox I had set myself. While testing, I would be very likely to generate more test ideas********. I’d deal with some of these in the session, but some would need to become charters themselves. I’d keep track of these on new stickynotes. When I finished my session, I would move the stickynote out of the space representing me and into the far right hand column, leaving the space empty for a new stickynote.

I would put any new charters on stickynotes into the great mass of notes in the far left hand column. I might even re-jig the work for today, if one of my new charters was more urgent than something else we’d planned. Then, assuming I had more time to explore, I’d move another stickynote from the ‘today’ column into my ‘in progress’ space, and get on with stuff.
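To make the moving parts concrete, here's a minimal sketch of the board as data, in Python. The columns follow the walkthrough above; the Charter fields, the tester names and the hour-based capacity rule are my illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    mission: str
    originator: str   # who to ask about its history
    hours: float      # the time you're giving it

@dataclass
class Board:
    ideas: list = field(default_factory=list)        # fat left column
    today: list = field(default_factory=list)        # skinny 'today' column
    in_progress: dict = field(default_factory=dict)  # tester -> charter
    done: list = field(default_factory=list)         # fat right column

    def plan_today(self, capacity_hours):
        planned = sum(c.hours for c in self.today)
        if planned > capacity_hours:   # eg 18h of charters lined up, 10h available
            print(f"Over capacity: {planned}h planned, {capacity_hours}h available"
                  " - iterate until sanity prevails")

    def start(self, tester, charter):
        self.today.remove(charter)                      # move the note rightwards
        self.in_progress[tester] = charter

    def finish(self, tester):
        self.done.append(self.in_progress.pop(tester))  # frees the tester's space

board = Board()
board.today = [Charter("Explore login lockout", "JL", 2.0),
               Charter("Probe CSV import limits", "AG", 1.5)]
board.plan_today(capacity_hours=10)
board.start("Ana", board.today[0])
board.finish("Ana")
```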


That’s the basics. Let’s go one iteration on, and consider some wrinkles.

There’s going to be a surfeit of stickynotes in the left hand column. Too many ideas, and too much to do, is the nature of the testing beast, and I think it’s desirable to show this truth. Having the notes physically available means it’s easy to rearrange them. I suggest that the rearranging happens all the time, by anyone. If activity is dependent on something not-yet-delivered, I’d like to see the stickynotes grouped somehow – perhaps on a sheet that is itself stuck on the board. If some activity is likely to be done soon, I’d like to reposition its stickynotes on the right of the column, ready to jump into the next day’s work. I’d encourage the team to bubble-sort vertically; to adjust pairs of vertically-adjacent notes from time to time so that more important ones rise (see the sketch below). I’d like us to explicitly mark off a ‘pit of pointlessness’ at the bottom of the column containing all the stickynotes that represent things we can’t do*********, won’t do or just don’t want to do.
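That vertical bubble-sort really is just single passes of the classic algorithm, performed by people. A sketch of one pass, with made-up importance scores standing in for the team's judgement:

```python
# Notes listed top-to-bottom; each pairs a charter with an (invented) importance.
notes = [("stale data charter", 2), ("new payment flow", 9), ("tooltip typos", 1)]
for i in range(len(notes) - 1):
    if notes[i][1] < notes[i + 1][1]:   # the note below matters more...
        notes[i], notes[i + 1] = notes[i + 1], notes[i]  # ...so it rises one place
print([mission for mission, _ in notes])  # 'new payment flow' is now on top
```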

Over the day’s work, I want to see the number of stickynotes in the ‘today’ column reduce, and the number in the ‘done’ column increase. I’d like a second pit of pointlessness in the ‘done’ column for any notes representing a session that went bad. I would want to organise the notes in the ‘done’ column so that I could see, day by day, when something was done. You might want a different organisation. We would talk about it, and the board would have brought out a useful discussion.

The capacity element of this Kanban board only really applies in the ‘today’ and the ‘in progress’ columns. I’ve assumed above that the capacity for ‘today’ (or whatever period you use) is in hours, and having one hour represented by a note of a given size might help us understand how much time is needed, and has been spent. I’m less comfortable with the capacity of the ‘in progress’ column being named individuals – I’m well aware that an empty slot looks like someone’s not working, and I’m also keen that people can explore together and at times of their choosing. I think that I would prefer to work towards capacity in terms of Test Lab resources: clean data, single links to stubbed-out systems, hand-held devices, or whatever causes our primary bottleneck. Again, that’s something for a self-organising team to sort out for itself.

Frankly, I don’t know what to do about activities and time taken logging my bugs, stats and reports, or how to mark a brief debrief. I’ve worked in teams that have included and excluded plenty of activities from their charters, and I’d suggest consistency of approach within a group is more important than the approach itself. Instinct suggests to me that if this board is only for visualising exploratory testing work in progress, I should include time spent doing diagnosis for bug logging and exclude the rest.


No huge surprises here, I hope – but let’s remember that one reason to use Kanban is to optimise away the need for Kanban. In the article referenced in **, Jeffrey Liker was quoted from The Toyota Way: “Kanban is something you strive to get rid of, not to be proud of”. The approach I’ve described above should be seen as a diagnostic tool rather than a solution to a scheduling problem. I expect that in use, one would see plenty of tweaks not only to the Kanban board and its processes but – more importantly – to the actual work of managing exploratory testing. I hope that you, dear reader, will sort them out in a way that suits your team, and then will share your solutions (and your context) with the rest of us.

I’ve done bits of this from time to time, but not all of it together. If you’re interested in the ideas above, remember Elisabeth Hendrickson’s mantra: “Empirical evidence trumps speculation. Every. Single. Time.” Some testers are already on this path, but I can’t find references to their experiences. Perhaps readers will furnish those references in the comments. I want to get to hear Adam Geras’s take on the subject, given that he’s not only lived with, but talked about “A Personal Kanban for Exploratory Testers”. I have a memory, which may be made up but feels as if it arrived in the last six weeks, of a series of pictures posted on Twitter, with test activities represented as sticky notes that marched left to right across a board. I can’t find a reference to those pictures in any of my notes or bookmarks. If you recognise that as your work or the work of one of your colleagues, I’d love to hear from you, and to discover what you’ve learnt from the real world, and how badly it beats up my imagination.



* Arrived at by Toyota in the 1950s, refined into a cornerstone of the lean movement since, and now perhaps a just-past-trendy meme in the agile community. Here’s Wikipedia on Kanban. As I understand it, Kanban at Toyota describes a system of signalling that a small, local inventory is empty. It is used not only to manage the flow of components, but also to make that flow explicit, and adjustable. One reason to use Kanban is to optimise away the need for Kanban. This appeals to me. I’ve never seen this kind of Kanban in action, but it’s clearly inspired lots of people.
** see Karl Scotland’s Aspects of Kanban for text, David Anderson’s A Kanban System for Software Engineering for video. Here’s a currently poorly-cited article on Wikipedia so you don’t have to take your hands off the mouse. I have seen it in action, but never the same way twice, and you’re better off going to the sources than reading an inevitably compromised footnote like this one.
*** Tokens indicate presence, not absence. It’s about making inventory, not consuming inventory. Capacity and optimisation seem (in practice) to play second fiddle to visualisation and flow.
**** As a rule of thumb, this is more likely to be true if you’re not just concerned with what to test first, but what might be handy to test next. If you’re jamming ET into the gaps around the edges of your existing testing, I wouldn’t bother managing it with Kanban, because there’s no chance it will flow.
***** My horizon for foreseeable is pretty short. The absolute maximum might be the end of the sprint or the date of software release, and it tends to be less than three days. Your team will have a different attachment to the future. As a group, work out what your horizon for “foreseeable” will be and write it large somewhere obvious. Do change it if you need to.
****** Unless you’re working with fixed-length charters.
******* hmm
******** distractions
********* too big, too difficult, too dependent to consider in the foreseeable future. It’s not just the trivial things that are pointless.

Monday, December 19, 2011

Known ways of managing ET #02 - Bug Bash

tl;dr - Bug Bashes are rubbish.

The project gathers people together at an appointed time and place. Everybody splurges on testing for an allotted period, logs some bugs, and stops. If you need examples, see *. It’s a community thing, and there is generally a group hug / doughnut / retrospective before everyone goes back to their day jobs.

I guess one virtue of a bug bash is that it is a concentrated period of work, which may be a good thing in itself. Bug bashes can employ and popularise diversity in a group’s acceptable points of view, which is a plus in my book. You get to meet other people. And maybe a doughnut. But other than these few useful traits, it’s hard to find much that is good.

While the bug hunt room might be a fertile idea-generating ground for small groups in close physical proximity, the format means that many people head into the system for the first time, at the same time. Everyone is testing in parallel for a limited period, so there isn’t much opportunity to learn from each other, to analyse the group’s results, draw conclusions and carry on in a better way. There isn’t much opportunity to appreciate all the different approaches that are being tried, and take a new one – or, indeed, to help the group to explode in variety. There isn’t much opportunity to sling together a swift tool that drastically cuts the manual drudge and finger trouble in later work. And most people, working as novices, will follow a limited gamut of manual paths characterised by learning and exploring the application for the first time**. This could help you predict how your customers*** might interact with the product in the first few hours of use, but it’s not so good for assessing the product in other, more representative ways. A bug bash may be directionless, but that’s not to say that it is diverse.

A bug bash puts strain on available test-environment resources; licenses, batteries for devices, laptops, USB cables, un-damaged data, bandwidth, IP addresses, you name it. I’ve seen a support chap spend days getting the kit together, versioned, charged, addressed, data-filled and working before a push. Even assuming your Test Lab really can support your bashers with hands-on stuff, your back-end and infrastructural resources may not be neatly independent and you’ll end up being stymied by half the group finding the same test-environment bugs. This is great if you’re looking for test-environment or large-group bugs, but again, not so good if you’re looking for a broader or more representative set of interesting issues.

Expect lots of duplicate bugs, as different bug hunters bang into the same low-hanging fruit. Bug bashes often throw up some easily found but hitherto-unseen trouble, but let’s not be unthinkingly self-congratulatory when a third of our crew waste their time investigating, diagnosing and logging the same problem at the same time. More insidiously, duplicate bugs may mean broadly similar paths through the application. If everyone’s doing the same thing, what does that do for coverage? For exploration?
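If you want a feel for the scale of the waste, a few lines of simulation make the point. This is a toy model that assumes every basher samples from the same small pool of shallow, easily-found bugs; every number in it is invented, so vary them and watch:

```python
import random

random.seed(1)
SHALLOW_BUGS, BASHERS, FINDS_EACH, RUNS = 20, 12, 3, 1000
duplicated = 0.0
for _ in range(RUNS):
    # each find is a random draw from the same pool of easily-found bugs
    finds = [random.randrange(SHALLOW_BUGS) for _ in range(BASHERS * FINDS_EACH)]
    duplicated += 1 - len(set(finds)) / len(finds)   # fraction that were dupes
print(f"~{100 * duplicated / RUNS:.0f}% of logged bugs duplicate another report")
```

With these invented numbers, roughly half the tickets duplicate another – and the real cost is the parallel investigation and diagnosis behind each one.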

The quality of logged bugs tends to be low, the density high. The brief duration and inevitable peer pressure push the people in the room to value their speed to the first bug, and if they don’t get the early bugs, then, hey, there’s kudos for whoever logs the most. Bugs are logged at speed, in bulk, with generally poor detail and diagnosis. Some managers are happy that their bug bash has resulted in a great wodge of trouble tickets, but remember that a bulge in the bug rate disturbs the workflow of an agile team like a turtle disturbs the digestion of an anaconda. Finally, the hysterical whoop whoop of competition not only breeds false confidence, but can break the spirit of people with their ego tied up in the code and configuration of the product.

A bug hunt allows a team to throw many people at a problem in a short period. It can appear cheap in elapsed time, while substantial in people time. Wrong. Ten people testing for an afternoon might look like a week’s worth of testing, but it is a week’s worth of testing by someone with 3-hour amnesia. I have seen bug hunts used to demonstrate someone in management’s commitment to testing their product. Diverting half the team off their usual path and into testing for a few hours certainly makes a statement, but the statement is that management is committed to theatrical gestures. Grand Guignol**** testing is a titillation, yet I’ve seen it used to substantiate the assertion that a product is fully tested. Frankly, my arse.

When might I use a bug bash? Perhaps if there was a problem reported frequently in beta testing, which was serious and urgent enough to warrant a diverse group’s concentrated attention but not well-reported-enough to act on directly. I might give the reports to a bug bash group, ask them to find out anything about the problem that isn’t already detailed in the reports, and facilitate their communication by sticking a scribe/steering person at a central and visible whiteboard, equipped with a bell. But I’d prefer to use a small team with big kit.

Getting a large group together for a short period is an expensive way of doing rubbish testing. I’d far rather spend the time and money getting the necessary people together and delivering a test environment that is up, running, connected, data-ready and swiftly-rebuildable. Or delivering a diverse and knowable set of data. Or a collection of reasonable (and less reasonable) user scenarios that stand a chance of saying something interesting and meaningful when tried on the product. Or a couple of hours so we know something about each other beyond name, age, height and title. Actually, pretty much anything is better value than a bug bash.

For a while***** it seemed like every other client wanted to throw most of their exploratory eggs into the bug bash basket. I have no idea who kicked off this ludicrous meme, but I’d still like to tweak their nose. Here’s my position:

Managers: of all the usual gambles you can make with your charges, a Bug Bash is one of the dumbest. Get someone to bring you an alternative, and consider it.

Testers: Bug Bashes might look like fun for you, but they suck for the product and the project. Don’t be fooled.

Clear enough?

* I’ve seen Scrum teams devote the whole team for a couple of hours on the 6-8th working day of a 10 day sprint. I’ve seen waterfall teams drag fifty people into the canteen on a Friday afternoon to batter away at a batch of handhelds. I’ve seen test teams commandeer the boardroom for a day at a time straight after they get the code, every time they get the code. Bet you've seen something similar. Enough examples: back to the polemic.
** Ж: “So what did you do?”
Ю: “Well, I tried logging in, and I’d not logged in before, and it went well, so I tried changing my username and resetting my password, using funny characters.”
Ж: <smacks head on desk>
*** Assuming your insiders are good substitutes for customers…
**** A horror show made up of a series of short pieces. Read Grand Guignol in Wikipedia, and think testing. Compare with Soap Opera Testing.
***** 2004-2008, or so

Saturday, December 17, 2011

Uncommon way of managing ET #01 - Scouting

tl;dr – skilled, supported, concentrated exploration

The team makes one person* the dedicated explorer for a period. This person, who we’ll call The Scout, spends all their time exploring. Their job is to find as much interesting stuff as they can. They’re supported (and watched) by others who set up environments, log bugs, keep notes, analyse data, suggest and configure tools. Pay attention: These supporting people are not part-time or less skilled; they’re just as engaged as the scout, but they’re not on point.

There are no sessions or formal session-end debriefs, but the team will want to stop and sit back from time to time and come to some conclusions about what they’ve found. The person (or people) on point are switched around regularly – scouting is fatiguing, and diversity is important. People with different specialities are used as required, and The Scout need not be a tester.

Exploration often has a sense of a frontier, a boundary between the known and unknown. The frontier is fundamental to exploration, and The Scout pushes it ever onwards. We understand, of course, that testing has really wiggly and sometimes discontiguous boundaries, and that the territory behind the boundary may not be well-known, and is likely to change unexpectedly. The team will understand this boundary better than anyone else, and will need to come to an understanding about how much they need to be able to notate and share information about the frontier.

This approach is all about discovery. It’s not cheap, nor is it exhaustive, but it is valuable. The project gambles time in return for information, so The Scout needs to know what the project is interested in. I expect there would be tussles about what The Scout would be exploring, and what they would be looking for. So much the better.

Note: This is an idea. I’ve not worked (quite) like this. Maybe, though, this idea triggers something that you would like to try with your team. Let me know how you get on.

* or one group, but I’ll write in the singular to keep the grammar simple

Friday, December 16, 2011

Known way of managing ET #01 - Stealthily

tl;dr – some people hide their best work from their paymasters

A few people get together to find problems in snatched moments. There's little or no imposed direction, measuring, or task control, and rarely any sense of completeness or coverage. Although the work sometimes gets done with tacit support from one or two individuals in the upper echelons, there is little oversight and it is usually a hidden activity. Testers don’t log time or bugs through the usual channels, and it feels almost like an indulgence, a guilty pleasure.

I've been on a team who hid a pair of stealthy explorers in a corner for a few hours each week. The exploratory testers would tell the rest of the team about the bugs they’d found. Generally those bugs were 're-found' during manual scripted testing to allow them to be logged within the imposed structures of the project – and if no script might find a particular bug, we would assemble one. The customer would not countenance paying for unscripted testing, but was very impressed that we were designing such effective scripts*.

Exploratory testing becomes stealthy typically because those who control and budget for team members' time don't approve of looking for trouble. Look out for rigid 'verification and validation' contracts, consultancy contracts that only allow a small set of explicitly-approved billable activities and legal fears of explicitly-acknowledged defects.

I've seen Exploratory Testing hidden most commonly in teams that focus on user acceptance and regression testing, but I've also seen it in a self-labelled agile team that relied on a (rather sparse) set of low-level confirmatory automated tests. These teams tend to be a mix of self-identified testers and people who may be seconded into the test team or are otherwise keen to avoid the label. However, I’ve even seen stealthy approaches in test teams who were exploring with a degree of management support, but who felt that some of their approaches were beyond what might be accepted.

However nasty such hidden work looks from the outside, it's often rather well supported by individuals within the testing teams. People get the opportunity to work heroically, to subvert management decisions (especially gratifying if those decisions feel irresponsible), and sometimes to have a direct link to someone rather higher up in the tree of project status. The stealthy effort tends to get a geekily-sexy label.

When I meet experienced exploratory testers who have carved themselves a niche in some monolithic institution, they're often proud to be stealthy, and sometimes unhappy to share their approaches. Sometimes their reticence is justified – in groups which insist they don't do ET, acknowledging that you're the ET specialist doesn't necessarily improve your day.

It’s worth mentioning that some companies consciously take a stealthy approach to discovery work so that they have plausible deniability, for instance while finding bugs to get a company out of a contract, or finding bugs that no one who could be legally-liable should know about. Such activity will challenge your ethics. Call time on these if you must – and sometimes, you must – but be aware that you may be shouting at the waves.


* This was in the early nineties. Don’t think I would put up with it now – but don’t think that the practice has ended, either.

Thursday, December 15, 2011

There are Plenty of Ways to Manage Exploratory Testing

tl;dr - lots of different ways to manage ET

The key problem that exploratory testing faces, as a viable discipline, is how it is managed. Of course, there are other well-covered interesting areas - the question of whether to do it at all has been debated to death* amongst us pundits (if not with as much fervour in industry), and if you want to know how to do it, there is a slew of excellent ideas, techniques, disciplines and tricks to choose from**. However, the hairiest problems in actually doing it come from how people organise the work, and how the work and its owning organisation adjust to fit***.

Over the next couple of weeks, I'll post a short series here, snappily entitled "Ten Known Ways to Manage Exploratory Testing" and "Ten Uncommon Ways to Manage Exploratory Testing"****.

Here's a kickoff showing roughly where I'll go:

Ten Known Ways to Manage Exploratory Testing
  • Stealth Job
  • Traditional Retread
  • Off-Piste (Iron Script)
  • Off-Piste (Marshmallow Script)
  • Bug Hunt
  • Set Aside Time
  • Gambling
  • Script-Substitute
  • Session-Based Test Management (James & Jon Bach, me, others)
  • Questioning (Jon Bach)
  • Thread-Based (James and Jon Bach)
  • Touring (James Whittaker and others)
  • Don't bother (thanks to Dave Liebreich for reminding me...)

Ten Uncommon Ways to Manage Exploratory Testing
  • Scouting
  • Kanban
  • Following Lenfle
  • Daily News
  • R&D
  • Testing Guru
  • Video Reports
  • Post-Partum Labelling
  • The Summariser
  • GPS
  • Cloudy
  • The Inquiring Metricator

I'll fill you in on what I***** mean by each of these over the next few weeks. Expect about one a day, in no particular order.

And by the way: I'm posting this because it's good stuff, and you're going to find it useful. I'm posting it now because I've got a course in January that I want you to know about. That's January 25-27, in Oxford. A two-day workshop on exploratory testing techniques, followed by one on managing exploratory testing. Book here.

Note: If you're in Scandinavia, I've got one in Copenhagen on 6-8 March through Morten Hougaard's Pretty Good Testing. Details here.


* And has degenerated as everyone professes to agree with each other's aims while insisting the philosophy's all wrong. Consultants, eh? Welcome to my world.
** Key sources for me****** - both Bachs, Bolton, Carvalho, Edgren, van Eeden, everybody I've ever tested with, Green, Kaner, Hendrickson, Harty, Itkonen, LEWT, me, Richardson, Sabourin, Weinberg, Whittaker. Alphabetical order. A l p h a b e t i c a l. No preference implied. Some are sources for stuff I try not to do...
*** Don't get me wrong – plenty has been written about this, too, over many years. Here's some more.
**** Sorry about the terrible titles. Ten-fer lists wind me up, get me down, piss me off and a whole other bunch of phrasal verbs. But them's the titles. No, I'm not going with The Twelve Days Of Testmas, and yes, obviously there are more than ten of each... I'm not claiming these lists are exhaustive, nor that the items are exclusive. I'll write this up properly once I'm done serialising.
***** You can probably guess some, or most. Sweepstake?
****** Did I miss you out? Apologies. It's a blog posting – half-baked by nature. Email me, and if I forgot you, I'll add you to my list.

Wednesday, December 14, 2011

I've updated my format...

... but not my content.

The old one looked like someone's living room - if that person was living forty years ago. Good riddance.

I understand that lots has changed behind the scenes. Let me know if you spot something that's not what you'd expect.

I've found one fluff so far; the new profile has squashed my headshot.

Monday, December 12, 2011

A couple of hands-on tools

tl;dr – two handy tools

On Sunday, I was watching a film of a fine tester testing. I was keeping track of how his testing differed from mine. I realised that I was looking for the functionality of a couple of tools that I sometimes use, and that he wasn't using at the time.

Both tools are for the Mac, but I imagine that similar tools are available for PCs too. Although neither are testing tools, they do things that are not only convenient, but by being frictionlessly convenient, allow me to observe and trigger behaviour in usefully-different ways.

The first is Mouseposé from Boinx. Mouseposé highlights your on-screen pointer position, and is the close cousin of many tools used by teachers and screencasters to make their actions more obvious.

What makes it useful to me as a tester is that it makes user actions more explicit – not only following the pointer around, but differentiating between one click and double- (and n-) clicks, between right and left button (or however one expresses the peculiar sigil necessary on a trackpad to do the same). It also – and here's the killer – displays the keys you're pressing. These are on the edge of vision, in large letters on a low-screen bezel. They appear very briefly, and captured keys include shift, control, escape, enter and so on. It's great to expose the occasional finger slips that lead to novel behaviour, great for working out what you're doing, and for confirming (or not) that you're doing what you think you're doing. It's especially useful if you, like me, find that your hands don't always do quite what you ask them to do.
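If you want the flavour of that key display without buying anything, a few lines of Python will echo keystrokes to a terminal. A minimal sketch using the pynput library - my substitute for illustration, not how Mouseposé works - handy for catching your own finger slips:

```python
from pynput import keyboard

def on_press(key):
    print(f"pressed: {key}")     # modifiers show up as Key.shift, Key.ctrl, etc.

def on_release(key):
    if key == keyboard.Key.esc:  # stop listening when Escape is released
        return False

with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
    listener.join()
```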

If they made a tester version, I'd like it to cope with multi-touch, give me a trail on drag, and ideally have some kind of paper-tape thing to show recent input and save me using a keylogger.

The second is KeyCue from Ergonis. Lean on the command key for more than a few seconds, and KeyCue pops up a bezel with every* currently-available command-key combination. Not just the front application, but all the combos that are currently listening. Different options respond as you change your key combo. It's great for ramping up your expert hands-on-keyboard-flying-user tricks, but more than that, it shows you a whole bunch of potential bug triggers.

Key stroke input can cause unusual behaviour not only because it comes through an alternative route (as it happens, that route is easier to automate, so tends to be covered in developer-side testing/checking) but because it's fast and potentially in conflict with other stuff. Hitting keys in swift succession can expose timing-related bugs (two in Word, one in Excel this morning alone). Hitting keys that have meaning to something other than the thing you're talking to can also get the pizza spinning. I want to try it on a Cyrillic machine and on localised software.
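To try swift succession deliberately rather than by finger slip, you can drive the keyboard from code. A crude sketch, again with pynput; the text, keys and delays are my inventions, so aim it at a scratch document, never at live data:

```python
import time
from pynput.keyboard import Controller, Key

kb = Controller()
time.sleep(3)                      # three seconds to focus the app under test
for delay in (0.05, 0.01, 0.0):    # ever-faster bursts
    for ch in "undo me ":
        kb.press(ch)
        kb.release(ch)
        time.sleep(delay)
    for _ in range(8):             # rapid-fire undo - a classic timing poke
        with kb.pressed(Key.cmd):  # use Key.ctrl on a PC
            kb.press('z')
            kb.release('z')
        time.sleep(delay)
```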

If I had a tester version, I'd prefer that it actually queried the system internals to find out what was listening, rather than simply parsed menus (and possibly-flaky "User-definable custom shortcut descriptions"). But I don't even know if that's possible.

* if you're not in version 6, every is unfortunately more like most

Sunday, December 04, 2011

Something for the Weekend? 006 - Visualisations

tl;dr – visual representations are lovely

Christopher Warnow and onformative have worked together to make a movie which gives a visual dimension (not an explanation) for various sorting algorithms.

I came across Warnow because @cunabula had retweeted a link to his visualisation of Amazon's recommended books, A Thousand Milieus. Warnow uses Amazon's recommendations to find a hundred related books - then shows you them as clusters.

Delightfully, Warnow has made his tool available (not open source, but available*) - so here are visualisations** for books*** about testing that I've been known to recommend:


Hope you've had a lovely weekend.

* data walking and munging is done with Processing, which is dead easy to get a handle on, and the graphing side is basically Gephi. So go on - have a play. Here's your map: download the tool (which brings the Processing library for Gephi), download and install Processing, chuck Francis Li's http library into the right place, fire up the tool script (all 12Kb of it) and check that its behaviour seems reasonable. That's the hard stuff done. Now everything is open to you - your first task is to make the tool search for 30 books, rather than 100. Feel satisfied?
** two visualisations per book? Certainly. Running the tool twice produces diagrams with similar content, but very different layout. Compare and contrast.
*** U.S. Amazon store