Friday, November 28, 2008

Coders, Unit Tests, and Testers

On the QAGuild group on LinkedIn, Prasad Narayan asked "Does the Dev team in your organization indulge in Unit Testing? I would appreciate some details, if the answer is in the affirmative."

Here's my response:

I've worked in plenty of teams where the coders have written unit tests - typically at organisations which explicitly care about coding, less so at banks, service or entertainment organisations. On a reasonable proportion of those teams, and most especially on agile teams, the coders have written very large numbers of unit tests that act as a scaffold to the code (that is, very large as compared to the expectations of teams that don't use unit tests, so perhaps this is tautological). More than a simple framework, the tests also act as a way to frame thinking about the code, to consider it before it is made, and to experiment with it when it is under construction.
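
To make that concrete, here's the kind of test-first fragment I mean - entirely my own sketch, not from any particular team, and the leap_year function is hypothetical. The test is written before the code it describes, and fails until that code is real:

    import unittest

    def leap_year(year):
        # Not written yet - the tests below frame what it must do,
        # and fail until this stub is replaced with real code.
        raise NotImplementedError

    class LeapYearTest(unittest.TestCase):
        def test_century_years_need_divisibility_by_400(self):
            self.assertTrue(leap_year(2000))
            self.assertFalse(leap_year(1900))

        def test_ordinary_years(self):
            self.assertTrue(leap_year(1996))
            self.assertFalse(leap_year(1997))

    if __name__ == "__main__":
        unittest.main()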

Such teams are (in my experience) universally proud of their unit tests, and actively show them off. Their code tends to be better, too. When I've worked with teams who are shy of showing me their unit tests, and shy of letting me review them, then (in my experience) the code is universally duff. I'd rather work in a team that has no unit tests than a team that says it does, but won't show me (as a project member who is interested in testing) the tests.

I have worked with teams that write unit tests, but don't run them. This sounds bizarre, but tends to be a problem that creeps up on teams. I've seen it as a result of commenting out unit tests that break because of a non-code-related change (without replacing the test). Be aware also of unit tests that don't test anything important, or which pass with or without the code in place. I have heard of (but not worked on) teams where the testers wrote all the unit tests, and the coders wrote the code. I guess I'd hope that the two groups work closely enough that tests and code could be written together.
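
On that point about tests which pass with or without the code in place, here's the kind of vacuous test I mean (a made-up sketch, not from any team I've worked with - the storage module is hypothetical). It swallows every failure and asserts nothing, so it passes even if the code under test doesn't exist:

    import unittest

    class VacuousTest(unittest.TestCase):
        def test_save_record(self):
            try:
                from storage import save_record  # hypothetical module under test
                save_record({"id": 1})
            except Exception:
                pass  # swallows every failure, including ImportError
            # No assertion is ever made, so this test passes with or
            # without the code in place - it tests nothing.

    if __name__ == "__main__":
        unittest.main()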

It's all too easy to let good unit testing and the resultant relatively-clean code lull one into a false sense of security about the viability of the system as a whole. If I work with a team that is using lots of unit tests, I (very broadly) take this approach:

  • reviewing the existing unit tests

  • being sure they're run regularly (see the sketch after this list)

  • trying not to duplicate them too much with my own / the customer's / other teams' confirmatory scripted tests

  • using exploratory / experimental / diagnostic approaches to pick up and dig into all those unexpected risks and surprises

  • working closely with the coders to enhance, streamline, and otherwise improve their unit tests
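
For the second point, here's a minimal sketch of what "run regularly" might look like - assuming a modern Python and a suite discoverable under a tests/ directory. Hang it off cron or the build server, so "written but never run" can't creep up on the team unnoticed:

    #!/usr/bin/env python
    # Minimal scheduled test run - a sketch, assuming the suite lives
    # under ./tests and follows unittest's test*.py naming.
    import sys
    import unittest

    suite = unittest.defaultTestLoader.discover("tests")
    result = unittest.TextTestRunner(verbosity=1).run(suite)

    # Exit non-zero on failure so the scheduler / build server flags it.
    sys.exit(0 if result.wasSuccessful() else 1)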

Monday, November 24, 2008

Exploration

"Do not go where the path may lead; go instead where there is no path and leave a trail"
Ralph Waldo Emerson

Saturday, November 22, 2008

Can you start using Exploratory Testing without needing an expert?

On Software Testing Club, Anna Baik asked "Can you *bootstrap* your team into using a more exploratory testing approach without needing in-house expertise? Or are you likely to have problems unless you have either a consultant or an already experienced exploratory tester on staff?", which struck me as a reasonable question. Here's my answer (most of which I also posted as a reply on Anna's blog).

You can do exploratory testing without an experienced exploratory tester. Indeed, that's what all exploratory testers do with a new system / technology / customer etc.; start from a position of ignorance and get exploring.

There are a couple of ideas that I'd recommend keeping in mind:
  • one of the things you'll be doing is gaining experience, and (through reflection) expertise. This is part of all exploration, but it's particularly true when getting going on something new. Expertise will help you find more/better information - and without it you'll find less/poorer information - but it takes time to build. However useful it is, expertise is neither necessary nor sufficient, and even expert teams explore better with a newbie in the numbers. This is because...
  • if a few of you are exploring, there will be great diversity in your approaches. Learn from each other - and try to take those lessons in a way that doesn't flatten that diversity.
With those thoughts, here are a couple of recipes. Adapt as you see fit. For both, you'll need to set aside some time. Regard this time as a gamble, and assess its value when you're done. If I was doing this, I'd prefer to work in a group (so for a 3 person-hour recipe, schedule 2 people for 90 minutes). I might assess value by looking at whether I'd changed my bug rate, or found any particularly useful information / nasty bugs.

1        Exploring via Diagnosis: 3 person-hours
  • Pick out some current bugs that aren't clear, or that aren't reproducible. Doesn't matter if they're in your bug tracking system or not, but they should be already known, and not yet fixed.
  • Explore those bugs, seeking information that will clarify them or make them more reliably reproducible. Keep track of your activity (a minimal note-keeping sketch follows this recipe).
  • Review what you've done, collate the information gained. Log new bugs, update clarified bugs.
  • If you can generalise from the investigative approaches you've used, then do so.
  • Tell the whole team your results.
  • Schedule another 3 hours on a different day and repeat!
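
Both recipes ask you to keep track of your activity. Here's a minimal sketch of the sort of note-keeping I mean - the file name, fields, and example notes are all mine, so adapt freely:

    import datetime

    # One timestamped line per observation, so the review/collate step
    # at the end of the session has something concrete to work from.
    LOG_FILE = "session-notes.txt"  # hypothetical location

    def note(kind, text):
        """Record an activity, question, or bug as you go."""
        stamp = datetime.datetime.now().strftime("%H:%M")
        with open(LOG_FILE, "a") as log:
            log.write("%s %-8s %s\n" % (stamp, kind.upper(), text))

    # Example use during an exploring-via-diagnosis session:
    note("setup", "reproducing the intermittent save failure on a clean install")
    note("test", "saved with a 300-character filename - no error shown")
    note("bug", "crash only occurs when the network drive is slow")
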
2        The Usual Suspects: 2 person-hours + 10 minutes preparation
  • Spend 10 minutes writing down lots of different ways that bugs manifest / get into your software (Use any or all of cause, effect, location, conditions, etc.). Aim for diversity and volume, not completeness or clarity. This might be fun as a whole-team exercise.
  • Leaving enough time for the review+generalise+share steps at the end, split the remaining time in two.
  • In one half of the time, pick out problems that you've not yet seen in this release, and look for them. Keep track of your activity.
  • In the other half, pick out places that haven't yet seen many problems, and look in those for problems. Keep track of your activity.
  • Review, collate, log, update.
  • Generalise and tell the whole team your results.
  • Schedule another chunk of time on a different day and repeat!
  • Make your trigger list publicly accessible. Invite people to contribute, share, refine etc.
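
If the trigger list lives in a plain shared file, one line per trigger, even a scrap of code can deal out suspects for a session. A sketch (the file name and format are my own):

    import random

    # One trigger per line in a shared, editable file - e.g.
    #   off-by-one at list boundaries
    #   error handling when the disk is full
    TRIGGER_FILE = "triggers.txt"  # hypothetical shared location

    with open(TRIGGER_FILE) as f:
        triggers = [line.strip() for line in f if line.strip()]

    # Deal out a few at random, so different sessions (and different
    # people) start from different usual suspects.
    for trigger in random.sample(triggers, min(3, len(triggers))):
        print(trigger)
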
That said, ET is a skilled approach, and it's easier to get those skills into a team with a bit of reading / taking a class / involving a coaching consultant. There are plenty of sources around about getting started with exploration. Niel van Eeden and I wrote a paper called Adventures in Session-Based Testing which may help. It's here: http://www.workroom-productions.com/papers.html. For pithy heuristics, Michael Bolton has recently blogged the results of a fine EuroSTAR workshop, and you'll also want to check out Elisabeth Hendrickson's cheat sheet, James Bach's mnemonics, Jon Bach's questioning approach, and James Whittaker's tours (hopefully turning up on his blog soon).

If you have a favourite bootstrapping-into-exploration source, post it here or on Anna's original posting.

Note - I have an upcoming, no frills, public class on Exploratory Testing, in London, December 8-9. Lots of exercises, discussions, and learning by example.