
Test cases? This is 2013 you know....

Test cases are getting a very bad press of late. There is a perception that they are a wasteful practice, built carefully and slowly in a dark corner away from all collaboration, sprung on unsuspecting programmers to expose the unanticipated obscure edges of their code.

I don't believe it has to be like that, and I haven't practiced building test cases in that fashion for a long time. I prefer:

Test Ideas - your team are delivering a piece of functionality. You have a basic understanding of its aims, maybe a few acceptance criteria. It's time to brainstorm. Grab the people you think will be useful (and perhaps some who are not directly involved) and have a time-boxed brainstorm; thirty minutes should do it. An all-round view is required, so stay away from the detail. FIBLOTS is an excellent heuristic here. Let's say we've walked away with 40 ideas of areas to test.

Test Scenarios - time has passed, the team has made discoveries, and you've seen the first slice of the new functionality. You've realised that of your 40 ideas, there are perhaps 20 you really, really need, plus another 10 that you didn't think of at the ideas stage. Give them a bit of care and attention now, adding personas, descriptions, pre-conditions and post-conditions, however you wish to flesh them out. Not too much detail, and no steps!
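To make "just enough detail, no steps" concrete, here's a minimal sketch of what a fleshed-out scenario might look like as a plain data structure. The field names and the example content are my own invention, not a prescribed format; the point is that a scenario at this stage is a persona, a description and some conditions, nothing more.

```python
from dataclasses import dataclass, field

@dataclass
class TestScenario:
    """A scenario given a bit of care and attention: still no steps."""
    title: str
    persona: str
    description: str
    preconditions: list = field(default_factory=list)
    postconditions: list = field(default_factory=list)

# One of the 20 ideas we kept, now fleshed out a little:
scenario = TestScenario(
    title="Returning customer checks out with a saved card",
    persona="Returning customer with an existing account",
    description="Checkout completes using a previously saved payment method",
    preconditions=["Customer is logged in", "A card is saved on the account"],
    postconditions=["Order is confirmed", "No new card details are stored"],
)
print(scenario.title)
```

Anything heavier than this (steps, expected clicks, screenshots) belongs, if anywhere, in the next stage.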

Test Cases - right, now we get down to it. You have an idea. You have a resulting scenario. If there are any key tests (hint: those attached to acceptance criteria), you can add further detail, even a few steps. I personally use this point to automate my acceptance tests rather than create a manual script, which is where I think the value of a test case begins to drop dramatically.
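As a sketch of what "automate the acceptance test instead of writing a manual script" might look like: a short pytest-style check derived from the scenario above. The `Checkout` class here is a hypothetical stand-in for whatever your team is actually building; the shape of the test is the point, not the names.

```python
# Hypothetical system under test, standing in for the real functionality.
class Checkout:
    def __init__(self, saved_cards):
        self.saved_cards = saved_cards

    def pay_with_saved_card(self):
        if not self.saved_cards:
            raise ValueError("no saved card available")
        return "confirmed"


def test_returning_customer_pays_with_saved_card():
    # Pre-condition from the scenario: a card is saved on the account.
    checkout = Checkout(saved_cards=["visa-4242"])
    # Acceptance criterion: payment succeeds without re-entering details.
    assert checkout.pay_with_saved_card() == "confirmed"


test_returning_customer_pays_with_saved_card()
print("acceptance test passed")
```

The test carries the same information a detailed manual test case would, but it runs on every change instead of going stale in a test management tool.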

In addition, you can show your decisions about what to cover and when in a meaningful way. Always treat your test assets as something which can be iterated on, and bring your test cases out of the 'dark ages.'

Comments

  1. Writing an example test case here and there (one per topic or so) might help verify the usability of your test ideas, and improve the feedback you get during review.

    @halperinko - Kobi Halperin

  2. Absolutely, the technique captures the benefits of iteration, review and principles of just enough, just in time.

    The purpose is to be non-prescriptive, so pushing an idea forward to a more detailed stage is perfectly fine.

    The trick is not to push all your ideas to this stage before verification!


