Thursday, 28 December 2017

Testers Guide to Myths of Unit Testing

One area where testers might enhance their contribution to software development teams is how we perceive and contribute to unit testing. I believe that testers busting their own illusions about this aspect of building something good would bring us much closer to developers, and help us realise what the other layers of testing can cover most effectively.

Also, I want to do a talk about it, so I figured I would test the premise, see if potential audiences were into it. I put this on Twitter:
30 replies with ideas tend to indicate that people might be into it.

The List

I thought, as my final blog of 2017, I would provide a super useful list of the myths and legends we as testers might believe about unit testing:
  • That developers always write unit tests
  • That developers never write unit tests
  • That testers can write unit tests for developers
  • That developers know what unit testing is
  • That testers know what unit testing is
  • That a class is a unit to test
  • That a function is a unit to test
  • That two people mean the same thing when they say unit
  • That anyone knows what a unit is in the context of their code
  • That unit tests fill in the bottom of the test automation pyramid
  • That unit tests remain in the bottom layer of the test automation pyramid
  • That unit tests are inherently more valuable than other layers of tests
  • That unit tests are fast to run
  • That unit tests are always automated
  • That lots of unit tests are undoubtedly a very good thing
  • That unit tests can eradicate non-determinism completely
  • That unit tests are solitary rather than collaborative
  • That test driven development is about testing
  • That reading unit tests relays the intent of the code being written
  • That unit tests document the behaviours of code
  • That when there are unit tests, refactoring happens
  • That when there are no unit tests, refactoring happens
  • That you never need to maintain and review unit test suites
  • If it's not adding value through quick feedback it needs removing or changing.
  • That unit tests sit outside a testing strategy for a product
  • Because they exist, the unit tests are actually good
  • Assertions are actually good. Checking for absence, as opposed to presence
  • If you have a well designed suite of unit tests you don't need to do much other testing
  • 100% code coverage for a given feature is evidence that the feature works as designed
  • That code is always clean if it has unit tests
  • Unit tests are about finding bugs
  • That there is a unit to test
  • That a failing test indicates what is wrong
  • That one problem = 1 failed test
  • That good unit tests are easy/hard (adapt based on your delivery) to write for non-deterministic functions
  • "That unit test coverage is irrelevant to manual testing"? aka "Why look at them? They're JUST unit tests, we have to check that again anyways."
  • That testers may (or may not) believe it is their responsibility to ensure code quality and consistency of the test suite (and that developers may believe the opposite)
  • That unit tests don't count as "automation" if they do not use the UI
  • That unit testing allows safe refactoring
  • That the intent a developer has when they write the thing they call a unit test (guiding the design) is the same as the intent a tester has when they write the thing they call a unit test (discovery and confidence).
  • That a large number of unit tests can replace integration tests.
  • That unit tests evaluate the product.
  • That false negatives ("gaps" or "escapes") in unit tests are a symptom of not having enough unit tests.
  • Writing unit tests while developing the 'production' code is a waste of time, as the code will change and you'll have to rewrite them. 
  • Having unit tests will prevent bugs
  • That coverage stats give anything useful other than an indicator of a potential problem area.
  • When and how often to run them. And how much confidence that actually gives you
  • That code quality for tests doesn't matter as they're just tests
  • When to write the unit tests (before/after the 'production' code)
  • The difference between a unit test and an integration test
  • That how much coverage you get with unit tests says anything about the quality of your test suite
  • That you don't need additional tests because everything is unit tested
  • That unit tests are the *only* documentation you need
  • That they will be shared with the rest of the team
  • TDD is a testing activity/TDD is a design activity/TDD is both/TDD is neither
  • That the purpose of unit tests is to confirm a change didn't break something
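One of the myths above, that 100% code coverage is evidence a feature works as designed, is easy to illustrate. In the sketch below (names and numbers invented for illustration) the test executes every line, so line coverage reports 100%, yet a boundary bug survives:

```python
# A hypothetical discount function: every line is executed by the test below,
# so line coverage reports 100%, yet the boundary bug goes unnoticed.

def discounted_price(price, customer_years):
    """Loyal customers (5+ years) are supposed to get 10% off."""
    if customer_years > 5:  # bug: should be >= 5
        return price * 0.9
    return price

def test_discounted_price():
    assert discounted_price(100, 10) == 90   # discount branch executed
    assert discounted_price(100, 1) == 100   # no-discount branch executed

test_discounted_price()
# Both branches run, coverage is 100%, but a customer of exactly 5 years
# gets no discount: discounted_price(100, 5) returns 100.
```

Coverage says the lines ran; it says nothing about whether the right assertions were made at the right boundaries.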
The list is raw and some entries straddle disciplines, as the world is a big, fun muddle despite our efforts to compartmentalise. I hope it's a useful guide to interactions with developers regarding this layer of testing. Next time a developer asks for an opinion on existing unit tests or help with writing new ones, have a look through this list and challenge your assumptions. After all, illusions about code are our business...
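The myth that unit tests can eradicate non-determinism completely is worth a concrete look. A common partial remedy is to inject the source of non-determinism, such as the clock, so a test at least becomes repeatable. A minimal sketch, with all names invented for illustration:

```python
# greeting() depends on the real clock by default, so a naive test of it
# could pass at 09:00 and fail at 21:00. Passing the time in explicitly
# makes the behaviour repeatable.

from datetime import datetime

def greeting(now=None):
    now = now or datetime.now()  # non-deterministic default
    return "Good morning" if now.hour < 12 else "Good afternoon"

def test_greeting_is_deterministic():
    morning = datetime(2017, 12, 28, 9, 0)
    evening = datetime(2017, 12, 28, 21, 0)
    assert greeting(morning) == "Good morning"
    assert greeting(evening) == "Good afternoon"

test_greeting_is_deterministic()
```

Injection makes this one test repeatable, but threads, randomness, networks and filesystems remain sources of non-determinism that unit tests alone won't remove.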


Thanks to the following for their contributions*:
  • Steven Burton
  • Angie Jones
  • Gav Winter
  • James Sheasby Thomas
  • Dan Billing
  • Peter Russell
  • Joe Stead
  • Colin Ameigh
  • Marc Muller
  • Adrian McKensie
  • Douglas Haskins
  • Mat McLouglin
  • Dan North
  • Josh Gibbs
  • Marit van Dijk
  • Nicola Sedgewick
  • Phil Harper
  • Joep Schuurkes
  • Danny Dainton
  • Gwen Diagram
* If I forgot you, please tell me.

Wednesday, 8 November 2017

Wheel of Testing Part 3 - Applications

I've only had to quit two jobs to finally find the time to finish this blog series. Winning at life. If you need reminders (like I did) check out Part 1 and Part 2 before reading on...

After the first two blogs regarding the Wheel of Testing, I was delighted to receive a few requests for the wheel itself, which got me thinking about applications of it beyond its original intent, which I've explored in detail in part 1 of this series of intermittent blogs. Most models need a little air time to show their value; in software development we crank out models all the time, but I'm not sure how many get used. I am inspired by models such as the "Heuristic Test Strategy Model" by James Marcus Bach, as I have used it and seen the benefits it has brought for my clients, particularly the ability to ask questions. So, I wanted to create a model which has a number of use cases, both real and imagined:

Helping to unlock a career in testing which may be stuck

It is not uncommon to reach a point in a career where a tester (or anyone) may feel stuck in their role, or believe that testing has little to offer them. Using the Wheel of Testing in one to one meetings as a discussion point has proved effective for this purpose, triggering questions about previously unknown paths in testing. Tool assisted testing is one example, which triggered an interesting debate with a builder of automated tests. Might there be other ways to assist testing than scripted automation? This surfaced perceptions of what other testers in the organisation did, inspired the person to find out what repetition existed in the system setup and state work those testers did, and opened the door to mentoring those same testers. The wheel opened out that career in various directions.

Contributing to setting the direction for new starters in testing

Testers get into testing from all sorts of directions. From support roles, former developers, recruitment: many different ways to enter the craft. This makes testing kinda cool in my eyes, despite the bemoaning of the lack of professional testers. However, when you come into testing you need a way into developing yourself. The Wheel can work as an outside-in type of tool in these contexts. Come from an operational role and you might decide that your way in is via your domain and advocacy skills, then fan out to the other areas. As a former developer, your route to the core skills might be through automation or performance testing. I think we would enhance the craft if we helped that journey from the outside in be a little more deliberate.

Building a training programme for testing

I see a lot of training programmes for testers (and developers) popping up. Some organisations are realising that having a multi faceted approach to hiring testers has potential. Also, I see a lot of academy programmes creating more of what organisations already have, as the expectations of what testing is and what a tester does are rarely balanced. I used the wheel to consider what such a programme might cover, given the amount of time you have to provide a grounding in certain skills. The benefit of the wheel here is that the skills radiate out from a set of core tenets. Choosing a balance from each of the 5 areas helps your training stay focused on what makes good testers. According to me at least.

Determining what skills you have (and don't have) in your testing function

In my world, testers tend to be alone on teams, or at most in pairs, rarely working together in a silo where many testers co-locate and where an appreciation of the skills (and skills gaps) of many testers might be more pertinent. The Wheel can be used as a living record of the skills within your team. In my former life the Wheel assisted us in recognising deficiencies in the performance and security areas, albeit a little too late as a result of a voluminous security bug bounty. Ouch. Keeping an eye on the balance of your approach and skill availability is one of the primary functions of test leadership in my eyes. Otherwise you are viewing your product through too few lenses, and eventually you will feel the pain. Security and performance will likely become key product differentiators too, so let's keep our approach balanced.

Talking about how testers can be valued in development teams

I really like this one. What are testers terrible at? Talking about testing, especially as an exciting, skilful and varied craft. The Wheel can be used as a guide to these conversations. The number of development teams I have walked into where testing is met with rolled eyes, described as "the bottleneck" and other thinly veiled ways of challenging the value of testing, is telling. For example, using the strategy part of the wheel to help the team think critically about what they are about to build, surface assumptions and reason around how observable a product is. That sounds much cooler than 'poking around in the user interface' or 'doing endless browser testing' to me, but it's rarely phrased like that by testers.

Consultants crib sheet for new clients

From experience, many reviews of testing capability are context free, unbalanced recommendations to implement tools and techniques in organisations who don't even have the questions to begin to talk about testing and quality. The bonus of the Wheel is that it has layers and sections. It is perfectly possible that an organisation can be technically advanced but lack critical thinking and advocacy skills around testing. I also believe it to be true that an organisation needs to improve in a step by step manner, rather than going from zero to continuous deployment in one leap. Using the Wheel in this manner helps to traverse that leap. I was recently asked to create a load testing framework for a client. There had been no previous thought about testing and testability on the product, so I used the wheel to build an approach that created a path from no previous thought to meaningful load testing.

Guide for evaluating a product, especially when first getting started

There are loads of ways to evaluate a product for testers getting started on a new system or product. However I still observe a reasonable degree of freezing in the headlights of complexity when faced with the new. I think the Wheel could be used in two contexts here. Firstly, the system has so much information to expose to a tester that it's often hard to parse. The Wheel acknowledges this openly; it is about the breadth of scope that testing enjoys, so don't be afraid. Secondly, it offers a number of places to get started. If it's a mobile application, usability might be a good place to start; if it's a high volume payment gateway application programming interface, perhaps start with the available application monitoring or analytics to gather insights.

A heuristic for giving feedback on a test strategy

Recently I had occasion to give feedback on a test strategy for a new product, which was being built on information gleaned from prototyping around a particular business problem. The system would use an ancient internal web service too, which stakeholders were nervous about. For me in a new product context, testers should be exploring with various lenses using a breadth of techniques to really enhance information flow to the team, providing light touch test automation while the architecture remains in flux, with adequate resilience testing around integrating with the service causing nervousness. I used the Wheel to assess the breadth of the strategy, plus how it gathered information about the key risk and provided feedback.

Like all models, it is useful in some circumstances, but less so in others. It's certainly not complete; every time I look at it, there could be more. Also, it's more about the testing that testers do; it doesn't really address in depth the testing that other stakeholders might do, such as unit testing for example. However, I am a tester, so it's got lots of me in there.

Really, it's a whatever-you-want-to-use-it-for type of wheel. A few people have used and referenced it, so I hope it's being used somewhere for weird and wonderful things that I never thought of. That would make me happy.

Monday, 30 October 2017

Leeds Testing Atelier V

We did it again. Another punk, free, independent Leeds Testing Atelier happened on the 17th October 2017. That's number five for those of you counting wristbands.

The technology sector in Leeds grows constantly, with big companies like Sky and Skybet having a massive presence in the city. However, Leeds has always had a strong DIY scene for music and the arts, and we want to maintain that in our tech scene too. This is what we hope will make us and keep us different. Our venue is a place where other groups meet, to make music, discuss social issues or advocate for the environment. To be part of that community matches our mission and our hopes for tech in Leeds. There have been other blogs inspired by the day, so we will reflect on some of the positives we encountered and challenges experienced.


  • We had 3 brand new speakers on show. Chris Warren, Jenny Gwilliam and Richie Lee were all doing their very first public talk. Chris was even doing his first public talk on his first attendance at any public conference! Giving first time speakers a chance is part of our ethos and we don't see it as taking a chance; it's very much giving everyone a platform regardless of experience. More people sharing stories enriches the community itself with greater understanding, plus it's always nice to know someone has faced or is facing similar challenges to you.
  • Many testing conferences address the testing that testers do. This is a great thing. As a craft, we need places to sharpen our skills with likeminded professionals. However, this testing is only a part of the testing world. On this note we actively encourage developers to attend and talk about the testing they do. At each Atelier we've invited developers to discuss testing (and even their frustrations with testers) so it was great to see Andy Stewart and Luke Bonnacorsi talking about testing from another perspective, as well as Joe Swan and Joe Stead on the Test Automation Pyramid panel.
  • The ace thing about the testing community is that there is always a new testing tool, technique or workshop to try out. One such workshop is the Four Hour Tester by Helena Jeret-Mäe and Joep Schuurkes. We wanted to share this with the community present, so we thought let's select a subset of the exercises within and get everyone on their feet, thinking and performing tests. Hopefully a few of those in attendance are inspired to find out a bit more about the Four Hour Tester and try it out at their organisations.


  • At the Atelier we strive for diversity in our speakers and panelists, and we pride ourselves on being a safe space for anyone to speak. 4 of 9 speakers and 3 of 8 panelists were women, and of all speakers and panelists one was from a non-white ethnicity. While there are many measures of diversity, we know we can always do better, challenge cultural biases and create a conference programme of experiences from all parts of the community.
  • We had a few submissions from further afield this time. Rosie Hamilton travelled from Newcastle to speak. This is a really gratifying indicator of the reach of the Testing Atelier. It does present a challenge as a free event though. Our ability to pay expenses is very limited, in fact Rosie stayed at Gwen and Ash's house, her company helped with the rest. Not everyone who has a story to tell has a supporting organisation behind them, often the opposite. Much for us to ponder there.
  • Terrifying news, everyone. Our resident Norwegian hipster artiste, Fredrik Seiness is moving back to Norge to feast in the icy halls there. We will miss his artistry, t-shirts, weird remarks, focus on diversity and knowledge of where Nick probably is when he won't answer his messages. To be fair Fred will still visit for each Atelier, tell us we are doing it wrong and that it's not like the olden days. However as Fred departs, a fitter, stronger, faster model immediately takes his place. Welcome to the Atelier Richie Lee!

The organisers who make this happen are Gwen Diagram, Nick Judge, Fredrik Seiness, Sophie Jackson Lee, Stephen Mounsey, Richie Lee, Markus Albrecht and Ash Winter. With a nod to Ritch Partridge as always. We give our time freely to make the Atelier a success. 

We are always looking for speakers, workshoppers and panelists. For example, there was an attendee who tested boats for a living. Like, on-the-sea type boats. One of the tests was to set the onboard hardware ON FIRE to see how resilient it was. Hopefully they'll be talking next time. If you would like to talk, please submit your idea here:

We'll end with a thank you to all. Attendees, speakers, panelists, sponsors, Jon the media person, Wharf, Felix and Eleanor. All approached the day with enthusiasm and an open mind.

Ash, on behalf of the Atelier Gang.

Wednesday, 2 August 2017

The Team Test for Testability

You know what I see quite a lot? Really long-winded test maturity models.

You know what I love to see? Really fast, meaningful ways to build a picture of your team's current state and provoke a conversation about improvement. The excellent test improvement card game by Huib Schoots and Joep Schuurkes is a great example. I also really like 'The Joel Test' by Joel Spolsky, a number of questions you can answer yes or no to, to gain insight into your effectiveness as a software development team.

I thought something like this for testability might be an interesting experiment, so here goes:

  1. If you ask the team to change their codebase do they react positively?
  2. Does each member of the team have access to the system source control?
  3. Does the team know which parts of the codebase are subject to the most change?
  4. Does the team collaborate regularly with teams that maintain their dependencies?
  5. Does the team have regular contact with the users of the system?
  6. Can you set your system into a given state to repeat a test?
  7. Is each member of the team able to create a disposable test environment?
  8. Is each member of the team able to run automated unit tests?
  9. Can the team test both the synchronous and asynchronous parts of their system?
  10. Does each member of the team have a method of consuming the application logs from Production?
  11. Does the team know what value the 95th percentile of response times is for their system?
  12. Does the team curate a living knowledge base about the system it maintains?

Where each yes is worth one point:

  • 12 - You are the masters of observation, control and understanding. Change is embraced openly. Way to be.
  • 11 - Danny Dainton's Testability Focused Team at NewVoiceMedia - Wednesday 2nd August 2017
  • 8-10 - You are doing pretty good. You can change stuff with a reasonable degree of confidence. Could be better.
  • Less than 8 - Uh oh. You are missing some big stuff and harbouring serious unseen, unknowable risk. Must do better.

What would you add or take away from this list? If you try it, or a variant of it let me know, I'd be fascinated to hear!

Notes On Each Question

  1. If everyone actively recoils from change to all or a specific part of the system, and there's always a reason not to tackle this, it's a no.
  2. Everybody should be able to see the code. More importantly a lot of collaboration occurs here. Access for all is part of closeness as a team.
  3. A codebase generally has hotspots of change. Knowing this can inform what to test, how to test and where observation and control points are.
  4. Teams have both internal and external dependencies. The testability of dependencies affects your testability. Close collaboration, such as attending their sprint planning, helps.
  5. To test a system effectively, one must have deep empathy with those who use it. For example, if you have internal users you invite them to demos; for external customers, your team is exposed to customer contacts, or dare I say it, even responds to them.
  6. Some tests need data and configuration to be set. Can you reliably set the state of these assets to repeat a test again?
  7. Programmers sometimes have environments (I hope) in which to make their changes in something representative of the world. Your testers and operations focused people should have this too, to encourage early testing, reduce dependency on centralised environments and cut the setup time of tests.
  8. If you don't have unit tests, you probably haven't even tried to guess what a unit of your code is. Immediate no. If you do each member should be able to run these tests when they need them. 
  9. Systems often have scheduled tasks which lift and shift or replicate data, and other operations which occur on demand. Can you test both?
  10. It's important that all team members can see below the application, so we aren't fooled by what we see. If you don't have an organised logging library which describes system events, you should get one, plus eventually some form of centralisation and aggregation. Or just use syslog.
  11. If you don't know this, you either haven't been able to performance test, haven't gathered data for analysis, or haven't realised that you can analyse your performance data excluding outliers. Either way you probably know little about what your code does under load or where to start with tuning.
  12. Documentation. I know you are agile and you thought it was part of your waterfall past. To be fair, this is anything that describes your system: a wiki, auto generated docs, anything. If someone wants to know what something does, where do they go? The source code alone is a no for this one. Your team has many members, and not that many Product Owners can read code.
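On question 11, the 95th percentile needn't wait for a tooling initiative; once you can pull a list of response times out of your logs or monitoring, it is a few lines to compute. A minimal sketch using the nearest-rank method, with invented sample data:

```python
# Minimal sketch: the 95th percentile of a list of response times,
# using the nearest-rank method. The sample data is invented; in
# practice you'd pull timings from your logs or monitoring system.

def percentile(values, pct):
    """Nearest-rank percentile: smallest value >= pct% of the sample."""
    ordered = sorted(values)
    # ceiling of (pct% of n), kept at a minimum rank of 1
    rank = max(1, -(-len(ordered) * pct // 100))
    return ordered[int(rank) - 1]

response_times_ms = [120, 95, 110, 4000, 130, 105, 98, 102, 115, 125]
p95 = percentile(response_times_ms, 95)
```

With this sample the single slow outlier dominates the 95th percentile, which is exactly why p95 tells you more about worst-case user experience than the average does.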

Friday, 2 June 2017

Wheel of Testing Part 2 - Content

Thank you Reddit; while attempting to find pictures of the earth's core, you surpass yourself.

Turns out Steve Buscemi is the centre of the world.

Anyway. Let's start with something I hold to be true. My testing career is mine to shape; it has many influences but only one driver. No one will do it for me. Organisations that offer a career (or even a vocation) are offering something that is not theirs to give. Too much of their own needs get in the way, plus morphing into a badass question-asker, assumption-challenger, claim-demolisher and illusion-breaker is a bit terrifying for most organisations. Therefore, I hope the wheel is a tool for possibilities, not definitive answers, otherwise it would just be another tool trying to provide a path which is yours to define.

In part one, I discussed why I had thought about the wheel of testing in terms of my own motivations for creating it, plus applying the reasoning of a career in testing to it. As in, coming up with a sensible reflection of reality as I see it, rather than creating a 'competency framework' which describes some bionic human that no one has ever met, or would even want to meet.

So, in part 2 I thought I would dig into what's contained within. Not in a 'I'm going to define each term' endless navel gazing type of way, but in a 'let's look at it from a different angle' way. I have observed people retreating from ownership of their own development as we are forever foisting methodologies, tools, techniques, models and definitions on each other, naming no examples. Testing and checking. Sorry, I meant to think that, not write it.

Anyway, let's break it down a bit. I love a tortured analogy, so let's use the layers of our own lovely planet as an explainer:

Inner Core...

The inner core of the earth is hot, high pressure and fast moving, but dense and solid. Always moving and wrestling with itself but providing a base to develop the self from. I chose these five areas for a core, reflecting my values:

  • Testing - I value testing skill over understanding a business domain. Testing skill helps you to understand domains, but for me business domain understanding doesn't reciprocate to the same degree.
  • Tech - To me, I refer to technical awareness here. What are the strengths and weaknesses of a given technology or architecture? And how does that inform my testing?
  • People - Possibly the most complex part of any role. Fathoming these strange creatures, whether users or developers, is a tester's pressing concern. Literally nothing happens without these curious beings.
  • Advocacy - Being able to talk about testing and the value it adds is a real skill. If you can't convince yourself about what it is, who cares and why, then how can you convince anyone else?
  • Strategy - I think you need to have a strategy as a tester. Charters, sessions, mind maps, mnemonics. Without a strategy, you are probably just poking at user interfaces.

Outer Core...

The outer core is where things start to get a bit more fluid and move a bit faster, plus it's the middle layer, affected by both the inner core and the mantle. The boundaries get a bit blurrier and navigation is a little bit harder:
  • You start to realise that when someone says something like 'integration tests' it really doesn't mean to them what you think it means. Hilarious and interesting conversations occur and you create a number of tests which are different types of integration tests. 
  • You start to realise that testing is interfaced deeply with many other roles. But those other roles might not know that. When those people say 'QA' a nervous twitch develops but you stay cool and ask them what they mean by that. Hilarious and interesting conversations ensue.
  • You start to realise that models and thinking techniques, as well as hands-on experience, start to become really important, and one enhances the other greatly. Tools become important; you may get carried away by test automation. If your developer says something like 'stop firing that JSON rail gun at the wrong endpoint' then it's time to pause for thought.

Mantle...

The mantle is a weird place. Almost solid near the surface, extremely fluid and fast moving nearer the outer core. Similarly, a lot of the aspects of the wheel here seem quite specific but underneath the surface you start to realise how they are all part of a very complex entity:

  • You realise that if a system under test is usable and accessible then it is probably more testable. If it is more testable its probably more supportable by operations, which then might improve the relationship between dev and ops. For me, making these connections is the crux of this part of the model.
  • You start to realise how little you know about anything really. Maybe before you thought you knew quite a lot about something; then that gives way to the realisation that Dunning and Kruger weren't lying, and it's you.
  • You start to realise you can't do it alone. Also, you know the people who focus on something while you buccaneer around the testing wheel? You previously couldn't understand that, but then it occurs to you that you need lots of different skills, outlooks and experiences to grow.

So what about the crust, you say? That's the shallow layer that we usually need to break through in order to really start to question what we are made of. And just remember this: once you've been at it for a while, it can be a long and treacherous journey back to your core...

Friday, 12 May 2017

Independent, Punk, Leeds Testing Atelier IV

On Tuesday 9th May 2017, we did it again, the fourth iteration of the Testing Atelier rocked the mighty city of Leeds. 

We try to do things a little differently.

Our venue, Wharf Chambers, is different: a community run venue rather than stuffy conference halls or meeting rooms. We wanted to present a different type of event too, as many testing conferences are mainly testers talking about testing that testers do. We wanted to show testing as an activity, something that all roles do in their own way, and how those fit together. To this end, we sourced speakers, workshop facilitators and panelists from loads of roles; developers, ops, build engineers and product all contributed. In fact we had pretty much a 50/50 split between testers and other roles. Winning.

As well as having more from all those roles who have a stake in testing as an activity, we had:
  • More focus on understanding issues that are changing our testing lives, including DevOps and Continuous Delivery techniques, in order to realise that testing is often enriched by new patterns for software development. 
  • More gender diversity than before, 40% of speakers, workshop facilitators and panelists identified as female, an improvement on 20% for Atelier III. More diversity, more viewpoints, more understanding, better relationships, better decisions and different thinking.
  • More sponsors enabling us to do more for our attendees, more media (thanks to Codera for helping us out) and more swag (again, thanks to Skelton Thatcher and Ministry of Testing). Infinity Works, Ten10 and Chris Chant kept the bar stocked and bellies full of pizza, a crucial part of the day. 

Anyway, I think this tweet summed up my feelings for the day, quite nicely:
Leeds continues to flourish as a technology city and the Atelier is a big part of that. We'll be back later this year, better than ever.

Friday, 5 May 2017

The Four Hour Tester - Modelling as a Team Exercise

A few weeks ago, myself and a few colleagues embarked on the Four Hour Tester exercises, starting with the skills of interpretation.

As promised we have attempted the second exercise, modelling. Modelling for me is one of the key testing skills, especially if you don't wish to be rendered inert when there are no 'requirements' or 'documentation.' Making our models explicit is also critical; we all carry around our models of a product, system or process in our heads, and when externalised, they can raise new questions, both of our own understanding and as a wider group.

The essence of the exercise was to take three tours from Michael Kelly's FCC CUTS VIDS touring heuristic, specifically:
  • Users Tour
  • Data Tour
  • Configuration Tour
And go exploring!

Anyway, we grouped ourselves up and had a go, with one slight change: instead of Google Calendar we used the deeply insane Ling's Cars website! Here's what we came up with:

Notes (rough) from our Lings Cars session this aft:
User Tour:

As a holidaymaker
I want to lease a cheap car for 2 persons
so I can easily explore for a week

‘Easy step by step guide’ page is too wordy with unnecessary pictures - unclear for user

Managed to find out:  Insurance is our responsibility, exceeding agreed mileage costs extra

Wanted to find out but were unable:  Multi language sat nav? Fuel level when collected / returned?

User confidence - low (as a user can I trust this business with my money?)

No ability to compare prices with other providers - would be useful

Data Tour:

Looked for:
Cost per week - looks like long term leasing only
Car spec (top speed, mpg, manual/auto, etc) - some data on this was found but difficulty finding the data in amongst the madness

Config Tour:

Stop / Play setting for Homepage video - does not persist on page refresh
Potential personalisation - didn’t sign up but login appeared to be available


User Tours

I want to lease a family car, for £1000 a month, but don't know what I want really

* There are some cars shown immediately on the homepage, which I can scroll through and review.
* There is a list down the left hand nav, so if I knew what I wanted I could click through
* Now I want to filter down my search, because I don't know what I want yet.
* I want to be able to get a quote and order, click Cars
** Order a car
** Get a quote
** How it works
** Full car list!
*** Finally found what I wanted
*** I can search by price and car type finally
*** But still haven't got any results

- If I knew what I wanted, I think it would be much easier to use this site!

Data Tours

List all the major data points of the application

* Lings Deal ID
* Cars
** Price per Month
*** Price by Mileage
** Car Specification
*** Engine Size
*** Name
*** Fuel Type 
*** Transmission 
*** Age
* Agreement
** Warranty
** Extras
* Customers
** States
*** Customers in Proposal
*** Customers in Order
** Time
*** Customers wait time for proposal 
*** Customers wait time for order
** Staff
*** Available in the office
*** Metadata about the staff
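For teams who want to take a data tour further, the data points found above can be captured as a simple model to test against later. A hypothetical sketch (all field names, types and values are our guesses, not anything from the site itself):

```python
from dataclasses import dataclass, field

# Hypothetical model of the data points found on the tour;
# names and types are assumptions, not Ling's actual data model.
@dataclass
class CarSpecification:
    name: str
    engine_size_litres: float
    fuel_type: str
    transmission: str
    age_years: int

@dataclass
class Car:
    deal_id: str              # the "Lings Deal ID" data point
    spec: CarSpecification
    price_per_month: dict     # mileage band -> monthly price ("Price by Mileage")
    warranty_months: int = 36
    extras: list = field(default_factory=list)

# One car captured as data, ready to generate test ideas from
car = Car(
    deal_id="LC-1234",
    spec=CarSpecification("Example Hatchback", 1.2, "Petrol", "Manual", 0),
    price_per_month={8000: 199, 10000: 219},
)
print(car.price_per_month[10000])
```

Even a throwaway sketch like this raises new questions: is price really keyed by mileage band alone, and where do the customer states (proposal, order) attach?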

Configuration Tour

Attempt to find all the ways you can change settings in the product in a way that the application retains those settings.

* Webcam - Add Staff Overlay then refresh page - Overlay is not retained
* There is no login, so it's hard to change settings in a way which affects the experience, even superficially.
* Interesting testing dynamic that a website with so much stuff on it isn't that personalisable.
* Changed the cookie values and deleted the PHP session ID, and nothing happened. Perhaps the site is no longer receiving that ID in requests from my session; we can't see the logs.
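That kind of cookie poking can also be scripted. A minimal sketch using Python's standard library to parse a Cookie header, drop the PHP session ID and rebuild the header for replaying the request (the cookie names and values are made up; a real header would come from the browser's dev tools):

```python
from http.cookies import SimpleCookie

# A Cookie header as a browser might send it (values are invented)
raw = "PHPSESSID=abc123; video_muted=1; currency=GBP"

jar = SimpleCookie()
jar.load(raw)

# Drop the PHP session ID, keep everything else,
# then rebuild the header to replay the request without it
del jar["PHPSESSID"]
header = "; ".join(f"{k}={v.value}" for k, v in sorted(jar.items()))
print(header)  # currency=GBP; video_muted=1
```

Replaying a request with the rebuilt header (via curl or a browser extension) is a quick way to check what, if anything, the session ID actually controls.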

Evaluation Questions

1. For each tour write down a few things you would want to test.
** Users Tour
*** Find a car via filters and proceed to get a quote - see how long it takes to get from landing to quote 
*** Come to the site with a specific car in mind to lease and compare the timings
*** Skip the quote and proceed straight to order and note the significant differences.
** Data Tours
*** I'd be intrigued to get a group together and try to affect the numbers of quotes/orders in progress
** Configuration Tours
*** I would love to block certain aspects of the site (embedded video players for example) and try to use it.

2. How did these tours help you to come up with different test ideas?
* Changing angles, while facilitating focus - again with the Lings Cars site, a lot going on, what was the core of the product?


(Shared with permission from the team taking part)

A couple of thoughts from me on the exercise and the session:
  • The Touring Mindset - to enter into and maintain a touring mindset is quite hard, at least without practice. However, I think it is important in attempting to add fresh eyes to a product you may have tested for a long time. More refresher sessions like this would be useful in many contexts, especially long lived product teams.
  • Models and Truth - testing and the concept of truth has a complex relationship. Searching for single sources of truth is a sport that many teams often engage in, but using touring to expose models shows that different people on the same tour with the same mission can find very different truths. Things claimed as truths are my favourite claims to test...
Next up, test design, stay tuned...

Monday, 10 April 2017

The Four Hour Tester - Interpretation as a Team Exercise

I do hope that everyone has heard about the Four Hour Tester by now. A fascinating experiment by Helena Jeret-Mäe and Joep Schuurkes, distilling the key skills in testing into a set of exercises over 4 hours. This resonated with me, I enjoy thought exercises where you reduce something down to what you believe to be critical, really making choices and having to let go of previously unrealised biases and assumptions.

After seeing the model demonstrated at TestBash Manchester last year, I thought it would be beneficial for me to gather my colleagues, pair up and attempt the first exercise, "Interpretation":

Essentially, come up with as many interpretations as you can of the second sentence of the following paragraph:
“You can add reminders in Google Calendar. Reminders carry over to the next day until you mark them as done. For example, if you create a reminder to make a restaurant reservation, you’ll see the reminder each day until you mark it as done.”
Here are some of the results, we came up with:


You CAN add reminders - but nothing else
You can ADD - can you remove?
YOU can add reminders - but nobody else can
Reminders CARRY over - Does it create a new reminder? Does it carry over with the same data? Does it have the created date? Does it keep extending? Is it infinite?
Reminders carry over to the NEXT DAY - Which one? What about on days that cross international borders? Birthdays? Timezones? What about people who fly? International Space Station? Rockets?
You’ll SEE the reminder - IS it visual? Audial cues? What about blind people? Does it take over the device? Does it focus? Does it only vibrate the device (Also visual!)?
You’ll see the reminder EACH DAY - when on each day? Morning? Night?
MARK as done - Tick? Cross? Colour? Fade? To note for later?
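Many of these questions bite as soon as you try to write the rule down. Here is a minimal sketch of just one possible interpretation of "carry over to the next day until you mark them as done" (it bakes in the answer "a day is a calendar date", and every name in it is ours, not Google's):

```python
from datetime import date

# Hypothetical model: an undone reminder stays visible every day
# from its creation date onwards. This silently answers "what is a
# day?" with "a calendar date", ignoring timezones, flights and
# the International Space Station.
def visible_reminders(reminders, today):
    """Return the reminders the user should see on `today`."""
    return [r for r in reminders if not r["done"] and r["created"] <= today]

reminders = [
    {"text": "Book restaurant", "created": date(2017, 4, 1), "done": False},
    {"text": "Renew passport", "created": date(2017, 4, 1), "done": True},
]
# The undone reminder is still visible days later; the done one is not
print([r["text"] for r in visible_reminders(reminders, date(2017, 4, 9))])
```

Every alternative interpretation in the lists above (does it alert again? when on each day? what about timezones?) would force a different model, which is rather the point of the exercise.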


1. Yes. Viewpoints from multiple standpoints, ESL, cultural influences, etc etc 
2. VERY! One could develop a system subtly different to the expected system
3. Times! UI! Permissions! User input!


You can add reminders in Google Calendar. Reminders carry over to the next day until you mark them as done. For example, if you create a reminder to make a restaurant reservation, you’ll see the reminder each day until you mark it as done

1. Who can add reminders/ everyone can add/ certain permission levels?
2. You can add reminders either just for yourself/ other people as well/ shared reminders?
3. You can't add them from anywhere else, e.g. booked flights being added automatically
4. You can only add them (not edit/delete?)
5. Reminders are the only thing you can add?
6. Do you need to be able to add text? Locations? Specific times?
7. You can add them for today (or for the future?)
8. You have to mark them as done or they remain indefinitely/ for a set time period?
9. The reminder makes the reservation for you?
10. It shows you the reminder once/day or every hour?
11. Can you snooze/ dismiss without marking as done?
12. Are you shown the reminder only when you open google calendar?
13. Can mark it as statuses other than DONE?

1. No, not really. We felt that what is there is relatively clear, but there are a lot of gaps in the specification, which isn't something we can solve other than by obtaining further info from stakeholders. Analysing highlights the gaps but doesn't give a clearer/ deeper understanding of what is required.
2. We felt implementation would be similar, but lacking implied/ not specified functionality due to gaps in the spec. If what was implemented strictly adhered to the spec, a significant amount of functionality would probably be missing.
3. Spec, design, interaction with reminders (CRUD), time zones



“You can add reminders in Google Calendar.”
- You - Do I need to be signed in? Who can view the reminder?
- You - Can I set group reminders?
- You - Can I set a reminder for others? Can others set a reminder for me?
- can - Do I have to? What happens if I don’t?
- add - Do I need any permissions to create? Can reminders be updated, removed?
- Google Calendar. - can it sync with other devices linked to the same Google Calendar?
- Google Calendar. - compatibility with another calendar (outlook) or just Google Calendar?

“Reminders carry over to the next day until you mark them as done.”
- Reminders - Can I set multiple reminders? What’s the Max and Min number of reminders?
- Reminders - Can I set a time for the reminder?
- Reminders - Does it alert? Visual - sound alert - push notification – vibrate?
- Reminders - Once or multiple alerts per reminder? When will the reminder alert, 5, 10, 15 mins before due. Are settings configurable?
- Reminders - Can I delay/snooze the reminder?
- carry over to the next day - Forever?
- carry over to the next day – do they alert again? Does next day include weekends?
- day - How long is a day? 8-hour workday, 24-hour day?
- day - Clock type 24hour / 12 hour?
- mark them as done. - Can I delay/snooze the reminder? Does the reminder update in any way? What’s the definition of done? Is a history stored?


1. Yes – it's richer with questions to try to eliminate assumptions and clear up ambiguities
2. Imp1 ‘Day’ = 24-hour day with 12-hour clock / Imp2 ‘Day’ = 8-hour workday with 24-hour clock 
3. Checking boundaries (lots of them)



You can add reminders - but no-one else can
You can add reminders - but it’s not compulsory
You can add reminders - But you can’t delete, share or edit
Reminders carry over to the next day - not an hour, not a week, not a month
Reminders carry over to the next day - not 2 days, not 3 days, but every day
Reminders carry over to the next day - you can’t snooze them for a specific amount of time
For example, if you create a reminder to make a restaurant reservation, you’ll see the reminder each day until you mark it as done. - but you won’t be able to hear it, or touch it
For example, if you create a reminder to make a restaurant reservation, you’ll see the reminder each day until you mark it as done. - not each hour, not each week
For example, if you create a reminder to make a restaurant reservation, you’ll see the reminder each day until you mark it as done. - only you can mark it as done, no-one else
For example, if you create a reminder to make a restaurant reservation, you’ll see the reminder each day until you mark it as done. - you can’t delete it, or complete it, but you can mark it


1. The question is ambiguous because there are multiple sentences. However, my interpretation did become richer as it aimed to expose all possible interpretations of words, and consequently help reduce the ambiguity.
2. Very different, because there are multiple interpretations for the majority of the words.
3. The adding of the reminder, the carrying over of the reminder, marking the reminder as done


(Shared with permission from the team taking part)

A few thoughts from me on the exercise and the session:

  • Questions or interpretations? - I think the temptation to ask questions rather than search for interpretations surfaced here. I guess the key difference for me is empathy: how might someone else interpret this, as opposed to answering my questions.
  • Gradual realisation of where a requirement seems clear - A few people said "that's actually pretty clear" about the requirement at first, but a tester tries to see the complexity in the simple; once that barrier had been breached, more interpretations and questions followed each other cumulatively.
  • Interpretations of one word, a phrase, the whole sentence, the whole paragraph - this was my favourite part, the difference between micro (one word) and macro (the whole thing) context is at times equally massive. Interpretations of one word can send a ripple of misunderstanding, equal to that of misunderstanding the whole paragraph.
Give it a try as a team! We'll be moving on to exercise 2 as a group in a few weeks....