A single source of testing truth...


Truth. Oscar Wilde said it best, I think:
'The truth is rarely pure and never simple.'
MEWT…

Of the vehement debates at the recent MEWT gathering in Nottingham, the talk which generated the most feedback and opinion was probably Duncan Nisbet's ‘The Single Source of Truth is a Lie.’ To be honest, I was relatively quiet during the debate, as it came straight after my talk and I also need time to parse such things, hence this blog.

A link to the slides can be found here:

https://mewtblog.files.wordpress.com/2015/10/ssot-is-a-lie.pdf

What Duncan said in my head…

Those were the slides; this is how I understood the talk Duncan gave. First up, there was an admission by Duncan that he was just putting this one out there for feedback, which is kind of the point of MEWT really. Second up, belt and braces, a definition of truth:
‘Conformity with reality or fact’, otherwise known as ‘verity’.
Thus began the front-loading of the mind with terms which have multiple, deeper meanings depending on their context. To be fair to Duncan, when he picks a subject, he doesn’t dance around the edges. Truth.

He used the ‘Three Amigos’ device as a way to introduce the topic, namely how a session such as that can generate a single source of truth, a shared understanding that can be taken and built upon. This might be living documentation (à la Specification by Example), which is semantically similar to defining acceptance tests to drive the development. However it manifests itself, is it possible to define truth for a given feature/function/context/situation?

I believe the gist of the talk and the debate was a loose consensus that truth is a multi-faceted beast. Duncan credited James Bach with the following breakdown; the three pillars below appear throughout many disciplines (social work, medicine, mental health) as a model in their own right. As I understood it, the truth is made up of:

  • Social truth – conformity with realities commonly held within/across team/social strata;
  • Psychological truth – conformity with realities held individually within one’s psyche;
  • Physical truth – conformity with the reality presented by a physical artifact, such as your product.



Models of truth on top of models for testing…

Duncan then overlaid this on top of the Heuristic Test Strategy Model:

  • Social Truth ↔ Project Characteristics – linked by a need for shared understanding about the feature/function/context/situation;
  • Psychological Truth ↔ Quality Characteristics – linked by how one might feel about the feature/function/context/situation;
  • Physical Truth ↔ Product Characteristics – linked by the production of artifacts pertaining to the feature/function/context/situation.

This resonated with me; I could mentally link this version of how the truth might be determined with a model of questions. Which, for me, is the point: having a model which allows you to determine what you might consider to be truth in your context is rather useful, even if the precision of that determination is fallible.

The debate that followed ranged from definitions of premise and assumption to etymological rabbit holes. I think eventually the white flag was waved and we moved on.

Should testers even talk about truth…?

What do I think of this debate then? I will keep it short and simple, in the light of the potential maze this represents. As a point of order, and in concert with the model presented by Duncan, most of what follows discusses physical truth, namely an artifact/product and what it might do. Social and psychological truths deserve tomes of their own.

When the word ‘truth’ is used in a testing context, I generally think of a few things:


  • Words that we, as testers, shouldn’t use. I would probably put ‘truth’ into a similar bucket as ‘full’, ‘complete’ or ‘done.’ You utter these terms and the ground beneath your feet becomes decidedly shaky. Not one I would put in my safety language locker. Mainly because these terms are sometimes taken literally, and truth (to some) seems so darned final.
  • We are in the information business, rather than the decision-making business. “This is what it does” may be more a tester’s domain than “This is what it should do.” After all, never the twain shall quite meet in beautiful clarity. That is not to say a blend of the two is not something to strive for (preferring early test involvement over involvement at the end, ‘as a service’), but we should be mindful of our core principles.
  • Hang on. Have we not already got an approach for this, in identifying oracles and being aware of their fallibility? Maybe we’ve already answered this question. Nothing wrong with revisiting the Oracle Problem, but I believe that approach remains fundamentally sound and leaves room for context, whereas truth chases the absolute (doffs cap to John Stevenson here, but I subscribe).




Is this (yet) another impossi-task…?

Speaking of absolutes, is truth another technicolour-dreamcoat-wearing, rainbow-generating unicorn with diamonds for eyes that we seem to continuously chase in software development? It sounds suspiciously like that process of nailing down ‘stuff.’ Truth seems to me to be subject to change like all other things, and the more we try to pin the blancmange of truth to the wall, the slipperier the world gets.

We (in software development, and those whose businesses depend on it) seem to rather enjoy setting ourselves impossible, contrary goals (deliver this huge thing that the world will still want in two years’ time, for example) which directly grind the gears of the world. Maybe this is just one of those. We’ll get over it one day.

Certainly made me think. Truth might just be a journey, and not a destination.

