Friday, 30 May 2014

The Fallacy of the Single Point

 

Ever heard of the 'Fallacy of the Single Cause'?

It refers to the rarity of a single cause producing a particular effect; it turns out the world is more complex than that. Many different inputs are required to create the blended and varied outputs we see in the world around us. Some may contribute more than others, and at different times, but as a rule of thumb for life (and testing), pinning your hopes on one cause is likely to leave you disappointed.

We communicate in stories, but what's the point?

This fallacy has been refined to apply to the penchant for storytelling that is intrinsic to how we communicate. The question is this: how often do you listen to a story and take away a single outcome or learning? The thing is, the end of a narrative is only part of the journey; a great many stories express subtleties as they progress, especially those drawn from that rich vein of intrigue and oblique learning, reality.

In my eyes, this ability to tell a story has always been critical to testing, whether in the act of testing or reflecting afterwards. 'The Fallacy of the Single Point' has significance here too. As a young tester, I thought I had found a simple formula: surely, if you cover each requirement with one test (of variable length and scope), you will have fulfilled the testing mission for that product? My approach tried to short-circuit subtlety rather than acknowledge and complement it. While a multi-themed narrative unfolded, I was focused on a single point on the horizon.

So, what does this mean in a testing context?

A test which proves a single point has its intoxications. It allows your mind to partition, to consider a test complete, which, as far as state is concerned, is unhelpful. The inherent complexity of the systems we test creates an intense flux in state, making it as fallible an oracle as any other. Imposed narrowness gives rise to blindness: the peripheral aspects of a test, lurking just out of plain sight but affecting the outcome nonetheless, get missed. That narrowness also hampers the effective discovery and description of bugs and issues, which require clarity, as the wider picture is relegated to the background.

The opposite of this argument should also be considered. Often I will see tests which prove this, that, the other and a little extra besides. This is often indicative of a faux efficiency (always the poorer cousin of effectiveness), bought at the cost of the cerebral focus a test requires: trying to maintain an eye on every aspect of a multifaceted test is usually more than us mere humans can effectively handle, resulting in a crucial detail being missed or a link not being made.



How do we know if this is happening?

Let us use Session Based Testing as our context, with a greenfield application, where we have very little history or domain background.

When determining charters for sessions, especially early in the testing effort, we may find our focus being overly narrow or wide. There are a number of signals we can look out for to give us information about the width of our focus.

If the charters are too narrow:

"We're done already?" - Imagine a 120 minute session, part of a number of charters to explore a particular piece of functionality, focused on a business critical requirement. Suddenly, 30 minutes in, you feel like it may not be valuable to continue. Make note of this, it may be a natural end to the session but it could also be an indicator of narrow focus.

"Obviously Obvious" - You have a charter on a specific requirement and the session passes without incident, perhaps a few areas for clarity. Someone looks over your shoulder and says "well, that over to the left is obviously broken!" You've missed it. Again, make a note. Perfectly possible that another pair of eyes spotted what you didn't but it may be a signal that you have been too narrow in your focus.

If the charters are too wide:

"Too Many Logical Operators" - Your charter might look like this:

The purpose of this functionality is to import AND export map pins. The business critical input format is CSV BUT one client uses XML, the display format can be either tabular OR spatially rendered. Export can be CSV OR XML.

This charter has at least four pivot points where your testing will need to branch. After creating a charter, look for changes in direction and see how comfortable you are with your pivots. This signal is common beyond charters; I see it often in user stories and the like. Questioning the presence and meaning of logical operators is a behaviour I see in many effective testers.

"Can I hold it in my head?" - Our brain only has so much capacity. We all have our individual page size. Consider the charter above. Would be be able to hold all that in your head without decomposing while testing? Would you be able to effectively test it in one session? The answer is (probably) that one cannot.

Is there something simple that can be done?

You can vary the length of your leash: a time limit of your choosing for leaving the main mission to explore around the functionality, returning once the limit has expired.

Sessions too narrow? Give yourself a longer leash, allowing exposure to the edges of the charter before snapping back to the mission at hand.

Sessions too wide? Shorten the leash, keeping yourself within touching distance of the parts of the charter you can realistically reach within the session you have defined.
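If it helps to make the leash tangible, here is a minimal Python sketch (my own illustration, not part of any session-based test management tool): a small context manager that timeboxes an off-charter detour and reminds you to snap back once the limit expires. The ten-minute limit and the reminder wording are assumptions for illustration.

```python
import time
from contextlib import contextmanager

@contextmanager
def leash(minutes: float):
    """Timebox a detour away from the charter's main mission."""
    start = time.monotonic()
    try:
        yield
    finally:
        elapsed = (time.monotonic() - start) / 60
        if elapsed > minutes:
            print(f"Leash expired after {elapsed:.1f} min "
                  f"(limit {minutes} min): snap back to the mission.")
        else:
            print(f"Back on charter with {minutes - elapsed:.1f} min of leash to spare.")

# Usage: wrap any off-charter exploration in the leash.
with leash(minutes=10):
    pass  # ...explore around the edges of the functionality here...
```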

This variable leash approach enables progress while also refining the focus of your charters on an iterative basis. As we explore and learn, more effective ways to decompose the system under test will present themselves. The testing story emerges as you move through the system under test; the challenge is to find the right balance of focus, ensuring we are not hoodwinked by 'The Fallacy of the Single Point.'

Monday, 26 May 2014

Reviewed - The Tester's Pocketbook by Paul Gerrard


I had heard a great deal about this little book. Some who had read it appreciated its premise, some were in fairly fundamental disagreement. If a text generates such polar opposites of opinion, my interest is immediately piqued! So let's begin with that premise:
"A Test Axiom is something we believe to be self evident and we cannot imagine an exception to it"

I feel this is indeed a risky premise for an approach to testing; it could easily be misinterpreted as a set of iron laws to be followed, which will magically output an appropriate approach to testing. With this in mind I set about the enjoyable challenge of dissecting these axioms. Let's take, for example:
"Testing requires a known, controlled environment"

There are certainly benefits to this statement, but also damaging flipsides for your test approach. A known, controlled environment is limited in variance and therefore only able to expose bugs of a certain nature. In addition, tests run in such an environment can give false signals, as the variance and scale of the real world change outcomes.

On the reverse of this, I found a number of 'axioms' more challenging:
"Testing needs stakeholders"

I can imagine a great deal of variance here in terms of who the stakeholder is, their agenda and their beliefs, but testing without an audience? Can I imagine a situation where this is not axiomatic? Stakeholders may see testing as a 'hoop to jump through' rather than a skilful way of providing the information they need, and may feel they don't need testing, but testing needs stakeholders to provide context for scope and priority decisions.

The 'Axioms' form part of the 'First Equation of Testing':
"Axioms + Context + Values + Thinking = Approach" 

I found this to be another challenging assertion, as the application of axioms in an equation could be interpreted as a formula for success, whereas the real challenge of testing exists in the spaces between the constituent parts of the formula and how they interact. I see danger in creating formulas and checklists for testing, as they perpetuate the linear, tickbox impression of testing as a craft. In fairness to the author, the overall tone of the piece encourages the application of the axioms and the formula as part of a wider toolkit.

Although I found myself disagreeing with (what I saw as) the overall premise of the text, the author strongly advocates a focus on stakeholders and, by extension, providing the right information at the right time to those who make decisions. These sections are well worth reading and paying attention to; I have certainly applied some of those ideas to my recent work, and they provided an excellent framework for thought and approach. The author builds from a fairly formal approach to testing to give due attention to the spectrum of formality and the value of a varied approach. Initially I felt the text suffered from a lack of acknowledgement of the complexity of contemporary systems, but this acknowledgement grew as the text progressed, which helped to provide a more rounded view of testing.

I found the author's real-world experience shone through towards the end of the text; the reality of delivery is evident, although I think the author leans too far towards testing being concerned with the cost of failure rather than the benefit of the system meeting its mission. Both are important, but I prefer the positive side of this coin, and I believe testing enjoys a better profile when approached from this standpoint.

A thoroughly recommended read for testers of all levels of experience and seniority. I will use some of the 'axioms' as part of my testing toolkit, although with an eye on their fallibility. I'll end with my favourite 'axiom', which is certainly one I believe in:
"Testing never finishes. It stops."

Thursday, 8 May 2014

Let's celebrate! Anyone still out there.....?



Pyrrhic victory. I was reminded of this term a few days ago. 

It is when winning decimates *almost* everything, so that the victory is basically not worth the cost exacted to achieve it. I believe I have seen this effect on teams during and after very long development projects, the dreaded 'death march.' The project's aims might be valuable and completely worthwhile, but at what cost?

Sometimes, the stresses and strains of such endeavours decimate the team tasked with delivery. Relationships are strained or broken, enthusiasm is replaced with cynicism, and previously open minds close to protect against harm and monotony. Previously conquered silos re-embed themselves.



Consider those precious 'T-shaped' people, who are consistently pushed to their limits and burn out, or retreat into their shells. As a complement to the determined specialist, these people (and encouraging more of them to flourish) are the key to unlocking effective delivery. Their flexibility and enthusiasm are their best qualities and their worst enemies in this context.

So before you embark on the 'next big thing' (with emphasis on the big), take the time to consider its impacts on the humans who deliver it and split it into manageable but valuable pieces. Or you might be left with a delivered project, but no one willing (or even around) to celebrate it. 

Tuesday, 6 May 2014

Reviewed - The Effective Executive by Peter Drucker


I'm always slightly sceptical of the phrase 'timeless' when it comes to management literature, given the infinite variance of people and the situations we find ourselves in. The Effective Executive was described as exactly that by the excellent Manager Tools podcast, and I found myself in front of a well-known online store ordering a copy.

Overall, what struck me immediately was the sparseness and matter-of-fact nature of the language used by Drucker, although that sparseness suits the practical nature of the guidance given, starting with managing one's time.

The reality of time is that it is the one thing (on an individual level at least) that you cannot gain more of. Drucker's message is quite bleak at first, but I will not contest its reality: most executives I know will admit to rarely being able to focus on the critical issues, as they are drawn in varied directions to tend to the issues of today when they might be better served focusing on tomorrow. Indeed, that is their primary function. Tracking time at a micro level, I find, is not natural for most. I am vaguely aware of where my time goes at a macro level, although I can imagine areas of ineffectiveness lurking which could be put right. Drucker's advice here is well founded, although I believe ideas of slack and long-leash learning would be a welcome addition to his time model, even for executives.

It is in the focus on contribution that Drucker's text begins to come alive. Whereas I see most executives focusing on the mechanical process of delivery and management with efficiency as the goal, Drucker posits that this is sub-optimal. Instead, key concepts and principles should be the domain of the executive, aided by analysis of the domain and the problem with results in mind; in particular, the question of whether or not an event or problem represents a paradigm shift for the organisation, focusing on root causes rather than symptoms.

Another idea which spoke strongly to me is that an executive should seek to utilise a person's strengths rather than focus on their weaknesses. For example, if a person has been hired in a management capacity but has a natural aptitude for sales, use them in that capacity rather than bemoaning their operational shortfalls. As a person with a predominantly practical aspect to their personality, this appeals to me, as opposed to the long drawn-out process of maintaining the status quo.

Reality (or at least a reality painted by Drucker to which I subscribe) is prevalent within the text, none more so than in its description of enduring leadership, as opposed to flash-of-genius leadership. Effective leadership is grounded in determination, as few of us possess the brilliance required to effect significant change instantly. Some may see this as another bleak message in a world where we are told anyone can do anything. It is not delivered as such, only as the austere thought that if genius were needed everywhere, progress would be slow indeed! Encourage effectiveness so that the ordinary can produce extraordinary results was the message I took away.

Effective decision making is covered in some depth, with a great many useful techniques to take note of and use. The area that struck me most was disagreement. In most organisations, everyone needs to be 'on board' or 'on the same page.' Disagreement is needed to be effective; otherwise we risk making decisions of shallow agreement which do not stand up to serious scrutiny. I have noted that many of the executive relationships I observe appear brittle and don't welcome constructive challenges (notwithstanding the non-constructive ones, of course). Drucker's argument here resonates in the software development world, where challenge is seen as blockage and being 'the guy who asks awkward questions' is a lonely, lonely place.

All of Drucker's arguments are based on the principle that self-development is the path to effectiveness. Some lessons are learnt easily, others the hard way, but I agree that effectiveness comes more from within than without. I feel that (like Weinberg's Secrets of Consulting) I will learn more from this book with experience, as my own self-development progresses. Let's see how I feel about it in a few years......