Tuesday, 30 December 2014

Hard Skills > Culture Fit

So here's a little bit of people-hiring logic for you. I've expressed it as pseudo-code for all those technical people who insist on hiring exclusively for hard skills, despite the long-term pain of it all.

private Handler handler;
public Employee effectiveEmployee;
public double numberOfEffectiveEmployees = n;

if (hardSkills > cultureFit) {
    handler = new Handler();
    numberOfEffectiveEmployees -= 0.5;
} else {
    effectiveEmployee = new Employee();
}


My code probably doesn't explain itself (and probably won't compile), so here goes.

So, a handler is the person (who may well have done the hiring, if there is any justice) who tidies up the mess of a hire that doesn't fit with the culture that exists at your organisation. Not the public-facing culture either, the actual one. Effectively, for every poor cultural fit hired, you reduce the effectiveness of your remaining employees by a bit. Probably a fair bit. I went for half. Arbitrary. I single out technical hires here mainly as a sweeping generalisation; it happens all over really.


You can tell you're a handler when:

-The same person is in front of you all the time.
-You exhaust a repertoire of approaches to people management and problem solving that have served you quite well in a successful career thank you.
-Other people talk to you about that person all the time, or the conversation always goes that way.
-The organisation can't figure out what it wants from that person.


You can tell you're a handlee when:

-You are always in front of your manager.
-Your manager appears irrational and changes approaches at seemingly random intervals.
-You appear to be the subject of conversation regularly in contexts that are probably nothing to do with you.
-The organisation can't figure out what to do with you.


This is not a blame thing. Both handler and handlee are doing what comes naturally to them. Both are perfectly effective, just not right now. The problem is the culture black hole which exists between them, very, very slowly drawing them both in. Or very fast. I forget which way time dilation and black holes work.


A black hole is a good analogy. As soon as you are committed to the hire and the initial honeymoon period is done, the culture shock kicks in. And there are few ways to escape once the gravity well takes hold. None of them particularly pleasant.

Next time you think 'hey, this person is a Perl wizard', also ask: 'will this person systematically alienate the rest of the humans around them?' You'll thank me for it.

Thursday, 11 December 2014

Train the Trainer - Course Retrospective

What's up with that then?

So, I've been charged with becoming a trainer within my organisation.

Just to set expectations here, I know a tiny amount about how to furnish humans with new knowledge, skills and attitudes. Make no mistake, if this field is an ocean, I am a puddle by comparison. I have dabbled with coaching, but much learning from me will probably have been via proximity and osmosis.

Personally, if I'm going to do something, I want to use my whole arse to do so, not just half of it. I want a set of models to apply in context and (more importantly) a strong paradigm, so when I discuss, create and iterate on training material and courses, I have a starting position to challenge/be challenged on. So, I attended the Train the Trainer course to complement my own buccaneering learning tendencies.

What did you learn that t'internet didn't know for free?

The internet probably knows some of this stuff but here is a bunch of stuff I have learnt over the last few days:

  • I was pretty worried about creating material, how much time it would take and how I would fit everything else in. It turns out the angle of my thinking wasn't right. Instead of 'how can I create course material?' I should have been thinking 'how can I create exercises which transfer the onus onto the participant to learn?' It will still be hard, but it feels better.
  • Bloom's Taxonomy - A method of classification for learning objectives, split into knowledge, skills and attitudes. If done really well, they will form your assessment too. It turns out my paradigms for knowledge, skill and attitude were a bit wonky too. Especially with reference to the difference between skills and knowledge and how to *really* tell them apart. Here goes:
    • Knowledge - I know how to do something
    • Skill - I can practically apply my knowledge of that something
    • Attitude - I have a belief or a will to do something
    • Simple maybe, but it's what I'll take forward with me! Have a look at Bloom's, it's fascinating stuff.
    • Excellent lexicon for objectives too, useful in many contexts.
  • My expectations - It turns out I don't need to try and impart all my knowledge and skills within a certain time period. The same goes for my expectations of others after a training course. They might not need to be geniuses. They might need to recall some things, recognise patterns in others, and be able to apply still others.
  • Fluidity - training courses are not an ironclad military exercise. They provide a scaffolding which allows room for manoeuvre, and the ability to flex on what really matters to the participants. Simple questions at the beginning of a topic like 'what is your experience of X?' can help to frame a session, streamlining as appropriate to meet needs.
  • Objectives linked to activity is key. The opportunity to learn, reflect, add to our theoretical knowledge and apply that knowledge should be embedded in each activity. Whether that is simple matching of paired subjects or attempts to build competence in complex modelling techniques, I really appreciate the set of heuristics the course furnished me with to assist.
  • Me - I'm a pushy so and so. If you are not careful, I'll be in there, taking over the whole show and happily reshaping things in my own glorious image. I shouldn't do that anyway. I really, really shouldn't do that in a training context. I'm not creating Cyber-men, so I must curb my natural tendencies. I think this will be good for me.
It was worthy of the investment. Now, I look forward to getting the sharp nail of experience through my foot and the associated tetanus jab. Time to apply that knowledge, the real test one might argue.

And finally an external view on 'IT bods'.....

It was wonderful to spend time with people from backgrounds whose primary focus isn't technology. It can be a closeted world, and it certainly challenged my ability to explain the fundamentals of testing and agility in context!

Oh, and those guys from different career paths and domains still perceive all 'IT projects' to be late, of poor quality and rarely solving the original problem. Or the problem doesn't exist any more by the time we are done. Or the company doesn't. So far still to go.

Tuesday, 7 October 2014

The Procrustean Bed of ISO29119

The old stories can teach us a great deal. Every once in a while I see the parallels between antiquity and the present, shown through the lens of one of these stories.

The tale of Procrustes (first introduced to me by the work of Nassim Nicholas Taleb, he writes with great skill and knowledge) and the introduction of the "ISO29119 standard" resonate with each other in my mind.

The Tale of Procrustes in a Nutshell......
"Procrustes kept a house by the side of the road where he offered hospitality to passing strangers, who were invited in for a pleasant meal and a night's rest in his very special bed. Procrustes described it as having the unique property that its length exactly matched whomsoever lay down upon it. What Procrustes didn't volunteer was the method by which this "one-size-fits-all" was achieved, namely as soon as the guest lay down Procrustes went to work upon him, stretching him on the rack if he was too short for the bed and chopping off his legs if he was too long."
(Source : mythweb.com)

So, lets adapt this for our ISO29119 situation:
"The "ISO29119 standard" purports to be the only internationally-recognized and agreed standard for software testing, which will provide your organization with a high-quality approach to testing that can be communicated throughout the world. Advocates describe it as having the unique property that it offers a set of standards which can be used in any software development life cycle. What the advocates don't volunteer is that your business problem will need to be stretched or trimmed to meet the new standard. So rather than testing solving your business problem, the focus will be on delivering to the standard."
Who will be Theseus for the Craft of Testing?

In the end, Theseus (as part of his tests) dealt with Procrustes using his own vicious device. However, that will most likely not be the case here; I believe most thinking testers are advocating the opposite, continuing to champion the principles of Context Driven Testing. Rightly so, as merely rubbishing standards is only one half of the argument. I sincerely hope our community of minds will be our Theseus, but time will tell. The uptake of the "ISO29119 standard" is an unknown; my concerns are mainly with large organisations and government, where group (and double) think can be prevalent. These are the soft targets for peddlers of the cult of the same.

However, all over the development world we desperately and continuously strive to leap into Procrustean Beds, taking shallow solace in "standards", which humans have been doing for a long, long time as a proxy for thought. Once you jump into a Procrustean Bed, you never emerge quite the same.......

Consider investigating..................

Friday, 3 October 2014

N things I want from a Business Analyst....

Business Analysts. I give them a hard time. I really do. I love them really but I couldn't eat a whole one.

Is something I used to say.

I even went to a Business Analyst meetup once and asked them if they thought they should still exist in our "agile" world or are they being washed away by the tidal wave. Looks can really hurt, in fact they can be pretty pointy.

I wouldn't do that now though, I think I've grown up a bit. Like any great programmer or tester they can really add to a team. And, conversely, like a really poor programmer or tester they can really do some damage. It was unfair to single them out and very possibly bandwagon jumping of the worst kind.

In addition, I fell into a common trap. I was full of hot air when it came to what was bad about Business Analysts, but could not articulate what might make them great.

So here goes............
  • I want a vivid (preferably visual) description of the problem or benefit - let's face it, none of us are George Orwell. We can't describe, with clarity and economy of words, all the complex concepts present in our lives. However, we can deploy many techniques to bring flat material to life. Elevator pitches, mind maps, product boxes, models, personas and the like are your buddies.
  • I want you to shape a backlog, not provide a shopping list - hearing a backlog described as a shopping list leads me down a path of despair. A backlog is a themed beastie, which needs to be shaped. Delivering the stories in a backlog will not implicitly solve the problem, any more than a lump of marble and a hammer/chisel constitutes a statue. Items in a backlog are raw materials. They need sculpting with care to achieve their goals.
  • I want you to work in thirds - for you lucky so and so's who are trying to figure out how on earth to cope in the agile tsunami which is enveloping the world, here's a rule of thumb for you. One third, current sprint, one third, next sprint, one third, the future. The remaining 1% is up to you.
  • I want you to be technically aware but not necessarily technically proficient - technical awareness is a beautiful thing, many testers are a good way down this path. Knowing the strengths and weaknesses of given technology helps you to realise business benefits, because you can appreciate the whole picture, the need, the constraints, the potential.
  • I want you to really, really try with para-functional requirements - this is in two parts: the response times/capacity/scalability the business needs for the real world, coupled with the constraints of the technology deployed. The answer will be somewhere in the middle. If there is anything I have learnt about performance testing especially, it is that there are few absolutes; para-functional requirements should reflect that subtlety.
  • I want you to be experts in change - in fact you guys should love change deeply, being able to extol its benefits and risks. Helping teams to help their stakeholders realise the value of change in their marketplace. Not snuffing it out to protect business goals which time has rendered of dubious value. 
  • I want you to distinguish between needed and desired - this burns me deeply. The old chestnut about only a small percentage of the product actually being used (linky) is serious business. By not determining the difference between what is needed and what is desired, products are being happily helped to fall silently on swords forged by Business Analysts who struggle to articulate this critical difference.
  • I want you to recognise that stories/use cases/whatever are inventory - imagine the backlog as a factory: piles of stuff everywhere that our brains are trying to navigate around, winding a path through these piles trying to find what we need. This takes time and steals from flow, which we can't afford to lose. Before you add an item, stop and consider for a moment whether or not you need it right now.
  • I want you to challenge really technical people to justify value - "Well, we'll need a satellite server configured with Puppet to centralise our upgrade process." Huh? We will? What value does that give the business? Is what I want you to ask. Anything worth building, should be worth articulating as a value proposition.
  • I want you to take ownership of the product goddammit - there ARE decisions you can and should make. If you wish to survive the agile tsunami, it's time to embrace that change is king, and that means decisions. Big and small, narrow and wide, they are there to be made. By you. YOU.
  • I want you to continuously improve and I'll be watching - I would never want you to do 10 things to improve yourselves. 'N' things please, ever changing in focus to ensure you are delivering value in the contexts you find yourselves.

Basically I want you guys to be superhuman. I think you can be.

Some say being a Business Analyst is old hat. I say it is a gift. But only if you embrace it.

Wednesday, 30 July 2014

The 'Just Testing Left' Fallacy

I am mindful that many of my blogs are descending into mini tirades against the various fallacies and general abuse of the lexicon of software development.

Humour me, for one last time (that's not true, by the way).

In meetings, at the Scrum of Scrums, in conversation, I keep hearing it.

    "There's just testing left to do"
And then I read this:


An all too familiar software development tale of woe.

I thought: 'I bet everyone on that project is saying it too.' Next to water coolers and coffee machines, at the vending machine, in meetings and corridors.

At first, it gnawed at me a little.

Then a lot.

Then more than that.

I have three big problems with it:

  1. It's just not true. There is not 'just testing left.' What about understanding, misunderstanding, clarifying, fixing, discussing, showing, telling, checking, configuring, analysing, deploying, redeploying, building, rebuilding and all the small cycles that exist within. Does that sound like there is 'just testing left?' When I challenge back and say, "You mean there's 'just getting it done left?'" I get an array of raised eyebrows. 
  2. It's an interesting insight into how an organisation feels about testing. The implication of such statements about testing might be extensions of: end of the process, tick in the box, holding us up, not sure what the fuss is, my bit is done, it's over the fence. Most affecting for me is the inferred: "We are not sure what value testing is adding."
  3. On a personal level, it's not 'just testing.' It's what I do. And I'm good at it. It involves skill, thought, empathy and technical aptitude. I'm serious about it. As serious as you are about being a Project Manager, Programmer, Sys Admin and the rest.

I wouldn't want to ignore the flipside of this argument (my latest neurosis).

What about testers who say:

    "I'm just waiting for the development to finish before I can get started"
What are the implications here then? Perhaps there is less understanding of how damned hard it is to get complicated things to JUST WORK. Never mind solve a problem. I used to make statements like this. Until I learnt to program. Then I found that even the seemingly simple can be fiendish. And people are merciless in their critique. Absolutely merciless. Not only the testers, but also senior managers who used to be technical and can't understand why it takes so long to build such a 'simple' system (mainly because they have forgotten how complicated it can get, filtering out their own troubled past).

And if I start hearing; 'there's just QA left'...................

Sunday, 13 July 2014

The name of the thing is not the thing

I often ponder the question 'should we care about what we call things in the software development world?' One could argue that as long as everyone has a common understanding, then it shouldn't matter, right? I rarely see a common understanding (which is good and bad in context), suggesting that we do care enough to name things but sometimes not enough to care about the amount of precision those names have.

Gerald Weinberg writes in the excellent 'The Secrets of Consulting' that 'the name of the thing is not the thing.' As a tester (and critical thinker), this represents a useful message to us. The name given to a thing is not the thing in itself; it's a name, and we shouldn't be fooled by it. This is a useful device, as I believe the name is an important gateway to both understanding and misunderstanding, and names take root and spread.....

Testing is not Quality Assurance

There are probably a great many blogs about this, but I hear/see this every day, so it needs to be said again (and again, and again).

The rise of the phrase 'QA' when someone means 'testing' continues unabated. Those of us who have the vocabulary to express the difference are in a constant correction loop, considered pedants at best, obstructive at worst.

What is at the root of this? The careless use of interchangeable terms (where there is no paradigm for either side of the equation and/or a belief there is no distinction), then wonderment at how expectations have not been met.

So how do I respond to this misnomer?

(Counts to ten, gathers composure) 

Superficially - 'Testing cannot assure quality, but it can give you information about quality.'

If someone digs deeper?

Non superficially - 'When I tested this piece of functionality, I discovered its behaviours. Some are called 'bugs' which may or may not have been fixed. These behaviours were communicated to someone who matters. They then deemed that the information given was enough to make a decision about its quality.'

This feels like a long journey, but one worth making. I will continue to correct, cajole, inform and vehemently argue when I need to. If the expectations of your contribution are consistently misunderstood, then will your contribution as a tester be truly valued?

Test Management Tools Don't Manage Testing

On a testing message board the other day (and on other occasions) I spotted a thread containing the question; 'Which 'Test Management Tool' is best (free) for my situation?' There are many different flavours, with varying levels of cost (monetary and otherwise) accompanying their implementation.

I thought about this question. I came to the conclusion that I dislike the phrase 'Test Management Tool' intensely. In fact, it misleads on a great many levels, and on a grand scale, as its name does not describe it very well at all. It offers no assistance on which tests in what form suit the situation, when testing should start or end, who should do it, with what priority, with which persona. I'm not sure such a tool manages anything at all.

So what name describes it accurately? For me, at best it is a 'Test Storage Tool.' A place to put tests, data and other trappings to be interacted with asynchronously. Like many other electronic tools, at worst it is an 'Important Information Hiding Place.' To gauge this, put yourself in another's shoes. If you knew little about testing and you were confronted with this term, what would you believe? Perhaps that there is a tool that manages testing? Rather than a human.

So what.....?

So, what's the impact here? I can think of a few, but one springs to mind.

If we unwittingly mislead (or perpetuate myths) by remaining quiet when faced with examples like the above, how do we shape a culture which values and celebrates testing? Saying nothing while what testing is and the value it adds are diluted, misrepresented and denigrated certainly helps to shape that culture. Into something you might not like.

Friday, 4 July 2014

Software Testing World Cup - An Experience Report

After much anticipation, three of my colleagues and I embarked on the Software Testing World Cup journey in the European Preliminary. We had prepared, strategised, booked rooms/monitors, bought supplies and done all the other tasks (actually quite a long list) to get ready for the big day. Armed with the knowledge that I would be jetting off on holiday the following day, we entered the (metaphorical) arena to give it our all and hopefully have a little fun. Here are my thoughts about 3 interesting (exhausting) hours.

When I reflect.....

  • Over my testing career, I have learnt to really value time to reflect. Study a problem, sleep on it, speak to peers for advice, come up with an approach. That time just doesn't really exist (in the amount that I needed it) during the competition, which made me uncomfortable. A little discomfort can teach you a great deal though, and it certainly amplified the more instinctive part of my testing brain.
  • Following on with the above, I'm happy to say I kept my shape. When your instinctive side (coupled with the deep rooted, long learned behaviours) becomes more prevalent, you can, well, go to pieces a little. I didn't. I listened to the initial discussions with the Product Owners, stuck to time limits, continued to communicate, maintained the Kanban board we had set up, all healthy indicators of some useful learned behaviours!  
  • We did quite a lot of preparation and research. We met up a couple of times as a group to discuss our approach and the rules of the competition, which helped massively; discussing the rules as a group meant we quickly built a common understanding. Our preparation went beyond the competition itself, covering bug advocacy and the principles of testing in a mobile context, to name but a few. However, as we know, very few strategies survive first contact, and our overall strategy was no exception!
  • HOWEVER, I do believe we pivoted our strategy nicely on the day, enabling us to broaden our focus due to the scale of the application and number of platforms. As a team, we decided to familiarise with each area (we had broken down into chunks) on our desktops within a browser, then move on to a specified mobile device (given a steer that iOS would be critical).
  • Finally, I thought it was a really great thing that we decided to be in the same room as a team. It really boosted our ability to validate each other's defects and check in at important times, such as when we were adding to the report.

Now, about the competition itself......


  • Adding a mobile aspect really created fertile ground for bugs. In fact, I could have raised bugs for the full 3 hours, but the competition was about much more than that. This made the challenge a little different, as it would have been easy just to bug away and lose all perspective. 
  • The small hints before the preliminary were helpful too, allowing us to queue up devices and reach out to our colleagues who had done mobile testing in depth.
  • We had our HP Agile Manager (good grief, the irony in that title) test logins nice and early, which was really helpful for familiarity, although a part of me wished I could have tested that system instead! We got logged in to the real project on the day without any issues, although I'm not sure it was the same for everyone.

Could be better.....

  • A narrower focus for the application would have improved the quality and challenge of the defects. To slightly contradict the above, the scope of the application under test was TOO wide! Perhaps a narrower challenge with slightly more gnarly, awkward bugs to find would have helped; I felt I didn't have to work hard (at all) to find bugs, never mind the most important ones.
  • Engaging with the Product Owners was a challenge. While I can see that having one giant pool of questions was advantageous to the wide dissemination of information, I would have liked to have seen teams assigned to one (or a pair of) Product Owners. This would have enabled building up more of a rapport, especially as this was one of the areas teams would be judged on.
  • Practically speaking, the start was a little chaotic, moving from streaming URL to streaming URL, but after 10 minutes or so we got there. This reflects so many experiences in the software development world (projects) where we need to find our rhythm.

I think I (we) could have done better. However, I always think that about everything I do; it's part of what keeps me pushing forward with my career. To participate was the key here though, plus I always appreciate a little testing practice, as now I'm a little more 'senior' I don't always get the chance!

Friday, 30 May 2014

The Fallacy of the Single Point


Ever heard of the 'Fallacy of the Single Cause?'

It refers to the rarity of single causes resulting in particular effects; it turns out the world is more complex than that. Many different inputs are required to create the blended and various outputs we see in the world around us. Some may contribute more than others and at different times, but as a rule of thumb for life (and testing), pinning your hopes on one cause is likely to leave you disappointed.

We communicate in stories, but what's the point?

This fallacy has been refined to apply to the penchant for storytelling that is intrinsic to how we communicate. The question is this: how often do you listen to a story and take away a singular outcome or learning? Thing is, the end of a narrative is only part of the journey; a great many stories express many subtleties as they progress, especially that rich vein of intrigue and oblique learning, reality.

In my eyes, this ability to tell a story has always been critical to testing, whether in the act of testing or reflecting afterwards. 'The Fallacy of the Single Point' has significance here too. As a young tester, I thought I had found a simple formula. Surely, if you cover each requirement with one test (with a variable degree of length/scope), then you will have fulfilled the testing mission for that product? My approach tried to short circuit subtlety rather than acknowledge and complement it. While a multi-themed narrative unfolded, I was focused on a single point on the horizon.

So, what does this mean in a testing context?

A test which proves a single point has its intoxications. It allows your mind to partition, to consider a test complete, which, as far as state is concerned, is unhelpful. The inherent complexity of the systems we test creates an intense flux in state, making any single test as fallible an oracle as any other. Imposed narrowness gives rise to blindness, missing the peripheral aspects of a test, lurking just out of plain sight but affecting the outcome nonetheless. The narrowness of that approach also hampers the effective discovery and description of bugs and issues, which require clarity, as the wider picture is relegated to the background.

The opposite of this argument should also be considered. Often I will see tests which prove this, that, the other and a little extra besides. This is often indicative of a faux efficiency (always the poorer cousin of effectiveness), which comes at the cost of the cerebral focus required for a test. Trying to maintain an eye on each aspect of a multifaceted test is usually more than us mere humans can effectively handle, resulting in a crucial detail being missed or a link not being made.

How do we know if this is happening?

Let us use Session Based Testing as our context, with a greenfield application, where we have very little history or domain background.

When determining charters for sessions, especially early in the testing effort, we may find our focus being overly narrow or wide. There are a number of signals we can look out for to give us information about the width of our focus.

If the charters are too narrow:

"We're done already?" - Imagine a 120 minute session, part of a number of charters to explore a particular piece of functionality, focused on a business critical requirement. Suddenly, 30 minutes in, you feel like it may not be valuable to continue. Make note of this, it may be a natural end to the session but it could also be an indicator of narrow focus.

"Obviously Obvious" - You have a charter on a specific requirement and the session passes without incident, perhaps a few areas for clarity. Someone looks over your shoulder and says "well, that over to the left is obviously broken!" You've missed it. Again, make a note. Perfectly possible that another pair of eyes spotted what you didn't but it may be a signal that you have been too narrow in your focus.

If the charters are too wide:

"Too Many Logical Operators" - Your charter might look like this:

The purpose of this functionality is to import AND export map pins. The business critical input format is CSV BUT one client uses XML, the display format can be either tabular OR spatially rendered. Export can be CSV OR XML.

This charter has at least four pivot points in it where your testing will need to branch. After creating a charter, look for changes in direction, see how comfortable you are with your pivots. This signal is common beyond charters, I see it often in user stories and the like. Questioning the presence and meaning of logical operators is a behaviour I see in many effective testers.
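To make those pivots concrete, here is a minimal sketch (the dimensions and format names are lifted from the hypothetical charter above, not from any real system) which enumerates the branches hiding behind those innocent-looking logical operators:

```java
import java.util.List;

public class CharterPivots {
    public static void main(String[] args) {
        // Each logical operator in the charter becomes a dimension the testing must branch on.
        List<String> operations   = List.of("import", "export");   // "import AND export"
        List<String> inputFormats = List.of("CSV", "XML");         // "CSV BUT one client uses XML"
        List<String> displays     = List.of("tabular", "spatial"); // "tabular OR spatially rendered"

        int paths = 0;
        for (String op : operations) {
            for (String input : inputFormats) {
                for (String display : displays) {
                    System.out.println(op + " / " + input + " / " + display);
                    paths++;
                }
            }
        }
        // Three binary pivots already multiply into 8 distinct paths for one charter.
        System.out.println("Distinct paths: " + paths);
    }
}
```

Add the charter's fourth pivot (export as CSV OR XML) and the export branches double again, which is rather a lot to fit honestly into one session.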

"Can I hold it in my head?" - Our brain only has so much capacity. We all have our individual page size. Consider the charter above. Would you be able to hold all that in your head, without decomposing it, while testing? Would you be able to effectively test it in one session? The answer is (probably) that you cannot.

Is there something simple that can be done?

You can vary the length of your leash. A time limit of your choosing to leave the main mission and explore around the functionality, returning once the limit has expired.

Sessions too narrow? Give yourself a longer leash allowing for exposure to the edges of the charter, then snapping back to the mission at hand.

Sessions too wide? Shorten the leash, keeping you within touching distance of parts of the charter you can reach realistically within the session you have defined.    

This variable leash approach enables progress while also refining the focus of your charters on an iterative basis. As we explore and learn, more effective ways to decompose the system under test will present themselves. The testing story emerges as you move throughout the system under test, the challenge is to find the right balance of focus, to ensure that we are not hoodwinked by 'The Fallacy of the Single Point.'
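As a rough sketch of how the leash might work in practice (the class and its API are entirely my own invention, not a feature of any session-based tooling), a detour budget inside a timeboxed session could look like this:

```java
import java.time.Duration;
import java.time.Instant;

// A hypothetical "leash" for session-based testing: a budget for how long
// you may roam off-charter before snapping back to the mission.
public class Leash {
    private final Duration budget;
    private Instant detourStart;

    public Leash(Duration budget) {
        this.budget = budget;
    }

    // Call when you leave the charter to chase something interesting.
    public void startDetour() {
        detourStart = Instant.now();
    }

    // True while slack remains; false means return to the charter.
    public boolean hasSlack() {
        return detourStart != null
                && Duration.between(detourStart, Instant.now()).compareTo(budget) < 0;
    }

    public static void main(String[] args) throws InterruptedException {
        Leash leash = new Leash(Duration.ofMillis(50)); // a tiny budget, for demonstration
        leash.startDetour();
        System.out.println("Slack remaining? " + leash.hasSlack());
        Thread.sleep(80); // roam past the budget
        System.out.println("Slack remaining? " + leash.hasSlack());
    }
}
```

A narrow charter gets a generous budget to expose you to its edges; a wide one gets a short budget to keep you within touching distance of the mission.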

Monday, 26 May 2014

Reviewed - The Testers Pocketbook by Paul Gerrard

I had heard a great deal about this little book. Some who had read it appreciated its premise; some were in fairly fundamental disagreement. If a text generates such polar opposites of opinion, my interest is immediately piqued! So let's begin with that premise:
"A Test Axiom is something we believe to be self evident and we cannot imagine an exception to it"

I feel this is indeed a risky premise for an approach to testing, as it could easily be misinterpreted as a set of iron laws to be followed, which will magically output an appropriate approach to testing. With this in mind I set about the enjoyable challenge of dissecting these axioms. Let's take, for example:
"Testing requires a known, controlled environment"

There are absolute benefits to this statement, but also damaging flipsides for your test approach. A known, controlled environment is limited in variance, and therefore only able to expose bugs of a certain nature. In addition, tests run in such an environment can give false signals, as the variance and scale of the real world change outcomes.

On the reverse of this, I found a number of 'axioms' more challenging:
"Testing needs stakeholders"

I can imagine a great deal of variance here in terms of who the stakeholder is, their agenda and their beliefs, but testing without an audience? Can I imagine a context where this is not axiomatic? Stakeholders may see testing as a 'hoop to jump through' rather than a skilful way of providing them with the information they need, and feel they don't need testing, but testing needs stakeholders to provide context for scope and priority decisions.

The 'Axioms' form part of the 'First Equation of Testing':
"Axioms + Context + Values + Thinking = Approach" 

I found this to be another challenging assertion, as the application of axioms in an equation could be interpreted as a formula for success, whereas the real challenge of testing lies in the spaces between the constituent parts of the formula and how they interact. I see danger in creating formulas and checklists for testing, as it perpetuates the linear, tickbox impression of testing as a craft. In fairness to the author, the overall tone of the piece encourages the application of the axioms and formula as part of a wider toolkit.

Although I found myself disagreeing with (what I saw as) the overall premise of the text, the author strongly advocates a focus on stakeholders and, by extension, providing the right information at the right time to those who make decisions. These sections are well worth reading and paying attention to; I have certainly applied some of those ideas to my recent work, and they provided an excellent framework for thought and approach. The author builds from a fairly formal approach to testing to give due attention to the spectrum of formality and the value of a varied approach. Initially I felt the text suffered from a lack of acknowledgement of the complexity of contemporary systems, but this acknowledgement grew as the text progressed, which helped to provide a more rounded view of testing.

I found the author's real-world experience shone through towards the end of the text; the reality of delivery is evident, although I think the author leans too far towards testing being concerned with the cost of failure rather than the benefit of the system meeting its mission. Both are important, but I prefer the positive side of this coin and I believe testing enjoys a better profile when approached from this standpoint.

A thoroughly recommended read for testers of all levels of experience and seniority. I will use some of the 'axioms' as part of my testing toolkit, although with an eye on their fallibility. I'll end with my favourite 'axiom', which is certainly one I believe in:
"Testing never finishes. It stops."

Thursday, 8 May 2014

Let's celebrate! Anyone still out there...?

Pyrrhic victory. I was reminded of this term a few days ago. 

It is when winning decimates *almost* everything, so that the win is basically not worth the cost exacted to achieve it. I believe I have seen this effect on teams during and after very long development projects, the dreaded 'death march.' The project's aims might be valuable and completely worthwhile, but at what cost?

Sometimes, the stresses and strains of such endeavours decimate the team tasked with delivery. Relationships are strained or break, enthusiasm is replaced with cynicism, and previously open minds close to protect against harm and monotony. Previously conquered silos re-embed themselves.

Consider those precious 'T-shaped' people, who are consistently pushed to their limits and burn out, or retreat back into their shells. As a complement to the determined specialist, these people (and encouraging more of them to flourish) are the key to unlocking effective delivery. Their flexibility and enthusiasm are their best qualities and their worst enemies in this context.

So before you embark on the 'next big thing' (with emphasis on the big), take the time to consider its impacts on the humans who deliver it and split it into manageable but valuable pieces. Or you might be left with a delivered project, but no one willing (or even around) to celebrate it. 

Tuesday, 6 May 2014

Reviewed - The Effective Executive by Peter Drucker

I'm always slightly sceptical of the phrase 'timeless' when it comes to management literature, given the infinite variance of people and the situations we find ourselves in. The Effective Executive was described as exactly that by the excellent Manager Tools podcast, and I soon found myself in front of a well-known online store ordering a copy.

Overall, I was immediately struck by the sparseness and matter-of-fact nature of Drucker's language, although that sparseness expresses the practical nature of the guidance given, starting with managing one's time.

The reality of time is that it is the one thing (on an individual level at least) that you cannot gain more of. Drucker's message is quite bleak at first, but I will not contest its truth: most executives I know will admit to rarely being able to focus on the critical issues, as they are drawn in varied directions to tend to the issues of today, when they may be better served focusing on tomorrow. Indeed, that is their primary function. Tracking time at a micro level, I find, does not come naturally to most. I am vaguely aware of where my time goes at a macro level, although I can imagine areas of ineffectiveness lurk which could be righted. Drucker's advice here is well founded, although I believe ideas of slack and long-leash learning would be a welcome addition to his time model, even for executives.

It is in the focus on contribution that Drucker's text begins to come alive. Whereas I see most executives focusing on the mechanical process of delivery and management with the goal of efficiency in mind, Drucker posits that this is sub-optimal. Instead, key concepts and principles should be the domain of the executive, aided by analysis of domain and problem with results in mind. In particular, there is the question of whether or not an event or problem is a paradigm shift for the organisation, focusing on root causes rather than symptoms.

Another idea which spoke strongly to me is that an executive should seek to utilise a person's strengths, rather than focus on their weaknesses. If a person has been hired in a management capacity but has a natural aptitude for sales, use them in that capacity rather than bemoaning their operational shortfalls. As a person with a predominantly practical aspect to my personality this appeals to me, as opposed to the long, drawn-out process of maintaining the status quo.

Reality (or at least the reality painted by Drucker, which I subscribe to) is prevalent within the text, none more so than in its description of enduring leadership, as opposed to flash-of-genius leadership. Effective leadership is grounded in determination, as few of us possess the brilliance required to effect significant change instantly. Some may see this as another bleak message in a world where we are told anyone can do anything. It is not delivered as such, only as the austere thought that if genius were needed everywhere, progress would be slow indeed! Encourage effectiveness so the ordinary can produce extraordinary results: that was the message I took away.

Effective decision making is covered in some depth, with a great many useful techniques to take note of and use. The area that struck me most was disagreement. In most organisations, everyone needs to be 'on board' or 'on the same page.' Disagreement is needed to be effective; otherwise we are in danger of making decisions of shallow agreement which do not stand up to serious scrutiny. I have noted that many executive relationships I observe appear brittle and don't welcome constructive challenges (notwithstanding the non-constructive challenges, of course). Drucker's argument here resonates in the software development world, where challenge is seen as blockage and being 'the guy that asks awkward questions' is a lonely, lonely place.

All of Drucker's arguments are based on the principle that self-development is the path to effectiveness. Some lessons are learnt easily, others the hard way, but I agree that effectiveness comes more from within than without. I feel that (like Weinberg's The Secrets of Consulting) I will learn more from this book with experience, as my own self-development progresses. Let's see how I feel about it in a few years...

Saturday, 29 March 2014

The bigger the rock, the smaller the pieces need to be

You know what I really, really value in a fellow professional in the information technology delivery world? That special, magical ability to decompose a large (and potentially complex) problem into small, simple subtasks.

A child can do this, right? This is 'Being a Human Being 101.' So why is it a behaviour that eludes a large percentage of those in the information technology industry? It is a trait of people who I like to call 'people who get things done.' Not through heroism or great feats against monolithic bureaucracies, but through a simple application of critical thought.

Is there a problem here? 

People like the idea of building big stuff, stuff to "get hold of"; it's very grand to say we're building an "enterprise level" application. In that vein, I hear "well, this is a step change for the product" or "there is no value in splitting the project into smaller deliverables" on a regular basis. The justifications of the desperate, determined to protect bloated road maps which perpetuate their own existence.

At its root, the real problem with big stuff is that it is counter to how our brains actually work. We are overwhelmed by it; we cannot hold it within our puny cerebrums. Small stuff is natural: we can encircle it with thought and apply ourselves to it. We can be happy that it's done, or at least that it's time to stop.

If you are going to be marching for a year, you need plenty of opportunities to stop off on the way. Save it all up for one payload and you are likely to trudge forwards with your eyes to the floor for a large part of the journey. Your destination may well be on the other side of the horizon before you realise. 

So why do I see this all around me? 

Aside from my own bias, it's actually a thing which takes thought and effort. It's easier *right now* just to plough on and not consider how an entity can be decomposed. At least that shows progress, right?

Wrong. This stems from the perception that skilful decomposition initially 'slows down' a delivery while a slice of functionality is built. It speeds up your ability to generate feedback, though. Which means you are more likely to deliver the right thing. Which, from experience, means you build what's needed, rather than spending time on what isn't.

Can someone be explicitly taught this ability?

I believe so, although it's rarely that simple. At its heart are the ability to recognise flows, to change the angle of approach when required, and the application of systems thinking. Decomposing complex systems or problems into simple rules of thumb is critical to an iterative delivery.

I always like the thought of splitting an entity by the questions you wish to answer about it. Or consider the simplest thing you can do to overcome a constraint, expose information about risk or deliver customer value. I always imagine the entity as a sphere whose surface I can approach from anywhere. Eventually, I'll see the angle of approach. Hey, it's the way my mind works. I have to apply the mental brakes and think, rather than plough on. It's taken some practice and discipline on my part.
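Splitting by questions can be sketched in a few lines. The entity and the questions below are invented examples, purely for illustration:

```python
# Hypothetical sketch: decompose a big delivery by the questions you want
# answered about it, rather than by its technical components. The entity
# and questions are invented examples.
big_rock = "enterprise map-pin import/export"

questions = [
    "Can one pin survive a CSV import round trip?",
    "What happens when a malformed file arrives?",
    "Is the display usable at a realistic scale of pins?",
]

# Each question becomes a small, finishable slice of the larger entity --
# something you can encircle with thought and be happy is done.
slices = [{"entity": big_rock, "question": q, "done": False} for q in questions]

for s in slices:
    print(f"Slice of '{s['entity']}': {s['question']}")
```

Each slice is small enough to hold in your head, and answering one question often reveals the next angle of approach.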

This ability enables that most precious of habits, that of delivery of value. For now, the delivery of unvalue is pervasive to my eyes, but I'll strive to ensure that this special but underrated ability continues to have a place in the world. 

Friday, 21 March 2014

If you don't believe it, why should anyone else?

The question of what skills do testers need intrigues me. 

This always occurs to me when engaged in the search for 'good' people to hire. We (in the technical sphere) tend to hire predominantly on 'skills.' Very rarely do we look for behaviours, and even more rarely do we consider beliefs.

After some consideration (and no little practical finger-burning), I have concluded that starting with skills is often a false position; starting with beliefs can be much more powerful.

The following question always strikes me when I consider this context. How many testers do you know who can give you an explanation of what they believe the essence of testing is? I know relatively few. In fact, I often receive the look of a startled rabbit when I lead with this question. You do it every day, but you can't tell me what you believe it is?

Not being compelling about what you believe testing to be puts you and your chosen vocation at a significant disadvantage when interacting with those who are sceptical about its value. Given that most reasoning is done for argumentative purposes (to convince, not necessarily to make better decisions), the disadvantage is further underpinned.

So, when I ask myself the golden question, I begin with this:

'Testing is the skilful exploration of an entity for information about its quality, where quality is value to some person.'

To decompose:
  • I believe testing is a skill not 'just an activity';
  • I believe testing is exploration, more than it is deterministic;
  • I provide information about quality to aid decisions about risk;
  • I believe quality is most meaningfully expressed from the point of view of 'some person', who is important in context. 

Does it fly? I think so. Well, whether there is a 'right' answer is a more pertinent question. Perhaps the urge to be 'right' (or the desire not to be seen to be 'wrong') prevents people from venturing their thoughts.

Is it different to the next tester's? I hope so. Will it change as I learn and grow? I hope for that too. Does it lift the ideas of others in a way that appeals to me? (Nods to Jerry Weinberg.) Damn right.

However, when I discuss testing I have an advantage: I have questioned myself and my beliefs about my vocation, and I can talk in a compelling manner. To be less than sincere about what you believe testing to be is to enter into a struggle which you may well lose more often than not.

Monday, 17 February 2014


I like to think I have a nose for a problem. Not necessarily a bug, but just when something doesn't seem right. The extent to which I follow up on these gut instincts varies depending on how strongly the nagging feeling remains. 

These 'hunches' last for days, weeks or even months, and often I struggle to find the vocabulary to express what I am thinking or feeling.

For example, on a past project, the system under test needed to store certain characteristics about a customer (derived from an external service) on their first interaction with the system.

'First interactions' could take a number of different paths. Something about this nagged at me after the system went into live service. I looked superficially several times at the evidence (including with the product stakeholders) and all SEEMED to be well, yet something still chipped away at my consciousness. By now I felt a little crazy, but time and change then proceeded to distract me.

Then the big day came. The information was called from its place of storage to generate a product for those customers to consume. Vast swathes of customers' stored data were missing, stemming from a couple of customer flows I didn't anticipate in my testing.


I then talked to the stakeholders involved and reiterated how I knew that something was wrong.

This was a while ago, but I haven't reflected until now. 

I probably have more questions than answers. 

Why didn't I act on my suspicions? Especially if deep down I knew that there was a problem.

Why did the testing originally done against the system not cover these flows? What questions did I not ask of myself, the stakeholders and the system under test?

Why did the stakeholders not buy (further) into my suspicions? Perhaps my procrastination and vagueness didn't inspire them to investigate further.

For now, I think my takeaway is a resolution to act on these hunches with a little more determination, and to strive to grow the vocabulary and skills required to express myself with greater clarity.

But one conclusion remains for me: the gut is one of my favourite parts of the tester's mindset.

Friday, 24 January 2014

Shallow Statement Syndrome

'Surely it's just a case of doing X and creating a Y, then we'll obviously get to Z. I've done this lots of times before.'
This is an example of Shallow Statement Syndrome, one I hear often from those involved in software development. It comes loaded with preconception and assumption, and is generally delivered with great conviction by the speaker. As a tester, it sets my common sense tingling.

Let's decompose the highlights:

'Surely' - I have already decided, I'm already sure, my mind is closed to options.

'Just' - I don't believe this to be complex, I am implying simplicity and ease.

'Obviously' - The outcome is obvious to me, I don't need to encourage others to envisage the outcome.

'Before' - The issue at hand stirs nostalgia, I have done this in my past, therefore it can be done again in a similar way, possibly by others. 
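The decomposition above can be turned into a deliberately naive warning-word scan. The word list, the interpretations attached to each word, and the function name are my own invention:

```python
# Hypothetical sketch: a deliberately naive scan for the warning words that
# mark a shallow statement. The word list and function name are my own.
SHALLOW_WORDS = {
    "surely":    "the speaker has already decided",
    "just":      "complexity is being waved away",
    "obviously": "the outcome is assumed rather than shared",
    "before":    "past experience is standing in for present analysis",
}

def flag_shallow_words(statement: str) -> list:
    """Return the warning words present in a statement, in order heard."""
    # Strip simple punctuation so 'before.' still matches 'before'.
    words = statement.lower().replace(",", " ").replace(".", " ").split()
    return [w for w in words if w in SHALLOW_WORDS]

claim = ("Surely it's just a case of doing X and creating a Y, then we'll "
         "obviously get to Z. I've done this lots of times before.")

print(flag_shallow_words(claim))  # ['surely', 'just', 'obviously', 'before']
```

No script will replace the critical thought itself, of course; the point is only that these words are lexical, spottable tells, which is what makes them useful prompts in conversation.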

The problem with shallow statements is the chasm that lies beneath them when you scratch the surface. Beneath each shallow statement is analysis and detail which needs to be uncovered, layer by layer, as you iterate. Those who once had the responsibility of using technology to create complex systems are particularly prone to this syndrome, their subconscious often masking the challenges they faced.

Many projects fall into this particular abyss. Recognising and critically challenging shallow statements before setting off on (and during) the journey across the sometimes rickety rope bridge of software development can save you a short trip down a deep, crocodile-infested ravine.