Wednesday, 30 July 2014

The 'Just Testing Left' Fallacy

I am mindful that many of my blogs are descending into mini tirades against the various fallacies and general abuse of the lexicon of software development.

Humour me, for one last time (that's not true, by the way).

In meetings, at the Scrum of Scrums, in conversation, I keep hearing it.

    "There's just testing left to do"
And then I read this:

http://bigstory.ap.org/article/social-security-spent-300m-it-boondoggle

An all too familiar software development tale of woe.




I thought: 'I bet everyone on that project is saying it too.' Next to water coolers and coffee machines, at the vending machine, in meetings and corridors.

At first, it gnawed at me a little.

Then a lot.

Then more than that.

I have three big problems with it:

  1. It's just not true. There is not 'just testing left.' What about understanding, misunderstanding, clarifying, fixing, discussing, showing, telling, checking, configuring, analysing, deploying, redeploying, building, rebuilding and all the small cycles that exist within them? Does that sound like there is 'just testing left?' When I challenge back and say, "You mean there's 'just getting it done left?'" I get an array of raised eyebrows. 
  2. It's an interesting insight into how an organisation feels about testing. The implications of such statements might be extensions of: end of the process, tick in the box, holding us up, not sure what the fuss is, my bit is done, it's over the fence. Most affecting for me is the inference: "We are not sure what value testing is adding."
  3. On a personal level, it's not 'just testing.' It's what I do. And I'm good at it. It involves skill, thought, empathy and technical aptitude. I'm serious about it. As serious as you are about being a Project Manager, Programmer, Sys Admin and the rest.

I wouldn't want to ignore the flipside of this argument (my latest neurosis).

What about testers who say:

    "I'm just waiting for the development to finish before I can get started"
What are the implications here then? Perhaps there is less understanding of how damned hard it is to get complicated things to JUST WORK. Never mind solve a problem. I used to make statements like this. Until I learnt to program. Then I found that even the seemingly simple can be fiendish. And people are merciless in their critique. Absolutely merciless. Not only the testers, but also senior managers who used to be technical and can't understand why it takes so long (mainly because they have forgotten how complicated it can get, filtering out their own troubled past) to build such a 'simple' system.
 

And if I start hearing 'there's just QA left'.....

Sunday, 13 July 2014

The name of the thing is not the thing


I often ponder the question 'should we care about what we call things in the software development world?' One could argue that as long as everyone has a common understanding, it shouldn't matter, right? Yet I rarely see a common understanding (which is good and bad in context), which suggests we care enough to name things but sometimes not enough to care about how precise those names are.

Gerald Weinberg writes in the excellent 'Secrets of Consulting' that 'the name of the thing is not the thing.' As a tester (and critical thinker) this represents a useful message to us. The name given to a thing is not the thing itself; it's a name, and we shouldn't be fooled by it. This is a useful device, as I believe the name is an important gateway to both understanding and misunderstanding, and names take root and spread.....

Testing is not Quality Assurance

There are probably a great many blogs about this, but I hear/see this every day, so it needs to be said again (and again, and again).

The rise of the phrase 'QA' when someone means 'testing' continues unabated. Those of us who have the vocabulary to express the difference are in a constant correction loop, considered pedants at best, obstructive at worst.

What is at the root of this? The careless use of interchangeable terms (where there is no clear definition of either side of the equation and/or a belief that there is no distinction), followed by wonderment at how expectations have not been met. 

So how do I respond to this misnomer?

(Counts to ten, gathers composure) 

Superficially - 'Testing cannot assure quality, but it can give you information about quality.'

If someone digs deeper?

Non superficially - 'When I tested this piece of functionality, I discovered its behaviours. Some are called 'bugs' which may or may not have been fixed. These behaviours were communicated to someone who matters. They then deemed that the information given was enough to make a decision about its quality.'

This feels like a long journey, but one worth making. I will continue to correct, cajole, inform and vehemently argue when I need to. If the expectations of your contribution are consistently misunderstood, then will your contribution as a tester be truly valued?

Test Management Tools Don't Manage Testing

On a testing message board the other day (and on other occasions) I spotted a thread containing the question: 'Which 'Test Management Tool' is best (free) for my situation?' There are many different flavours, with varying levels of cost (monetary and otherwise) accompanying their implementation.

I thought about this question. I came to the conclusion that I dislike the phrase 'Test Management Tool' intensely. It misleads on a great many levels, not least because its name does not describe the tool at all well. It offers no assistance on which tests in what form suit the situation, when testing should start or end, who should do it, with what priority, or with which persona. I'm not sure such a tool manages anything at all. 

So what name describes it accurately? For me, at best it is a 'Test Storage Tool.' A place to put tests, data and other trappings to be interacted with asynchronously. Like many other electronic tools, at worst it is an 'Important Information Hiding Place.' To gauge this, put yourself in another's shoes. If you knew little about testing and you were confronted with this term, what would you believe? Perhaps that there is a tool that manages testing? Rather than a human.

So what.....?

So, what's the impact here? I can think of a few, but one springs to mind.

If we unwittingly mislead (or perpetuate myths) by remaining quiet when faced with examples like the above, how do we shape a culture which values and celebrates testing? Saying nothing while what testing is and the value it adds are diluted, misrepresented and denigrated certainly helps to shape that culture. Into something you might not like.

Friday, 4 July 2014

Software Testing World Cup - An Experience Report



After much anticipation, three of my colleagues and I embarked on the Software Testing World Cup journey in the European Preliminary. We had prepared, strategised, booked rooms and monitors, bought supplies and worked through the (actually quite long) list of other tasks to get ready for the big day. Armed with the knowledge that I would be jetting off on holiday the following day, we entered the (metaphorical) arena to give it our all and hopefully have a little fun. Here are my thoughts about 3 interesting (exhausting) hours.

When I reflect.....

  • Over my testing career, I have learnt to really value time to reflect. Study a problem, sleep on it, speak to peers for advice, come up with an approach. That time just doesn't exist (in the amount that I needed it) during the competition, which made me uncomfortable. A little discomfort can teach you a great deal, and it certainly amplified the more instinctive part of my testing brain.
  • Following on from the above, I'm happy to say I kept my shape. When your instinctive side (coupled with deep-rooted, long-learned behaviours) becomes more prevalent, you can, well, go to pieces a little. I didn't. I listened to the initial discussions with the Product Owners, stuck to time limits, continued to communicate and maintained the Kanban board we had set up, all healthy indicators of some useful learned behaviours!  
  • We did quite a lot of preparation and research. We met up a couple of times as a group to discuss our approach and the rules of the competition, which helped massively; talking through the rules together meant we quickly built a common understanding. Our preparation went beyond the competition itself, covering bug advocacy and the principles of testing in a mobile context, to name but a few topics. However, as we know, very few strategies survive first contact, and our overall strategy was no exception! 
  • HOWEVER, I do believe we pivoted our strategy nicely on the day, broadening our focus to match the scale of the application and the number of platforms. As a team, we decided to familiarise ourselves with each area (which we had broken down into chunks) on our desktops within a browser, then move on to a specified mobile device (having been given a steer that iOS would be critical).
  • Finally, I thought it was a really great thing that we decided to be in the same room as a team; it really boosted our ability to validate each other's defects and check in at important times, such as when we were adding to the report.

Now, about the competition itself......

Good!

  • Adding a mobile aspect really created fertile ground for bugs. In fact, I could have raised bugs for the full 3 hours, but the competition was about much more than that. This made the challenge a little different, as it would have been easy just to bug away and lose all perspective. 
  • The small hints before the preliminary were helpful too, allowing us to queue up devices and reach out to our colleagues who had done mobile testing in depth.
  • We had our HP Agile Manager (good grief, the irony in that title) test logins nice and early, which was really helpful for familiarity, although a part of me wished I could have tested that system instead! We got logged in to the real project on the day without any issues, although I'm not sure it was the same for everyone. 

Could be better.....

  • A narrower focus of application would have improved the quality and challenge of the defects. To slightly contradict the above, the scope of the application under test was TOO wide! Perhaps a narrower challenge with slightly more gnarly, awkward bugs to find would have been better; I felt I didn't have to work hard (at all) to find bugs, never mind the most important ones.
  • Engaging with the Product Owners was a challenge. While I can see that having one giant pool of questions was advantageous to the wide dissemination of information, I would have liked to have seen teams assigned to one (or a pair of) Product Owners. This would have enabled building up more of a rapport, especially as this was one of the areas teams would be judged on. 
  • Practically speaking, the start was a little chaotic, moving from streaming URL to streaming URL, but after 10 minutes or so we got there. This reflects so many experiences in the software development world (projects) where we need to find our rhythm.

I think I (we) could have done better. However, I always think that about everything I do; it's part of what keeps me pushing forward with my career. Taking part was the key here though, plus I always appreciate a little testing practice, as now I'm a little more 'senior' I don't always get the chance!