A Personal Model of Testing and Checking


As part of the whole CDT vs Automators vs Team Valor vs Team Mystic battle, one of the main sources of angst appears (to me) to be the testing and checking debate.

The mere mention seems to trigger a Barking Man type reaction in some quarters. Now, I enjoy someone barking like a dog as much as the next person, but when discussions around testing resemble the slightly grisly scenes in Cujo, we've gone too far. To me, the fallacy at play appears to be "you strongly advocate X, therefore you must detest Y." Stands to reason, right? I've got two cats I love very much; therefore I cannot stand dogs.

Anyway, I like the testing and checking model. Note the use of the word model. I really mean that: it helps me to think. It helps me to reason about how I am approaching a testing problem and provides a frame, in the form of a distinction. More specifically, a distinction which helps me keep my balance.

I've added it to my mental arsenal, as, in my eyes, all good testers should do with a great many models. Not an absolute, but a guide.

It takes the form of a question, asked while analysing a testing problem, during testing, or when I'm washing up (sometimes literally) afterwards:

"Now, Ash, how much exploration will you/are you/have you do/doing/have done about the extent to which this here entity solves the problem at hand and how much checking against, say, data being in the place that might be right according to some oracle(s)"

Let's work through an example. I'm doing a piece of analysis on a user story for an API written using Node.js, after having a good natter with all the humans involved:

I might have a mission of, say:

"To test that product data for home and garden products in the admin data store can be parsed when retrieved and could be consumed by React to be rendered on the mobile website..."

I might generate a couple of charters like:

"Explore the structure of a response from the product api
Using the React properties model oracle
To discover if the data is of the correct type to be consumed by React" 
"Explore the retrieval of specific home and garden products returned from the product api
Using a comparison of the contents of the admin data store as an oracle
To discover if the response data corresponds to the content of the origin"
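
The first charter could be exercised with a small automated check. As a minimal sketch, assuming a hypothetical endpoint URL and product field names (nothing above specifies either), it might look something like this in Node.js:

    // A minimal sketch of an automated structure check.
    // The endpoint URL and product field names are assumptions for illustration.
    const assert = require('assert');

    async function checkProductStructure() {
      // Node 18+ ships a global fetch; older versions would need a library.
      const response = await fetch('https://example.com/api/products/home-and-garden');
      assert.strictEqual(response.status, 200, 'expected a 200 OK');

      const products = await response.json();
      assert(Array.isArray(products), 'expected an array of products');

      for (const product of products) {
        // The types our (assumed) React properties model expects.
        assert.strictEqual(typeof product.id, 'string', 'id should be a string');
        assert.strictEqual(typeof product.name, 'string', 'name should be a string');
        assert.strictEqual(typeof product.price, 'number', 'price should be a number');
      }
    }

    checkProductStructure().catch((err) => {
      console.error(err.message);
      process.exitCode = 1;
    });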

While valuable, these sit towards the 'checking' end of my spectrum. Therefore I might add:

"Explore the response of home and garden products returned from the product api
Using a variable number of concurrent requestsTo discover the point at which the response time may degrade"

This to me is a bit more 'testy', as I surmise JavaScript is single-threaded, so concurrency may be a problem. If the solution doesn't work, the problem isn't solved. If I get the expected (by some oracle) data back, but the response time increases by some magnitude when concurrency is introduced, then maybe the problem isn't solved after all. That is testing, for a specific technology risk that has a business impact. And so on: I iterate over my charters, with testing and checking in mind.
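
To make that concrete, here is a rough sketch of the kind of probe I mean, again with an assumed endpoint and arbitrary concurrency levels, firing batches of simultaneous requests and watching the timings:

    // A rough concurrency probe, not a proper load test.
    // The endpoint URL and concurrency levels are assumptions; tune to taste.
    async function timeConcurrentRequests(url, concurrency) {
      const start = Date.now();
      // Fire `concurrency` requests at once and wait for them all to finish.
      await Promise.all(Array.from({ length: concurrency }, () => fetch(url)));
      return Date.now() - start;
    }

    async function probe() {
      const url = 'https://example.com/api/products/home-and-garden';
      for (const concurrency of [1, 10, 50, 100]) {
        const elapsed = await timeConcurrentRequests(url, concurrency);
        console.log(`${concurrency} concurrent requests took ${elapsed}ms`);
      }
    }

    probe().catch(console.error);

If the timings climb sharply as the batches grow, that is an invitation to test further.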

Do I slavishly adhere to the exact definitions of testing and checking? Nope. Is it perfectly congruent? Nah. Is it useful to me? Yep.

I could go on but I won't. It's a model, one of many. Be better: select and use models based on their strengths and weaknesses, using your critical mind and experience.


Addendum

For those who may care, my sticky oar on the debate is as follows:


  • Checking is a tactic of testing, a really important one. Automated or otherwise. Good testing contains checking. Automated testing should be embraced, encouraged and understood, in the spirit of seeing the benefit and harm in all things.
  • I often craft tests which use high volumes of automated checks to explore behaviours regarding stress, repetition and state, and I have found some lovely tooling to facilitate this (a rough sketch of the idea follows this list). I often throw these checks away immediately, as there is no perceived (to my stakeholders) value left; similarly with tests. I try to avoid sunk costs where I can.
  • I also really like "a failed check is an invitation to test." It suggests a symbiosis, an extension of our senses, or perhaps even a raised eyebrow. The use of the word 'invitation' is delightful: checking facilitating testing.
  • That said, calling something a check or a test doesn't bother me overly. This may be lazy language, but on occasion I have seen the word 'check' used to suggest 'unskilled', and I consider that lazy language a price worth paying when the alternative is potential alienation. As an applied model of communication, testing and checking is a little dangerous in thoughtless hands.
  • With regard to automation, where appropriate I push checks down the stack as far as possible, but without ravenousness. As checking is a tactic of testing, I select it when appropriate. I apply a mostly return-on-investment model to this: how much to run, for how long, and a check's lifespan versus the entropy of the information it yields.
  • Good testing informs why certain tests (checks) are important, what you test (check) and where, in addition to how you do it and the longevity of those tests (checks). It kind of reads OK either way to me, which is the point I took away from Exhibit C, and one that many people have made eloquently to me a good few times.
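
For the high-volume repetition point above, one shape such a throwaway check might take, with the endpoint and iteration count invented purely for illustration, is:

    // A disposable repetition check: hit the same (assumed) endpoint many
    // times in sequence and flag any drift in status or payload, which can
    // hint at state problems. Meant to be thrown away once it has spoken.
    async function repeatCheck(url, iterations) {
      let baseline = null;
      for (let i = 0; i < iterations; i++) {
        const response = await fetch(url);
        const body = await response.text();
        if (response.status !== 200) {
          console.log(`iteration ${i}: unexpected status ${response.status}`);
        }
        if (baseline === null) {
          baseline = body; // the first response becomes the reference point
        } else if (body !== baseline) {
          console.log(`iteration ${i}: response drifted from the baseline`);
        }
      }
    }

    repeatCheck('https://example.com/api/products/home-and-garden', 500)
      .catch(console.error);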


Some references that I've consumed and thought about:

Exhibit A:

http://www.developsense.com/blog/category/testing-vs-checking

And perhaps Exhibit B:

http://www.satisfice.com/blog/archives/category/testing-vs-checking

And maybe Exhibit C:

http://www.satisfice.com/articles/cdt-automation.pdf

And gimme a D:

http://chrismcmahonsblog.blogspot.co.uk/2016/06/reviewing-context-driven-approach-to.html

And E's are good:

http://www.ministryoftesting.com/2016/04/icky-good-words-software-testing
