
The Team Test for Testability


You know what I see quite a lot of? Really long-winded test maturity models.

You know what I love to see? Really fast, meaningful ways to build a picture of your team's current state and provoke a conversation about improvement. The excellent test improvement card game by Huib Schoots and Joep Schuurkes is a great example. I also really like 'The Joel Test' by Joel Spolsky, a set of questions you can answer yes or no to, to gain insight into your effectiveness as a software development team.

I thought something like this for testability might be an interesting experiment, so here goes:

  1. If you ask the team to change their codebase, do they react positively?
  2. Does each member of the team have access to the system's source control?
  3. Does the team know which parts of the codebase are subject to the most change?
  4. Does the team collaborate regularly with teams that maintain their dependencies?
  5. Does the team have regular contact with the users of the system?
  6. Can you set your system into a given state to repeat a test?
  7. Is each member of the team able to create a disposable test environment?
  8. Is each member of the team able to run automated unit tests?
  9. Can the team test both the synchronous and asynchronous parts of their system?
  10. Does each member of the team have a method of consuming the application logs from Production?
  11. Does the team know what the 95th percentile of response times is for their system?
  12. Does the team curate a living knowledge base about the system it maintains?

Where each yes is worth one point:

  • 12 - You are the masters of observation, control and understanding. Change is embraced openly. Way to be.
  • 11 - Danny Dainton's Testability Focused Team at NewVoiceMedia - Wednesday 2nd August 2017
  • 8-10 - You are doing pretty well. You can change stuff with a reasonable degree of confidence. Could be better.
  • Less than 8 - Uh oh. You are missing some big stuff and harbouring serious unseen, unknowable risk. Must do better.

What would you add or take away from this list? If you try it, or a variant of it, let me know; I'd be fascinated to hear!

Notes On Each Question

  1. If everyone actively recoils from change to all or a specific part of the system, and there's always a reason not to tackle it, it's a no.
  2. Everybody should be able to see the code. More importantly, a lot of collaboration occurs here. Access for all is part of closeness as a team.
  3. A codebase generally has hotspots of change. Knowing this can inform what to test, how to test and where observation and control points are.
  4. Teams have both internal and external dependencies. The testability of your dependencies affects your own testability. Close collaboration, such as attending their sprint planning, helps.
  5. To test a system effectively, one must have deep empathy with those who use it. For example, if you have internal users, invite them to demos; for external customers, make sure your team is exposed to customer contacts, or dare I say it, even responds to them.
  6. Some tests need data and configuration to be set. Can you reliably set the state of these assets to repeat a test? There's a sketch of scripted state setup after this list.
  7. Programmers usually have environments (I hope) in which to make their changes against something representative of the real world. Your testers and operations-focused people should have this too, to encourage early testing, reduce dependence on centralised environments and cut the setup time of tests. See the disposable environment sketch after this list.
  8. If you don't have unit tests, you probably haven't even tried to guess what a unit of your code is. Immediate no. If you do, each member should be able to run these tests whenever they need to.
  9. Systems often have scheduled tasks which lift and shift or replicate data, alongside other operations which occur on demand. Can you test both? The sketch after this list shows one way to make scheduled work directly testable.
  10. It's important that all team members can see below the application, so we aren't fooled by what we see. If you don't have an organised logging library which describes system events, you should get one, plus eventually some form of centralisation and aggregation. Or just use syslog. A structured logging sketch follows this list.
  11. If you don't know this, you either haven't been able to performance test, haven't gathered data for analysis, or haven't realised that you can analyse your performance data without its outliers. Either way, you probably know little about what your code does under load or where to start with tuning. The last sketch after this list shows the calculation.
  12. Documentation. I know you are agile, and you thought it was part of your waterfall past. To be fair, this is anything that describes your system: a wiki, auto-generated docs, anything. If someone wants to know what something does, where do you go? Source code alone is a no for this one. Your team has many members; not that many Product Owners can read code.
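
A minimal sketch of the kind of scripted state setup note 6 asks about, assuming Python with pytest and sqlite3 (the orders table and its rows are hypothetical). The point is that the state a test depends on is rebuilt from scratch on every run, rather than arranged by hand:

    import sqlite3
    import pytest

    @pytest.fixture
    def orders_db():
        # Disposable, in-memory state rebuilt for every test run.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
        conn.executemany(
            "INSERT INTO orders VALUES (?, ?)",
            [(1, "OPEN"), (2, "SHIPPED")],  # the exact state this test needs
        )
        conn.commit()
        yield conn
        conn.close()  # throw the state away afterwards

    def test_open_orders_are_counted(orders_db):
        count = orders_db.execute(
            "SELECT COUNT(*) FROM orders WHERE status = 'OPEN'"
        ).fetchone()[0]
        assert count == 1

Running it is one command, python -m pytest, which also speaks to question 8: any team member with the repository can run the same tests in the same way.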
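
For note 7, a sketch of a disposable test environment driven from code, assuming Docker is installed and using a hypothetical image name; the same shape works with any container or infrastructure-as-code tooling:

    import subprocess

    def start_environment(name="team-test-env"):
        # Start a throwaway container; --rm removes it again once stopped.
        subprocess.run(
            ["docker", "run", "--rm", "-d", "--name", name,
             "-p", "8080:8080", "our-app:latest"],  # hypothetical image tag
            check=True,
        )

    def stop_environment(name="team-test-env"):
        subprocess.run(["docker", "stop", name], check=True)

Anyone on the team can create, use and destroy their own environment without queueing for a shared one.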
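
For note 9, one common trick: keep the scheduled (asynchronous) work in a plain function, separate from the scheduler that triggers it, so a test can call it directly instead of waiting for the clock. The names here are hypothetical:

    def replicate_shipped_orders(source_rows):
        # The nightly lift-and-shift, expressed as a plain function.
        return [row for row in source_rows if row["status"] == "SHIPPED"]

    def test_replication_copies_only_shipped_orders():
        rows = [{"id": 1, "status": "OPEN"}, {"id": 2, "status": "SHIPPED"}]
        assert replicate_shipped_orders(rows) == [{"id": 2, "status": "SHIPPED"}]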
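
For note 10, a sketch of what an organised logging library which describes system events can look like, using only Python's standard library; the event names and fields are hypothetical. One machine-readable line per event is what makes later centralisation and aggregation cheap:

    import json
    import logging

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("orders")

    def log_event(event, **fields):
        # One line of JSON per system event keeps logs machine-readable.
        log.info(json.dumps({"event": event, **fields}))

    log_event("order_shipped", order_id=2, warehouse="AMS-1")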
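
Finally, for note 11, computing the 95th percentile from raw response times, assuming Python 3.8+ for statistics.quantiles. The sample timings are made up; in practice you would feed in thousands of measurements from a load test or from Production:

    import statistics

    response_times_ms = [120, 95, 110, 300, 105, 98, 115, 130, 101, 108,
                         99, 112, 125, 140, 103, 97, 118, 122, 109, 250]

    # quantiles(n=20) returns 19 cut points; the last is the 95th percentile,
    # the value that 95% of responses come in under.
    p95 = statistics.quantiles(response_times_ms, n=20)[-1]
    print(f"95th percentile: {p95:.0f} ms")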

