I recently had the pleasure of being the centre of attention during an "Ask Me Anything" hosted by the ever amazing and supportive Ministry of Testing. I talked about testability, my favourite topic. As is my wont, I over-prepared, over-enthusiastically. I have answered questions on The Club, but I had also made some notes which happen to look something like a blog post, and it would be a shame not to share them.
Behold those notes:
Monologue at the Start
- Throughout my career I’ve tested lots of systems which were difficult to test
- Early on, I think I accepted it as ‘this is how it is’
- We do this after looking at something for a long time!
- I often wonder now, how effective can our testing be in this context?
- Without a focus on testability, it will degrade over time too.
- You know the old "quality is everyone's responsibility" line? Guess what: if it's hard to test, testers will generally be the ones testing it.
- As a discipline I think we accept a lack of testability too readily too often.
- And do more testing
- With more testers
- With more automation
- Which gets infinitely complex
- Then we all get sad
- We focus on function rather than capability.
- Testability is about enabling a balanced test approach, so the whole team can be involved, performing a breadth of techniques to provide information to make holistic decisions.
- So I hope we can have a big debate on how we can improve testability.
What does hard to test feel like?
- Interacting with a product gives you no feedback. No logs, no audit trail, only mysterious unmonitored dependencies. You don't know if anything went well. Or badly.
- Interacting with a product gives you vast amounts of feedback. Log files spam uncontrollably, lights twinkle on dashboards, integrated dependencies give strange, opaque answers. You don't know if anything went well. Or badly.
- You release your product. Scores of utterly baffling issues pop up. Seemingly unrelated but somehow intertwined. Next release makes you twitchy.
- You have no relationship with any of the teams that build your internal dependencies, you get yearly visits from your external dependencies, and your operators and customers are dim and distant figures. You are in a feedback-free zone.
How does a high level of testability feel?
- You are in control of the amount of feedback your product gives you, allowing either a deep, detailed view or a wide, shallow view. Rather than trying to parse what your product is doing, it will tell its story.
- The product can be set into a state of your choosing, whether that be data or configuration, allowing you to develop your product with much more certainty.
- After release, you are not dreading the call from support or account management that your customers are unhappy. Any problems are flagged early and can be proactively managed.
- Your team has great relationships with all your adjacent teams, you know the strengths, weaknesses and test strategies of your dependencies, and you know the hopes and fears of those who operate your system.
Q: What is the biggest challenge in ensuring testability in a product?
A: Our attitude to it, collectively, as development and product teams. We don’t think about it early or often enough. Retrofitting testability to an already hard to test system is tough, so we don’t do it.
- No paradigm - if you don't understand what it is, how can you ask for it? Never mind describing its benefits to those who are paying for the product development. - TRY TALKING ABOUT IT IN ARCH & DESIGN SESSIONS
- Lack of winning stories - TRY DEVOPS HANDBOOK AND ACCELERATE
- Nobody knows who should be responsible for it - product people think it has nothing to do with them, developers think it's the testers' responsibility, and testers don't communicate the pain of a hard-to-test system to developers. - PUT IT IN TERMS OF BENEFIT - "We are really keen to get this feature out before the marketing campaign. What does it matter how testable it is?" Even when deadlines are tight, keeping our focus on testability is important, as you want to be able to make the right call on when to release. If the feature needs lots of setup time to test, we will spend less time getting information about quality and risk.
- There's no time - "we need to build the thing now", when the pressure is on, with deadlines looming, there is no time for testability. CAPTURE AS TESTING DEBT - MAKE IT VISIBLE
- It's not a functional requirement - testability and other operational features never make it into the backlog. But these features are what turns functionality into a product. TRY OPERATIONAL FEATURES
- Starting too big - installing the latest observability tooling is great, but usually too big to swallow against other priorities. START SMALL, BASED ON RISK
I think testability is a massive benefit to everyone. If you collaboratively build a system with those who support it, to a high standard of observability, control and understanding, they will like you a lot.
Q: How to approach testing cloud technologies?
A: The cloud provides some interesting new challenges. At a previous company, we used AWS to autoscale for a very high-load scenario in a short period of time, but AWS couldn't scale fast enough. So those services had to be pre-scaled, defeating the point a little.
Just goes to show that all the cloud in the world still has risk attached to it. Some principles to use, from a testability point of view:
- Think about state and persistence. How can you set your app in the cloud into the right state (load balancer, nodes, auth) to begin testing?
- Queues and events are hard to test, often needing high levels of control and observability. They are prone to race conditions and long conversations about eventual consistency.
- Use something like localstack to have a local cloud environment to test on. Alternatives can be expensive, eroding the value of your testing.
- Learn the AWS CLI and web interfaces. And the terminology too: buckets hold objects, where an object is any file (an image, a CSS file, whatever).
- Environments - YOU CAN HAVE A LOAD BALANCER in your test environments and test that too!
- Waste - loads of cloud implementations are really wasteful, large instances left on. Make the accountants love you too.
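To sketch the localstack idea above: point your cloud client at a local endpoint when a flag is set. Port 4566 is localstack's default edge port, but the `USE_LOCALSTACK` environment variable and the helper below are my invention, not a standard convention - a minimal sketch, not a definitive implementation.

```python
import os

LOCALSTACK_EDGE = "http://localhost:4566"  # localstack's default edge endpoint

def client_config(service: str) -> dict:
    """Return kwargs for a cloud client, pointed at localstack when
    USE_LOCALSTACK is set. Pass the result to e.g. boto3.client(service, **cfg)."""
    cfg = {"region_name": os.environ.get("AWS_REGION", "eu-west-1")}
    if os.environ.get("USE_LOCALSTACK"):
        cfg.update(
            endpoint_url=LOCALSTACK_EDGE,
            aws_access_key_id="test",        # localstack accepts dummy credentials
            aws_secret_access_key="test",
        )
    return cfg

os.environ["USE_LOCALSTACK"] = "1"
print(client_config("s3")["endpoint_url"])  # http://localhost:4566
```

The point is controllability: your tests decide whether they talk to a local, resettable cloud or the real thing, with one switch.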
Q: Testability and Microservices?
A: Microservices speak to testability in that smallness and isolatability are desirable. The entirety is a different matter. There are three levels here:
- Single services
- Integration of services
- Aggregate of services
You need to have a strategy for all three levels:
- Testing a single service in isolation is great, but services are often not used in isolation. Still, you can use this to get great early feedback.
- Integration of services is where you find out about relationships - contracts between services and between teams. This is where your resilience and fault-tolerance testing comes in. How decomposable is your system? Mock where appropriate, but don't rely too deeply on mocks: start them simple and don't rebuild the services in them. A complex mock of a microservice? Not a microservice.
- Finally, the aggregate, where the customer journeys often occur. Mapping (knowing) which services connect to form a journey will make you a legend. Sharing understanding is key to testability. Plus using a time series database to store aggregated events from all your services with a common id is pretty cool too.
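The common-id idea in that last bullet can be sketched without a real time series database: every service tags its events with the journey's correlation id, so reconstructing one customer journey across the aggregate is a single filter. All the names here are illustrative.

```python
from datetime import datetime, timezone

events = []  # stand-in for a time series database of aggregated events

def emit(service: str, correlation_id: str, message: str) -> None:
    """Each service records events tagged with the journey's correlation id."""
    events.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "service": service,
        "correlation_id": correlation_id,
        "message": message,
    })

def journey(correlation_id: str) -> list:
    """Reconstruct one customer journey across all services."""
    return [e for e in events if e["correlation_id"] == correlation_id]

emit("basket", "abc-123", "item added")
emit("payment", "abc-123", "card authorised")
emit("payment", "zzz-999", "card declined")  # a different customer

for e in journey("abc-123"):
    print(e["service"], "-", e["message"])
```

The testability win is that "which services form this journey?" stops being tribal knowledge and becomes a query.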
Q: How can we measure the testability of a software product?
A: Measure it with the value you deliver, basically the things that the team is measured on. However, there is always someone who asks for a metric for improvement work. Start with a few simple things:
- Time from build to starting testing - control/simplicity/observability. I mean the whole value stream, from the build through to it being tested on a device.
- Ability to get someone up to speed with the system - simplicity - time to first commit & push, perhaps.
- Problem isolation time - decomposability/observability.
- Speed of regression testing cycles - favouring minutes over days.
- Defect escapage into live - too loaded, most companies can’t have a conversation about it.
- Test coverage - again, too loaded, too much silly language that hurts you.
For me, I like evolutions of test and environment strategies and diversity of types of testing performed as nice metrics. It means that you are digging deeper and exposing the risks and your knowledge is changing...
Q: What do you think is the most important factor in testability? And why?
A: I do like a coaching question, making me choose.
Out of the many factors of testability, the one I have seen make the most difference is enhancing observability.
By observability I mean the ability to investigate the strange in a transparent way which is traceable. Through tracing tools, debugging, logs, audit databases - however you get there, really.
Shining a light into the darker parts of your system gives you the thing you need the most, some information on a problem to share with those who are affected by it. Without this information, your interactions with dependent teams will be really bad.
Q: According to you, apart from CODING domain.. what else would i learn if im nt into that CODING skills... this question is as RAW as ME😁
A: From a testability point of view, if coding isn’t your thing there are loads of ways to add value.
- Building relationships between teams is really important for testability, bridging gaps between operational and support and development.
- Understanding and surfacing risk, too - you should target testability gains at the areas of most risk. Use all your modelling skills to expose these risks and gain testability from that, where it matters most.
- Also, source control. Very high quality outcomes from proper use of source control, especially in configuration. Learn about that too. :)
- Also, be great at naming things. At a previous job, we had a feature-toggling system (a session cookie for a website, toggling features) which had names like enabledisregardofdisableonoffbuttontoggle. Don't make me come over there.
Q: In non-jargon language can you explain what is testability & can you give examples of what it is not.
A: Non-jargon? How easy it is to test an entity. Broken down: how easily you can see what is happening internally while you interact externally, set the system into the state you want, understand what is happening while you test, and pinpoint problems accurately.
What it is not? How about a story. My first testing job:
- I raised thousands of bugs - two thousand in two years. I thought I was a machine.
- However, lack of testability was warping what I thought testing was.
- Poor relations between teams, ticketing system was the conversation mechanism.
- Builds took days (slow feedback, lack of trust) and downtime lasted weeks.
- Obscure tooling and programming languages, so niche they lacked support.
- Despite the bugs raised, important problems still not found.
- Plus, no one ever really got what they wanted, when they wanted it.
- After a while, this frustrated me loads! So I changed my approach. I went to see the developers on another site and said: let's share a build to a test environment a couple of days before the official release, with no bugs raised.
- This practice soon spread, thus the relationship was built.
- Then we could talk about the build and the tooling and all the cool observability whizz bangs.
Q: Can you have testability without observability or vice versa?
A: I think observability is inherent to testing. Think of the difference between monitoring and observability: monitoring watches for things which you think might happen; observability lets you investigate things which are UNKNOWN. Being able to investigate the unknown is the trait of a testable system and a big part of testing!
I mean, you can perform testing without observability, but it will likely be ineffective testing. Which is annoying for stakeholders: you can't describe bugs well for developers, or behaviours and their side effects well for product people.
Q: Do you have any tips for getting testability factored in when planning new features with developers and product owners/managers?
A: First, get yourself invited - by asking/bribing/doing excellent testing/adding value/pairing/being massively available.
Asking ‘how are we going to test this?’ is going to be a good start, but switching the questions a little can help too, for teams that might show less enthusiasm:
- How can we know the effects of this new feature on the existing system? (or how decomposable is it)
- How will we know the server side effects when operating the client? (or how observable is it)
- How will we set the state that the system needs to be in to start using the feature? (or how controllable is it)
- When we need to explain what the new feature is doing to a customer, can we explain it clearly? (or how understandable is it)
Triggering the debate is the start, then POW hit em with some suggestions for improvements.
Q: What was first, the tester or the testability?
A: Ha ha! Nice. Testability doesn’t necessarily need testers and vice versa.
Testability without testers manifests itself in lots of ways, monitoring, tracing, debugging, beta groups and many more. Testers without testability, you can still test, but with limited effectiveness.
Pragmatically speaking, I think the tester often turns up in a team and then what is known as testability becomes more explicit, transferring from an ethereal concept to something more tangible.
Q: We should reduce dependencies, and each released piece of work (story) should be independent, testable and of value...
A: We have dependencies. We work within complexity, we should accept this and engage with it.
But you can make your life better:
- Release behind toggles if you cannot split effectively. Test with a limited subset of sympathetic users, value and reward their feedback.
- Make sure your contract with your dependencies is explicit for services - PACT type tooling to notify of changes for example.
- Have breakers between your system and your dependencies. If they respond with errors, break the connection and poll until you get a positive response. Fail in favour of the user.
- Get to know the teams that provide your dependencies, certainly the internal ones. Find out how and what they test, it will give you real insight to their cadence of delivery, bugs, and all manner of things.
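The breaker bullet above can be sketched as a minimal circuit breaker. The thresholds and names are illustrative - a real implementation (resilience4j, pybreaker and friends) adds half-open states and timeouts - but the "fail in favour of the user" shape is this:

```python
class CircuitBreaker:
    """Stop calling a failing dependency after `max_failures` consecutive
    errors; serve a fallback until something resets the breaker."""

    def __init__(self, call, fallback, max_failures=3):
        self.call = call
        self.fallback = fallback
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.max_failures

    def request(self, *args):
        if self.open:
            return self.fallback(*args)  # don't hammer a dependency that's down
        try:
            result = self.call(*args)
        except Exception:
            self.failures += 1
            return self.fallback(*args)  # fail in favour of the user
        self.failures = 0  # a success closes the breaker again
        return result

    def reset(self):
        """E.g. after a background poll of the dependency succeeds."""
        self.failures = 0

def flaky_recommendations(user):
    raise TimeoutError("recommendation service down")

breaker = CircuitBreaker(flaky_recommendations, lambda user: [], max_failures=3)
print(breaker.request("sam"))  # the fallback: an empty list, not an error page
```

From a testability angle, the breaker is also a control point: flip it open in a test environment and you can observe exactly how your system behaves when the dependency disappears.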
Taking a "waterfall approach" is a false flag here. Dependency mapping still needs to be done in agile ways of working. Think about risk, do some analysis, and build the smallest thing that gives you feedback.
Q: Is it possible to test, say, page 5 in a sign-up process without effectively testing pages 1-4 each time you want to test page 5? There are dependencies and responses required on each previous page. Does that mean that page 5 is effectively untestable?
A: Depending on the technologies involved, you can mock out what you need. It might be a service, or a datastore within the browser that you can get to with Chrome DevTools. In short: yes, it's possible. As ever, it depends on what page 5 depends upon - plus whether you want to go further than page 5.
Q: "Testability" is a rather big word. How would you break it down into parts people can understand? In other words, what is "testability" made of, and are all parts equally important?
A: It's a HUGE word, you are right about that. I like Rob Meaney's 10 P's of Testability model:
- The people in our team possess the mindset, skill set & knowledge set to do great testing and are aligned in their pursuit of quality.
- The philosophy of our team encourages whole team responsibility for quality and collaboration across team roles, the business and with the customer.
- The product is designed to facilitate great exploratory testing and automation at every level of the product.
- The process helps the team decompose work into small testable chunks and discourages the accumulation of testing debt.
- The team has a deep understanding of the problem the product solves for their customer and actively identifies and mitigates risk.
- The team is provided with the time, resources, space and autonomy to focus & do great testing.
- The team's pipeline provides fast, reliable, accessible and comprehensive feedback on every change as it moves towards production.
- The team considers and applies the appropriate blend of testing to facilitate continuous feedback and unearth important problems as quickly as possible.
- The team has very few customer-impacting production issues, but when they do occur, the team can very quickly detect, debug and remediate the issue.
- The team proactively seeks to continuously improve their test approach, learn from their mistakes and experiment with new tools and techniques.
And there it is! You were a lovely audience. Remember, if you want to turn testing into a team sport, it's got to be testability. Then maybe, at some point, quality will actually be everybody's responsibility.