Monday, 18 February 2019

Ask Me Anything - Testability


I recently had the pleasure of being the centre of attention during an "Ask Me Anything" hosted by the ever amazing and supportive Ministry of Testing. I talked about testability, my favourite topic. As is my wont, I over-prepared, over-enthusiastically. I have answered questions on The Club, but I also made some notes which happen to look something like a blog post, and it would be a shame not to share them.

Behold those notes: 

Monologue at the Start

  • Throughout my career I’ve tested lots of systems which were difficult to test
  • Early on, I think I accepted it as ‘this is how it is’
  • We do this after looking at something for a long time!
  • I often wonder now, how effective can our testing be in this context?
  • Without a focus on testability, it will degrade over time too.
  • You know the old “quality is everyone's responsibility?” Guess what, if it’s hard to test, testers will generally be the ones testing it. 
  • As a discipline I think we accept a lack of testability too readily too often.
  • And do more testing
  • With more testers
  • With more automation
  • Which gets infinitely complex
  • Then we all get sad
  • We focus on function rather than capability.
  • Testability is about enabling a balanced test approach, so the whole team can be involved, performing a breadth of techniques to provide information to make holistic decisions.
  • So I hope we can have a big debate on how we can improve testability.


What does hard to test feel like?

  • Interacting with a product gives you no feedback. No logs, no audit trail, only mysterious unmonitored dependencies. You don't know if anything went well. Or badly.
  • Interacting with a product gives you vast amounts of feedback. Log files spam uncontrollably, lights twinkle on dashboards, integrated dependencies give strange, opaque answers. You don't know if anything went well. Or badly.
  • You release your product. Scores of utterly baffling issues pop up. Seemingly unrelated but somehow intertwined. Next release makes you twitchy.
  • You have no relationship with any of the teams that build your internal dependencies, you get yearly visits from your external dependencies, and your operators and customers are dim and distant figures. You are in a feedback free zone.


How does a high level of testability feel?

  • You are in control of the amount of feedback your product gives you, allowing either a deep, detailed view or a wide, shallow view. Rather than trying to parse what your product is doing, it will tell its story.
  • The product can be set into a state of your choosing, whether that be data or configuration, allowing you to develop your product with much more certainty.
  • After release, you are not dreading the call from support or account management that your customers are unhappy. Any problems are flagged early and can be proactively managed.
  • Your team has great relationships with all your adjacent teams: you know the strengths, weaknesses and test strategies of your dependencies, and you know the hopes and fears of those who operate your system.

Listeners Questions

Q: What is the biggest challenge in ensuring testability in a product?

A: Our attitude to it, collectively, as development and product teams. We don’t think about it early or often enough. Retrofitting testability to an already hard to test system is tough, so we don’t do it.

  • No paradigm - if you don't understand what it is, how can you ask for it? Never mind describing its benefits to those who are paying for the product development. - TRY TALKING ABOUT IT IN ARCH & DESIGN SESSIONS
  • Lack of winning stories - TRY DEVOPS HANDBOOK AND ACCELERATE
  • Nobody knows who should be responsible for it - product people think it has nothing to do with them, developers think it's testers' responsibility, testers don't communicate the pain of a hard to test system to developers. - PUT IT IN TERMS OF BENEFIT - when you hear "we are really keen to get this feature out before the marketing campaign, what does it matter how testable it is?", remember that even when deadlines are tight, keeping our focus on testability is important, as you want to be able to make the right call on when to release. If the feature needs lots of setup time to test, we will spend less time getting information about quality and risk.
  • There's no time - "we need to build the thing now", when the pressure is on, with deadlines looming, there is no time for testability. CAPTURE AS TESTING DEBT - MAKE IT VISIBLE
  • It's not a functional requirement - testability and other operational features never make it into the backlog. But these features are what turn functionality into a product. TRY OPERATIONAL FEATURES
  • Starting too big - installing the latest observability tooling is great, but usually too big to swallow against other priorities. START SMALL, BASED ON RISK


I think testability is a massive benefit to everyone. If you collaboratively build a system with those who support it, to a high standard of observability, control and understanding, they will like you a lot. 

Q: How to approach testing cloud technologies?

A: The cloud provides some interesting new challenges. At a previous company, we used AWS to autoscale for a very high load scenario in a short period of time, but AWS couldn’t scale fast enough. So those services had to be pre-scaled, defeating the point a little.
Just goes to show that all the cloud in the world still has risk attached to it. Principles to use, from a testability point of view:


  • Think about state and persistence. How can you set your app in the cloud into the right state (load balancer, nodes, auth) to begin testing?
  • Queues and events are hard to test, often needing high levels of control and observability. They are prone to race conditions and long conversations about eventual consistency.
  • Use something like localstack to have a local cloud environment to test on. Alternatives can be expensive, eroding the value of your testing - see the sketch after this list.
  • Learn the AWS CLI and web interfaces. And the terminology too: buckets contain objects, where an object might be something like a CSS file. 
  • Environments - YOU CAN HAVE A LOAD BALANCER in your test environments and test that too! 
  • Waste - loads of cloud implementations are really wasteful, large instances left on. Make the accountants love you too.
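
As a flavour of the localstack point above, here's a minimal sketch assuming localstack is running on its default edge port (4566) and boto3 is installed - the bucket and object names are made up:

```python
# Minimal sketch: pointing boto3 at a locally running localstack instead of real AWS.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",  # localstack edge port, not real AWS
    region_name="eu-west-1",
    aws_access_key_id="test",              # localstack accepts dummy credentials
    aws_secret_access_key="test",
)

# Set up known state to test against, for free, without touching shared environments.
s3.create_bucket(
    Bucket="my-test-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
s3.put_object(Bucket="my-test-bucket", Key="styles/site.css", Body=b"body { color: red; }")

# The application under test can be pointed at the same endpoint and exercised here.
print(s3.list_objects_v2(Bucket="my-test-bucket")["KeyCount"])
```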

Q: Testability and Microservices?

A: Microservices speak to testability in that smallness and isolatability are desirable. The entirety is a different matter. There are three levels here:

  • Services
  • Integration of services
  • Aggregate of services

You need to have a strategy for the three levels: 

  • Testing a single service in isolation is great, but services are often not used in isolation. You can still use this to get great early feedback.
  • Integration of services is where you find out about relationships: contracts between services and between teams. This is where your resilience and fault tolerance testing comes in. How decomposable is your system? Mock where appropriate, but don’t rely on mocks too deeply - keep them simple and don’t rebuild the service inside them (see the sketch after this list). A complex mock of a microservice? Not a microservice.
  • Finally, the aggregate, where the customer journeys often occur. Mapping (knowing) which services connect to form a journey will make you a legend. Sharing understanding is key to testability. Plus using a time series database to store aggregated events from all your services with a common id is pretty cool too.
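
To make the "keep mocks simple" point concrete, here's a rough sketch of a stand-in for a dependent service using only the Python standard library. The endpoint and payload are invented for illustration, and the stub deliberately does nothing more than return canned answers:

```python
# A deliberately simple stub of a dependent microservice: canned answers only,
# no re-implementation of the real service's logic.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {"/customers/42": {"id": 42, "name": "Test Customer", "status": "active"}}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

    def log_message(self, *args):
        pass  # keep test output quiet

def start_stub(port=8001):
    server = HTTPServer(("localhost", port), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# In a test: start the stub, point the service under test at http://localhost:8001,
# exercise it, then call server.shutdown().
```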

Q: How can we measure the testability of a software product?

A: Measure it with the value you deliver, basically the things that the team is measured on. However, there is always someone who asks for a metric for improvement work. Start with a few simple things:

  • Time from build to starting testing - control/simplicity/observability - I mean the whole value stream, from build to being tested on a device.
  • Ability to get someone up to speed with the system - simplicity - first commit & push perhaps.
  • Problem isolation time - decomposability/observability.
  • Speed of regression testing cycles - favouring minutes over days.

Avoid:

  • Defect escape rate into live - too loaded, most companies can’t have a conversation about it.
  • Test coverage - again, too loaded, too much silly language that hurts you.

For me, I like evolutions of test and environment strategies and diversity of types of testing performed as nice metrics. It means that you are digging deeper and exposing the risks and your knowledge is changing...

Q: What do you think is the most important factor in testability? And why?

A: I do like a coaching question, making me choose.

Out of the many factors of testability, the one I have seen make the most difference is enhancing observability. 

By observability I mean the ability to investigate the strange in a transparent, traceable way - whether through tracing tools, debugging, logs or audit databases. However you get there, really.
Shining a light into the darker parts of your system gives you the thing you need the most, some information on a problem to share with those who are affected by it. Without this information, your interactions with dependent teams will be really bad. 
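
To give a flavour, here's a minimal sketch of structured, correlated logging - the events and field names are invented for illustration, but the pattern is what matters: every event carries an id that lets you follow one journey through the stranger parts of the system:

```python
# Structured log events sharing a correlation id, so one journey can be followed end to end.
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")

def log_event(event, correlation_id, **fields):
    log.info(json.dumps({"event": event, "correlation_id": correlation_id, **fields}))

correlation_id = str(uuid.uuid4())
log_event("basket.checkout.started", correlation_id, customer_id=42)
log_event("payment.request.sent", correlation_id, provider="example-gateway", amount_pence=1999)
log_event("payment.request.failed", correlation_id, status=503, retryable=True)

# Grep (or query) the correlation id and the whole story of one journey falls out -
# exactly the information you want to hand to the team that owns the dependency.
```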

Q: According to you, apart from CODING domain.. what else would i learn if im nt into that CODING skills... this question is as RAW as ME😁

A: From a testability point of view, if coding isn’t your thing there are loads of ways to add value. 

  • Building relationships between teams is really important for testability, bridging gaps between operational and support and development. 
  • Understanding and surfacing risk too: you should target testability gains at the areas of most risk. Use all your modelling skills to expose those risks and gain testability where it matters most.
  • Also, source control. Very high quality outcomes come from proper use of source control, especially for configuration. Learn about that too. :)
  • Also, be great at naming things. At a previous job, we had a feature toggling system (a session cookie for a website, toggling features) which had names like enabledisregardofdisableonoffbuttontoggle. Don’t make me come over there.


Q: In non-jargon language can you explain what is testability & can you give examples of what it is not.

A: Non jargon? How easy it is to test an entity. Broken down: how well you can see what’s happening internally while you test from the outside, set the system into the state you want, understand what’s happening while you test, and pinpoint problems accurately.

What it is not? How about a story. My first testing job:

  • Raised thousands of bugs, 2 thousand in 2 years. I thought I was a machine.
  • However, lack of testability was warping what I thought testing was.
  • Poor relations between teams, ticketing system was the conversation mechanism.
  • Builds took days (slow feedback, lack of trust) and downtime lasted weeks.
  • Obscure tooling and programming languages; such a niche lacked support.
  • Despite the bugs raised, important problems still not found.
  • Plus, no one ever really got what they wanted, when they wanted it.
  • After a while, this frustrated me loads! So I changed my approach. I went to see the developers on another site and said: let's share a build to a test environment a couple of days before the official release, with no bugs raised.
  • This practice soon spread, thus the relationship was built. 
  • Then we could talk about the build and the tooling and all the cool observability whizz bangs.


Q: Can you have testability without observability or vice versa?

A: I think observability is inherent to testing. Think of the difference between monitoring and observability: things which you think might happen, versus being able to investigate things which are UNKNOWN. Being able to investigate the unknown is the trait of a testable system and a big part of testing!

I mean, you can perform testing without observability, but it will likely be ineffective testing. Which is annoying for stakeholders: you can’t describe bugs well for developers, or behaviours and their side effects well for product people.

Q: Do you have any tips for getting testability factored in when planning new features with developers and product owners/managers?

A: First, get yourself invited - by asking, bribing, doing excellent testing, adding value, pairing and being massively available.

Asking ‘how are we going to test this?’ is going to be a good start, but switching the questions a little can help too, for teams that might show less enthusiasm:

  • How can we know the effects of this new feature on the existing system? (or how decomposable is it)
  • How will we know the server side effects when operating the client? (or how observable is it)
  • How will we set the state that the system needs to be in to start using the feature? (or how controllable is it)
  • When we need to explain what the new feature is doing to a customer, can we explain it clearly? (or how understandable is it)


Triggering the debate is the start, then POW hit em with some suggestions for improvements.

Q: What was first, the tester or the testability?

A: Ha ha! Nice. Testability doesn’t necessarily need testers and vice versa. 

Testability without testers manifests itself in lots of ways, monitoring, tracing, debugging, beta groups and many more. Testers without testability, you can still test, but with limited effectiveness.

Pragmatically speaking, I think often the tester turns up in a team and then what is known as testability often becomes more explicit. Transferring from the more ethereal concept to something more tangible.

Q: We should reduce dependencies, and each released piece of work (story) should be independent, testable and of value...

A: We have dependencies. We work within complexity, we should accept this and engage with it.

But you can make your life better:

  • Release behind toggles if you cannot split effectively. Test with a limited subset of sympathetic users, value and reward their feedback.
  • Make sure your contract with your dependencies is explicit for services - PACT type tooling to notify of changes for example.
  • Have breakers between your system and your dependencies. If a dependency responds with errors, break the connection and poll until you get a positive response again. Fail in favour of the user - see the sketch after this list.
  • Get to know the teams that provide your dependencies, certainly the internal ones. Find out how and what they test, it will give you real insight to their cadence of delivery, bugs, and all manner of things.
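
A toy sketch of the breaker idea above - after repeated failures stop hammering the dependency, poll until it recovers, and fail in favour of the user in the meantime. The thresholds and fallback are illustrative, not a real library's API:

```python
# A minimal circuit breaker: open after repeated failures, retry after a cool-off,
# and serve a user-friendly fallback while the dependency is struggling.
import time

class CircuitBreaker:
    def __init__(self, call, fallback, failure_threshold=3, retry_after_seconds=30):
        self.call = call
        self.fallback = fallback
        self.failure_threshold = failure_threshold
        self.retry_after_seconds = retry_after_seconds
        self.failures = 0
        self.opened_at = None

    def __call__(self, *args, **kwargs):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.retry_after_seconds:
                return self.fallback(*args, **kwargs)  # open: don't bother the dependency
            self.opened_at = None                      # half-open: allow one real attempt
        try:
            result = self.call(*args, **kwargs)
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.time()
            return self.fallback(*args, **kwargs)      # fail in favour of the user

# Example wiring (names invented):
# recommendations = CircuitBreaker(call=fetch_recommendations, fallback=lambda user: [])
```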


Taking a “waterfall approach” is a false flag here. Dependency mapping still needs to be done in agile ways of working. Think about risk, do some analysis and build the smallest thing that gives you feedback.

Q: Is it possible to test, say, page 5 in a sign-up process without effectively testing pages 1-4 each time you want to test page 5? There are dependencies and responses required on each previous page. Does that mean that page 5 is effectively untestable?

A: Depending on the technologies involved, you can mock out what you need. It might be a service, or a datastore within the browser that you can get to with Chrome DevTools. In short, yes, it is possible - as ever, it depends on what page 5 depends upon, plus whether you want to go further than page 5. 
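
As a rough sketch of the idea, assuming the sign-up flow keeps its progress in a server-side session referenced by a cookie, and that a test-only endpoint (or a direct database insert, or values injected via Chrome DevTools) can seed the state pages 1-4 would have left behind - the URLs and field names here are invented:

```python
# Seed the state that pages 1-4 would normally create, then test page 5 directly.
import requests

session = requests.Session()

# Hypothetical test-support endpoint that writes the sign-up progress for this session.
session.post("https://test.example.com/test-support/signup-state", json={
    "email": "page5@example.com",
    "plan": "pro",
    "address_confirmed": True,
    "payment_method": "card-on-file",
})

# Now go straight at page 5 and test it in isolation.
response = session.get("https://test.example.com/signup/step/5")
assert response.status_code == 200
```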

"Testability" is a rather big word. How would you break it down to parts people can understand? In other words - what is "testability" made of, and are all parts equally important?

A: It's a HUGE word, you are right about that. I like Rob Meaney's 10 P’s of Testability Model:

  • People
    • The people in our team possess the mindset, skill set & knowledge set to do great testing and are aligned in their pursuit of quality.
  • Philosophy 
    • The philosophy of our team encourages whole team responsibility for quality and collaboration across team roles, the business and with the customer. 
  • Product
    • The product is designed to facilitate great exploratory testing and automation at every level of the product. 
  • Process
    • The process helps the team decompose work into small testable chunks and discourages the accumulation of testing debt.
  • Problem
    • The team has a deep understanding of the problem the product solves for their customer and actively identifies and mitigates risk.
  • Project
    • The team is provided with the time, resources, space and autonomy to focus & do great testing.
  • Pipeline
    • The team's pipeline provides fast, reliable, accessible and comprehensive feedback on every change as it moves towards production.
  • Productivity
    • The team considers and applies the appropriate blend of testing to facilitate continuous feedback and unearth important problems as quickly as possible.
  • Production Issues
    • The team has very few customer impacting production issues but when they do occur the team can very quickly detect, debug and remediate the issue.
  • Proactivity
    • The team proactively seeks to continuously improve their test approach, learn from their mistakes and experiment with new tools and techniques.



And there it is! You were a lovely audience. Remember, if you want to turn testing into a team sport, it's got to be testability. Then maybe, at some point, quality will actually be everybody's responsibility.


Wednesday, 12 December 2018

How to advance your test automation skills with source control



If you're a tester who wants to get up to speed on coding to become a test automation engineer, don't just start learning to code; start by learning about source control, a methodology for tracking and managing changes to your code. 

In my recent past, testers at one particular organisation I worked with were busy creating test automation and even tools to manage test data and application state. All the application code was in source control but none of the code written by the testers was in source control. It was zipped and emailed, copied to shared file servers, sent by instant message to name a few sharing mechanisms. Versions diverged wildly, valuable code was lost or overwritten leading to delayed testing, panic rewrites of tools, all round disarray!

Without source control, software development teams will essentially be in a state of chaos, terrified of code change. I often meet testers without access to their team's source control repository. If this describes you as a tester, you won't be truly collaborating closely with your team.

To bridge that gap, here's what you need to know about source control -- and how to get started. This article will cover:

  • Key disadvantages to you as a tester when you cannot access or use source control repositories and tools.
  • How a grounding in source control will help you to acquire the skills you need to grow as an automation engineer.
  • Important learning resources to complement your on the job experience with source control. 

Disadvantages of Being Source Control Unaware

There are a few key disadvantages for you as a tester if you don't have access to your team's source control repository and you don't appreciate what happens within it:

  • Primary source of collaboration - a lot of communication and collaboration happens within source control, and it contains a compelling version of your system's and your team's history. There are a number of artifacts that teams collaborate around, usually some form of issue tracking software or a shared wiki page, but in my experience these are the least active. The truly active oracle for change is the code itself, where the most collaborative work is taking place, during code reviews and merges.
  • Serious risks - if you don't know what your team's source control strategy is, you might be missing serious risks. Long lived feature branches might indicate integration problems later down the line. When an issue has been on your kanban board for a while, there might also be a feature branch that has been alive just as long and, depending on your release cadence, drifting further away from your master branch. As a result, code merges might be longer and more difficult, even leading to re-implementing sections of code.
  • Quality advocacy - as testers, we want to be strong advocates for quality. Effective use of source control is one of the key leading indicators of quality, not just for code, for configuration as well. Advocating for more effective use of source control can help our teams achieve better quality outcomes especially in safety and speed of deployments. We can advocate for smaller batch sizes for shorter lived branches, application configuration in source control, even encouraging storing database schema changes too.
  • Early testing - often testers wait around for a build. Waiting for someone to complete a merge and deploy onto a test environment. Imagine being able to test the branch that has been created for your new feature straight away? If you were able to run an application locally or build to an environment from a branch you could test earlier, creating a tighter feedback loop with your team and wider group of stakeholders. 

Source Control as Skills Gateway

Learning source control concepts and skills are a gateway to the other technical skills that are valuable for test automation engineers to learn:

  • Bash - although user interfaces exist for popular source control technologies, you will likely be exposed to interactions using the command line, which is where a lot of developers and system administrators on your team spend their time. This then opens the door to bash for UNIX users, with its many commands for teasing out information. Check out Julia Evans' Wizard Zines for more.
  • Establishing coding habits - if you have access to your application code, you can see the patterns for the code within and learn from them in your own code. Positive behaviours such as striving for single responsibility, appropriate levels of reusability and sensible abstraction between implementation and test are worth looking out for and adding to your coding toolkit. The opposite is also true, where you can learn from the bad examples within your application code!
  • Pipeline integration - without source control, any test automation that you create will be difficult to integrate with your deployment pipeline tooling, where it can add massive post deployment value. Tools such as Jenkins have built in source control integration enabling you to get your test automation code running where it matters, with the experience of adding to your team's pipeline.

Sources of Inspiration

My journey into the world of source control began with 'The Git Parable' by Tom Preston-Werner. I learn a great deal from storytelling as a medium and this parable is no different, teaching concepts with reference to characters and situations that resonate with me. If I was to pick a starting point, try this:

http://tom.preston-werner.com/2009/05/19/the-git-parable.html

However, everyone has their own learning style (or, more likely, mix of styles), so here are some next steps, depending on how you like to consume learning material:


Summary

Learning source control is an important step in a test automation engineer's journey:

  • Without source control awareness, you will experience limits to your collaboration with your team, identifying risks, testing early and advocating for quality.
  • Source control is a gateway to many other skills that will make you an effective test automation engineer: unlocking the potential of the command line, establishing coding habits by recognising patterns, and integrating your tests with your deployment pipeline.

If you want to add value, collaborate with your team and add momentum to your learning, start with source control. Your team and your career will thank you for it.

This post was originally published by the great people at TechBeacon here: https://techbeacon.com/how-advance-your-test-automation-skills-source-control

Sunday, 2 December 2018

Modelling a unit of your system

Sunshine, zebras and great conversations

Over the last few days I've been in South Africa for Lets Test, held on a fantastic country retreat near Johannesburg. I facilitated a session called a 'Testers Guide to the Illusions of Unit Testing.' I have a confession to make. We had 2 hours to complete the workshop and, in the spirit of too much content being way better than too little, we didn't get to delve deeply into this question. Hence this blogpost. 

We had great conversations about how unit testing is an interface between testers and developers, a gateway for deep collaboration. This is predicated on an understanding of unit testing, including what they can and can't achieve and what relation they bear to other types of testing. One of the questions we wished to tackle was:
What is a unit of our system?
How many times have you started to test a system and asked "Hey, what are the unit tests like for this unique piece of software history?" The answer in my experience has often been, "um, yeah, about that, we don't have any, it's too complex/there is no time/it's going to be deprecated one day." Or alternatively you are involved in a session about the architecture of a brand new system using the latest technologies, using microservices, lambdas or whatever. In either case, wouldn't it be useful to be able to facilitate a discussion about what a unit of that system is and what factors influence it? You might find yourself with a simpler, more observable, controllable and understandable system. What's not to like?

Reinventing the wheel



The model itself is another wheel. Other shapes are available, although not when I model something, apparently. I added a bunch of segments to the wheel that I think impact the size and shape of a unit in a system's context. The key thing here is context. I know that there are definitions of what a unit test is, contested definitions most of the time. What there isn't is a definition of what a unit of YOUR system is, the one that YOU work on. Maybe you can use the above to help your team work it out.

There are key areas here, without delving into each one individually:
  • How the code is stored
  • Who contributes to it
  • What architecture and patterns are present
  • How tightly coupled the system is

The segments are also on a scale of size, as in what factors contribute to a unit being large or small. For example:



  • Large - Single Database for Multiple Applications - this system may have a gigantic database bottleneck, perhaps even changes that one system makes can impact another. That's a weighty unit.
  • Medium - Broker/Queue Based - maybe messages are routed using built in routing configuration capabilities of RabbitMQ? A unit of this system involves invoking (or mocking) multiple systems so could be seen to have largeness too.
  • Small - Microservice(s) - a service that does one thing well - this could be a unit in itself. At the very least, it indicates that units of your system might tend towards smallness.

Explaining with questions

The notes below include questions and examples of each of the sections. They may be useful as further prompts:

### Architectural Patterns
* What type of system architecture is employed?
* How many layers or tiers does it have?
* How many roles do components within have?
* Examples:
 * Three tier
 * Microservices
 * Broker/queue based
 * Model-view-controller 

 
### Coding Practices
* How do contributors collaborate over code?
* What design patterns are employed with the code?
* To what extent does testing drive the coding process?
* Examples:
 * Driving Development with Tests
 * Pairing
 * Code reviews
 * SOLID principles
 * Inheritance
 * Abstraction 

### Concurrent Teams 
* How many teams contribute concurrently to the repository?
* Are they internal or external teams?
* Which teams are core maintainers?
* Examples:
 * Internal development teams
 * Outsourced development teams
 * Contributors to key libraries 

### Source Control Strategy
* What strategy does your team employ to manage code?
* What size changes are often checked in?
* How long until changes are integrated with trunk?
* Examples:
 * GitHub Flow
 * GitFlow
 * Trunk based development

### Source Control Repo
* How large is the repository that you have?
* How many linked repositories does it have?
* How does it manage its dependencies?
* Examples:
 * Monolith
 * Monorepo
 * Microlith 

### Size of Objects/Data 
* What are the key data items that the code manages?
* What size are the important items?
* What depends on those important items?
* Examples:
 * Core classes instantiated for every interaction
 * Persistence of customer data
 * Transaction and audit histories 

### Risk to Whole
* What dependencies exist within your code?
* Are there any common objects or structures?
* Are there any areas teams fear to touch?
* Examples:
 * Large common classes with multiple roles
 * Old versions of dependencies
 * Hard coding/configuration in the code 
 
### Time Periods Used
* What is the lifespan of objects/data created by your code?
* Are there wide differences in the lifespan of different types of objects used?
* Examples:
 * Long lived daemons
 * Scheduled tasks
 * Asynchronous HTTP requests
 * Cache size and expiry rules

Over to you

Essentially, the wheel and guidance is a prompt to help answer these questions about your own system.

  • What factors help to find a unit of your system? (the segments of the wheel, which can be the ones I added, or if you don't like them I have others)
  • What practices & patterns influence that unit? (your own ways of working and crafting principles)
  • How do practices and patterns govern the size of a unit? (how the way you build stuff affects your ability to test small units of that stuff)

Conclusion

As with all models, this has some holes in it. Not all layers of the system will have the same size unit, for example. What is a unit when you test a React application using Jest, for example? What is the unit when snapshot files of rendered HTML are being created? An interesting question, plus a new (and welcome) challenge to the orthodoxy of automated/unit testing.

Trying to determine what a unit of your system is will, at the very least, lead to asking some searching questions which may provoke a reaction within your team. Overall, smallness is the aim for me, and asking these questions may serve to help shrink a unit of your system. I believe that testers can be powerful catalysts in this regard.


Friday, 17 August 2018

Overcome painful dependencies with improved adjacent testability


We had done it: We had built a testable system. We achieved high observability through monitoring, logging and alerting. We instituted controllability by using feature flags. We understood the build through pairing and cross-discipline-generated acceptance test automation. We aimed for decomposability by using small, well-formed services. But our pipeline was still failing — and failing often.

So what went wrong?

We had a horrible external dependency. Their service suffered from frequent downtime, and slow recovery times. Most painful was the fact that test data creation required testers to create a manual request through a ticketing system. We were dependent on a system with low testability, which undermined our own testability. And this had consequences for our flow of work to our customers. 

In this article, I will cover how to address such dependencies and engage with the teams that maintain them, including:


  • Enhancing observability by adding key events from your dependencies to your own logging, monitoring and alerting.
  • Adding controllability to applications and sharing this ability in order to foster a culture of collaboration with your dependencies.
  • Building greater empathy with the teams that provide your dependencies - they have their own problems to deal with, and greater understanding will bring teams closer together.


How testability affects flow

Testability has a tangible relationship with the flow of work. If a system and its dependencies are easier to test, then work will usually flow through its pipeline. The key difference is that all disciplines are likely to get involved in testing if it is easier to test. But if a system and its dependencies are hard to test, you're likely to see a queue of tickets in a "Ready to Test" column—or even a testing crunch time before release.

To achieve smooth flow, treat your dependencies as equals when it comes to testability.

What is adjacent testability?

This term refers to how testable the systems you depend upon to provide customer value are - the systems you need to integrate with to complete a customer journey. For example, if your system relies on a payment gateway which suffers from low testability, your end to end tests may fail often, making release decisions problematic for stakeholders. Most systems have integrations with other internal and external systems. Value generation occurs when those systems work together in concert, allowing customers to achieve their goals.

When considering flow of work, I often reference Eli Goldratt's "Theory of Constraints." Goldratt discusses two types of optimizations that apply to testability:


  • Local - changes that optimize one part of the system, without improving the whole system.
  • Global - changes that improve the flow of the entire system.


If you optimize your own testability but neglect your dependencies, you have only local optimization. That means you can only achieve a flow rate as large as your biggest bottleneck allows. With the horrible dependency I described above, new test data from the dependency took a week to create. This created other challenges. For example, we had to schedule the creation of the data in advance of the work. And when the work was no longer the highest priority, we had wasted time and energy.

How to improve adjacent testability

Establishing that you may have an adjacent testability challenge is one thing; determining what to do about it is another. On the one hand, you could argue that if a dependency is hard to test, it's not your problem. External dependencies might have contractual constraints for reliability, like Service Level Agreements for example. Contracts and reality can be far apart sometimes, and service level agreements are not very effective change agents in my experience, so try engaging in the following ways:

Observability and information flow

Enhance observability to provide real feedback about your interactions with dependencies, rather than logging only your own system events. Interactions with dependencies are part of a journey through your system. Both internal events and dependency interactions should be written to your application logs, exposing the full journey. Replicate this pattern for both production and test environments. The key benefit: you'll provide context-rich information that the people who maintain that dependency can act upon.

For example, after integrating with an external content delivery API for an internal application, we had issues with triggering our request rate limit. We believed the rate limit block triggered too early, as it should have only triggered for non-cache-hit requests. We added the external interactions to our internal application logs, noted that certain more frequent requests needed a longer cache expiry, and worked with the external team to solve the problem.
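
A minimal sketch of that pattern - logging the interaction with the dependency alongside your own events so the full journey shows up in one place. The URL, header and field names are made up for illustration:

```python
# Log each outbound call to a dependency with timing, status and cache information,
# in the same structured format as the application's own events.
import json
import logging
import time
import requests

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("content-delivery")

def fetch_content(path, correlation_id):
    started = time.time()
    response = requests.get(f"https://cdn.example.com{path}", timeout=5)
    log.info(json.dumps({
        "event": "dependency.content_api.call",
        "correlation_id": correlation_id,
        "path": path,
        "status": response.status_code,
        "cache": response.headers.get("X-Cache", "unknown"),
        "duration_ms": round((time.time() - started) * 1000),
    }))
    return response
```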

Controllability and collaboration

Controllability is at its best when it is a shared resource, which encourages early integration between services and, importantly, a dialogue between teams. Feature toggles for new or changed services allow for early consumption of new features, without threatening current functionality. Earlier integration testing between systems addresses risks earlier and builds trust.

As an example, when upgrading a large scale web service by two major versions of PHP, our test approach included providing a feature toggle to redirect to a small pool of servers running the latest version of PHP for that service. Normal traffic went to the old version, while our clients tested their integrations on the new. We provided an early, transparent view of a major change: clients integrated with it, while we also tested for changes internally.
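
A rough sketch of the routing idea, assuming the toggle is a request header; in a real case it might be a cookie, a configuration flag or a user group, and the pool URLs here are invented:

```python
# Route opted-in traffic to the pool running the new version; everyone else stays put.
NEW_VERSION_POOL = "https://new-platform.internal.example.com"
CURRENT_POOL = "https://www.internal.example.com"

def choose_backend(request_headers):
    if request_headers.get("X-Use-New-Platform") == "true":
        return NEW_VERSION_POOL   # early adopters and integration tests go here
    return CURRENT_POOL           # everyone else stays on the proven version

assert choose_backend({"X-Use-New-Platform": "true"}) == NEW_VERSION_POOL
assert choose_backend({}) == CURRENT_POOL
```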

Empathy and Understanding

Systems are not the only interfaces which need to be improved in order to improve adjacent testability; how you empathize with the other teams you depend on needs attention too. Consuming the monitoring, alerting and logging they receive into your own monitoring, alerting and logging setup helps a great deal.

The instance that I often reflect on is a platform migration project I worked on. A Database Administrator I worked with was often obstructive, insisting on tickets being raised for every action. So I added myself to the service disruption alerts email list for that team. Batch jobs we had set up often failed, as we had not considered disk space for temporary files, waking him up at night with alerts. It was a small fix for us, huge for him. We never had a problem with data being moved or created after that.

Summary

Taking a collaborative approach to improving the testability of your dependencies will result in a significant testability improvement for your own system. Keep these principles in mind:


  • Observability and information flow where the whole journey is the aim, including dependencies.
  • Controllability and collaboration to encourage early integration and risk mitigation.
  • Empathy to understand the problems and pain of those who maintain your dependencies.


As a first step, try and build a relationship with the teams that you depend upon. Understanding their challenges and how you might be able to assist can unlock large testability gains for you and your team.

This post was originally published by the great people at TechBeacon here: https://techbeacon.com/testers-guide-overcoming-painful-dependencies

Thursday, 24 May 2018

Going beyond "how are we going to test this?"


Testability is a really important topic for the future of testing. So much so that I believe that it's a really, really strong area for a tester to diversify into to remain relevant and have a major impact in an organisation. After all testability is for every discipline. If you said your mission was to build loosely coupled, observable, controllable and understandable systems, I know a few operations people from my past would have bitten both my hands off. Bringing various disciplines together is what a focus on testability can do.

It begins with asking the right questions when it matters, in a feature inception or story kick off session. Asking this question can be really powerful:

How are we going to test this?

It's a great question: it has the "we" in there, which is a key part of being an actor within a cross functional team. It also opens up the debate on the efficacy of testing on the thing that is about to be built, what types of testing are appropriate and what enhancements are needed to test it effectively (specifically for the types of testing that testers might do). This question often triggers challenges to enhancing testability which you need to be aware of. Have you ever heard any of these?

  • Testability? That sounds like something that only testers should care about. As a developer why should I care?
  • We are really keen to get this feature out before the marketing campaign. What does it matter how testable it is?
  • We will think about testability later, when we have built something sufficient to begin end to end testing.
  • We should focus on the performance and scalability of the system, testability is not as important as those factors. 
  • What I'd really like to know is how will testability make us more money and protect our reputation with our clients? 
  • You can't test this change. It’s just a refactoring, library update or config change exercise which shouldn't have a functional impact.
  • We know its a really big change, but there is no way to split it into valuable chunks.

If faced with these, narrow the focus with questions such as:

  • How can we observe the effects of this new feature on the existing system? (or how decomposable is it)
  • How will we know the server side effects when operating the client? (or how observable is it)
  • How will we set the state that the system needs to be in to start using the feature? (or how controllable is it)
  • When we need to explain what the new feature is doing to a customer, can we explain it clearly? (or how understandable is it)

Great conversations often stem from the question, "how are we going to test this?", but being ready for the challenges that often occur and having focusing questions might be the catalyst to take your testability to the next level.

Footnote and References:

Just so I'm clear, this is what I mean by the four key aspects of intrinsic testability:

  • Decomposable - The extent to which state is isolatable between components, thus knowing when and where events occur.
  • Observable - The extent to which the requests, processing and responses of a layer or component are observable by the team. 
  • Controllable - The extent to which you can set, manipulate and ultimately reset the state of a layer or component to assist testing.
  • Understandable - The extent to which the team can reason about the behaviour of a layer or component and explain it with confidence.

For more on the various aspects of testability (I mostly discuss the intrinsic with a bit of epistemic) have a look at the following blog with a few references to get started - http://testingisbelieving.blogspot.co.uk/2017/01/getting-started-with-testability.html

Friday, 23 March 2018

Why do Testers become Scrum Masters?


It was late and I was stuck on a train, so I pondered the question of why testers often (in my experience) become Scrum Masters. It's a very dear question to me, as it's been a big part of my career journey. In fact, I've been there and back again: Tester to supposed-to-be-testing-but-being-a-Scrum-Master to Scrum Master, back to Tester, and very happy thank you.

I encapsulated my reasoning in the following:
The tweet got a lot of traction, and generated a couple of interesting threads which made me think.
Perhaps part of the reason for the transition is a growing appreciation of where quality has its roots? If testing is a way of providing information about quality, then facilitating a team to work closely together, with their customers and with robust technical practices towards a common goal, has a more direct impact on quality. Testing is but one measure of quality; perhaps transitioning to becoming a Scrum Master meets the need to be able to impact the bigger picture.
The other potential reason, perhaps the more obvious one, is that if the tester career path runs out at a given organisation, or is not appealing, a pivot is required. I have observed this with regard to those who might be called 'manual testers', where the career path is much wider for testers with an interest in the technical pathways of an organisation; the Scrum Master role brings new skills and often greater remuneration.
The part of this that interested me greatly was the end, as this speaks to my world view. I think a great deal about testability and the impact of architecture on testing. It left me wondering that as testers take on more technical roles, perhaps this will be the next migration for testers? For me, if a tester takes a solid appreciation of the value and limits of testing into a new discipline, I don't see that as a reason to be upset, careers evolve and if testing was part of your nurture, more often than not, it persists.

This all contains a reasonable amount of hearsay and bias, so I would love to hear your stories. For transparency, I want to write a talk about this. If you have become a Scrum Master or would like to, or have been on some other comparable journey, get in touch via the comments...




Thursday, 28 December 2017

Testers Guide to Myths of Unit Testing


One area where testers might be able to enhance their contribution to software development teams is how we perceive and contribute to unit testing. I believe testers busting their own illusions about this aspect of building something good would bring us much closer to developers, and help us realise what other layers of testing can cover most effectively.

Also, I want to do a talk about it, so I figured I would test the premise, see if potential audiences were into it. I put this on Twitter:
30 replies with ideas tends to indicate that people might be into it. 

The List

I thought, as my final blog of 2017, I would provide a super useful list of the myths and legends we as testers might believe about unit testing:
  • That developers always write unit tests
  • That developers never write unit tests
  • That testers can write unit tests for developers
  • That developers know what unit testing is
  • That testers know what unit testing is
  • That a class is a unit to test
  • That a function is a unit to test
  • That two people mean the same thing when they say unit
  • That anyone knows what a unit is in the context of their code
  • That unit tests fill in the bottom of the test automation pyramid
  • That unit tests remain in the bottom layer of the test automation pyramid
  • That unit tests are inherently more valuable than other layers of tests
  • That unit tests are fast to run
  • That unit tests are always automated
  • That lots of unit tests are undoubtedly a very good thing
  • That unit tests can eradicate non-determinism completely
  • That unit tests are solitary rather than collaborative
  • That test driven development is about testing
  • That reading unit tests relays the intent of the code being written
  • That unit tests document the behaviours of code
  • That when there are unit tests, refactoring happens
  • That when there are no unit tests, refactoring happens
  • That you never need to maintain and review unit test suites
  • If it's not adding value through quick feedback it needs removing or changing.
  • That unit tests sit outside a testing strategy for a product
  • Because they exist, the unit tests are actually good
  • Assertions are actually good. Checking for absence, as opposed to presence
  • If you have a well designed suite of unit tests you don't need to do much other testing
  • 100% code coverage for a given feature is evidence that the feature works as designed
  • That code is always clean if it has unit tests
  • Unit tests are about finding bugs
  • That there is a unit to test
  • That a failing test indicates what is wrong
  • That one problem = 1 failed test
  • That good unit tests are easy/hard (adapt based on your delivery) to write for non-deterministic functions
  • "That unit test coverage is irrelevant to manual testing"? aka "Why look at them? They're JUST unit tests, we have to check that again anyways."
  • That they may/may not believe that is a tester's responsibility to ensure code quality and consistency of the test suite (and that developers may believe the opposite)
  • That unit tests don't count as "automation" if they do not use the UI
  • That unit testing allows safe refactoring
  • That the intent a developer has when they write the thing they call a unit test (guiding the design) is the same as the intent a tester has when they write the thing they call a unit test (discovery and confidence).
  • That a large number of unit tests can replace integration tests.
  • That unit tests evaluate the product.
  • That false negatives ("gaps" or "escapes") in unit tests are a symptom of not having enough unit tests.
  • Writing unit tests while developing the 'production' code is a waste of time, as the code will change and you'll have to rewrite them. 
  • Having unit tests will prevent bugs
  • That coverage stats give anything useful other than an indicator of a potential problem area.
  • When and how often to run them. And how much confidence that actually gives you
  • That code quality for tests doesn't matter as they're just tests
  • When to write the unit tests (before/after the 'production' code)
  • The difference between a unit test and an integration test
  • That how much coverage you get with unit tests says anything about the quality of your test suite
  • That you don't need additional tests because everything is unit tested
  • That unit tests are the *only* documentation you need
  • That they will be shared with the rest of the team
  • TDD is a testing activity/TDD is a design activity/TDD is both/TDD is neither
  • That the purpose of unit tests is to confirm a change didn't break something
The list is raw and some entries straddle disciplines, as the world is a big, fun muddle despite our efforts to compartmentalise. I hope it's a useful guide to interactions with developers regarding this layer of testing. Next time a developer asks for an opinion on existing unit tests or help with writing new ones, have a look through this list and challenge your assumptions. After all, illusions about code are our business...
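
To make a couple of the myths concrete ("that two people mean the same thing when they say unit", "that unit tests are solitary rather than collaborative"), here's a small illustrative sketch - the domain code is invented:

```python
# The same behaviour tested as a solitary unit (collaborator replaced with a test double)
# and as a sociable unit (real collaborator included). Both are "unit tests" to someone.
from unittest.mock import Mock

class ExchangeRates:
    def rate(self, currency):
        return {"GBP": 1.0, "EUR": 1.15}[currency]

class PriceConverter:
    def __init__(self, rates):
        self.rates = rates

    def to_gbp(self, amount, currency):
        return round(amount / self.rates.rate(currency), 2)

def test_to_gbp_solitary():
    rates = Mock()
    rates.rate.return_value = 2.0                  # the "unit" is PriceConverter alone
    assert PriceConverter(rates).to_gbp(10, "EUR") == 5.0

def test_to_gbp_sociable():
    converter = PriceConverter(ExchangeRates())    # the "unit" is the pair together
    assert converter.to_gbp(11.5, "EUR") == 10.0
```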

Thanks

Thanks to the following for their contributions*:
  • Steven Burton
  • Angie Jones
  • Gav Winter
  • James Sheasby Thomas
  • Dan Billing
  • Peter Russell
  • Joe Stead
  • Colin Ameigh
  • Marc Muller
  • Adrian McKensie
  • Douglas Haskins
  • Mat McLouglin
  • Dan North
  • Josh Gibbs
  • Marit van Dijk
  • Nicola Sedgewick
  • Phil Harper
  • Joep Schuurkes
  • Danny Dainton
  • Gwen Diagram
* If I forgot you, please tell me.