Wednesday, 12 December 2018

How to advance your test automation skills with source control

If you're a tester who wants to get up to speed on coding to become a test automation engineer, don't just start learning to code; start by learning about source control, a methodology for tracking and managing changes to your code. 

At one organisation I worked with recently, testers were busy creating test automation, and even tools to manage test data and application state. All the application code was in source control, but none of the code written by the testers was. It was zipped and emailed, copied to shared file servers and sent by instant message, to name a few of the sharing mechanisms. Versions diverged wildly, and valuable code was lost or overwritten, leading to delayed testing, panicked rewrites of tools and all-round disarray!

Without source control, software development teams are essentially in a state of chaos, terrified of code change. I often meet testers who have no access to their team's source control repository. If this describes you, you aren't truly collaborating closely with your team.

To bridge that gap, here's what you need to know about source control -- and how to get started. This article will cover:

  • Key disadvantages to you as a tester when you cannot access or use source control repositories and tools.
  • How a grounding in source control will help you to acquire the skills you need to grow as an automation engineer.
  • Important learning resources to complement your on the job experience with source control. 

Disadvantages of Being Source Control Unaware

As a tester, you will find yourself at a few key disadvantages if you don't have access to your team's source control repository and don't appreciate what happens within it:

  • Primary source of collaboration - a lot of communication and collaboration happens within source control, and it contains a compelling version of your system's and your team's history. Teams collaborate around a number of artifacts, usually some form of issue-tracking software and a shared wiki, but in my experience these are the least active. The truly active oracle for change is the code itself, where the most collaborative work takes place, during code reviews and merges.
  • Serious risks - if you don't know what your team's source control strategy is, you might be missing serious risks. Long-lived feature branches might indicate integration problems further down the line. When an issue sits on your kanban board for a while, there may also be a feature branch that has been alive just as long and, depending on your release cadence, has drifted further from your master branch. As a result, merges become longer and more difficult, sometimes even leading to re-implementing sections of code.
  • Quality advocacy - as testers, we want to be strong advocates for quality. Effective use of source control is one of the key leading indicators of quality, not just for code but for configuration as well. Advocating for more effective use of source control can help our teams achieve better quality outcomes, especially in the safety and speed of deployments. We can advocate for smaller batch sizes and shorter-lived branches, application configuration in source control, and even storing database schema changes too.
  • Early testing - testers often wait around for a build, waiting for someone to complete a merge and deploy onto a test environment. Imagine being able to test the branch created for your new feature straight away. If you could run the application locally, or deploy to an environment from a branch, you could test earlier, creating a tighter feedback loop with your team and your wider group of stakeholders (see the sketch after this list).
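
As a sketch of what that early testing can look like (the branch name here is hypothetical), checking out and inspecting a colleague's feature branch takes only a handful of git commands:

```bash
# Fetch the latest branches from the shared repository
git fetch origin

# List the remote branches to find the feature you want to test
git branch --remotes

# Check out the (hypothetical) feature branch locally
git checkout feature/new-search

# How many commits the branch is behind/ahead of master,
# a rough indicator of how painful the eventual merge might be
git rev-list --left-right --count origin/master...feature/new-search
```

From here you can run the application locally, or point a build at the branch, and start testing before any merge happens.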

Source Control as Skills Gateway

Learning source control concepts and skills is a gateway to the other technical skills that are valuable for test automation engineers:

  • Bash - although user interfaces exist for popular source control technologies, you will likely be exposed to interactions on the command line, which is where a lot of the developers and system administrators on your team spend their time. This then opens the door to bash for UNIX users, with its many commands for teasing out information (see the example commands after this list). Check out Julia Evans' Wizard Zines for more.
  • Establishing coding habits - if you have access to your application code, you can see the patterns for the code within and learn from them in your own code. Positive behaviours such as striving for single responsibility, appropriate levels of reusability and sensible abstraction between implementation and test are worth looking out for and adding to your coding toolkit. The opposite is also true, where you can learn from the bad examples within your application code!
  • Pipeline integration - without source control, any test automation that you create will be difficult to integrate with your deployment pipeline tooling, where it can add massive post-deployment value. Tools such as Jenkins have built-in source control integration, enabling you to get your test automation code running where it matters while gaining the experience of adding to your team's pipeline.
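
To give a flavour of those commands (the file path below is made up for illustration), here are a few read-only git commands that tease information out of a repository without changing anything:

```bash
# Contributors ranked by number of commits
git shortlog --summary --numbered

# Recent history for a single (hypothetical) file
git log --oneline -- src/payments/PaymentService.java

# Line-by-line authorship for the same file
git blame src/payments/PaymentService.java

# Files that change most often: a rough hotspot indicator
git log --name-only --pretty=format: | grep . | sort | uniq -c | sort -rn | head
```

Each of these is safe to run on any repository you can clone, which makes them a low-risk way to start exploring.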

Sources of Inspiration

My journey into the world of source control began with 'The Git Parable' by Tom Preston-Werner. I learn a great deal from storytelling as a medium, and this parable is no different, teaching concepts through characters and situations that resonate with me. If I were to pick a starting point, it would be this:

http://tom.preston-werner.com/2009/05/19/the-git-parable.html

However, everyone has their own learning style (or, more likely, a mix of styles), so seek out next steps that suit how you like to consume learning material.


Summary

Learning source control is an important step in a test automation engineer's journey:

  • Without source control awareness, you will be limited in collaborating with your team, identifying risks, testing early and advocating for quality.
  • Source control is a gateway to many other skills that will make you an effective test automation engineer: unlocking the potential of the command line, establishing coding habits by recognising patterns, and integrating your tests with your deployment pipeline.

If you want to add value, collaborate with your team and add momentum to your learning, start with source control. Your team and your career will thank you for it.

This post was originally published by the great people at TechBeacon here: https://techbeacon.com/how-advance-your-test-automation-skills-source-control

Sunday, 2 December 2018

Modelling a unit of your system

Sunshine, zebras and great conversations

Over the last few days I've been in South Africa for Lets Test, held on a fantastic country retreat near Johannesburg. I facilitated a session called 'A Testers Guide to the Illusions of Unit Testing'. I have a confession to make: we had two hours to complete the workshop and, in the spirit of too much content being way better than too little, we didn't get to delve deeply into one question in particular. Hence this blog post.

We had great conversations about how unit testing is an interface between testers and developers, a gateway for deep collaboration. This is predicated on an understanding of unit tests, including what they can and can't achieve and what relation they bear to other types of testing. One of the questions we wished to tackle was:
What is a unit of our system?
How many times have you started to test a system and asked, "Hey, what are the unit tests like for this unique piece of software history?" The answer, in my experience, has often been, "Um, yeah, about that: we don't have any, it's too complex/there is no time/it's going to be deprecated one day." Alternatively, you are in a session about the architecture of a brand new system using the latest technologies: microservices, lambdas or whatever. In either case, wouldn't it be useful to be able to facilitate a discussion about what a unit of that system is and what factors influence it? You might find yourself with a simpler, more observable, controllable and understandable system. What's not to like?

Reinventing the wheel



The model itself is another wheel. Other shapes are available, although apparently not when I model something. I added a bunch of segments to the wheel that I think impact the size and shape of a unit in a system's context. The key thing here is context: I know that there are definitions of what a unit test is, contested definitions most of the time. What there isn't, is a definition of what a unit of YOUR system is, the one that YOU work on. Maybe you can use the above to help your team work it out.

There are a few key areas here, which I'll list without delving into each one individually:
  • How the code is stored
  • Who contributes to it
  • What architecture and patterns are present
  • How tightly coupled the system is

The segments also sit on a scale of size, as in, which factors contribute to a unit being large or small. For example:



  • Large - Single Database for Multiple Applications - this system may have a gigantic database bottleneck, where changes made by one application can even impact another. That's a weighty unit.
  • Medium - Broker/Queue Based - maybe messages are routed using the built-in routing capabilities of RabbitMQ? A unit of this system involves invoking (or mocking) multiple systems, so it could be seen to have largeness too.
  • Small - Microservice(s) - a service that does one thing well - this could be a unit in itself. At the very least, it indicates that units of your system might tend towards smallness.

Explaining with questions

The notes below include questions and examples of each of the sections. They may be useful as further prompts:

### Architectural Patterns
* What type of system architecture is employed?
* How many layers or tiers does it have?
* How many roles do components within have?
* Examples:
 * Three tier
 * Microservices
 * Broker/queue based
 * Model-view-controller 

 
### Coding Practices
* How do contributors collaborate over code?
* What design patterns are employed with the code?
* To what extent does testing drive the coding process?
* Examples:
 * Driving Development with Tests
 * Pairing
 * Code reviews
 * SOLID principles
 * Inheritance
 * Abstraction 

### Concurrent Teams 
* How many teams contribute concurrently to the repository?
* Are they internal or external teams?
* Which teams are core maintainers?
* Examples:
 * Internal development teams
 * Outsourced development teams
 * Contributors to key libraries 

### Source Control Strategy
* What strategy does your team employ to manage code?
* What size changes are often checked in?
* How long until changes are integrated with trunk?
* Examples:
 * GitHub Flow
 * GitFlow
 * Trunk based development
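
If your team uses git, a couple of read-only one-liners can start to answer the questions above; for example:

```bash
# Branches ordered by last commit date; long-quiet feature branches,
# likely furthest from trunk, surface at the top of the list
git for-each-ref --sort=committerdate refs/heads \
  --format='%(committerdate:short) %(refname:short)'

# The size and shape of recent changes landing on trunk
git log origin/master --oneline --shortstat -10
```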

### Source Control Repo
* How large is the repository that you have?
* How many linked repositories does it have?
* How does it manage its dependencies?
* Examples:
 * Monolith
 * Monorepo
 * Microlith 

### Size of Objects/Data 
* What are the key data items that the code manages?
* What size are the important items?
* What depends on those important items?
* Examples:
 * Core classes instantiated for every interaction
 * Persistence of customer data
 * Transaction and audit histories 

### Risk to Whole
* What dependencies exist within your code?
* Are there any common objects or structures?
* Are there any areas teams fear to touch?
* Examples:
 * Large common classes with multiple roles
 * Old versions of dependencies
 * Hard coding/configuration in the code 
 
### Time Periods Used
* What is the lifespan of objects/data created by your code?
* Are there wide differences in the lifespan of different types of objects used?
* Examples:
 * Long lived daemons
 * Scheduled tasks
 * Asynchronous HTTP requests
 * Cache size and expiry rules

Over to you

Essentially, the wheel and the guidance above are a prompt to help answer these questions about your own system:

  • What factors help to define a unit of your system? (the segments of the wheel, which can be the ones I added, or if you don't like them, I have others)
  • What practices and patterns influence that unit? (your own ways of working and crafting principles)
  • How do those practices and patterns govern the size of a unit? (how the way you build stuff affects your ability to test small units of that stuff)

Conclusion

As with all models, this one has some holes in it. Not all layers of the system will have the same size of unit, for example. What is a unit when you test a React application using Jest? Is the snapshot file that Jest creates of the rendered HTML a unit? Interesting questions, and a new (and welcome) challenge to the orthodoxy of automated unit testing.

Trying to determine what a unit of your system is will, at the very least, lead to asking some searching questions, which may provoke a reaction within your team. Overall, smallness is the aim for me, and if asking these questions helps to shrink a unit of your system, so much the better. I believe that testers can be powerful catalysts in this regard.


Friday, 17 August 2018

Overcome painful dependencies with improved adjacent testability


We had done it: We had built a testable system. We achieved high observability through monitoring, logging and alerting. We instituted controllability by using feature flags. We understood the build through pairing and cross-discipline-generated acceptance test automation. We aimed for decomposability by using small, well-formed services. But our pipeline was still failing — and failing often.

So what went wrong?

We had a horrible external dependency. Their service suffered from frequent downtime and slow recovery times. Most painful was the fact that test data creation required testers to raise a manual request through a ticketing system. We were dependent on a system with low testability, which undermined our own testability. And this had consequences for our flow of work to our customers.

In this article, I will cover how to address such dependencies and engage with the teams that maintain them, including:


  • Enhancing observability by adding key events from your dependencies to your own logging, monitoring and alerting.
  • Adding controllability to applications, and sharing this ability to foster a culture of collaboration with the teams behind your dependencies.
  • Building empathy with the teams that provide your dependencies; they have their own problems to deal with, and greater understanding will bring teams closer together.


How testability affects flow

Testability has a tangible relationship with the flow of work. If a system and its dependencies are easier to test, work will usually flow smoothly through the pipeline. The key difference is that all disciplines are likely to get involved in testing when it is easier to test. But if a system and its dependencies are hard to test, you're likely to see a queue of tickets in a "Ready to Test" column—or even a testing crunch before release.

To achieve smooth flow, treat your dependencies as equals when it comes to testability.

What is adjacent testability?

This term refers to how testable the systems you depend upon to provide customer value are: the systems you need to integrate with to complete a customer journey. For example, if your system relies on a payment gateway that suffers from low testability, your end-to-end tests may fail often, making release decisions problematic for stakeholders. Most systems have integrations with other systems, both internal and external. Value is generated when those systems work in concert, allowing customers to achieve their goals.

When considering flow of work, I often reference Eli Goldratt's Theory of Constraints. Goldratt discusses two types of optimization that apply to testability:


  • Local - changes that optimize one part of the system without improving the whole system.
  • Global - changes that improve the flow of the entire system.


If you optimize your own testability but neglect your dependencies, you achieve only a local optimization. That means your flow rate is capped by your biggest bottleneck. With the horrible dependency I described above, new test data took a week to create. This created other challenges. For example, we had to schedule the creation of the data in advance of the work, and when the work was no longer the highest priority, we had wasted time and energy.

How to improve adjacent testability

Establishing that you may have an adjacent testability challenge is one thing; determining what to do about it is another. On the one hand, you could argue that if a dependency is hard to test, it's not your problem. External dependencies might have contractual constraints for reliability, such as service-level agreements. But contracts and reality can be far apart, and service-level agreements are not very effective change agents in my experience, so try engaging in the following ways:

Observability and information flow

Enhance observability to provide real feedback about your interactions with dependencies, rather than logging only your own system events. Interactions with dependencies are part of a journey through your system, so both internal events and external interactions should be written to your application logs, exposing the full journey. Replicate this pattern in both production and test environments. The key benefit: you'll provide context-rich information that the people who maintain the dependency can act upon.

For example, after integrating with an external content delivery API for an internal application, we had issues with triggering our request rate limit. We believed the rate limit triggered too early, as it should only have triggered for requests that missed the cache. We added the external interactions to our internal application logs, noticed that certain frequent requests needed a longer cache expiry, and worked with the external team to solve the problem.
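
As a minimal sketch of the pattern (the URL, log path and log format here are hypothetical), a shell wrapper around an outbound call can record the interaction in the same log your application writes to:

```bash
#!/usr/bin/env bash
# Hypothetical example: record a call to an external dependency
# alongside our own application events.
APP_LOG=/var/log/myapp/app.log
DEPENDENCY_URL="https://content-api.example.com/v1/articles"

# curl's --write-out prints status and timing once the request completes
result=$(curl --silent --output /dev/null \
  --write-out 'status=%{http_code} total_time=%{time_total}s' \
  "$DEPENDENCY_URL")

# One context-rich log line per interaction, in both test and production
echo "$(date --iso-8601=seconds) dependency=content-api $result" >> "$APP_LOG"
```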

Controllability and collaboration

Controllability is at its best when it is a shared resource, which encourages early integration between services and, importantly, a dialogue between teams. Feature toggles for new or changed services allow early consumption of new features without threatening current functionality. Earlier integration testing between systems addresses risks sooner and builds trust.

As an example, when upgrading a large-scale web service by two major versions of PHP, our test approach included providing a feature toggle that redirected requests to a small pool of servers running the latest version of PHP for that service. Normal traffic went to the old version, while our clients tested their integrations against the new one. We provided an early, transparent view of a major change: clients integrated with it while we also tested for changes internally.
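
As a sketch of what such a toggle can look like from the testing side (the toggle variable, URLs and test script are invented for illustration), the same suite can be pointed at either pool:

```bash
#!/usr/bin/env bash
# Hypothetical toggle: aim the test run at the old or the new server pool.
if [ "${USE_NEW_PHP_POOL:-false}" = "true" ]; then
  BASE_URL="https://new-pool.internal.example.com"
else
  BASE_URL="https://www.example.com"
fi

# The same tests run against whichever pool the toggle selects,
# so clients and internal testers exercise identical journeys.
./run-integration-tests.sh --base-url "$BASE_URL"
```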

Empathy and Understanding

Systems are not the only interfaces that need improving in order to improve adjacent testability; how you empathize with the teams you depend on needs attention too. Consuming the monitoring, alerting and logging they receive into your own monitoring, alerting and logging setup helps a great deal.

The instance I often reflect on is a platform migration project I worked on, where a database administrator was often obstructive, insisting on tickets being raised for every action. So I added myself to that team's service disruption alerts email list. Batch jobs we had set up often failed, because we had not considered disk space for temporary files, waking him up at night with alerts. The fix was small for us, but huge for him. We never had a problem with data being moved or created after that.

Summary

Taking a collaborative approach to improving the testability of your dependencies will result in a significant testability improvement for your own system. Keep these principles in mind:


  • Observability and information flow where the whole journey is the aim, including dependencies.
  • Controllability and collaboration to encourage early integration and risk mitigation.
  • Empathy to understand the problems and pain of those who maintain your dependencies.


As a first step, try and build a relationship with the teams that you depend upon. Understanding their challenges and how you might be able to assist can unlock large testability gains for you and your team.

This post was originally published by the great people at TechBeacon here: https://techbeacon.com/testers-guide-overcoming-painful-dependencies

Thursday, 24 May 2018

Going beyond "how are we going to test this?"


Testability is a really important topic for the future of testing. So much so that I believe it's a really strong area for a tester to diversify into in order to remain relevant and have a major impact in an organisation. After all, testability is for every discipline. If you said your mission was to build loosely coupled, observable, controllable and understandable systems, I know a few operations people from my past who would have bitten both my hands off. Bringing various disciplines together is what a focus on testability can do.

It begins with asking questions when it matters, in a feature inception or story kick-off session. This question can be really powerful:

How are we going to test this?

It's a great question. It has the "we" in there, which is key for an actor within a cross-functional team. It also opens up the debate on the efficacy of testing the thing that is about to be built, what types of testing are appropriate, and what enhancements are needed to test it effectively (specifically for the types of testing that testers might do). This question often triggers challenges to enhancing testability, which you need to be aware of. Have you ever heard any of these?

  • Testability? That sounds like something that only testers should care about. As a developer why should I care?
  • We are really keen to get this feature out before the marketing campaign. What does it matter how testable it is?
  • We will think about testability later, when we have built something sufficient to begin end to end testing.
  • We should focus on the performance and scalability of the system, testability is not as important as those factors. 
  • What I'd really like to know is how will testability make us more money and protect our reputation with our clients? 
  • You can't test this change. It’s just a refactoring, library update or config change exercise which shouldn't have a functional impact.
  • We know it's a really big change, but there is no way to split it into valuable chunks.

If faced with these, narrow the focus with questions such as:

  • How can we observe the effects of this new feature on the existing system? (or how decomposable is it)
  • How will we know the server side effects when operating the client? (or how observable is it)
  • How will we set the state that the system needs to be in to start using the feature? (or how controllable is it)
  • When we need to explain what the new feature is doing to a customer, can we explain it clearly? (or how understandable is it)

Great conversations often stem from the question "How are we going to test this?", but being ready for the challenges that often occur, and having focusing questions to hand, might be the catalyst that takes your testability to the next level.

Footnote and References:

Just so I'm clear, this is what I mean by the four key aspects of intrinsic testability:

  • Decomposable - The extent to which state is isolatable between components, thus knowing when and where events occur.
  • Observable - The extent to which the requests, processing and responses of a layer or component are observable by the team. 
  • Controllable - The extent to which you can set, manipulate and ultimately reset the state of a layer or component to assist testing.
  • Understandable - The extent to which the team can reason about the behaviour of a layer or component and explain it with confidence.

For more on the various aspects of testability (I mostly discuss the intrinsic with a bit of epistemic) have a look at the following blog with a few references to get started - http://testingisbelieving.blogspot.co.uk/2017/01/getting-started-with-testability.html

Friday, 23 March 2018

Why do Testers become Scrum Masters?


It was late and I was stuck on a train, so I pondered the question of why testers often (in my experience) become Scrum Masters. It's a question very dear to me, as it's been a big part of my career journey. In fact, I've been there and back again: tester, to supposed-to-be-testing-but-really-a-Scrum-Master, to Scrum Master, back to tester, and very happy thank you.

I encapsulated my reasoning in a tweet, which got a lot of traction and generated a couple of interesting threads that made me think.

Perhaps part of the reason for the transition is a growing appreciation of where quality has its roots. If testing is a way of providing information about quality, then facilitating a team to work closely together, with their customers and with robust technical practices towards a common goal, has a more direct impact on quality. Testing is but one measure of quality; perhaps transitioning to Scrum Master meets the need to impact the bigger picture.

The other potential reason, perhaps the more obvious one, is that if the tester career path runs out at a given organisation, or is not appealing, a pivot is required. I have observed this particularly with those who might be called 'manual testers'; the career path is much wider for testers with an interest in the technical pathways of an organisation, and the Scrum Master role brings new skills and often greater remuneration.

The part of this that interested me most was the end, as it speaks to my world view. I think a great deal about testability and the impact of architecture on testing. It left me wondering, as testers take on more technical roles, whether this will be the next migration for testers. For me, if a tester takes a solid appreciation of the value and limits of testing into a new discipline, I don't see that as a reason to be upset. Careers evolve, and if testing was part of your nurture, more often than not it persists.

This all contains a reasonable amount of hearsay and bias, so I would love to hear your stories. For transparency, I want to write a talk about this. If you have become a Scrum Master or would like to, or have been on some other comparable journey, get in touch via the comments...