Wednesday, 12 December 2018

How to advance your test automation skills with source control


If you're a tester who wants to get up to speed on coding to become a test automation engineer, don't just start learning to code; start by learning about source control, a methodology for tracking and managing changes to your code. 

In my recent past, testers at one organisation I worked with were busy creating test automation and even tools to manage test data and application state. All the application code was in source control, but none of the code written by the testers was. It was zipped and emailed, copied to shared file servers and sent by instant message, to name a few of the sharing mechanisms. Versions diverged wildly and valuable code was lost or overwritten, leading to delayed testing, panic rewrites of tools and all-round disarray!

Without source control, software development teams are essentially in a state of chaos, terrified of code change. I often meet testers without access to their team's source control repository. If this describes you as a tester, you won't be truly collaborating closely with your team.

To bridge that gap, here's what you need to know about source control -- and how to get started. This article will cover:

  • Key disadvantages to you as a tester when you cannot access or use source control repositories and tools.
  • How a grounding in source control will help you to acquire the skills you need to grow as an automation engineer.
  • Important learning resources to complement your on-the-job experience with source control.

Disadvantages of Being Source Control Unaware

There are a few key disadvantages you will face as a tester if you don't have access to your team's source control repository and don't appreciate what happens within it:

  • Primary source of collaboration - a lot of communication and collaboration happens within source control, and it contains a compelling version of your system's and your team's history. There are a number of artifacts that teams collaborate around, usually some form of issue-tracking software or a shared wiki, but in my experience these are the least active. The truly active oracle for change is the code itself, where the most collaborative work takes place, during code reviews and merges.
  • Serious risks - if you don't know what your team's source control strategy is, you might be blind to serious risks. Long-lived feature branches might indicate integration problems later down the line. When an issue has sat on your kanban board for a while, there might also be a feature branch that has been alive just as long and, depending on your release cadence, is drifting further away from your master branch. As a result, merges might take longer and be more difficult, even leading to re-implementing sections of code.
  • Quality advocacy - as testers, we want to be strong advocates for quality. Effective use of source control is one of the key leading indicators of quality, not just for code but for configuration as well. Advocating for more effective use of source control can help our teams achieve better quality outcomes, especially in the safety and speed of deployments. We can advocate for smaller batch sizes and shorter-lived branches, application configuration in source control, even storing database schema changes too.
  • Early testing - testers often wait around for a build, waiting for someone to complete a merge and deploy onto a test environment. Imagine being able to test the branch that has been created for your new feature straight away. If you were able to run an application locally or build to an environment from a branch, you could test earlier, creating a tighter feedback loop with your team and a wider group of stakeholders.

Source Control as Skills Gateway

Learning source control concepts and skills is a gateway to the other technical skills that are valuable for test automation engineers to learn:

  • Bash - although user interfaces exist for popular source control technologies, you will likely be exposed to interactions on the command line, which is where a lot of developers and system administrators on your team spend their time. This then opens the door to bash for UNIX users, with its many commands for teasing out information. Check out Julia Evans' Wizard Zines for more.
  • Establishing coding habits - if you have access to your application code, you can see the patterns within it and learn from them in your own code. Positive behaviours such as striving for single responsibility, appropriate levels of reusability and sensible abstraction between implementation and test are worth looking out for and adding to your coding toolkit (see the sketch after this list). The opposite is also true: you can learn from the bad examples within your application code!
  • Pipeline integration - without source control, any test automation that you create will be difficult to integrate with your deployment pipeline tooling, where it can add massive post-deployment value. Tools such as Jenkins have built-in source control integration, enabling you to get your test automation code running where it matters, and giving you the experience of adding to your team's pipeline.
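To make the coding-habits point concrete, here is a minimal, hypothetical sketch of the kind of abstraction between implementation and test you might spot (and imitate) in a well-kept codebase. None of these names come from a real project, and the fake driver only exists to keep the example self-contained and runnable with pytest.

```python
# Hypothetical page-object style abstraction: the test expresses intent,
# while the page object owns selectors and mechanics (single responsibility).

class LoginPage:
    """Owns the implementation detail of how login happens."""

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.fill("#username", username)
        self.driver.fill("#password", password)
        self.driver.click("#submit")

    def error_message(self):
        return self.driver.text_of("#error")


class FakeDriver:
    """Stands in for a real browser driver so the example runs anywhere."""

    def fill(self, selector, value):
        pass

    def click(self, selector):
        pass

    def text_of(self, selector):
        return "Invalid credentials"


def test_wrong_password_shows_error():
    # The test reads as intent only; if a selector changes, only LoginPage changes.
    page = LoginPage(FakeDriver())
    page.login("tester", "wrong-password")
    assert "Invalid credentials" in page.error_message()
```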

Sources of Inspiration

My journey into the world of source control began with 'The Git Parable' by Tom Preston-Werner. I learn a great deal from storytelling as a medium and this parable is no different, teaching concepts with reference to characters and situations that resonate with me. If I were to pick a starting point, it would be this:

http://tom.preston-werner.com/2009/05/19/the-git-parable.html

However, everyone has their own learning style (or more likely a mix of styles), so here are some next steps, depending on how you like to consume learning material:


Summary

Learning source control is an important step in a test automation engineer's journey:

  • Without source control awareness, you will find limits to how you collaborate with your team, identify risks, test early and advocate for quality.
  • Source control is a gateway to many other skills that will make you an effective test automation engineer: unlocking the potential of the command line, establishing coding habits by recognising patterns, and integrating your tests with your deployment pipeline.

If you want to add value, collaborate with your team and add momentum to your learning, start with source control. Your team and your career will thank you for it.

This post was originally published by the great people at TechBeacon here: https://techbeacon.com/how-advance-your-test-automation-skills-source-control

Sunday, 2 December 2018

Modelling a unit of your system

Sunshine, zebras and great conversations

Over the last few days I've been in South Africa for Lets Test, held at a fantastic country retreat near Johannesburg. I facilitated a session called 'A Tester's Guide to the Illusions of Unit Testing.' I have a confession to make. We had two hours to complete the workshop and, in the spirit of too much content being way better than too little, we didn't get to delve deeply into one question in particular. Hence this blog post.

We had great conversations about how unit testing is an interface between testers and developers, a gateway for deep collaboration. This is predicated on an understanding of unit testing, including what unit tests can and can't achieve and what relation they bear to other types of testing. One of the questions we wished to tackle was:
What is a unit of our system?
How many times have you started to test a system and asked, "Hey, what are the unit tests like for this unique piece of software history?" The answer in my experience has often been, "um, yeah, about that, we don't have any, it's too complex/there is no time/it's going to be deprecated one day." Or alternatively you are involved in a session about the architecture of a brand new system using the latest technologies: microservices, lambdas or whatever. In either case, wouldn't it be useful to be able to facilitate a discussion about what a unit of that system is and what factors influence it? You might find yourself with a simpler, more observable, controllable and understandable system. What's not to like?

Reinventing the wheel



The model itself is another wheel. Other shapes are available, although apparently not when I model something. I added a bunch of segments to the wheel that I think impact the size and shape of a unit in a system's context. The key thing here is context: I know there are definitions of what a unit test is, contested definitions most of the time. What there isn't is a definition of what a unit of YOUR system is, the one that YOU work on. Maybe you can use the above to help your team work it out.

There are key areas here, without delving into each one individually:
  • How the code is stored
  • Who contributes to it
  • What architecture and patterns are present
  • How tightly coupled the system is

The segments also sit on a scale of size: what factors contribute to a unit being large or small? For example:



  • Large - Single Database for Multiple Applications - this system may have a gigantic database bottleneck; perhaps changes that one application makes can even impact another. That's a weighty unit.
  • Medium - Broker/Queue Based - maybe messages are routed using the built-in routing configuration capabilities of RabbitMQ? A unit of this system involves invoking (or mocking) multiple systems, so it could be seen to have some largeness too (a small sketch follows this list).
  • Small - Microservice(s) - a service that does one thing well - this could be a unit in itself. At the very least, it indicates that units of your system might tend towards smallness.
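As a hypothetical illustration of that middle ground, here is a minimal sketch of testing a unit of a broker/queue based system: the event handler is treated as the unit, and the broker interaction is replaced with a mock. The handler, exchange name and event shape are invented for this example; a real system might publish through pika or another RabbitMQ client.

```python
# Hypothetical handler from a queue-based system; the broker is mocked so the
# unit under test is the handler's behaviour, not the messaging infrastructure.
from unittest.mock import Mock


def handle_order_created(event, publisher):
    """React to one event and publish a follow-up message."""
    if event["total"] <= 0:
        raise ValueError("order total must be positive")
    publisher.publish("invoices", {"order_id": event["id"], "amount": event["total"]})


def test_valid_order_publishes_invoice_request():
    publisher = Mock()  # stands in for the real broker client
    handle_order_created({"id": 42, "total": 19.99}, publisher)
    publisher.publish.assert_called_once_with(
        "invoices", {"order_id": 42, "amount": 19.99}
    )
```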

Explaining with questions

The notes below include questions and examples of each of the sections. They may be useful as further prompts:

### Architectural Patterns
* What type of system architecture is employed?
* How many layers or tiers does it have?
* How many roles do components within have?
* Examples:
 * Three tier
 * Microservices
 * Broker/queue based
 * Model-view-controller 

 
### Coding Practices
* How do contributors collaborate over code?
* What design patterns are employed with the code?
* To what extent does testing drive the coding process?
* Examples:
 * Driving Development with Tests
 * Pairing
 * Code reviews
 * SOLID principles
 * Inheritance
 * Abstraction 

### Concurrent Teams 
* How many teams contribute concurrently to the repository?
* Are they internal or external teams?
* Which teams are core maintainers?
* Examples:
 * Internal development teams
 * Outsourced development teams
 * Contributors to key libraries 

### Source Control Strategy
* What strategy does your team employ to manage code?
* What size changes are often checked in?
* How long until changes are integrated with trunk?
* Examples:
 * GitHub Flow
 * GitFlow
 * Trunk based development

### Source Control Repo
* How large is the repository that you have?
* How many linked repositories does it have?
* How does it manage its dependencies?
* Examples:
 * Monolith
 * Monorepo
 * Microlith 

### Size of Objects/Data 
* What are the key data items that the code manages?
* What size are the important items?
* What depends on those important items?
* Examples:
 * Core classes instantiated for every interaction
 * Persistence of customer data
 * Transaction and audit histories 

### Risk to Whole
* What dependencies exist within your code?
* Are there any common objects or structures?
* Are there any areas teams fear to touch?
* Examples:
 * Large common classes with multiple roles
 * Old versions of dependencies
 * Hard coding/configuration in the code 
 
### Time Periods Used
* What is the lifespan of objects/data created by your code?
* Are there wide differences in the lifespan of different types of objects used?
* Examples:
 * Long lived daemons
 * Scheduled tasks
 * Asynchronous HTTP requests
 * Cache size and expiry rules

Over to you

Essentially, the wheel and the guidance above are prompts to help answer these questions about your own system.

  • What factors help to find a unit of your system? (the segments of the wheel, which can be the ones I added, or if you don't like them I have others)
  • What practices & patterns influence that unit? (your own ways of working and crafting principles)
  • How do those practices and patterns govern the size of a unit? (how the way you build stuff affects your ability to test small units of that stuff)

Conclusion

As with all models, this has some holes in it. Not all layers of the system will have the same size of unit, for example. What is a unit when you test a React application using Jest? What is the unit when snapshot files of rendered HTML are being created and compared? Interesting questions, plus a new and (welcome) challenge to the orthodoxy of automated/unit testing.

Trying to determine what a unit of your system is will, at the very least, lead to asking some searching questions which may provoke a reaction within your team. Overall, smallness is the aim for me, and asking these questions may well serve to help shrink a unit of your system. I believe that testers can be powerful catalysts in this regard.


Friday, 17 August 2018

Overcome painful dependencies with improved adjacent testability


We had done it: We had built a testable system. We achieved high observability through monitoring, logging and alerting. We instituted controllability by using feature flags. We understood the build through pairing and cross-discipline-generated acceptance test automation. We aimed for decomposability by using small, well-formed services. But our pipeline was still failing — and failing often.

So what went wrong?

We had a horrible external dependency. Their service suffered from frequent downtime and slow recovery times. Most painful was the fact that test data creation required testers to raise a manual request through a ticketing system. We were dependent on a system with low testability, which undermined our own testability. And this had consequences for our flow of work to our customers.

In this article, I will cover how to address such dependencies and engage with the teams that maintain them, including:


  • Enhancing observability by adding key events from your dependencies to your own logging, monitoring and alerting.
  • Adding controllability to applications and sharing it, in order to foster a culture of collaboration with your dependencies.
  • Building greater empathy with the teams that provide your dependencies; they have their own problems to deal with, and greater understanding will bring teams closer together.


How testability affects flow

Testability has a tangible relationship with the flow of work. If a system and its dependencies are easier to test, then work will usually flow through its pipeline. The key difference is that all disciplines are likely to get involved in testing if it is easier to test. But if a system and its dependencies are hard to test, you're likely to see a queue of tickets in a "Ready to Test" column—or even a testing crunch time before release.

To achieve smooth flow, treat your dependencies as equals when it comes to testability.

What is adjacent testability?

Adjacent testability refers to how testable the systems you depend upon to provide customer value are: the systems you need to integrate with to complete a customer journey. For example, if your system relies on a payment gateway which suffers from low testability, your end-to-end tests may fail often, making release decisions problematic for stakeholders. Most systems integrate with other systems, internal and external. Value is generated when those systems work in concert, allowing customers to achieve their goals.

When considering flow of work, I often reference Eli Goldratt's "Theory of Constraints." Goldratt discusses two types of optimization that apply to testability:


  • Local - changes that optimize one part of the system, without improving the whole system.
  • Global - changes that improve the flow of the entire system.


If you optimize your own testability but neglect your dependencies, you have only local optimization. That means you can only achieve a flow rate as large as your biggest bottleneck allows. With the horrible dependency I described above, new test data took a week to create. This created other challenges. For example, we had to schedule the creation of the data in advance of the work. And when the work was no longer the highest priority, we had wasted time and energy.

How to improve adjacent testability

Establishing that you may have an adjacent testability challenge is one thing; determining what to do about it is another. On the one hand, you could argue that if a dependency is hard to test, it's not your problem. External dependencies might have contractual constraints for reliability, such as service level agreements. But contracts and reality can be far apart, and in my experience service level agreements are not very effective change agents, so try engaging in the following ways:

Observability and information flow

Enhance observability to provide real feedback about your interactions with dependencies, rather than logging only your own system events. Interactions with dependencies are part of a journey through your system. Both internal events and dependency interactions should be written to your application logs, exposing the full journey. Replicate this pattern in both production and test environments. The key benefit: you'll provide context-rich information that the people who maintain that dependency can act upon.

For example, after integrating an internal application with an external content delivery API, we had issues with hitting its request rate limit. We believed the rate limit triggered too early, as it should only have applied to requests that missed the cache. We added the external interactions to our internal application logs, noted that certain more frequent requests needed a longer cache expiry, and worked with the external team to solve the problem.
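Here is a minimal sketch of the pattern, not taken from the original incident: a hypothetical checkout service logs its own event and the dependency's response under one correlation id, so the full journey is visible in the application logs. The gateway URL and field names are invented, and the requests library is assumed.

```python
# Hypothetical example of logging dependency interactions alongside internal events.
import logging
import uuid

import requests

log = logging.getLogger("checkout")
logging.basicConfig(level=logging.INFO)


def charge_card(amount_pence, correlation_id=None):
    correlation_id = correlation_id or str(uuid.uuid4())
    log.info("charge requested amount=%s correlation_id=%s", amount_pence, correlation_id)

    # The call to the dependency is logged with the same correlation id,
    # giving the team that maintains it context they can act upon.
    response = requests.post(
        "https://payments.example.com/charges",  # invented endpoint
        json={"amount": amount_pence},
        timeout=5,
    )
    log.info(
        "payment gateway responded status=%s elapsed_ms=%.0f correlation_id=%s",
        response.status_code,
        response.elapsed.total_seconds() * 1000,
        correlation_id,
    )
    return response.status_code == 201
```

The same wrapper can run in test environments, so a failing end-to-end test leaves behind a log line the dependency's team can search for.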

Controllability and collaboration

Controllability is at its best when it is a shared resource, which encourages early integration between services and, importantly, a dialogue between teams. Feature toggles for new or changed services allow for early consumption of new features without threatening current functionality. Earlier integration testing between systems addresses risks sooner and builds trust.

As an example, when upgrading a large-scale web service by two major versions of PHP, our test approach included providing a feature toggle to redirect traffic to a small pool of servers running the latest version of PHP for that service. Normal traffic went to the old version, while our clients tested their integrations on the new one. We provided an early, transparent view of a major change: clients integrated with it while we also tested for changes internally.
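A minimal, hypothetical sketch of that kind of toggle: requests from clients who have opted in are routed to the upgraded pool, while everyone else stays on the current one. The pool URLs and client ids are invented for illustration; a real implementation would read the toggle from configuration or a feature-flag service.

```python
# Hypothetical toggle-based routing between the current and upgraded pools.
OPTED_IN_CLIENTS = {"client-a", "client-b"}  # shared with the teams testing early

UPGRADED_POOL = "https://service-next.internal.example.com"
CURRENT_POOL = "https://service.internal.example.com"


def upstream_for(client_id, toggle_enabled=True):
    """Pick the upstream pool for a request based on the feature toggle."""
    if toggle_enabled and client_id in OPTED_IN_CLIENTS:
        return UPGRADED_POOL
    return CURRENT_POOL


assert upstream_for("client-a") == UPGRADED_POOL   # opted-in client tests the new version
assert upstream_for("client-z") == CURRENT_POOL    # everyone else is unaffected
```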

Empathy and Understanding

Systems are not the only interfaces which need to be improved in order to improve adjacent testability; how you empathize with the other teams you depend on needs attention too. Consuming the monitoring, alerting and logging they receive into your own monitoring, alerting and logging setup helps a great deal.

The instance that I often reflect on is a platform migration project I worked on, where a database administrator was often obstructive, insisting on tickets being raised for every action. So I added myself to that team's service disruption alerts email list. Batch jobs we had set up often failed because we had not considered disk space for temporary files, waking him up at night with alerts. A small fix for us, huge for him. We never had a problem with data being moved or created after that.

Summary

Taking a collaborative approach to improving the testability of your dependencies will result in a significant testability improvement for your own system. Keep these principles in mind:


  • Observability and information flow where the whole journey is the aim, including dependencies.
  • Controllability and collaboration to encourage early integration and risk mitigation.
  • Empathy to understand the problems and pain of those who maintain your dependencies.


As a first step, try and build a relationship with the teams that you depend upon. Understanding their challenges and how you might be able to assist can unlock large testability gains for you and your team.

This post was originally published by the great people at TechBeacon here: https://techbeacon.com/testers-guide-overcoming-painful-dependencies

Thursday, 24 May 2018

Going beyond "how are we going to test this?"


Testability is a really important topic for the future of testing. So much so that I believe that it's a really, really strong area for a tester to diversify into to remain relevant and have a major impact in an organisation. After all testability is for every discipline. If you said your mission was to build loosely coupled, observable, controllable and understandable systems, I know a few operations people from my past would have bitten both my hands off. Bringing various disciplines together is what a focus on testability can do.

It begins with asking powerful questions when it matters, in a feature inception or story kick-off session. This question in particular can make a real difference:

How are we going to test this?

It's a great question. It has the "we" in there, which is key when you are an actor within a cross-functional team. It also opens up the debate on the efficacy of testing the thing that is about to be built, what types of testing are appropriate, and the enhancements needed to test it effectively (specifically for the types of testing that testers might do). The question often triggers challenges to enhancing testability, which you need to be aware of. Have you ever heard any of these?

  • Testability? That sounds like something that only testers should care about. As a developer why should I care?
  • We are really keen to get this feature out before the marketing campaign. What does it matter how testable it is?
  • We will think about testability later, when we have built something sufficient to begin end to end testing.
  • We should focus on the performance and scalability of the system, testability is not as important as those factors. 
  • What I'd really like to know is how will testability make us more money and protect our reputation with our clients? 
  • You can't test this change. It’s just a refactoring, library update or config change exercise which shouldn't have a functional impact.
  • We know its a really big change, but there is no way to split it into valuable chunks.

If faced with these, narrow the focus with questions such as:

  • How can we observe the effects of this new feature on the existing system? (or how decomposable is it)
  • How will we know the server side effects when operating the client? (or how observable is it)
  • How will we set the state that the system needs to be in to start using the feature? (or how controllable is it)
  • When we need to explain what the new feature is doing to a customer, can we explain it clearly? (or how understandable is it)

Great conversations often stem from the question "how are we going to test this?", but being ready for the challenges that often occur and having focusing questions to hand might be the catalyst to take your testability to the next level.

Footnote and References:

Just so I'm clear, this is what I mean by the four key aspects of intrinsic testability:

  • Decomposable - The extent to which state is isolatable between components, thus knowing when and where events occur.
  • Observable - The extent to which the requests, processing and responses of a layer or component are observable by the team. 
  • Controllable - The extent to which you can set, manipulate and ultimately reset the state of a layer or component to assist testing.
  • Understandable - The extent to which the team can reason about the behaviour of a layer or component and explain it with confidence.

For more on the various aspects of testability (I mostly discuss the intrinsic with a bit of epistemic) have a look at the following blog with a few references to get started - http://testingisbelieving.blogspot.co.uk/2017/01/getting-started-with-testability.html

Friday, 23 March 2018

Why do Testers become Scrum Masters?


It was late and I was stuck on a train, so I pondered the question of why testers often (in my experience) become Scrum Masters. It's a question very dear to me, as it's been a big part of my career journey. In fact, I've been there and back again: Tester to supposed-to-be-testing-but-being-a-Scrum-Master to Scrum Master, back to Tester and very happy, thank you.

I encapsulated my reasoning in the following:
The tweet got a lot of traction, and generated a couple of interesting threads which made me think.
Perhaps part of the reason for the transition is a growing appreciation of where quality has its roots. If testing is a way of providing information about quality, then facilitating a team to work closely together, with their customers and with robust technical practices towards a common goal, has a more direct impact on quality. Testing is but one measure of quality; perhaps transitioning to Scrum Master meets the need to impact the bigger picture.
The other potential reason, and perhaps the more obvious one, is that if the tester career path runs out at a given organisation, or is not appealing, a pivot is required. I have observed this particularly with those who might be called 'manual testers': the career path is much wider for testers with an interest in the technical pathways of an organisation, and the Scrum Master role brings new skills and often greater remuneration.
The part of this that interested me most was the end of the thread, as it speaks to my world view. I think a great deal about testability and the impact of architecture on testing. It left me wondering whether, as testers take on more technical roles, this will be the next migration for testers. For me, if a tester takes a solid appreciation of the value and limits of testing into a new discipline, I don't see that as a reason to be upset; careers evolve, and if testing was part of your nurture, more often than not, it persists.

This all contains a reasonable amount of hearsay and bias, so I would love to hear your stories. For transparency, I want to write a talk about this. If you have become a Scrum Master or would like to, or have been on some other comparable journey, get in touch via the comments...




Thursday, 28 December 2017

Testers Guide to Myths of Unit Testing


One area where testers might be able to enhance their contributions to software development teams is how we perceive and contribute to unit testing. I believe testers busting their own illusions about this aspect of building something good would bring us much closer to developers, and help us realise what the other layers of testing can cover most effectively.

Also, I want to do a talk about it, so I figured I would test the premise, see if potential audiences were into it. I put this on Twitter:
30 replies with ideas tends to indicate that people might be into it. 

The List

I thought, as my final blog of 2017, I would provide a super useful list of the myths and legends we as testers might believe about unit testing:
  • That developers always write unit tests
  • That developers never write unit tests
  • That testers can write unit tests for developers
  • That developers know what unit testing is
  • That testers know what unit testing is
  • That a class is a unit to test
  • That a function is a unit to test
  • That two people mean the same thing when they say unit
  • That anyone knows what a unit is in the context of their code
  • That unit tests fill in the bottom of the test automation pyramid
  • That unit tests remain in the bottom layer of the test automation pyramid
  • That unit tests are inherently more valuable than other layers of tests
  • That unit tests are fast to run
  • That unit tests are always automated
  • That lots of unit tests are undoubtedly a very good thing
  • That unit tests can eradicate non-determinism completely
  • That unit tests are solitary rather than collaborative
  • That test driven development is about testing
  • That reading unit tests relays the intent of the code being written
  • That unit tests document the behaviours of code
  • That when there are unit tests, refactoring happens
  • That when there are no unit tests, refactoring happens
  • That you never need to maintain and review unit test suites
  • If it's not adding value through quick feedback it needs removing or changing.
  • That unit tests sit outside a testing strategy for a product
  • Because they exist, the unit tests are actually good
  • Assertions are actually good. Checking for absence, as opposed to presence
  • If you have a well designed suite of unit tests you don't need to do much other testing
  • 100% code coverage for a given feature is evidence that the feature works as designed
  • That code is always clean if it has unit tests
  • Unit tests are about finding bugs
  • That there is a unit to test
  • That a failing test indicates what is wrong
  • That one problem = 1 failed test
  • That good unit tests are easy/hard (adapt based on your delivery) to write for non-deterministic functions
  • "That unit test coverage is irrelevant to manual testing"? aka "Why look at them? They're JUST unit tests, we have to check that again anyways."
  • That they may/may not believe that is a tester's responsibility to ensure code quality and consistency of the test suite (and that developers may believe the opposite)
  • That unit tests don't count as "automation" if they do not use the UI
  • That unit testing allows safe refactoring
  • That the intent a developer has when they write the thing they call a unit test (guiding the design) is the same as the intent a tester has when they write the thing they call a unit test (discovery and confidence).
  • That a large number of unit tests can replace integration tests.
  • That unit tests evaluate the product.
  • That false negatives ("gaps" or "escapes") in unit tests are a symptom of not having enough unit tests.
  • Writing unit tests while developing the 'production' code is a waste of time, as the code will change and you'll have to rewrite them. 
  • Having unit tests will prevent bugs
  • That coverage stats give anything useful other than an indicator of a potential problem area.
  • When and how often to run them. And how much confidence that actually gives you
  • That code quality for tests doesn't matter as they're just tests
  • When to write the unit tests (before/after the 'production' code)
  • The difference between a unit test and an integration test
  • That how much coverage you get with unit tests says anything about the quality of your test suite
  • That you don't need additional tests because everything is unit tested
  • That unit tests are the *only* documentation you need
  • That they will be shared with the rest of the team
  • TDD is a testing activity/TDD is a design activity/TDD is both/TDD is neither
  • That the purpose of unit tests is to confirm a change didn't break something
The list is raw and some entries straddle disciplines, as the world is a big, fun muddle despite our efforts to compartmentalise. I hope it's a useful guide to interactions with developers regarding this layer of testing. Next time a developer asks for an opinion on existing unit tests or help with writing new ones, have a look through this list and challenge your assumptions. After all, illusions about code are our business...
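To make one of those myths concrete (that 100% code coverage for a feature is evidence the feature works as designed), here is a small hypothetical sketch. The function and test are invented; the test executes every line and both branches, yet the interesting boundary is never exercised.

```python
# Hypothetical example: full coverage without testing the boundary that matters.

def apply_discount(quantity, unit_price):
    """Apply a 10% bulk discount to orders of more than 100 items."""
    total = quantity * unit_price
    if quantity > 100:  # should the discount apply at exactly 100? Nobody checked.
        total = total * 0.9
    return total


def test_apply_discount_hits_every_line():
    assert apply_discount(200, 1.0) == 180.0  # discount branch
    assert apply_discount(10, 1.0) == 10.0    # no-discount branch
    # Coverage reports 100%, but quantity == 100 was never exercised,
    # so the "more than 100" claim remains untested.
```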

Thanks

Thanks to the following for their contributions*:
  • Steven Burton
  • Angie Jones
  • Gav Winter
  • James Sheasby Thomas
  • Dan Billing
  • Peter Russell
  • Joe Stead
  • Colin Ameigh
  • Marc Muller
  • Adrian McKensie
  • Douglas Haskins
  • Mat McLouglin
  • Dan North
  • Josh Gibbs
  • Marit van Dijk
  • Nicola Sedgewick
  • Phil Harper
  • Joep Schuurkes
  • Danny Dainton
  • Gwen Diagram
* If I forgot you, please tell me.





Wednesday, 8 November 2017

Wheel of Testing Part 3 - Applications



I've only had to quit two jobs to finally find the time to finish this blog series. Winning at life. If you need reminders (like I did) check out Part 1 and Part 2 before reading on...

After the first two blogs regarding the Wheel of Testing, I was delighted to receive a few requests for the wheel itself, which got me thinking about applications of it beyond its original intent, which I explored in detail in part 1 of this series of intermittent blogs. Most models need a little air time to show their value; in software development we crank out models all the time, but I'm not sure how many get used. I am inspired by models such as the "Heuristic Test Strategy Model" by James Marcus Bach, as I have used it and seen the benefits it has brought for my clients, particularly the ability to ask questions. So, I wanted to create a model which has a number of use cases, both real and imagined:

Helping to unlock a career in testing which may be stuck

It is not uncommon to reach a point in a career where a tester (or anyone) may feel stuck in their role, or believe that testing has little to offer them. Using the Wheel of Testing in one-to-one meetings as a discussion point has proved effective for this purpose, triggering questions about previously unknown paths in testing. Tool-assisted testing is one such path, which triggered an interesting debate with a builder of automated tests. Might there be other ways to assist testing than scripted automation? This surfaced perceptions of what other testers in the organisation did, inspired the person to find out what repetition in system setup and state those testers dealt with, and opened the door to mentoring those same testers. The wheel opened out that career in various directions.

Contributing to setting the direction for new starters in testing


Testers get into testing from all sorts of directions: support roles, former developers, recruitment, many different ways to enter the craft. This makes testing kinda cool in my eyes, despite the bemoaning of the lack of professional testers. However, when you come into testing you need a way into developing yourself. The Wheel can work as an outside-in type of tool in these contexts. Come from an operational role and you might decide that your way in is via your domain and advocacy skills, then fan out to the other areas. As a former developer, your route to the core skills might be through automation or performance testing. I think we would enhance the craft if we helped that journey from the outside in be a little more deliberate.

Building a training programme for testing

I see a lot of training programmes for testers (and developers) popping up. Some organisations are realising that having a multi-faceted approach to hiring testers has potential. Also, I see a lot of academy programmes creating more of what organisations already have, as the expectations of what testing is and what a tester does are rarely balanced. I used the wheel to consider what such a programme might cover, given the amount of time you have to provide a grounding in certain skills. The benefit of the wheel here is that the skills radiate out from a set of core tenets. Choosing a balance from each of the five areas helps your training stay focused on what makes good testers. According to me, at least.

Determining what skills you have (and don't have) in your testing function


In my world, testers tend to be alone on teams, or at most in pairs. They rarely work together in a silo where many testers co-locate and where an appreciation of the skills (and skills gaps) across many testers might come more readily. The Wheel can be used as a living record of the skills within your team. In my former life the Wheel assisted us in recognising deficiencies in the performance and security areas, albeit a little too late, as a result of a voluminous security bug bounty. Ouch. Keeping an eye on the balance of your approach and skill availability is one of the primary functions of test leadership in my eyes. Otherwise you are viewing your product through too few lenses, and eventually you will feel the pain. Security and performance will likely become key product differentiators too, so let's keep our approach balanced.

Talking about how testers can be valued in development teams

I really like this one. What are testers terrible at? Talking about testing, especially as an exciting, skilful and varied craft. The Wheel can be used as a guide to these conversations. I have walked into too many development teams where testing is met with rolled eyes, described as "the bottleneck" and challenged in other thinly veiled ways. For example, use the strategy part of the wheel to help the team think critically about what they are about to build, surface assumptions and reason about how observable a product is. That sounds much cooler than 'poking around in the user interface' or 'doing endless browser testing' to me, but it's rarely phrased like that by testers.

Consultant's crib sheet for new clients


From experience, many reviews of testing capability are context-free, unbalanced recommendations to implement tools and techniques in organisations that don't even have the questions to begin to talk about testing and quality. The bonus of the Wheel is that it has layers and sections. It is perfectly possible for an organisation to be technically advanced but lack critical thinking and advocacy skills around testing. I also believe that an organisation needs to improve in a step-by-step manner, rather than going from zero to continuous deployment in one leap. Using the Wheel in this manner helps to traverse that leap. I was recently asked to create a load testing framework for a client. There had been no previous thought about testing and testability on the product, so I used the wheel to build an approach that created a path from no previous thought to meaningful load testing.

Guide for evaluating a product, especially when first getting started

There are loads of ways for testers to evaluate a product when getting started on a new system. However, I still observe a reasonable degree of freezing in the headlights of complexity when faced with the new. I think the Wheel could be used in two contexts here. Firstly, the system has so much information to expose to a tester that it's often hard to parse. The Wheel acknowledges this openly; it is about the breadth of scope that testing enjoys, so don't be afraid. Secondly, it offers a number of places to get started. If it's a mobile application, usability might be a good place to start; if it's a high-volume payment gateway application programming interface, perhaps start with the available application monitoring or analytics to gather insights.

A heuristic for giving feedback on a test strategy

Recently I had occasion to give feedback on a test strategy for a new product, which was being built on information gleaned from prototyping around a particular business problem. The system would also use an ancient internal web service, which stakeholders were nervous about. For me, in a new product context, testers should be exploring with various lenses and a breadth of techniques to really enhance the flow of information to the team, providing light-touch test automation while the architecture remains in flux, with adequate resilience testing around integrating with the service causing the nervousness. I used the Wheel to assess the breadth of the strategy, plus how it gathered information about the key risk, and provided feedback.

Like all models, it is useful in some circumstances, but less so in others. It's certainly not complete; every time I look at it, there could be more. Also, it's more about the testing that testers do; it doesn't really address in depth the testing that other stakeholders might do, such as unit testing. However, I am a tester, so it's got lots of me in there.

Really, it's a whatever-you-want-to-use-it-for type of wheel. A few people have used and referenced it, so I hope it's being used somewhere for weird and wonderful things that I never thought of. That would make me happy.

Monday, 30 October 2017

Leeds Testing Atelier V


We did it again. Another punk, free, independent Leeds Testing Atelier happened on the 17th October 2017. That's number five for those of you counting wristbands.

The technology sector in Leeds grows constantly, with big companies like Sky and Skybet having a massive presence in the city. However, Leeds has always had a strong DIY scene for music and the arts, and we want to maintain that in our tech scene too. This is what we hope will make us and keep us different. Our venue is a place where other groups meet to make music, discuss social issues or advocate for the environment. To be part of that community matches our mission and our hopes for tech in Leeds. There have been other blogs inspired by the day, so we will reflect on some of the positives we encountered and the challenges we experienced.


Winning

  • We had three brand new speakers on show. Chris Warren, Jenny Gwilliam and Richie Lee were all doing their very first public talk. Chris was even doing his first public talk on his first attendance at any public conference! Giving first-time speakers a chance is part of our ethos and we don't see it as taking a chance; it's very much giving everyone a platform regardless of experience. More people sharing stories enriches the community with greater understanding, plus it's always nice to know someone has faced or is facing similar challenges to you.
  • Many testing conferences address the testing that testers do. This is a great thing; we need a place to sharpen our craft with like-minded professionals. However, this testing is only a part of the testing world. On this note we actively encourage developers to attend and talk about the testing they do. At each Atelier we've invited developers to discuss testing (and even their frustrations with testers), so it was great to see Andy Stewart and Luke Bonnacorsi talking about testing from another perspective, as well as Joe Swan and Joe Stead on the Test Automation Pyramid panel.
  • The ace thing about the testing community is that there is always a new testing tool, technique or workshop to try out. One such workshop is the Four Hour Tester by Helena Jeret-Mäe and Joep Schuurkes. We wanted to share this with the community present, so we selected a subset of the exercises within and got everyone on their feet, thinking and performing tests. Hopefully a few of those in attendance are inspired to find out a bit more about the Four Hour Tester and try it out at their organisations.

Challenging

  • At the Atelier we strive for diversity in our speakers and panelists, and we pride ourselves on being a safe space for anyone to speak. Four of nine speakers and three of eight panelists were women; of all speakers and panelists, one was from a non-white ethnicity. While there are many measures of diversity, we know we can always do better, challenge cultural biases and create a conference programme of experiences from all parts of the community.
  • We had a few submissions from further afield this time. Rosie Hamilton travelled from Newcastle to speak. This is a really gratifying indicator of the reach of the Testing Atelier. It does present a challenge as a free event, though. Our ability to pay expenses is very limited; in fact, Rosie stayed at Gwen and Ash's house and her company helped with the rest. Not everyone who has a story to tell has a supporting organisation behind them, often quite the opposite. Much for us to ponder there.
  • Terrifying news, everyone. Our resident Norwegian hipster artiste, Fredrik Seiness, is moving back to Norge to feast in the icy halls there. We will miss his artistry, t-shirts, weird remarks, focus on diversity and knowledge of where Nick probably is when he won't answer his messages. To be fair, Fred will still visit for each Atelier, tell us we are doing it wrong and that it's not like the olden days. However, as Fred departs, a fitter, stronger, faster model immediately takes his place. Welcome to the Atelier, Richie Lee!

The organisers who make this happen are Gwen Diagram, Nick Judge, Fredrik Seiness, Sophie Jackson Lee, Stephen Mounsey, Richie Lee, Markus Albrecht and Ash Winter. With a nod to Ritch Partridge as always. We give our time freely to make the Atelier a success. 



We are always looking for speakers, workshoppers and panelists. For example, there was an attendee there who tests boats for a living. Like, on-the-sea type boats. One of the tests was to set the onboard hardware ON FIRE to see how resilient it was. So hopefully they'll be talking next time. If you would like to talk, please submit your idea here:


We'll end with a thank you to all. Attendees, speakers, panelists, sponsors, Jon the media person, Wharf, Felix and Eleanor. All approached the day with enthusiasm and an open mind.

Ash, on behalf of the Atelier Gang.

Wednesday, 2 August 2017

The Team Test for Testability


You know what I see quite a lot? Really long-winded test maturity models.

You know what I love to see? Really fast, meaningful ways to build a picture of your team's current state and provoke a conversation about improvement. The excellent test improvement card game by Huib Schoots and Joep Schuurkes is a great example. I also really like 'The Joel Test' by Joel Spolsky, a number of questions you can answer yes or no to in order to gain insight into your effectiveness as a software development team.

I thought something like this for testability might be an interesting experiment, so here goes:

  1. If you ask the team to change their codebase do they react positively?
  2. Does each member of the team have access to the system source control?
  3. Does the team know which parts of the codebase are subject to the most change?
  4. Does the team collaborate regularly with teams that maintain their dependencies?
  5. Does the team have regular contact with the users of the system?
  6. Can you set your system into a given state to repeat a test?
  7. Is each member of the team able to create a disposable test environment?
  8. Is each member of the team able to run automated unit tests?
  9. Can the team test both the synchronous and asynchronous parts of their system?
  10. Does each member of the team have a method of consuming the application logs from Production?
  11. Does the team know what value the 95th percentile of response times is for their system?
  12. Does the team curate a living knowledge base about the system it maintains?

Where each yes is worth one point:

  • 12 -  You are the masters of observation, control and understanding. Change embraced openly. Way to be.
  • 11 - Danny Dainton's Testability Focused Team at NewVoiceMedia - Wednesday 2nd August 2017
  • 8-10 - You are doing pretty good. You can change stuff with a reasonable degree of confidence. Could be better.
  • Less than 8 - Uh oh. You are missing some big stuff and harbouring serious unseen, unknowable risk. Must do better.

What would you add or take away from this list? If you try it, or a variant of it let me know, I'd be fascinated to hear!

Notes On Each Question

  1. If everyone actively recoils from change to all or a specific part of the system, and there's always a reason not to tackle it, it's a no.
  2. Everybody should be able to see the code. More importantly a lot of collaboration occurs here. Access for all is part of closeness as a team.
  3. A codebase generally has hotspots of change. Knowing this can inform what to test, how to test and where observation and control points are.
  4. Teams have both internal and external dependencies. The testability of dependencies affects your testability. Close collaboration, such as attending their sprint planning, helps.
  5. To test a system effectively one must have deep empathy with those who use it. For example, if you have internal users you invite them to demos; for external customers your team is exposed to customer contacts, or, dare I say it, even responds to them.
  6. Some tests need data and configuration to be set. Can you reliably set the state of these assets to repeat a test again?
  7. Sometimes programmers have environments (I hope) to make their changes in something representative of the world. Your testers and operations-focused people should have this too, to encourage early testing, reduce dependency on centralised environments and cut the setup time of tests.
  8. If you don't have unit tests, you probably haven't even tried to guess what a unit of your code is. Immediate no. If you do each member should be able to run these tests when they need them. 
  9. Systems often have scheduled tasks which lifts and shifts or replicates data and other operations which occur on demand. Can you test both?
  10. It's important that all team members can see below the application, so we aren't fooled by what we see. If you don't have an organised logging library which describes system events you should get one, plus eventually some form of centralisation and aggregation. Or just use syslog.
  11. If you don't know this you either haven't been able to performance test, haven't gathered data for analysis, or haven't realised that you can analyse your performance data while setting aside the outliers (a small sketch of the calculation follows these notes). Either way, you probably know little about what your code does under load or where to start with tuning.
  12. Documentation. I know you are agile and you thought it was part of your waterfall past. To be fair, this is anything that describes your system: a wiki, auto-generated docs, anything. If someone wants to know what something does, where do you go? Source code alone is a no for this one. Your team has many members, and not that many Product Owners can read code.
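For question 11, here is a small sketch of where the 95th percentile figure comes from, using a simple nearest-rank calculation over a made-up list of response times in milliseconds. It is purely illustrative; a real team would pull these numbers from their monitoring or load test results.

```python
# Hypothetical nearest-rank percentile calculation over sample response times.
def percentile(values, pct):
    """Return the pct-th percentile of values using the nearest-rank method."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]


response_times_ms = [120, 95, 410, 130, 101, 220, 180, 99, 870, 140,
                     115, 125, 160, 105, 98, 134, 150, 240, 310, 127]

print("p95 response time:", percentile(response_times_ms, 95), "ms")  # 410 ms here
```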

Friday, 2 June 2017

Wheel of Testing Part 2 - Content


Thank you Reddit; while attempting to find pictures of the earth's core, you surpass yourself.

Turns out Steve Buscemi is the centre of the world.

Anyway. Let's start with something I hold to be true. My testing career is mine to shape: it has many influences but only one driver. No one will do it for me. Organisations that offer a career (or even a vocation) are offering something that is not theirs to give. Too many of their own needs get in the way, plus morphing into a badass question-asker, assumption-challenger, claim-demolisher and illusion-breaker is a bit terrifying for most organisations. Therefore, I hope the wheel is a tool for possibilities, not definitive answers; otherwise it would just be another tool trying to provide a path which is yours to define.


In part one, I discussed why I had thought about the wheel of testing in terms of my own motivations for creating it, plus applying the reasoning of a career in testing to it. As in, coming up with a sensible reflection of reality as I see it, rather than creating a 'competency framework' which describes some bionic human that no one has ever met, or would even want to meet.

So, in part 2 I thought I would dig into what's contained within. Not in an 'I'm going to define each term' endless navel-gazing type of way, but in a 'let's look at it from a different angle' way. I have observed people retreating from ownership of their own development as we are forever foisting methodologies, tools, techniques, models and definitions on each other, naming no examples. Testing and checking. Sorry, I meant to think that, not write it.


Anyway, let's break it down a bit. I love a tortured analogy, so let's use the layers of our own lovely planet as an explainer:

Inner Core...




The inner core of the earth is hot, high pressure and fast moving, but dense and solid. Always moving and wrestling with itself but providing a base to develop the self from. I chose these five areas for a core, reflecting my values:

  • Testing - I value testing skill over understanding a business domain. Testing skill helps you to understand domains, but for me business domain understanding doesn't reciprocate to the same degree.
  • Tech - To me, I refer to technical awareness here. What are the strengths and weaknesses of a given technology or architecture? And how does that inform my testing?
  • People - Possibly the most complex part of any role. Fathoming these strange creatures, whether users or developers, is a tester's pressing concern. Literally nothing happens without these curious beings.
  • Advocacy - Being able to talk about testing and the value it adds is a real skill. If you can't convince yourself about what it is, who cares and why, then how can you convince anyone else?
  • Strategy - I think you need to have a strategy as a tester. Charters, sessions, mind maps, mnemonics. Without a strategy, you are probably just poking at user interfaces.

Outer Core...




The outer core is where things start to get a bit more fluid and move a bit faster, plus it's the middle layer, affected by both the inner core and the mantle. The boundaries get a bit blurrier and navigation is a little bit harder:
  • You start to realise that when someone says something like 'integration tests' it really doesn't mean to them what you think it means. Hilarious and interesting conversations occur and you create a number of tests which are different types of integration tests. 
  • You start to realise that testing is interfaced deeply with many other roles. But those other roles might not know that. When those people say 'QA' a nervous twitch develops but you stay cool and ask them what they mean by that. Hilarious and interesting conversations ensue.
  • You start to realise that models and thinking techniques, as well as hands-on experience, become really important, and one enhances the other greatly. Tools become important, and you may get carried away by test automation. If your developer says something like 'stop firing that JSON rail gun at the wrong endpoint' then it's time to pause for thought.

Mantle...





The mantle is a weird place. Almost solid near the surface, extremely fluid and fast moving nearer the outer core. Similarly, a lot of the aspects of the wheel here seem quite specific but underneath the surface you start to realise how they are all part of a very complex entity:


  • You realise that if a system under test is usable and accessible then it is probably more testable. If it is more testable it's probably more supportable by operations, which then might improve the relationship between dev and ops. For me, making these connections is the crux of this part of the model.
  • You start to realise how little you know about anything really. Maybe before you thought you knew quite a lot about something; then that gives way to the realisation that Dunning and Kruger weren't lying and it's you.
  • You start to realise you can't do it alone. Also, you know the people who focus on something while you buccaneer around the testing wheel? You previously couldn't understand that, but then it occurs to you that you need lots of different skills, outlooks and experiences to grow.

So what about the crust, you say? That's the shallow layer that we usually need to break through in order to really start to question what we are made of. And just remember this: once you've been at it for a while, it can be a long and treacherous journey back to your core...