
Leeds Testing Community unConference

A few of you will know that we (well, Stephen Mounsey, Nick Judge, Fredrik Seiness, Phil Hargreaves et al. did all the hard work; I just flounced in and presented a workshop) have recently given birth to the Leeds Testing Community unConference. All conferences start from an acorn, a twinkle in the eye, and this was no exception. I didn’t want to let it pass without blogging about it, as I believe it to be the beginning of something big! There is a real thirst for this kind of event in Leeds, a thriving tech city with loads going on.

A quick whistle-stop tour of my highlights:
  • Uno – Laurence Wood presented on his agile heroes, including my close testing and monitoring pal Gwen Diagram. I will not say the ‘D’ word. He also talked about one-to-one ratios of developers to product owners (pinch me). A very strong start from a great speaker.
  • Dos – My mobile testing workshop, entitled ‘Extending your testing senses.’ Despite my being the only person in the room currently testing in a mobile context, everyone really got stuck into using the on-device Android developer tools, testing search functions on popular networks with CPU monitoring, layout tools and more. I heard a lot of heartening ‘I didn’t even know my phone did that’ comments. Even better, the same tools were used in the afternoon workshop. Joy.
  • Tres – Stephen Mounsey’s session on sketchnoting was a great interactive one; I felt a sense of accomplishment and satisfaction as my barely legible doodles became a coherent map of the session. Or something like that. The key learning was turning previously dreaded meetings into something engaging, being present, and coming away with a tangible output at the end.
  • Cuatro – An honourable mention for my good friend Clem Pickering, whose presentation of the Palchinsky Principles really resonated with me, with strong threads of experimentation and viewing failure as learning. Slide of the day showed how Prince Charles and Ozzy Osbourne share a surprising number of characteristics, illustrating just how much perspective impacts your assumptions while testing.

All that remains is a massive thank you to the organisers, hosts (Callcredit Information Group, who I can testify are an extremely engaging organisation for those working in software development) and my fellow speakers.

Isn’t it exciting to be in at the beginning of something awesome?
