Test Your Testing – Big Ideas from Testbash Manchester 2017

Fourth in a series of thinkings, learnings and ramblings on this year’s Testbash Manchester

So I’ve had a couple of days to process this year’s event, and I’m still chewing through some of the major themes of Testbash. There was a lot more on the psychological side than people might expect from a testing conference, and a great deal of analysis of the role and future of test as a discipline. Whilst I’m sure more thoughts will follow, I’m ready to share a digest of the biggest ideas I’ve brought home.

This is the third in a series of three posts summarising my thoughts. Part 1 is here, part 2 is here.

3. Test Your Testing
(Bas Dijkstra, Göran Kero)

The previous two posts covered the mental and practical sides of testing in turn, but this topic covers both at once. We need to take a more holistic view of testing, one which encompasses not just our technical approach but also our own mindset when designing and applying our tools.

Bas Dijkstra – Who Will Guard The Guards Themselves? How to Trust Your Automation and Avoid Deceit

[Photo: Bas Dijkstra on the importance of trust in automated testing]

Bas Dijkstra shocked me when we met briefly in the pub the night before the conference. “My slides contain code”, he said. Code? At a test conference? Well, it turns out we were ready. Bas demonstrated that tests can be deceptive, that they can give both false positives (reporting bugs when the software is working fine) and false negatives (tests passing when there’s a system-breaking bug). The key concept was that of trust – we need to be able to trust our tests, or they become meaningless.
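
To make the two failure modes concrete, here is a minimal pytest-style sketch. It is my own illustration rather than anything from Bas’ slides, and apply_discount and its bug are entirely hypothetical:

```python
# A minimal sketch of both failure modes (my own illustration, not from the talk).
import time


def apply_discount(price, percent):
    # Deliberately buggy: subtracts the raw percentage instead of a fraction of the price.
    return price - percent


def test_discount_false_negative():
    # Passes even though apply_discount is broken, because the assertion is too
    # weak -- it only checks the result is not negative.
    assert apply_discount(100, 10) >= 0


def test_discount_false_positive():
    # Can fail even when the code is fine, because it asserts on wall-clock
    # timing rather than behaviour -- a classic source of flaky "failures".
    start = time.time()
    apply_discount(100, 10)
    assert time.time() - start < 1e-6
```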

Bas’ comments came in the context of automation, and trust is certainly a key consideration in that area of testing. But I think his comments are equally applicable to manual test effort, in that it’s very easy to design a test which misses the point of the actual system, or calls out some minor flaw as a game-changing bug. Sure, plenty of software teams keep running “always-fail” tests, and tolerate red builds – or even green builds – for system areas which are horribly broken. But couldn’t the same be said for those bugs we “just live with”, without making enough noise that they’re fixed? Just something to consider, as obviously in many cases minor bugs are never going to be priorities for the business or the end user.

[Photo: some solutions to the problem of deceptive tests]

In response to all this, Bas warns us to test our testing – to consider what each test is telling us, and to keep on top of refactoring automated tests, even going so far as to advocate deleting bad test suites and starting again if that’s what it takes. Better one honest canary test which passes or fails correctly than 2,000 flaky and unrepresentative tests.
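
One simple way to check that a test is honest is to run it against a deliberately broken version of the code and make sure it goes red – a hand-rolled, miniature form of mutation testing. The sketch below is my own illustration, not something from Bas’ talk, and the names are invented:

```python
# Sketch: prove a test can actually fail by pointing it at an injected "mutant".
def add(a, b):
    return a + b


def broken_add(a, b):
    return a - b  # the deliberately injected bug


def addition_test_passes(fn):
    # The check we want to be able to trust.
    return fn(2, 3) == 5


assert addition_test_passes(add), "the test should pass against the real code"
assert not addition_test_passes(broken_add), "an honest test must fail against the mutant"
print("The addition test can both pass and fail, so it is telling us something.")
```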

[Image: all I could think about when Bas introduced his talk]

Göran Kero – What I, A Tester, Have Learnt From Studying Psychology

Göran Kero gave an offbeat and very Swedish take on psychology in testing, which was a lot of fun to consider. Whilst his assertion that we should study psychology was perhaps unnecessary (we had three very psychological talks on the agenda already, as I touched on previously), some of the ideas he brought up were very interesting indeed.

[Photo: Hmm…]

First, Göran spoke about the distinction between correlation and causation. He showed some rather interesting graphs which illustrated this in a number of ways – firstly, the correlation between margarine sales and divorce rates, which is obviously total coincidence. Secondly, the correlation between ice cream sales and drownings, which is not causal (banning ice cream is unlikely to prevent drownings) but is no coincidence either – the two share a cause, as hot days make both ice cream sales and swimming more likely. The implication is that we must take care not to infer too much from the trends we find, and whilst we testers often have good gut instincts, I think that’s a fair observation.
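
The ice cream example is easy to reproduce: two series that never influence each other will still correlate strongly if both are driven by a shared cause. The sketch below uses invented numbers and assumes numpy is available; it is illustrative only:

```python
# Sketch: a spurious correlation produced by a shared cause (all numbers invented).
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.uniform(10, 35, size=365)                  # the shared cause
ice_cream_sales = 20 * temperature + rng.normal(0, 50, 365)  # driven by temperature
drownings = 0.1 * temperature + rng.normal(0, 0.5, 365)      # also driven by temperature

# Strong correlation, yet banning ice cream would not prevent a single drowning.
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation between ice cream sales and drownings: {r:.2f}")
```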

[Image: Defect Clustering? Are you sure?]

Another point Göran raised which I found particularly interesting was the notion of the “clustering illusion”: that we can find trends in things which are, essentially, random, because we are so predisposed to look for trends and patterns. This is linked to confirmation bias, and is something I think all testers should at least bear in mind, particularly those who have been schooled to interrogate systems for Defect Clustering (if an area seems buggy, it’s worth spending more time on it as there are probably more bugs there than in other areas). Did you find a lot of defects in that area because of how you tested it? Could it be you are linking things together which are not truly linked?
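
The clustering illusion is just as easy to demonstrate: assign defects to modules completely at random and one module will usually look suspiciously “buggy” purely by chance. Another invented, purely illustrative sketch:

```python
# Sketch: apparent defect clusters in purely random data (illustrative numbers only).
import random
from collections import Counter

random.seed(42)
modules = [f"module_{i}" for i in range(10)]

# 50 defects assigned uniformly at random -- no module is genuinely worse than any other.
defects = [random.choice(modules) for _ in range(50)]
counts = Counter(defects)

# The "worst" module can easily have two or three times the defects of the best one.
print(counts.most_common(3))
```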

I nearly brought Vera Gehlen-Baum‘s talk on metacognition back into this list, as it certainly relates to the same principle – we need to reflect on how we do what we do in order to keep it on track. In the end I feel the various principles and ideas expressed at this year’s Testbash Manchester were mutually supportive.

It was a great event. Can’t wait for Testbash Manchester 2018!

How do you test your testing?
What can we do to ensure our tests remain fit for purpose?
Why do we trust manual testing over automation? Can’t both be deceptive?
Leave your comments below!
