Moving Test Upstream – How to test the whole SDLC

Fourth in a series of posts summing up my thoughts on the Ministry of Testing’s latest success, TestBash Manchester 2016.

As testers, we’re used to hearing about the value of “shifting testing left” in the software development life-cycle. By finding problems sooner, we can be instrumental in saving the business the money, effort and time spent rectifying costly mistakes. But it’s not always clear how to go about this as a ground-level tester. In this article I’ll discuss some of the techniques, ideas and strategies which can make your testing practice more holistic, ensuring quality is never just an afterthought.

Testers! Be more salmon!

At TestBash Manchester, the talk “Testers! Be More Salmon” by Duncan Nisbet specifically called for testers to drive testing practice out of its little pool at the end of the waterfall, and into other areas of development. It feels a little arrogant to say, but this was one talk where I felt a hint of smug satisfaction – whilst there’s always more to do, in our organisation we’ve succeeded in getting test into pretty much every stage of development, and have earned considerable buy-in and respect by doing so. But in former roles and other organisations, things haven’t been so good – and now I know how important early test involvement is to quality software.

The only Boehm’s Curve graph I will ever use

To those who are used to testing being a “final activity” prior to release, this brave new world of testing moving outside of its dark corner can seem a bit alien. I started life as a SIT tester, literally in a basement 2 floors below the devs, taking the work of various scrum teams from other areas of the business and giving them a final “integrated” once-over for two months at a time. I had nothing to do with the design process, little visibility of the user stories, and really only a glancing understanding of what the business wanted from its changes.

My role now couldn’t be further removed from this – I’m involved from the story formation stage, helping the Product Owner build testable stories, with a suitable granularity and slicing, ensuring acceptance criteria are not only realistic, but quantitative and testable. A few things have allowed me to keep a stronger handle on what’s being asked for, designed, built – and tested.

Story Refinement (3 Amigos)

Something fairly new to me is the idea of the Three Amigos. This is a kind of “pre-grooming” session, where a Product Owner, developer and tester get together to discuss and refine user stories prior to a major estimating session with the wider team. Prior to 3 Amigos, a lot of time and effort was expended in estimation sessions, working out if stories should be split, if investigations/spikes would be required, or if acceptance criteria were complete or appropriate.


Arrrriba!

In 3 Amigos, the Product Owner presents unrefined stories, and through a process of discussion, suggestion and gradual improvement the stories are brought up to a higher standard. If splits are required, this is the stage where the split is undertaken – perhaps some ACs are deliverables in their own right, and deserve to be viewed as separate stories, for example. Perhaps investigations are required to reduce doubt prior to estimating a story.

One of the key functions of a tester in these sessions is to develop an understanding of how one would go about proving the acceptance criteria. Many times, ACs use words like “should”, as in “the button should produce a notification on-screen”. That’s not an AC! OK, so it should – that doesn’t mean it will. Another common AC is for something to be “better” or “faster” or “improved”. How do we prove that? Say we have to “improve page load time”. How much is enough? Improve it by 0.000001%? That satisfies the AC, after all. These are trite examples, but they give the idea that by being present at the early story formation stage, a tester can ensure their requirement for testable ACs is met.
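To make the page-load example concrete, here’s a minimal sketch of how a quantified AC can become an automated check. The baseline, ceiling and improvement figures are my own illustrative assumptions, not numbers from any real project:

```python
# Hedged sketch: turning a vague AC ("page load should be improved")
# into a quantified, testable one ("p95 page load under 1.5s, and at
# least 20% faster than the recorded baseline"). All figures assumed.

BASELINE_P95_SECONDS = 2.0   # measured before the change (assumed)
TARGET_P95_SECONDS = 1.5     # absolute ceiling agreed in the AC
MIN_IMPROVEMENT = 0.20       # at least 20% faster than baseline


def meets_acceptance_criteria(measured_p95: float) -> bool:
    """True only if the measurement satisfies both quantified criteria."""
    improved_enough = measured_p95 <= BASELINE_P95_SECONDS * (1 - MIN_IMPROVEMENT)
    under_ceiling = measured_p95 <= TARGET_P95_SECONDS
    return improved_enough and under_ceiling


# A 0.000001% "improvement" now fails, instead of technically passing:
print(meets_acceptance_criteria(1.9999999))  # tiny improvement -> False
print(meets_acceptance_criteria(1.4))        # meets both criteria -> True
```

The point isn’t the specific thresholds – it’s that once the AC is quantified, there is no argument about whether a microscopic improvement counts as “done”.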

Estimation

I have been a little shocked to hear several testers say they only gave test estimates after dev work was underway. My team estimates in effort, via planning poker, using a totally subjective application of the modified Fibonacci scale to represent story sizes. Test is a key consideration here – a story may imply a one-line code change, which a dev will naturally estimate as a very small change. But if it touches a fundamental part of the system, affecting almost every transaction, it could be a MAMMOTH test task!

So, it makes a lot more sense to me for test to be considered as part of this process up-front, before the Product Owner decides which stories to prioritise. Unless, of course, the organisation’s test resource is unlimited.
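As a toy illustration of why test belongs in the estimate, here’s a sketch that snaps a combined dev-plus-test effort to the modified Fibonacci scale. The scale is the standard one; the combination rule (simple sum, rounded up to the next card) is my own assumption for illustration, not my team’s actual process:

```python
# Hedged sketch: sizing a story by combining dev and test effort,
# then rounding up to the nearest modified-Fibonacci planning-poker
# card. The "sum then round up" rule is illustrative only.

MODIFIED_FIBONACCI = [1, 2, 3, 5, 8, 13, 20, 40, 100]


def story_points(dev_effort: int, test_effort: int) -> int:
    """Round the combined effort up to the nearest card on the scale."""
    combined = dev_effort + test_effort
    for card in MODIFIED_FIBONACCI:
        if card >= combined:
            return card
    return MODIFIED_FIBONACCI[-1]  # cap at the largest card


# A "one-line code change" (dev effort 1) with mammoth regression
# impact (test effort 20) is anything but a small story:
print(story_points(1, 1))   # -> 2
print(story_points(1, 20))  # -> 40
```

Estimate only the dev effort and that second story looks like a 1 or a 2 – which is exactly the distortion that estimating test up-front avoids.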

Planning Poker – the only kind of gambling I do

Key considerations for testers during estimation are not only the testing needed to prove the AC, but also validating an absence of regression – and, where appropriate, supporting the writing and maintenance of automated tests. It’s likely that these will be at the forefront of a tester’s mind, and unlikely that they will be big concerns for anyone else. I’m sure it will be no surprise to anyone reading this that they should be!

Design/”Solutionising”

A step which I feel is often missed in this process of “continuous testing” is during the time in which developers are designing their approach. A user story specifies behaviours, but the solution is often largely in the hands of the developers, to solve as they see best given their deep technical understanding of the system, and the skills in the dev team.

Testers may feel a little intimidated by these highly technical discussions, but in my experience there is tremendous value in being present while devs do their brainstorming. I use this time to come up with my key test approaches – the considerations which I feel most need testing (of course, more almost always fall out of the actual process of testing). As I hear the devs discuss their designs and approach, it’s rare that I interject (although of course I will, if I feel it’s relevant).

Models are inherently testable in their own right

However, at the end of each story’s discussion, I take a moment to review my test approaches with the devs. I literally “run the tests” on their design, asking them to prove the key things I’ll need to know: that the solution matches the AC, what I’m planning to test, and what, from my distinct perspective as a tester, needs to happen for the user story to be delivered. This has been enormously valuable to both myself and the devs, and has trapped a fair few functional problems in designs or models before a single line of code has been written.

I can’t emphasise the value of this one enough – it’s one of my secret weapons as a tester, and tremendously powerful. I’m reminded of the old saying usually misattributed to Einstein:

If you can’t explain something to a six-year-old, you really don’t understand it yourself.

That’s kind of nonsense – as Richard Feynman responded, “If I could explain it to the average person, I wouldn’t have been worth the Nobel Prize.” – but it is a great way of checking the devs’ understanding, proving testability, and checking the suitability of the design in one step. Plus it gives you a head start in designing your test approach, which is a neat side effect.

Development

When our developers want to merge code, they use Bitbucket to comment on changes and considerations before those changes hit our codebase. The “hive mind” comes up with better solutions, notices mistakes or pulls out approaches which don’t meet our coding standards. It’s a valuable step – although as a tester-centric sticker on my laptop says “code reviews are overrated” – and provides a lot of insight and knowledge sharing. But as testers in agile teams, we often have no such oversight in our approaches, and no such learning opportunity.

In the review process, everyone has something different to offer

A key standard we’ve introduced into our test planning is a review process, whereby a high level test plan (allowing for lots of exploratory goodness when the actual testing commences) is reviewed by both a developer working on the project, and a fellow tester from another team. This ensures key test considerations are given a “technical review” (the number of times one particular dev has suggested weird character strings with special significance to the types of changes we’re making… I should keep a list), and a “methodology review” enabling unconsidered approaches to come to the fore.

As with everything in this article, it’s vital to keep these reviews at a high level and complete them before a lot of work has been done. There’s a major advantage for devs to have the test considerations reinforced while they’re still producing code (“Oh shit, I’d forgotten about that bit” syndrome), and also to complete the review before a lot of data prep or (whisper it) test cases have been written, which may need to be changed or scrapped altogether.

Test

I won’t go too deep here as many others have written reams of good information about approaches to testing, but there’s always something to test or check. I’ve written test cases for reviewing documentation to ensure it meets requirements (these really are a “final checklist”), exploratory charters for things I need a dev to show me on a system (“Pre-Requisites: Capture a dev”) and all manner of weird and wonderful things. But those things have always started upstream.



Hopefully some of these techniques will be new to you, or at least a new way of approaching the test process. I’m a firm believer that testers can be (and should be) instrumental throughout the development process, rather than siloed off as “the first users”. By maintaining communication, visibility, asking questions and building the team’s understanding of both what the business want, and what we hope to see in the final product, testers can deliver far more value.

We are a positive part of the product development process – not just “monkeys with typewriters” trying to prove how useless everyone else is at their job. If you want to be more than just a checker with a clipboard, at the end of the factory production line, it’s time to swim a little farther upstream.


Duncan Nisbet – Testers! Be More Salmon!

2 thoughts on “Moving Test Upstream – How to test the whole SDLC”

  1. Thanks Stu, a great article.

    I really like the points about testers being involved earlier and developers reviewing test plans. As a developer I can find reviewing a test plan very grounding. I can have all manner of weird and wonderful ideas, but ultimately I am writing code to pass your tests (TDD), which should in turn be testing the AC – it helps me to think lean.

    From a test perspective I can see your points about explicit acceptance criteria, like whether a 0.00001% performance improvement meets the AC. But I am torn: I see the AC as a contract between the PO and the scrum team, and as the agile manifesto says, “collaboration over contract negotiation”. When you have a team of mature, reasonable adults, you should all have an understanding of the requirements and be able to work together on what is acceptable without specific AC. If you have somebody making a 0.0001% improvement and trying to claim the story is done, there is probably another problem in the team.

    Great work, looking forward to reading the next one.


    1. Thanks a lot Nigel, really interesting perspective.

      I take your point about negotiation/collaboration, it’s an interesting topic and has given me some food for thought. I guess I liken it to the application of coding standards – whilst the team generate the standards (just as teams refine and estimate stories they commit to), once those standards are agreed it’s fair to hold people to them. They can be revised if they’re found to be obstructive, but largely they’re about boundaries and agreements which enable work rather than hinder it. Not a perfect analogy but hopefully you take my point!

      At the same time you’re right, a good agile development environment should be about trust and expecting everyone to do the best job possible, but one way test can drive this is to ensure an absence of ambiguity in requirements. 0.00001% is an extreme example, but in that situation is a 5% improvement enough? A 20% improvement? No page calls over 1 second? There are just other ways to quantify requirements which enable a team to work with a clearer understanding of the work they need to undertake.

