The Wellspring of Quality – Why debugging conversations and testing ideas is the ultimate quality approach

When we talk about testing or QA, we often think of an activity which occurs mostly at the end of the development cycle. Code “drops into test” once it’s merged, and tests are prepared ready for this critical (and late) moment when testing can begin in earnest. But in collaborative, agile teams, testing and quality focus are a continual process, one which enables the majority of bugs to be identified before they go through the costly cycle of being written, merged and then discovered in the software.

Testing that happens only at the end of the process is A Bad Thing, and here’s an example to explain why.

Example 1

Requirement 1
Requirement 2
Requirement 3

Developer X
Tester Y

Developer X has a misunderstanding about Requirement 2. Let’s assume that misunderstanding means they will write something which causes a bug. They won’t mean to, of course – they will pursue their understanding to its logical conclusion and write code which satisfies their understanding. But the completed software will behave in a fundamentally different way from the customer’s requirement.

They then raise a pull request, explaining what their code does, and a code review happens. The review finds a few minor issues to fix, and Developer X spends a few hours fixing them – then merges their code.

Tester Y has been preparing tests, and once the merge and deploy have finally happened, executes them. Tests pass for Requirement 1, but a test fails on Requirement 2. Dev X and Tester Y have an awkward conversation in which Tester Y has to show Developer X that they misunderstood Requirement 2, probably followed by a confirmatory conversation with the Product Owner to validate whose understanding is correct. Then comes some rework of the code, another PR, another code review, a few more tasks to resolve, another merge, another deploy.

Tester Y has been waiting for the code to return, but given the deep nature of the changed code, needs to re-execute tests for Requirements 1-3. This time, all the tests pass, the feature is shown to the PO and signed off for release.

Time taken: 3 days

Example 2

Requirement 1
Requirement 2
Requirement 3

Developer X
Tester Y

Developer X has a misunderstanding about Requirement 2. Let’s assume that misunderstanding means they will write something which causes a bug.

Tester Y asks a question which clarifies the team’s understanding of Requirement 2. The requirement is amended to be more specific.

Developer X writes code to satisfy this newly clarified requirement, then raises a pull request, explaining what their code does, and a code review happens. The review finds a few minor issues to fix, and Developer X spends a few hours fixing them – then merges their code.

Tester Y has been preparing tests, and once the merge and deploy have finally happened, executes them. Tests pass for Requirements 1-3, the feature is shown to the PO and signed off for release.

Time taken: 2 days

Make time to have conversations – it shouldn’t need a separate ticket, it should be something ongoing throughout the whole process!

One question, one moment of doubt verbalised, at the right moment and to the right community of people, saved the team a third of their whole time in development. And yes, the examples above are very simplistic. They only cover one type of bug which, whilst common, is not the only sort of bug we might introduce. However, the same approach – asking the right questions, testing understanding, ideas and designs – can reap similar rewards, and the tester is the best-placed team member to introduce these kinds of tests.

Is a Question a Test?*

In my opinion, every question we ask in the development process is a test. It asks something of the software and uncovers information which can only be found by actually engaging with it. The software may not be written yet – it might just be a collection of ideas, requirements or designs – but those upstream questions are absolutely tests.

They test:

  • The team’s understanding
  • Whether the team SHARE the same understanding
  • The requirements, and what drives them
  • That the envisaged solution is fit for purpose
  • How we are going to go from the idea to the finished software
  • When we might expect to complete the work and have it in the hands of customers
  • What risks the solution introduces or exposes

Etc etc etc… these are very valuable tests! This stuff can make or break the dev process and it’s very important to build based on the information these tests yield. The answers may be positive, negative, puzzling or expected. But they all have the prospect of giving the team greater information, enabling them to make better decisions and – hopefully – introduce fewer bugs.

There’s a very common graph used when training new testers, Boehm’s Curve, which demonstrates that the longer a bug goes unfound, the more that bug costs the team. Having a conversation on day 1 of a project is way better than letting a customer report a really nasty bug and then working through the process to patch it out – broadly speaking, that’s a trend you can track as follows:

[Boehm’s Curve: relative cost of fixing a defect, rising steeply from the requirements stage, through design, coding and testing, to production.]

(You’ll note I disagree with its depiction of testing as a distinct phase between the requirement and deployment, but it makes the point just the same.)

The later we find the bug, the more it costs to fix.
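The shape of that trend can be sketched in a few lines of code. The multipliers below are invented purely for illustration – the real numbers vary enormously by team, project and study – but the steepness of the climb is the point:

```python
# Illustrative sketch of Boehm-style cost growth. These multipliers are
# made up for demonstration only; the real curve varies by team and project.
# What matters is the trend: cost climbs steeply the later a bug is found.
RELATIVE_COST = {
    "conversation / requirement": 1,
    "design review": 2,
    "code review": 5,
    "testing after merge": 15,
    "production": 50,
}

def cost_to_fix(stage: str, base_effort_hours: float = 1.0) -> float:
    """Rough cost, in effort-hours, of fixing a bug found at the given stage."""
    return base_effort_hours * RELATIVE_COST[stage]

for stage, multiplier in RELATIVE_COST.items():
    print(f"{stage}: ~{cost_to_fix(stage):.0f}h")
```

Swap in whatever multipliers your own retrospectives suggest – the conclusion (find it in conversation, not in production) survives any plausible set of numbers.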

What is a Bug?

We generally think of a bug as a problem in code which produces an unfavourable behaviour. This could mean that when I click the “1” button, “2” appears in a box. It could mean an invisible JS error, or it could mean that when I press “1”, my computer catches fire.

But bugs are not just in code. Poor communication introduces bugs in understanding. In Example 1, where Developer X and Tester Y have different understandings of the requirement, they have to debug that understanding and introduce a fix in the form of a tough conversation. The fact this conversation is “tough” is a bug in the team’s dynamic. In both examples, the requirement is buggy, because it is imprecise. One could describe the team’s arms-length relationship with the PO as another type of bug. Were these guys even involved in forming the requirement? It doesn’t sound like it, as they clearly lack an understanding of what the customer wants. Another bug!

Bugs can hit any and every part of our process and a tester is always on the lookout for them. By exposing these pockets of “unfavourable” stuff, testers have a key role in making things better – the code, the customer’s day, the team’s working practices.

Debugging the Conversation

Given the above, how do we go about debugging these important conversations, which happen upstream of code being written? First up, we have to be tuned in to the conversations of our teams. We have to be present, focused, aware and in the room. As testers, we are rarely the most technically proficient person in the conversation, and technical conversations can go over our heads. But human interactions are universal. You don’t need to understand PHP to notice people are not on the same page, or to recognise they’re guessing.

One or two people being engaged is not enough. Look around – how can you bring the others into the flow of conversation? How can you leave the room certain that everyone has what they need to make good decisions?

If you feel someone in the room doesn’t understand the requirement, you can call it out – softly, by advocating for them. The classic “Sorry, I’m not technical, can you explain…” is a great tool in a tester’s belt, for the benefit of their teammates.

If you feel someone in the room is railroading others, you can address that. “I haven’t heard much from [x], please can we hear their opinion?” is well within your capacity as a teammate.

If you feel the team need to get closer to the customer’s original requirement, vs some generalised, marketable version – ask! “Where did this come from? Who needs this? How do they use the system?” are great questions.

And these questions, these interventions… they’re tests! They’re the test the team needs to start strong, to be sure their understanding is of appropriate quality, to establish whether or not they are ready for a professional development team to use as the basis for their work.

Testing the Idea

Another key practice in low-cost, high-impact testing is to run your tests as early as you possibly can.

As soon as a team describes, whiteboards or Slacks how a solution might work, be ready to run some tests. These can literally be the same principles you’ll be testing against when it comes to the finished code – does it meet the requirements of the story? Hopefully! But the tester has further requirements, which represent the ongoing viability of the system.

Examples include:

  • Which areas of the system might this introduce regression to?
  • Can we know more about how this will impact [feature]? How?
  • What are the security implications of this change?
  • Will this be audited?
  • How performant will this be?

More than anything else, just running your ideas for testing the eventual code against the planned solution – literally calling out what you’re going to test, and letting the developer demonstrate why they believe their solution will satisfy it – is a game-changing approach to early quality which every team can adopt right now. Does your team check in on what the solution is prior to writing code? If not, why not start? It doesn’t have to be a long or involved process, but it might save you the pain of late-discovered bugs – or worse still, bugs in production.

Whiteboarding is a great opportunity to do some in-depth testing. Get someone to show you how the requirement is met by the board – and not just the requirement of the story, but the requirement of your test approach!

Not just Features

Another area where testing ideas is important is when planning tests.

Our team operates a review process which gives each tester a second opinion – a second pair of eyes – before tests are fully planned out. By reviewing “test intents”, we get the benefit of early tests on another key part of our process: we get to share good practice, share concerns, share experience.

What testers spend their time doing is as valuable and important to the eventual product as what developers do. It should be held to the same criteria of success and benefit from the same kinds of questions, challenges and, yep, tests as code, ideas and designs do.


There’s a lot more to talk about here, but the important thing is to open the conversation. Where can we test upstream of our habitual, “It’s now available to test” approach? Strategies like Three Amigos and Story Refinement give great opportunities to build quality early, so long as testers are engaged. But every conversation benefits from a critical, quality-focused mindset.

When you recognise that every question is a test, it becomes a lot easier to see why it’s vital to have tester involvement as early as possible, and why it’s such a great efficiency to respond fully and honestly when those questions get asked. We have the opportunity to introduce fewer bugs, yes, but also to reduce the cost and impact of the bugs we do face.

Get upstream! Get testing from day one! And build your own strategies to ensure nothing your team does goes untested – be that a conversation, a solution or an assumption.

What are the approaches you or your team take to ensure you get testing as early as possible? How do you introduce bug checks throughout your dev process? Let me know in the comments!

*Is a Test a Question?

Some things we call tests are not questions, and in my opinion this means they are not tests. Sometimes all a test does is make an assertion and yield a binary pass/fail outcome. Such tests aren’t capable of delivering new information about the solution, but they are pretty good at flagging whether or not requirements are delivered.

This is where I distinguish between a test (which has the possibility of delivering new information) and a check (which can only ever pass or fail). Checks have their place, but you can’t build a whole quality practice on them. They are the most expensive way of learning about the quality of your software – you learn something is wrong, but get no clue as to why, and have to expend significant effort chasing down the problem. Checks can be automated, but automation is an inherently expensive exercise which requires maintenance and a greater initial outlay to introduce coverage.
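To make the distinction concrete, here’s what a bare check looks like. `calculate_total` is a hypothetical function standing in for real production code; the shape of the check is the point, not the function:

```python
# A minimal automated "check": it can only pass or fail.
# `calculate_total` is a hypothetical stand-in for real production code.
def calculate_total(prices):
    return sum(prices)

def check_total():
    # Binary outcome: either the assertion holds, or an AssertionError is raised.
    # When it fails, we learn *that* something is wrong - not *why*, not what the
    # customer actually needed, and nothing about the requirement behind it.
    assert calculate_total([1.50, 2.25]) == 3.75

check_total()  # passes silently; a failure would stop here with no diagnosis
```

Compare that with the question “should the total include tax?” – the check above can never surface that information, however many times it runs green.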

THIS HAS A PLACE! I am not for a second saying automated checks are a bad thing, but like the examples above, they’re at the end of the process (even in TDD, the automated check comes way later than the idea, which is the foundation from which the solution follows), they’re expensive and, on their own, they’re definitely not the best way to bake quality into a development process.
