Darryl Kennedy · Apr 23, 2026 · 4 min read
If you're responsible for getting tech change over the line in retail, the pattern will be familiar: a UAT phase that doubles in length, defects that surface three weeks before go-live, and scope debates that everyone thought were sorted months ago.
When does this usually happen? When testing enters the picture too late!
In most retail tech programmes, it happens after the build – which is precisely when it's most expensive and most disruptive to act on what the testing finds.
The discipline of early testing – involving QA during requirements and design – has been talked about for years, but pressured delivery teams tend to deprioritise it. We've seen time and again that it's a costly trade-off...
Most delivery failures in retail tech trace back to decisions made, or deferred, before any code was written.
By the time issues surface in testing, the team is often weeks deep into a build and changing course is expensive. But the earlier in the cycle a problem is found, the cheaper and faster it is to resolve.
That's the logic behind shift-left testing – getting quality thinking into requirements and design, so that what reaches the build phase is actually ready to be built.
For retail tech teams busy managing integration-heavy platforms, seasonal release windows, and scope pressure from multiple business stakeholders, the case for earlier involvement – and for talking through scenarios up front – is strong.
Bring testers into requirements, not just sprints
When QA is involved while requirements are still being shaped, unclear assumptions and missing edge cases get caught early. For retail platforms with complex promotion logic, multi-ERP integrations, and layered fulfilment rules, that scrutiny at the front end prevents a much larger rework conversation later. Testers ask different questions from the ones developers and product owners ask – and they're questions worth asking before the requirements are finalised.
Write testable acceptance criteria before development starts
Engineers, product owners, and QA leads frequently hold different pictures of what 'done' looks like. When you agree on testable acceptance criteria before development begins, the whole team has a shared definition to build toward. You'll find that sprint reviews move faster and commercial stakeholders have something concrete to sign off against.
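As an illustration of what 'testable' can look like – the promotion rule, function name, and figures below are hypothetical, not taken from any real backlog – a criterion such as "a 3-for-2 promotion discounts the cheapest qualifying item" can be written as an executable check before development starts:

```python
# Hypothetical sketch: an acceptance criterion expressed as an executable check.
# apply_three_for_two is a stand-in implementation included only so the example
# runs; in practice the tests are agreed first and the build follows.

def apply_three_for_two(prices):
    """Agreed behaviour: for every three qualifying items, the cheapest is free."""
    free_items = len(prices) // 3
    return sum(prices) - sum(sorted(prices)[:free_items])


def test_three_for_two_discounts_the_cheapest_item():
    # Criterion: a basket of 5.00, 3.00 and 2.00 pays 8.00 in total.
    assert apply_three_for_two([5.00, 3.00, 2.00]) == 8.00


def test_no_discount_below_three_items():
    # Criterion: two items alone never trigger the promotion.
    assert apply_three_for_two([5.00, 3.00]) == 8.00
```

Criteria written in this form give engineers, product owners, and QA leads one shared, checkable definition of 'done' rather than three private ones.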
Set the test strategy before the first ticket is picked up
A test strategy is a set of decisions: what's in scope, where the biggest risks lie, who owns what, which test types apply where. Making those decisions before coding starts means the team has quality goals to build toward. If they're left until later, testing gets shaped by whatever time remains and whatever risks happen to be visible at that point.
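One lightweight way to make those decisions visible – the area names, risk labels, and owners below are purely illustrative, not a prescribed format – is to capture the strategy as data that lives alongside the code and gets reviewed like any other change:

```python
# Illustrative sketch: test strategy decisions captured as versioned data,
# agreed before the first ticket is picked up.
from dataclasses import dataclass, field


@dataclass
class TestArea:
    name: str
    risk: str                      # "high" / "medium" / "low", agreed with stakeholders
    owner: str                     # who is accountable for coverage in this area
    test_types: list[str] = field(default_factory=list)


STRATEGY = [
    TestArea("checkout and payments", "high", "QA lead",
             ["automated regression", "integration", "exploratory"]),
    TestArea("promotions engine", "high", "QA lead",
             ["unit tests on rules", "automated regression"]),
    TestArea("back-office reporting", "low", "product owner",
             ["manual acceptance"]),
]

if __name__ == "__main__":
    # Print the highest-risk areas first so the plan is easy to challenge in review.
    for area in sorted(STRATEGY, key=lambda a: a.risk != "high"):
        print(f"{area.risk.upper():<6} {area.name:<25} {area.owner}: {', '.join(area.test_types)}")
```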
Review requirements and design before writing test cases
In practice, shift-left means reviewing requirements documents for testability, walking through designs with a quality lens, and writing test cases before any build work begins. Issues caught at this stage – a flow that doesn't account for a returns scenario, a data field that behaves differently across integrations – are resolved in a conversation, before anyone has written code against the wrong assumption.
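A test case written at this stage can sit in the repository before the service it targets exists. A minimal sketch, assuming a pytest setup – the module name `returns_service` and its API are invented for illustration:

```python
# Hypothetical sketch: a test drafted during design review, ahead of the build.
# pytest.importorskip skips the test cleanly until the returns service exists,
# so the scenario is recorded as executable intent from day one.
import pytest

returns_service = pytest.importorskip("returns_service")  # illustrative module name


def test_partial_return_recalculates_promotion_discount():
    # Scenario raised in the design walkthrough: returning one item from a
    # 3-for-2 basket should recalculate the discount, not leave it untouched.
    order = returns_service.load_order("ORD-1001")            # illustrative API
    refund = returns_service.process_return(order, sku="SKU-A")
    assert refund.discount_recalculated is True
```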
Identify the high-risk areas and go there first
Spreading testing evenly across everything is a reasonable-sounding approach that can leave the most important things under-tested: browse, basket, checkout, promotions, payments. Prioritising validation effort around these journeys means the most impactful failures are found first – and gives you enough time to fix them.
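One common way to make that prioritisation executable – the marker names and test names below are illustrative – is to tag the revenue-critical journeys so they run first and fail the build loudest:

```python
# Illustrative sketch: revenue-critical journeys tagged so they run first.
# Markers would be registered in pytest configuration, e.g. pyproject.toml:
#   [tool.pytest.ini_options]
#   markers = ["critical: revenue-critical retail journeys"]
import pytest


@pytest.mark.critical
def test_guest_checkout_completes_with_card_payment():
    ...  # browse -> basket -> checkout -> payment happy path


@pytest.mark.critical
def test_basket_promotion_survives_through_to_payment():
    ...  # the discount shown in the basket is the discount actually charged


def test_wishlist_sorting_is_stable():
    ...  # lower-risk journey: still covered, just not first in the queue
```

Running `pytest -m critical` on every change then gives the fastest signal on the journeys that matter most, with the wider suite following behind.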
Make automation decisions upfront
It's easier to build in automation at the start. Automation retrofitted at the end of a programme fits badly and often doesn't get done. In the meantime, manual regression quietly expands until it's holding up every release cycle. Remember too that automation is a design choice, not a default. Fit‑for‑purpose beats coverage metrics every time.
Build testing into the pipeline from the start
A testing phase at the end of a project ends up carrying all the risk created by earlier decisions. By running automated checks on every build, doing structured functional testing throughout each sprint, and maintaining clear feedback loops, the team gets useful insights continuously. This means problems are identified while the work is still fresh in people’s minds, making them much easier to fix.
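As a sketch of what "checks on every build" can mean in practice – the stage names and commands here are illustrative, and most teams would express the same ordering directly in their CI tool's own configuration rather than a script:

```python
# Illustrative build gate: fast, cheap checks first; broader suites after.
# Stopping at the first failure keeps feedback arriving while the work is fresh.
import subprocess
import sys

STAGES = [
    ("unit tests", ["pytest", "-m", "not critical and not regression", "-q"]),
    ("critical journeys", ["pytest", "-m", "critical", "-q"]),
    ("regression", ["pytest", "-m", "regression", "-q"]),
]

for name, command in STAGES:
    print(f"--- {name} ---")
    result = subprocess.run(command)
    if result.returncode != 0:
        sys.exit(result.returncode)   # fail the build at the first broken stage
```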
Distribute ownership across the whole delivery team
If you're really keen to slow a programme down, the best way is to hand finished work to QA at the end of a sprint. Developers own technical validation and unit-level checks. Specialist testers own structured functional regression and cross-system flows. Product owners and commercial stakeholders own acceptance – confirming that what's been built works in the real-world trading scenarios it was designed for. Agreeing clear ownership from the beginning keeps testing moving at pace with delivery.
And just to be super clear...
Shift‑left does not mean skipping UAT
It does not mean more process
In a retail tech programme managing releases against peak trading windows, with legacy integrations to navigate and scope landing from multiple parts of the business, shift-left has practical implications at every stage.
QA is in requirements workshops. The definition of done covers testability alongside functional completion. Automation is scoped as part of the architecture. When the business changes direction – which it does and will – the team can immediately see what that means for existing test coverage and go-live confidence.
Reliable delivery teams tend to share one habit: quality thinking starts on day one.
By the time a feature reaches formal testing, the unknowns have already been worked through and go-live is a planned event, not a scramble.
You know this is the right approach. Most delivery teams do.
But the practical question is how to introduce it into a team already under pressure without adding more friction to the cycle.
The answer is usually incremental. Agree acceptance criteria before the next feature goes into development. Get QA into the next requirements review. Write a handful of test cases before the build starts. Those habits, applied consistently, compound into a different way of working over time.
For teams who want a clearer view of where delivery risk is concentrated – across quality ownership, integration fragility, and test coverage – getting an independent assessment before the next major go-live is worth the time.