Scaled Agile – Don’t forget about UAT

With the continued drive toward agile-based methodologies in recent years, one of the key benefits they bring is the concept of little-and-often releases to production. This is at odds with the traditional waterfall methodology, and with one stage in particular: User Acceptance Testing (UAT).

Whilst little and often is fairly easy to implement in businesses with a small number of products, I’ve worked at a number of client sites that are grappling with scaled agile in order to cope with multiple teams all delivering parts of an enterprise-level service. All have encountered problems, and we have struggled to find examples of similar organisations that have cracked it! Below is a summary of the two different approaches to delivery, along with the pros and cons of each.

Agile vs Waterfall

 

Waterfall

Key features:

  • Infrequent releases, big-bang approach, live dates locked in by the business.
  • Multiple development teams, a single UAT team working alongside end users from the business.
  • Emphasis on manual customer journeys in a live-like environment.

What do we like?

  • Plenty of time to plan test scenarios, data needs and execution schedules.
  • The whole customer life-cycle is proven in real time (e.g. month-end reporting).
  • End users sitting alongside the delivery teams and getting familiar with the solution prior to launch (doubles up as an early training opportunity).

What are we not so keen on?

  • Requirements gathering/design of the whole solution up front often leads to later changes, causing rework for all downstream teams.
  • Delays in code deployments cause downtime for the test team, who are left waiting to begin the test execution schedule. Then, frustratingly, not all the carefully planned scenarios can be executed after all!
  • A mad rush of testing late in the process by a small army of people hell-bent on raising bugs. This takes the bug count from zero to hundreds in a matter of days, which then puts pressure back on the development teams.
  • When code does go live, it is a huge deployment. If it goes wrong, it can go REALLY wrong and be difficult to fix quickly.

 

Scaled Agile

Key features:

  • Frequent releases, driven by the teams.
  • Multiple autonomous scrum teams with shared/dependent products.
  • Emphasis on automated build, deployment and testing processes.
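The automated build, deployment and testing emphasis can be sketched as a simple quality gate that blocks a deployment when any stage fails. This is a minimal illustration only; the stage names and commands are hypothetical stand-ins for whatever your actual CI tool (Jenkins, GitLab CI, etc.) would run:

```python
# Minimal sketch of an automated quality gate in a delivery pipeline.
# Stage names and commands are illustrative, not any real CI tool's API.
import subprocess
import sys

def run_stage(name, cmd):
    """Run one pipeline stage; report pass/fail based on its exit code."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    passed = result.returncode == 0
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return passed

# Each stage is a command that exits non-zero on failure.
stages = [
    ("unit tests", [sys.executable, "-c", "assert 1 + 1 == 2"]),
    ("smoke test", [sys.executable, "-c", "print('service up')"]),
]

# all() short-circuits, so later stages are skipped once one fails.
ok = all(run_stage(name, cmd) for name, cmd in stages)
print("deploy" if ok else "blocked")
```

The point is that the gate, not a person, decides whether the release proceeds, which is what removes the human bottleneck from each of the many small releases.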

What do we like?

  • Lots of small releases de-risks each one, making it easier to locate the root cause of any live issues that do occur and then fix them quickly.
  • Getting an MVP to market and incorporating valuable customer feedback to shape the future backlog.
  • Emphasis on automation at all stages of the process (not just testing), reducing the likelihood of human bottlenecks or error.
  • Emphasis on BDD/TDD in an iterative approach, so test cases are built up front and are less likely to change later.

What are we not so keen on?

  • Aspiration for 100% test automation: is this really possible or desirable? Often, not enough time is factored in for manual testing, which in my experience always adds value.
  • Reliance on unnecessarily complex feature flags/configuration to put code live early and keep it dormant until needed. Will the final combination of live product versions at the point of switch-on actually have been tested beforehand?
  • Do we get the time to test a true E2E customer journey that we would in a traditional UAT phase? Or is it just a series of short standalone tests (using different data sets) executed consecutively?
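On the feature-flag concern above, a quick way to see the testing burden is to enumerate the on/off combinations that could exist at switch-on; the number grows exponentially with the flag count, which is why the "final live combination" so often goes untested. The flag names here are purely hypothetical:

```python
from itertools import product

# Hypothetical feature flags guarding dormant code in a release.
FLAGS = ["new_checkout", "loyalty_points", "gift_wrap"]

def flag_combinations(flags):
    """Enumerate every on/off combination that could be live at switch-on."""
    for values in product([False, True], repeat=len(flags)):
        yield dict(zip(flags, values))

combos = list(flag_combinations(FLAGS))
print(len(combos))  # 2^3 = 8 distinct configurations to consider testing
```

Three flags already mean eight configurations; ten flags mean over a thousand, so teams inevitably test only a handful and hope the switch-on combination is one of them.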

 

Conclusion

These are some of the pros and cons that I have experienced over the years. Although I have read articles regarding UAT still having a place within scaled agile, it is not something I can honestly say I have seen happen on the ground. I have been on sites where there are multiple scrum teams delivering shippable products that have undergone extensive siloed functional testing, but have not been given the time for what I would consider an adequate UAT equivalent phase. This has led to ticking time-bombs of dormant bugs that only kick in once user journeys have played out in real time on the live estate.

In order to gain the benefits of scaled agile, we have had to be more proactive in a number of areas so that we can build quality into the delivery process as early as possible. We need to instil the best parts of the traditional requirements, design, build and testing roles into our agile engineers. This will help offset the reduced time set aside for manual testing at the end of the process. One such example is investing more in the skills of our teams than in the past: we need broader skill-sets and a greater ability to adapt. This is not just about formal training and qualifications, either; it can also be achieved by setting aside time in each sprint for engineers to do their own research on the latest tools and industry trends. Then, if they find something of benefit, we need to allow them the time to get skilled up and create a proof of concept.

Another big challenge I’ve come across is changing the mindset of the leadership team. Sometimes they will want their teams to embrace new concepts without looking inward at their own behaviours. Often the most difficult conversations are with those who need to alter their command-and-control mindset (which they falsely believe to be empowering). Can a company change the way it manages budgets, constructs business cases, measures ROI, launches products, tracks staffing costs, and so on?

 

I would love to hear from others who have encountered the good and the bad of both, and the subsequent efforts made to resolve them. Shared experiences of successes and failures will help us to be more likely to succeed in whatever approach we adopt in the future.

 

Alex Kaiser (Spike95 – Principal Test Automation Consultant) has enjoyed a wide-ranging career in testing, spanning 20+ years. He has experienced all the joys – private and public sector, huge programmes and small startups, every methodology and each new wave of roles. He has been part of some fantastic groundbreaking achievements, and also some absolute stinkers that are best hidden at the end of his CV!

Spike95 is a technical testing consultancy. We make sure software is reliable, scalable and delivers the experience you and your customers expect.