Repairing a Broken Test Automation Solution: Part Two

Welcome back to part two of our ‘Repairing a Broken Test Automation Solution’ series. As always, take some time to read part one first to get a complete overview of the first three key test automation requirements – audit trail, technical and scalability.

In our previous article, we highlighted what a ‘good’ automation solution looks like, regardless of the tools and technology your current implementation has in place.

If your code is accurate, fit for purpose, scalable and your test cases are traceable, then you’ve already taken a step forward towards optimising technical efficiency and reducing the number of issues you will face in the future.

Now, we’ll look at the last three major areas to help keep you on the go…

4. Quality Assurance & Training

Having lots of automated tests is great, but if we don’t know what they test, their value is lost – especially if someone has to mine through logs and manually create results and reporting dashboards.

Automated tests should create reports and dashboards dynamically, and it’s even possible to build a solution that updates a centralised report in real time – enabling us to monitor the progress of the pack as it’s running.
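As an illustration, a pack could append each result to a shared file as tests finish, and a dashboard could tail that file to refresh in near real time. This is a minimal sketch rather than a specific tool’s API – the file path, field names and `record_result` helper are all assumptions:

```python
import json
import time

# Hypothetical live-report writer: each test appends one JSON line as it
# finishes, so a dashboard tailing this file can refresh while the pack runs.
REPORT_PATH = "live_report.jsonl"  # assumed shared/central location

def record_result(test_name, status, duration_s):
    entry = {
        "test": test_name,
        "status": status,
        "duration_s": round(duration_s, 3),
        "finished_at": time.time(),
    }
    with open(REPORT_PATH, "a") as fh:
        fh.write(json.dumps(entry) + "\n")

record_result("test_login", "passed", 1.274)
```

Because each entry is a self-describing line, the report can be consumed incrementally without waiting for the whole pack to finish.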

We must have clear processes in place in order to know:

  • What everyone is working on
  • Which tests are broken and need maintaining
  • How we get test automation code live
  • Which skills we have in the team and which we need
  • What new functionality needs to be automated as it’s being delivered by development
  • The sign-off process
  • The scope and feasibility of automated tests

Extremely complicated automation solutions may be lovely for the ego of the person writing them, but how will the team maintain them later on? In terms of training and handover, who will run the pack after it’s built?

On the flip side, solutions are sometimes complicated because the person writing them has missed a much easier way to implement them in the first place.

Don’t create a key-person dependency where the company relies on just one or two people! This is a team effort which everyone should be involved in, and part of our job is to help developers, QAs and testers build their capabilities for prompt decision making.

5. Repeatability

Automated tests must be as consistent as possible, or we are creating a problem at the outset. Our tests should be reliable, so we know they function the same way today as they did yesterday – mapping the expected result and ensuring each test adheres to it.

We may need to be flexible with some data due to system constraints, but the outcome must be predictable. Dynamic synchronisation is key if we are to execute tests in multiple environments, where some will be faster or slower than others. Don’t use hardcoded waits: they slow everything down, and the chosen durations are arbitrary guesses that will be wrong somewhere.
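A hardcoded `sleep(5)` can be replaced with a polling wait that returns as soon as the condition holds. This is a generic sketch in plain Python (UI tools such as Selenium’s `WebDriverWait` provide the same idea); `wait_until` and the simulated element are illustrative, not a real framework API:

```python
import itertools
import time

def wait_until(condition, timeout=5.0, poll_interval=0.05):
    """Poll `condition` until it returns a truthy value, or fail at `timeout`.

    Fast environments proceed the moment the condition holds; slow ones get
    the full timeout – unlike a fixed sleep, which is wrong in both cases.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(poll_interval)

# Simulate an element that only becomes available on the fourth poll.
polls = itertools.count()
element = wait_until(lambda: "ready" if next(polls) >= 3 else None)
```

The same wait runs in milliseconds on a fast environment and tolerates a slow one, which is exactly what a fixed sleep cannot do.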

When tests don’t sync, they are flaky and often get removed from CI as they can’t be trusted. In this scenario, find out why and fix them. Don’t just add an automatic retry option as you’ll only be masking the problem.

If tests are flaky in a shared part of the application, it’s likely the flakiness will impact more than just a single test if we have modularised/reusable code.

A lot of consideration must be given to the test data the automated tests are using. Different test environments will likely have different underlying data in them, so think of a way to manage this problem dynamically.

Tests must be a single atomic unit, meaning they are self-sufficient and not reliant on external factors. In many cases, making sure each test manages its own pre-requisites by creating the data it needs will save time and a lot of maintenance issues.
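Sketched with Python’s built-in `unittest`, an atomic test creates its own pre-requisite data in `setUp` rather than assuming a record already exists in the environment. `create_customer` is a stand-in for whatever data-setup API your application provides:

```python
import unittest
import uuid

def create_customer():
    # Stand-in for a real data-setup call (API request, DB seed, etc.):
    # returns a fresh customer so no two tests compete for the same record.
    return {"id": f"cust-{uuid.uuid4().hex[:8]}", "orders": []}

class OrderTests(unittest.TestCase):
    def setUp(self):
        # Each test gets its own customer – the test is self-sufficient.
        self.customer = create_customer()

    def test_new_customer_has_no_orders(self):
        self.assertEqual(self.customer["orders"], [])
```

Because nothing here depends on pre-existing environment state, the test gives the same answer on any environment, in any order, on any run.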

Automated tests will burn through a lot of data, so have a strategy to ensure it is replenished.

6. Execution & Operation

Tests must be robust and run without manual intervention – either at the click of a button or from a CI trigger – with the only exception being environmental configuration to point at a different test environment.

Ensure flaky tests are resolved, and have a process to run them regularly outside of CI until they work as expected. Otherwise, your CI process will regularly fail when there are no application issues.

Test packs must also execute as quickly as possible. Unit/API tests are extremely quick, but UI cases are much slower. If you have built the pack to be self-contained, you can use more servers/grids to execute many threads in parallel and reduce the overall time.
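A minimal sketch of that idea using Python’s standard library – the ‘tests’ here are stand-ins that just sleep, but because each is self-contained they can be fanned out across workers safely:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def run_test(name):
    # Stand-in for a real self-contained test: no shared state, no ordering.
    time.sleep(0.1)
    return (name, "passed")

tests = [f"test_{i}" for i in range(8)]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_test, tests))
elapsed = time.monotonic() - start  # roughly a quarter of the sequential time
```

Real UI packs apply the same principle with a grid of browsers instead of a thread pool, but only if the tests don’t share data or depend on execution order.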

Also look at tagging options to see if you need to execute all tests constantly or whether you can use a subset and then execute larger packs out of hours.
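Tagging can be as simple as a mapping from test to labels, with CI selecting a quick subset on every commit and scheduling the full pack overnight. The tags and test names below are invented for illustration:

```python
# Hypothetical tag map: 'smoke' runs on every commit, 'regression' out of hours.
TESTS = {
    "test_login":         {"smoke", "regression"},
    "test_checkout":      {"smoke", "regression"},
    "test_bulk_import":   {"regression"},
    "test_report_export": {"regression"},
}

def select(tag):
    """Return the names of all tests carrying the given tag."""
    return sorted(name for name, tags in TESTS.items() if tag in tags)

smoke_pack = select("smoke")
```

Most test runners (JUnit categories, pytest markers, Cucumber tags) offer this filtering out of the box; the point is to decide the subsets deliberately rather than always running everything.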

If test packs are running slowly, then debugging and maintenance are needed to see what and where the problems are. Poor sync points are the most likely cause.

In other circumstances, a very slow test environment means your pack will be slow too. This is because UI tests follow what a user does and therefore cannot execute any faster than the application allows.

Tests must be accurate in what they are doing, and this must be obvious from the dashboards. If your tests/scenarios have poor or ambiguous titles, reporting and result analysis will be ineffective.

Automation is often a black box: everyone is happy while the tests are running fine, but everyone gets dragged into root cause analysis when something is missed and impacts live. Make reviewing the test pack for accuracy and scope everyone’s responsibility so it doesn’t become a black-box exercise.

Nobody wants a spanner thrown into the works, which is why Spike95 is committed to supporting and helping our clients with quick, streamlined solutions that accelerate delivery and improve performance.

Get in touch with us today by sending us your details below and let us get your test automation solution back on track!