Quickly starting with Test Automation, part 2 – Intermediate Automation tips

Welcome back to our ‘Quickly starting with Test Automation’ series, part 2 – Intermediate Automation tips. If you haven’t had a chance to check out part 1, we recommend you do so if you’re a newbie to the industry, or simply looking to brush up on the basics of building an effective test automation solution.

The recent rise of web-based applications in the marketplace requires an influx of skilled, technically-oriented developers, QAs and testers at the helm of a business in order to steer its systems and deliveries towards success. Contending with business requirements that change on a daily basis is no mean feat, and for those making the shift from manual testing, it can be even harder to get up to speed with the latest tools, technologies and methodologies.

Having an automated process in place is proven to save time, reduce costs and carry out critical scripted tests consistently – allowing your team to focus on the more complex tasks at hand.

If you’re ready to take your testing skills to the next level, then here are some of our intermediate automation tips to help point you in the right direction!


Selecting the right test automation tool(s)

Selecting the appropriate automated testing tool(s) for your company or project can be a complex and time-consuming process. There are now dozens of options on the market that support different technologies, offer a range of features and functionality, integrate with other tool chains, and come with numerous cost models.

It’s imperative to have a set of requirements ready before you start looking. Do this by creating a matrix and grouping your requirements logically, including business drivers as well as technical ones.

These groupings could comprise the following:

  • Ease of use
  • What support is available
  • The reputation of the vendor/tool
  • What types of demo/trial licenses are available
  • Cost – this should include all costs, e.g. licence, subscription fee, maintenance, support, hardware, training and consultancy
  • Supported environments/technologies – do you have plans to include legacy or bespoke applications, or are you web only?
  • Specific features or functionality you may need, e.g. a script recorder, integration at the API layer etc.
  • Integration – how will the automated test tool interact with the tools/solutions you have in place or are considering?
  • Delivery methodology – are you waterfall, agile, or do you use BDD? Remember to future-proof this if you are planning a transformation
  • Skills – what skills are available in the team now, and how will you build and maintain them over time?

To score the requirements, use consistent responses, e.g. exceeds requirement, meets requirement, partially meets requirement, fails requirement.
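As a rough sketch, that scoring can be turned into a simple weighted matrix. The tool names, requirements and weights below are purely illustrative – substitute your own groupings and priorities:

```python
# Map each consistent response to a numeric score.
SCORES = {"exceeds": 3, "meets": 2, "partially meets": 1, "fails": 0}

# Each requirement carries a weight reflecting how important it is to you.
requirements = {
    "Ease of use": 2,
    "Supported technologies": 3,
    "Cost": 3,
    "CI integration": 2,
}

# How each candidate tool was rated against each requirement (illustrative).
ratings = {
    "Tool A": {"Ease of use": "exceeds", "Supported technologies": "meets",
               "Cost": "partially meets", "CI integration": "meets"},
    "Tool B": {"Ease of use": "meets", "Supported technologies": "exceeds",
               "Cost": "meets", "CI integration": "partially meets"},
}

def total_score(tool: str) -> int:
    """Weighted sum of the tool's ratings across all requirements."""
    return sum(SCORES[ratings[tool][req]] * weight
               for req, weight in requirements.items())

for tool in ratings:
    print(tool, total_score(tool))
```

Keeping the weights separate from the ratings lets you re-run the comparison as business priorities shift, without re-scoring every tool.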

Selenium is far and away the most popular test automation solution on the market today, but it only supports web technologies. Cypress.io is also beginning to gain a lot of traction, but remember to check your requirements here, as Selenium and Cypress are not a direct comparison – they are designed for different purposes.

Appium leads the way for mobile device testing, and there are a number of other solutions on the market which essentially provide a user interface around Selenium/Appium, such as TestProject and Katalon Studio.

Record and playback

Otherwise known as codeless automation, “record and playback” allows you to hit a record button and complete your business process; once you’ve stopped recording, it creates a script of the steps you followed, along with the inputs you entered into the application.

When learning something for the first time, this is an ideal, lightweight way for those with little to no programming knowledge to record basic business process scripts and export them to your chosen development language.

However, we cannot stress enough that these recorded scripts are not the end product. Recorded scripts contain hardcoded values for inputs, URLs and objects in the system the script was recorded against, all of which will change or become invalid over time. For example, if you record a script to register a user, it’s highly likely that the script will not play back successfully, as the username/email address will now exist in the database.
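The registration example above has a simple fix: generate a unique value per run instead of replaying the recorded one. A minimal sketch (the element locator in the comments is hypothetical, standing in for whatever your recorded script produced):

```python
import uuid

def unique_email(domain: str = "example.com") -> str:
    """Return an effectively unique email address for each test run."""
    return f"test-{uuid.uuid4().hex[:12]}@{domain}"

email = unique_email()

# The recorded, hardcoded step:
#   driver.find_element(By.ID, "email").send_keys("tester@example.com")
# becomes parameterised:
#   driver.find_element(By.ID, "email").send_keys(email)
```

The same idea applies to URLs and environment names: pull them from configuration rather than leaving the recorded values in place.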

Experienced test automation engineers like ourselves are unlikely to rely on this functionality, but we still occasionally use it to create skeleton scripts that are then embellished further. This is particularly useful during proofs of concept to ensure the application is compatible with the test tool under evaluation.

Test data planning & challenges

Every automated test script or scenario you create must be a single atomic unit. This means that it needs to be capable of running unattended and unaided in any test environment you point the test pack at.

Plenty of thought needs to go into the design of not only the script, but the data it will need or consume. When executing tests manually, we have dependencies – searching for a customer or product that exists in the database, for example. Test automation is exactly the same, but we need to be explicit about this kind of information and its source.

One of the first things we do is sit with testers and observe their current processes:

  • What exactly do they do at every step of the way, including steps outside of the application they are testing and steps not included in the manual test case they are running?
  • What additional tools do they use, e.g. do they use SQL to access a database?
  • Where do they source data from?
  • What are they checking?

You will quickly realise that people perform many tasks when executing manual tests that you probably didn’t know they were doing, such as spending 10 to 15 minutes just finding data in the test environment.

A robust test automation solution needs defined data. One approach we’ve seen in several organisations is an Excel sheet, created beforehand by someone, that the pack uses as a lookup. This may work for a single execution cycle, but when tests need to be executed a second or third time, or regularly in a CI environment, it is clearly a constraint.

Automated tests should not need that level of human involvement before each and every run, or their effectiveness is drastically diminished. Solutions evolve over time, and there are always ways to make them more robust and have them execute faster and more consistently.

When looking at other data challenges, these can consist of:

  • Insufficient test data coverage
  • Outdated scripts and documentation
  • Unusable environment data for testers to manipulate

Think about the clear and obvious data challenges based on what you know about the system you are testing and the business processes you’re familiar with. Adopting simple approaches, like injecting prerequisite test data via an API or SQL, is a good starting point; once you have a few of these in place, you can scale them out to the wider test pack.
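As a minimal sketch of the SQL injection approach, the test can seed the exact record it needs before running. The table and column names here are illustrative, and we use an in-memory SQLite database for the example; in practice you would point this at your test environment’s database (or an API):

```python
import sqlite3

def seed_customer(conn: sqlite3.Connection, name: str, email: str) -> int:
    """Insert the customer this test needs and return its id."""
    cur = conn.execute(
        "INSERT INTO customers (name, email) VALUES (?, ?)", (name, email))
    conn.commit()
    return cur.lastrowid

# In-memory stand-in for the test environment's database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

customer_id = seed_customer(conn, "Test Customer", "test.customer@example.com")
# The test can now search for this known customer instead of spending
# 10 to 15 minutes hunting for suitable data.
```

Because the test creates its own data, it satisfies the atomic-unit rule above: it can run unattended, in any environment, as many times as CI demands.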

Applying robust synchronisation with your application

Ensuring our automated tests keep pace with the application we are testing is critical. As we talked about in our previous blog, automated test scripts are not smart; they will only do what we ask them to do. So if we don’t provide an adequate means of synchronisation, our tests will blindly carry on and become brittle, with a high probability of failure.

Every test automation tool on the market provides a means of creating the synchronisation we need. If it’s a UI test, then we probably need to look for visual cues telling us that the application is ready for us to move to the next step after some server processing has completed.

Whatever that cue is, we need to work out the most effective way to tell our script to wait for this point. Providing a hardcoded wait or sleep is the absolute last resort if we have no other means to do this dynamically.

Hardcoded values are arbitrary; as systems change, and as code moves between test environments, a wait of 5 seconds that worked fine during development is unlikely to be sustainable.

We use dynamic, conditional waits so the script can carry on sooner if the cue appears or wait the maximum allotted time if not.

If something has gone wrong and the cue never appears, then we don’t want the script to wait for an unnecessary amount of time and slow down the other tests. Most automation tools will provide the option to have a global timeout and then specific ones at the appropriate points.
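A dynamic, conditional wait boils down to a polling loop with a cap. The sketch below is a generic version of the principle (Selenium’s own WebDriverWait works the same way for UI cues); the default timeout and the example cue are illustrative:

```python
import time

DEFAULT_TIMEOUT = 10  # global timeout in seconds, tunable per environment

def wait_until(condition, timeout: float = DEFAULT_TIMEOUT,
               poll_interval: float = 0.2) -> bool:
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Returns True as soon as the cue appears (so the script carries on
    sooner), or False once the maximum allotted time is used up.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False

# Example cue: stands in for checking a real visual element or data state.
state = {"ready": True}
ready = wait_until(lambda: state["ready"], timeout=2)
```

The same helper suits back-end cues too – the condition can just as easily check for a database row or a file appearing as for an on-screen element.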

When automating back-end systems and processes, the visual cues won’t be there, so we need to look at the other options available to us. Performing the manual tests a number of times helps us understand what happens and how to create the synchronisation.

When creating back-end synchronisation, we must still remember to apply a conditional boundary to ensure the script only waits for the correct amount of time.

Changing our sync points by small amounts can make a huge difference to the overall execution time when running large suites of tests. However, it’s a double-edged sword: setting a threshold too low will likely cause too many false failures by not allowing the tests to wait long enough. Having global control of your sync points allows you to tune the test packs accordingly and find the correct thresholds.

Introducing logging and test reporting

Every automated test script you create needs logging included, to ensure you can debug any issues that occur and understand what went wrong and why. A test automation framework can simplify this process, which we’ll cover in more detail next week.

Remember, when the solution is complete your tests will be running unattended so it’s imperative to use these logs to track down issues as quickly as possible.

You can log every single step the test performs, but this can create too much noise and slow you down. As a bare minimum, we recommend logging all of the information needed to replicate the test manually, as this is one of the first steps you’ll likely perform when debugging.

Automated tests can fail for a variety of reasons – data, environment, synchronisation, objects, a defect, or something random like a connectivity issue – so a quick manual check with the data the test used will tell you pretty quickly which it was. You’ll also use these logs in conjunction with a screenshot the test captured when it failed, to further assist the debugging process.
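As a minimal sketch of that bare minimum, the test can log the data it ran with up front, and on failure log the failing step alongside the same data. The test name, field names and failure here are all illustrative:

```python
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(name)s %(message)s")
log = logging.getLogger("register_user_test")

def failure_message(step: str, data: dict) -> str:
    """Build a message with everything needed to replicate the test by hand."""
    return f"Step '{step}' failed; replicate manually with data: {data}"

test_data = {"email": "test-4f2a@example.com", "environment": "UAT2"}
log.info("Starting test with data: %s", test_data)

try:
    raise RuntimeError("Register button not found")  # stands in for a real failure
except RuntimeError:
    # log.exception also records the stack trace for debugging.
    log.exception(failure_message("submit registration", test_data))
    # In a UI test we would also capture a screenshot at this point.
```

With the data and failing step in the log, replaying the scenario manually – the first debugging step mentioned above – takes minutes rather than guesswork.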

Spike95 are here to help

With our level of technical expertise in test automation solutions, you can rest assured that your software will be reliable and scalable, and will go above and beyond in delivering the results you and your customers expect.

Click here to download a PDF copy of part 2 of our Quickly Starting Test Automation guide with our compliments. We hope our intermediate automation tips were useful.

Watch out for part 3 coming soon!

For a quick chat on how we can help your business, or more automation advice get in touch here.