The future of software testing is automated AI

Key takeaways

  • Quality Assurance (QA) is critical to ensuring software and technology are kept up to date without breaking.
  • Testing has traditionally been a manual process: writing scripts, running them, and fixing the bugs they identify.
  • AI automation promises a future where QA testing is quicker and easier, with greater coverage and more stability.

We’ve all been there. Plugging the new Blu-ray player into the TV — exactly as the manual directed us to — suddenly silences the sound output from the Hi-Fi system. Downloading a new game to the laptop stops the webcam from working, or worse, the whole computer from turning on. A scheduled update to your phone stops the Wi-Fi from connecting.

In the general scheme of things, such problems are inconvenient and annoying to individuals. But to organisations, a computer system breaking, software stuttering or website crashing can mean more than an hour or two fiddling with wires behind the stereo. Critical infrastructure can go down, hackers can breach systems, customer information can be exposed, financial transactions thwarted or brand loyalty lost.

Downtime in any form is costly, which is why IT departments typically employ quality assurance (QA) engineers and testers. But to date, testing has been slow, labour intensive and constrained by resources. Balancing the critical nature of the work with the manual hours involved is an endless task.

Could AI provide the answer?

Testing time and testing times

In August 2019, British Airways experienced a systems failure that led to hundreds of flights being cancelled or delayed when online check-in systems went down.1 In 2016, Nest thermostats received a firmware update containing a bug which caused the devices to stop responding and the temperature in owners’ homes to subsequently plummet.2 In 2018, an IT migration failure at TSB Bank resulted in 80,000 customers switching to competitors at an overall cost to the company of £330 million.3

In short, testing is important.

Even if your business is lucky enough not to suffer a massive technology meltdown causing PR nightmares, the lost productivity from employees not being able to log in to core systems, or having to create time-intensive workarounds to complete daily work, should be enough to keep you up at night.

There are many reasons why systems fail. Bugs are not always due to badly written software, as their somewhat negative entomological etymology implies; they are often simply the result of the complex, interconnected IT systems that businesses now work within.*

For this reason, a range of assets need to be quality tested. These range from websites and apps to software performance, user password security, APIs, technology stacks, internal networks, credentialing and the cyber security of systems, to name just a few.

The trouble with testing

To test systems and software, QA testers write scripts: sets of instructions the program follows to check that everything works the way it should. For example, a script could test a customer login on a website by doing the things a user would do, such as clicking the ‘login’ prompt, inputting a username and clicking ‘enter’.
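The login example above can be sketched as a few lines of code. In practice a script like this would drive a real browser; here, the `check_login` function and `VALID_USERS` table are purely illustrative stand-ins for the website being tested.

```python
# Illustrative stand-in for the system under test. A real script would
# interact with the live website rather than an in-memory dictionary.
VALID_USERS = {"alice": "s3cret"}

def check_login(username, password):
    """Simulate the flow a tester scripts: click 'login', type the
    username and password, press 'enter', observe the result."""
    return VALID_USERS.get(username) == password

def test_login():
    # The script encodes what *should* happen for each scenario.
    assert check_login("alice", "s3cret") is True    # valid credentials
    assert check_login("alice", "wrong") is False    # bad password
    assert check_login("mallory", "s3cret") is False # unknown user
    return "all login checks passed"

print(test_login())
```

Each scenario a tester thinks of becomes another assertion like the ones above, which is why coverage grows only as fast as the hours spent writing them.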

As you can imagine, there are hundreds of thousands of different things that need to be tested, and with every alteration you need to test them all again. In some industries PwC Australia works in, 85-95 percent of scripts are written manually, one at a time. With the average script taking up to an hour to create and debug,4 it’s no wonder that up to 40 percent of IT budgets were once spent on testing.5

The cost of testing less, or not at all, therefore escalates rapidly. If the cost of finding and fixing a bug was (for example) $100 while in development, it would be $1,500 when found in testing, and upwards of $10,000 when found by a customer.6

Additionally, testing is often the last thing the IT team does, and if defects or errors are found, work tends to stop (on everything else) while they are fixed. Operational flow is disrupted, and teams find themselves unable to cope with the maintenance and documentation of scripts. In practice, this means testing is often insufficient on complex applications and infrastructure.

As an added sting in the tail, when things don’t work, people stop using them. That $20 million spent on a new and improved platform to create business efficiency and competitive edge? Down the drain.

Is AI the answer?

Testing can be automated with the use of automation tools. A tester writes a script to test functionality, and once it’s optimised and proven to work, that script can be recycled to retest the same functions when further changes are made. In essence, this removes much of the repetitive coding work; however, scripts still need to be maintained, and coverage is still limited to what a tester thinks to (or has time to) test.
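The recycling described above usually means separating the test cases from the script that runs them, so the same suite can be replayed after every release at no extra cost. A minimal sketch, in which the `discount` function is a hypothetical system under test:

```python
# Hypothetical system under test: a price-discounting function.
def discount(price, pct):
    return round(price * (1 - pct / 100), 2)

# The regression suite lives in data, not code, so re-running it after
# every change is free; only new scenarios need new entries.
REGRESSION_CASES = [
    ((100.0, 10), 90.0),   # 10% off $100
    ((59.99, 0), 59.99),   # no discount leaves the price unchanged
    ((200.0, 50), 100.0),  # half price
]

def run_regression():
    """Replay every recorded case and collect any mismatches."""
    failures = []
    for args, expected in REGRESSION_CASES:
        got = discount(*args)
        if got != expected:
            failures.append((args, expected, got))
    return failures

print("failures:", run_regression())
```

The limitation the article notes is visible here: the suite only ever checks the three scenarios someone thought to record.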

In and of itself, this is a step up from manual testing, but 65 percent of respondents to the 2019-20 World Quality Report say automation (and the maintenance required to update scripts) can’t keep up with the amount of change between application releases.7 This is where artificial intelligence (AI) testing really proves its worth.

AI-driven testing is much faster and some applications can create test scripts autonomously with machine learning to generate thousands of scripts in a matter of minutes.8 Thousands of items and user scenarios can be tested concurrently, eliminating the countless hours lost in an approach limited by human capacity.

Using machine learning, AI test automation generates its own scripts (rendering the work ‘codeless’) and rigorously tests the functionality, performance and security of systems by crawling them for functional changes and adapting code as needed. The tool can learn what makes a good test and continue to learn by observing the behaviour of users to create even better, more targeted ones. It will fix or ‘self-heal’ problems, and prioritise tests (and the most critical parts of them) as the software changes.9
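The tools referenced above are proprietary, but the core idea of machine-generated tests can be hinted at with a toy property-based sketch: instead of hand-writing each case, generate many random inputs and check invariants that should always hold. Everything below is illustrative; real AI tools go much further (learning from user behaviour, self-healing selectors), but the gain in coverage per unit of human effort is the same.

```python
import random

# Hypothetical system under test.
def sort_numbers(xs):
    return sorted(xs)

def generated_tests(n_cases=1000, seed=42):
    """Toy stand-in for machine-generated testing: random inputs are
    checked against general properties, not hand-written answers."""
    rng = random.Random(seed)
    for _ in range(n_cases):
        xs = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 20))]
        out = sort_numbers(xs)
        # Property 1: every element is <= its successor (ordered output).
        assert all(a <= b for a, b in zip(out, out[1:]))
        # Property 2: the output is a permutation of the input.
        assert sorted(xs) == out and len(out) == len(xs)
    return n_cases

print("ran", generated_tests(), "generated cases")
```

A thousand cases run in well under a second here; writing them by hand, at the rates quoted earlier, would take weeks.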

This maintainability shouldn’t be underestimated, given the countless hours QA testers lose rechecking code after every minor change. The increasing focus on regulation and policy in today’s business landscape means the ability to auto-adapt to changes without breaking software functionality will become increasingly important.

These abilities, along with its speed and dexterity, mean greater testing coverage. AI testing tools identify bugs and address them by themselves, without regular human intervention or the effort and resources needed to manually code, run or maintain scripts. This means IT departments can instead focus on analysis, improved quality and usability. Fewer bugs translate to more stable technology and software, with less risk and the ability to get releases out to market faster.

Money, time, effort and stability

Is AI the way to go to balance the need for testing with resourcing constraints? We have used AI automation in our work, and the benefits have been immense. With one client we delivered a 300 percent increase in test case coverage at up to 80 percent less cost, developed in a fraction of the time standard automated testing takes.

When one glitch can cripple an organisation, the need for QA testing must be prioritised earlier in technology development and implementation. It is predicted that by 2024 three quarters of large enterprises will be using AI-enabled test automation tools to support continuous testing.10 With 21 percent of the World Quality Report survey participants saying they are trialling AI tools,11 it’s certain that AI automated testing is on track for more than a passing grade.

*For example, fixes and workarounds laid on top of ‘out of the box’ applications often mean updates don’t work as they should. Legacy technology doesn’t always play well with its newer siblings. Software needs to work on multiple platforms and operating systems. New processes added to a website can interfere with existing ones. Not to mention, customers have a knack for doing things that were never expected.

For further information on the benefits of digitalisation, visit PwC Australia’s Digital Transformation site.



References

  1. https://www.theguardian.com/business/2019/aug/07/british-airways-it-glitch-causes-disruption-for-passengers-delays
  2. https://www.theregister.com/2016/01/14/nest_foul_up/
  3. https://www.independent.co.uk/news/business/news/tsb-it-failure-cost-compensation-customers-switch-current-account-a8757821.html
  4. Source: PwC client work
  5. https://www.capgemini.com/2018/09/qa-budget-trends-analysis
  6. https://www.researchgate.net/figure/IBM-System-Science-Institute-Relative-Cost-of-Fixing-Defects_fig1_255965523
  7. https://content.microfocus.com/world-quality-report-quality-driven-development-tb/world-quality-report-2019-2020
  8. https://www.prnewswire.com/news-releases/innominds-partners-with-appvance-to-offer-ai-driven-quality-engineering-services-301048145.html
  9. https://simpleprogrammer.com/ai-test-automation/
  10. https://info.advsyscon.com/it-automation-blog/gartner-it-automation
  11. https://content.microfocus.com/world-quality-report-quality-driven-development-tb/world-quality-report-2019-2020