By John P. Desmond, AI Trends Editor
Before AI, software testing was a crucial step in the software development life cycle. After AI, it still is. But now AI can help with the testing.
AI and machine learning are being applied to software testing, defining a new era that makes the testing process faster and more accurate, according to a recent account from AZ Big Media.
The authors outline the following benefits of AI applied to software testing:
Improved automation testing. Quality assurance engineers spend time performing tests to ensure new code does not destabilize existing, functioning code. As more features and functions are added, more code needs to be tested, potentially overwhelming QA engineers. Manual testing becomes impractical.
Tools to automate testing can run tests repeatedly over an extended period. The addition of AI functions to these tools is powerful. Machine learning techniques help the AI testing bots evolve with changes in the code, learning and adapting to new functions. When they detect modifications to the code, they can determine whether a change is a bug or a new feature. The AI can also identify minor bugs that can be handled on a case-by-case basis, speeding up the process even more.
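The bug-or-new-feature determination rests on noticing that output has drifted from an approved baseline. Below is a minimal sketch of that underlying check, assuming a simple JSON snapshot scheme; the function and directory names are hypothetical and not taken from any tool mentioned in this article:

```python
import json
from pathlib import Path

SNAPSHOT_DIR = Path("snapshots")  # baseline outputs from the last approved build

def check_against_snapshot(test_name: str, output: dict) -> str:
    """Compare a test's output to its stored baseline.

    Returns "pass", "new" (no baseline existed yet), or "changed".
    A change is either a regression or an intended new behavior;
    an ML-based tool would try to classify which one it is, while
    this naive version leaves that call to a human reviewer.
    """
    snapshot_file = SNAPSHOT_DIR / f"{test_name}.json"
    if not snapshot_file.exists():
        SNAPSHOT_DIR.mkdir(exist_ok=True)
        snapshot_file.write_text(json.dumps(output, indent=2, sort_keys=True))
        return "new"
    baseline = json.loads(snapshot_file.read_text())
    return "pass" if baseline == output else "changed"
```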
Assistance in API testing, which developers use to evaluate the quality of interactions between different programs communicating with servers, databases, and other components. The testing ensures that requests are processed successfully, that the connection is stable, and that the user gets the correct output.
The addition of AI to this process helps to analyze the functionality of connected applications and create test cases. The AI is capable of analyzing large data sets to identify potentially risky areas of the code.
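A conventional API test of the kind being augmented here verifies exactly those properties: the request succeeds, the connection holds, and the payload is correct. A minimal sketch using pytest conventions and the requests library, with a placeholder endpoint and placeholder response fields:

```python
import requests

BASE_URL = "https://api.example.com"  # placeholder endpoint

def test_get_user_returns_correct_output():
    # The request should be processed successfully over a stable connection.
    resp = requests.get(f"{BASE_URL}/users/42", timeout=5)
    assert resp.status_code == 200
    # The user should get the correct output.
    body = resp.json()
    assert body["id"] == 42
    assert "email" in body
```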
QA Engineers Will Use Different Tools and Expertise to Test AI Apps
As AI moves into testing, the tools used by QA engineers to perform testing will change. In an account in TechBeacon, author Paul Merrill relates an anecdote from Jason Arbon, CEO and founder of test.ai, a company that uses AI to test mobile apps. Arbon also worked at Google and Microsoft as a developer and tester, and he co-authored the book How Google Tests Software (2012).
Arbon tells his kids about the old days when he had a car with manual window cranks, and they laugh. Soon, QA engineers will be laughing at the notion of selecting, managing, and driving systems under test (SUTs) themselves. “AI will do it faster, better, and cheaper,” Merrill stated.
test.ai offers bots that explore an application, interact with it, and extract screens, elements and paths. The service then generates an AI-based model for testing, which crawls the application under test on a schedule determined by the customer. On the site is the statement, “Go Beyond Legacy Software Test Automation Tools.”
The founders of Applitools, which offers a test automation platform powered by what it calls “Visual AI,” describe a test infrastructure that needs to support expected test results drawn from the same data that trains the decision-making AI. “This varies greatly from our current work with systems under test,” stated Merrill, a principal at Beaufort Fairmont, a software testing consultancy based in Cary, N.C.
He describes the experience of Angie Jones, a former senior software engineer in test at Twitter, writing in a 2017 article titled “Test Automation for Machine Learning: An Experience Report.” Jones described how she systematically isolated the learning algorithms of the system from the system itself, controlling the data they were given in order to expose how the system learns and what it concludes from that data. Jones is now senior director of developer relations at Applitools.
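Her report does not specify the learner involved, but the pattern can be sketched with a stand-in scikit-learn classifier: isolate the learning component, train it on data the tester fully controls, then assert on what it concludes.

```python
from sklearn.tree import DecisionTreeClassifier

def test_model_learns_from_controlled_data():
    # Isolate the learning algorithm from the rest of the system and
    # train it on data the tester fully controls.
    features = [[0, 0], [0, 1], [1, 0], [1, 1]]
    labels = [0, 0, 0, 1]  # label 1 only when both signals are present
    model = DecisionTreeClassifier(random_state=0)
    model.fit(features, labels)
    # Assert on what the system concludes from the data it was given.
    assert model.predict([[1, 1]])[0] == 1
    assert model.predict([[0, 1]])[0] == 0
```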
Merrill poses these questions, “Will processes such as these become best practices? Will they be incorporated into methodologies we’ll all be using to test systems?”
About AI in testing, the cofounders of Applitools, Moshe Milman and Adam Carmi, were quoted by Merrill as stating, “First, we’ll see a trend where humans will have less and less mechanical dirty work to do with implementing, executing, and analyzing test results, but they will still be an integral and necessary part of the test process to approve and act on the findings. This can already be seen today in AI-based testing products like Applitools Eyes.”
About this, Merrill states, “When AI can make less work for a tester and help identify where to test, we’ll have to consider BFF status.”
Describing the skills needed by AI testers, Milman and Carmi state on the Applitools blog, “Test engineers would need a different set of skills in order to build and maintain AI-based test suites that test AI-based products. The job requirements would include more focus on data science skills, and test engineers would be required to understand some deep learning principles.”
Four Approaches to AI in Software Testing Outlined
Four AI-driven test approaches were described in an account entitled “AI in Software Testing: 2021” on the site of TestingXperts, a software testing company based in Mechanicsburg, Pa.
The four approaches are: differential testing, visual testing, declarative testing and self-healing automation.
In differential testing, QA engineers compare application versions across builds and classify the differences.
Example products supporting this include Launchable, which is based on an ML algorithm that predicts the likelihood of failure for each test based on past runs and on changes to the source code under test. The tool lets the user reorder the test suite so that tests likely to fail run first, or run a dynamic subset of such tests, reducing a long-running test suite to a few minutes.
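The general idea behind such predictive test selection (a generic illustration, not Launchable’s actual algorithm) can be sketched with a stand-in classifier and hypothetical per-test features:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical per-test features derived from past runs:
# [historical failure rate, files changed near the code under test]
history = [[0.30, 5], [0.02, 0], [0.10, 2], [0.00, 1], [0.45, 7]]
failed_last_run = [1, 0, 0, 0, 1]
tests = ["test_checkout", "test_login", "test_search", "test_profile", "test_cart"]

model = LogisticRegression().fit(history, failed_last_run)

# Rank tests so those most likely to fail run first, and take a
# dynamic subset to cut a long suite down to minutes.
probs = model.predict_proba(history)[:, 1]
ranked = sorted(zip(tests, probs), key=lambda t: t[1], reverse=True)
subset = [name for name, p in ranked if p > 0.5]
print("run order:", [name for name, _ in ranked])
print("fast subset:", subset)
```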
In visual testing, engineers test the look and feel of an application by leveraging image-based learning and screen comparisons. Example products incorporating this include the platform from Applitools, with its Visual AI features, including Applitools Eyes, which helps to increase test coverage and reduce maintenance. The Ultrafast Grid is said to help with cross-browser and cross-device testing, speeding up functional and visual testing. The Applitools platform is said to integrate with all modern test frameworks and to work with many existing testing tools, including Selenium, Appium and Cypress.
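At its core, visual testing compares a current screen against an approved baseline. The naive pixel diff below, using Pillow, shows the basic mechanics that Visual AI products refine with image-based learning so that insignificant rendering differences can be tolerated:

```python
from PIL import Image, ImageChops

def screens_match(baseline_path: str, current_path: str, tolerance: int = 0) -> bool:
    """Naive pixel comparison; visual-AI tools replace this with learned
    models that tolerate anti-aliasing and minor rendering shifts."""
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return False
    diff = ImageChops.difference(baseline, current)
    # getextrema() returns a (min, max) pair per channel; the largest
    # max is the biggest per-pixel deviation anywhere on the screen.
    max_deviation = max(channel_max for _, channel_max in diff.getextrema())
    return max_deviation <= tolerance
```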
In declarative testing, engineers aim to specify the intent of the test in a natural or domain-specific language, and the system decides how to perform the test. Example products include Test Suite from UiPath, used to automate a centralized testing process and, through robotic process automation, to build robots that execute tests. The suite includes tools for testing interfaces, for managing tests and for executing tests.
Also, tools from Tricentis aim to allow Agile and DevOps teams to achieve their test automation goals, with features including end-to-end testing of software applications. The tool encompasses test case design, test automation, and test data design, generation and analytics.
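The declarative idea, intent expressed as data while the runner decides how to execute it, can be sketched with a toy specification format (illustrative only, not the format used by UiPath or Tricentis):

```python
# A declarative test states *what* to verify, not *how*; the runner
# decides how to perform each step.
test_spec = {
    "name": "user can log in",
    "steps": [
        {"action": "open", "target": "/login"},
        {"action": "type", "target": "username", "value": "demo"},
        {"action": "type", "target": "password", "value": "secret"},
        {"action": "click", "target": "submit"},
        {"action": "expect", "target": "welcome banner"},
    ],
}

def run(spec: dict) -> None:
    for step in spec["steps"]:
        # A real runner would drive a browser or an RPA robot here;
        # this stub just prints the interpreted intent of each step.
        print(step["action"], step.get("target", ""), step.get("value", ""))

run(test_spec)
```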
In self-healing automation, the elements selected for testing are automatically adjusted to changes in the UI. Example products include mabl, a test automation platform built for continuous integration and continuous deployment (CI/CD). Mabl crawls the app screens and runs default tests common to most applications; it uses ML algorithms to improve test execution and defect detection.
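A minimal sketch of the self-healing idea, assuming Selenium-style locators: try alternatives when the primary locator breaks and promote whichever one still matches. The healing logic in products such as mabl is far more sophisticated than this.

```python
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

def find_with_healing(driver, locators):
    """Try locators in order; 'heal' by promoting the first one that
    still matches so future lookups try it first. `locators` is a list
    of (By, value) pairs, e.g. [(By.ID, "buy"), (By.CSS_SELECTOR, ".buy-btn")].
    """
    for i, (by, value) in enumerate(locators):
        try:
            element = driver.find_element(by, value)
            if i > 0:
                # The UI changed: a fallback matched, so adjust the
                # element's primary locator for subsequent runs.
                locators.insert(0, locators.pop(i))
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")
```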
Read the source articles and information from AZ Big Media, in TechBeacon, in “Test Automation for Machine Learning: An Experience Report” from Angie Jones, on the Applitools blog, and from “AI in Software Testing: 2021” on the site of TestingXperts.