Lessons from ProvarLabs' First Brown Bag Session with Dev Tools AI: How AI Will Fix End-to-End Testing
* This is a contributed post from Chris Navrides, CEO at Dev Tools AI. The content is based on his presentation at ProvarLabs’ first internal Brown Bag session, part of a knowledge-sharing series featuring industry thought leaders, which reached 60+ attendees.
When user interface (UI) automation is written, each test captures the state of the app at that particular moment in time. Any subsequent change to the app or page can leave tests broken or flaky. Changes can occur for a variety of reasons:
- Refactors: Changing the class names to be more appropriate
- Change of web frameworks: For example, an ASP.NET app being rewritten in React
- Experimentation: A/B or feature experiments that alter the UI or backend to change the user experience
- Backend data changes: Shifts in the underlying data can invalidate the assumptions a UI test relies on
All of these changes cause instability in the underlying UI tests that then must be fixed and maintained. Luckily, artificial intelligence (AI) provides a solution for end-to-end testing of apps. Read on to learn more.
DOM and Flow Changes
The issues mentioned above usually fall into one of two types of error: changes to the underlying page model/DOM (Document Object Model), or changes to the flow (the test steps) the test must take in order to complete the desired scenario.
DOM changes occur when some value within the page is altered. These can be minor, such as a CSS class changing, or major, such as entire elements disappearing. When a test case depends on the changed value, the test fails even though the element is still visually present.
Flow changes occur when a different set of steps appears in the application being tested. This is common with seasonal promotions or user alerts, and many free apps randomly show ads that can take over the entire screen. International companies face this as well, since payment and security requirements in specific regions impose additional steps. If a test is meant to execute 100% of the time, all of these scenarios present a difficult challenge.
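To make the challenge concrete, here is a minimal sketch of the conventional workaround: wrapping each test action with a check that dismisses known pop-ups first. The selector names and the `find`/`click` methods are hypothetical stand-ins for whatever UI driver the test framework provides; the point is that every new interstitial must be enumerated by hand, which is exactly the maintenance burden described above.

```python
# Hand-maintained list of pop-up close buttons -- illustrative selectors only.
KNOWN_INTERSTITIALS = ["ad-overlay-close", "promo-dismiss", "cookie-accept"]

def dismiss_interstitials(page):
    """Close any known pop-up before the real test step runs.

    Returns the list of selectors that were actually dismissed.
    """
    dismissed = []
    for selector in KNOWN_INTERSTITIALS:
        element = page.find(selector)  # assumed driver API: None if absent
        if element is not None:
            element.click()
            dismissed.append(selector)
    return dismissed

def robust_step(page, action):
    """Clear pop-ups, then perform the intended test action."""
    dismiss_interstitials(page)
    return action(page)
```

Each new promotion, ad format, or regional requirement means another entry in the list, which is why a learned approach is attractive.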
AI Changes the Game
With advances in AI and machine learning, computers can start to emulate human testers. Harnessing this technology can help solve the problems faced during test execution.
Building locators that are resistant to DOM changes within a UI testing framework can be achieved by using visual AI to find the element on the screen. This has several advantages: the test case executes the way an end user would, so it avoids false failures when an underlying change occurs, and it can run across different platforms, such as iOS, Android, and web, since most applications use the same iconography everywhere.
In practice, the model takes a given screenshot (fig. 1) and outputs a bounding box for each element it detects (fig. 2).
Fig. 1: A reference image of www.nytimes.com
Fig. 2: Element bounding boxes over the screenshot
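One simple way to turn those detections into a resilient locator is to keep the bounding box recorded when the test was authored and, at run time, pick the model's detection that best overlaps it. This is a minimal sketch of that matching step; the box values and the 0.5 intersection-over-union threshold are illustrative assumptions, not Dev Tools AI's actual implementation.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, width, height) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    # Overlap along each axis (zero if the boxes do not intersect).
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def locate(reference_box, detections, threshold=0.5):
    """Return the detected box that best matches the reference, or None."""
    best = max(detections, key=lambda d: iou(reference_box, d), default=None)
    if best is not None and iou(reference_box, best) >= threshold:
        return best
    return None
```

Because the match is visual rather than DOM-based, a renamed CSS class or a framework rewrite does not break it as long as the element still looks roughly the same and sits in roughly the same place.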
To handle flow changes, tests can use reinforcement learning within a given application. This gives them the ability to robustly execute a given test without needing additional logic to check for possible new screens or pop-ups.
To start with this approach, you must first have an understanding of the application. This can be accomplished by learning from existing tests or by doing a crawl. The purpose of this is to understand which buttons on a given screen result in going to a new screen.
You can identify each screen by its URL or page name, and each element by its XPath or element name. Combining these yields a graph in which nodes represent screens and edges represent the elements that move you between them.
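The graph described above can be sketched as a simple adjacency structure. The screen names and XPaths here are made up for illustration; a real system would populate the graph from a crawl or from replaying existing tests.

```python
from collections import defaultdict

class AppGraph:
    """App graph: nodes are screens, edges are the elements that link them."""

    def __init__(self):
        # graph[screen][element] -> destination screen
        self.graph = defaultdict(dict)

    def add_transition(self, screen, element, destination):
        """Record that activating `element` on `screen` leads to `destination`."""
        self.graph[screen][element] = destination

    def neighbors(self, screen):
        """All (element, destination) pairs reachable from `screen`."""
        return dict(self.graph[screen])

# Illustrative transitions learned from a crawl:
g = AppGraph()
g.add_transition("/home", "//a[@id='login']", "/login")
g.add_transition("/login", "//button[@id='submit']", "/dashboard")
```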
Upon building the app graph, you can apply a reinforcement learning approach. A popular reinforcement learning technique is Q-learning, an algorithm that seeks to learn the best action to take from any given state.
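As a rough illustration of the idea, here is a minimal tabular Q-learning sketch over a graph shaped like the one above. The tiny three-screen "app", the reward of 1.0 for reaching the goal screen, and the learning-rate and discount values are all illustrative assumptions.

```python
import random
from collections import defaultdict

def q_learn(transitions, goal, episodes=200, alpha=0.5, gamma=0.9):
    """Learn Q-values over an app graph.

    transitions: {state: {action: next_state}}; returns a Q-table mapping
    (state, action) pairs to learned values.
    """
    q = defaultdict(float)
    states = list(transitions)
    for _ in range(episodes):
        state = random.choice(states)
        # Walk randomly until we hit the goal or a screen with no actions.
        while state != goal and transitions.get(state):
            action, nxt = random.choice(list(transitions[state].items()))
            reward = 1.0 if nxt == goal else 0.0
            # Best value achievable from the next state.
            future = max((q[(nxt, a)] for a in transitions.get(nxt, {})),
                         default=0.0)
            # Standard Q-learning update rule.
            q[(state, action)] += alpha * (reward + gamma * future
                                           - q[(state, action)])
            state = nxt
    return q
```

After enough episodes, the actions along the path to the goal screen accumulate higher Q-values than detours, which is what lets the system pick the best next step from any state it finds itself in.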
This state-action graph can then be applied: for any element you wish to interact with, you know which state (node) it should appear in. For each action in a test script, the system looks at the current state and finds the sequence of actions that leads to a state where the element exists. This changes the nature of a test script: it only needs to list the actions pertinent to the core logic being tested by that test case.
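The "find the sequence of actions to reach the right state" step can be sketched with a plain breadth-first search over the app graph, reusing the {screen: {element: next_screen}} shape from earlier. The screen and element names are illustrative.

```python
from collections import deque

def actions_to_screen(graph, current, target):
    """Return the shortest list of elements to activate to get from
    `current` to `target`, or None if `target` is unreachable."""
    queue = deque([(current, [])])
    seen = {current}
    while queue:
        screen, path = queue.popleft()
        if screen == target:
            return path
        for element, nxt in graph.get(screen, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [element]))
    return None
```

With this in place, a test step like "click Submit on /login" can recover even if the app starts somewhere unexpected: the system first navigates to /login, then performs the step.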
AI and machine learning are being used for testing today, with real applications at companies both big and small. A variety of methods exist, and they can be applied in many areas. In the future, these techniques will be common in test frameworks, so those looking to stay ahead of the curve should consider implementing them now.
To learn more about Dev Tools AI, visit www.dev-tools.ai. For more on ProvarLabs’ current initiatives, which include these Brown Bag sessions and much more, visit www.provartesting.com/provarlabs.