Improving Android User Interface Testing Through Automated Screen Understanding
DOI:
https://doi.org/10.13021/jssr2022.3435

Abstract
Due to the vast number of Android devices, it is difficult for developers to thoroughly test applications across different hardware configurations. Incompatibility issues can arise, motivating the need for automated software testing. Current automated testing approaches typically rely on random or model-based input generation, which can get trapped in "tarpits," or screens that the tool is unable to progress past. We propose a learning-based approach that classifies screen states based on extracted visual and text elements and is able to identify and handle tarpit screens. Our classifiers are trained using screenshots and metadata from the Rico dataset, representing UI configurations likely to appear in an app. To manage variability in UI designs, our technique abstracts each screenshot into a "silhouette" image with color-coded boxes representing the bounds of UI elements. These silhouettes are encoded into vector representations by an autoencoder, and our classifier uses these vectors to assign a label to each screen. Afterward, the app is tested using screen execution heuristics. The combined classifier achieves 60% accuracy, compared to 52% and 54% for the visual-only and textual-only element classifiers, respectively. Our approach identifies tarpit screens with ~75% accuracy. In future work, we hope to demonstrate that automated testing techniques using our approach achieve superior code coverage compared to existing model-based techniques.
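As a rough illustration of the silhouette abstraction step described in the abstract, the sketch below renders UI element bounds (such as those available in Rico view-hierarchy metadata) as color-coded boxes on a blank canvas. The element types, color mapping, and image dimensions are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

# Illustrative mapping from UI element type to an RGB color (assumed values).
ELEMENT_COLORS = {
    "Button":    (255, 0, 0),
    "TextView":  (0, 255, 0),
    "ImageView": (0, 0, 255),
    "EditText":  (255, 255, 0),
}

def render_silhouette(elements, width=144, height=256):
    """Render a down-scaled silhouette image from UI element bounds.

    `elements` is a list of dicts with keys "type" and "bounds"
    (left, top, right, bottom in normalized [0, 1] coordinates).
    """
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    for el in elements:
        color = ELEMENT_COLORS.get(el["type"], (128, 128, 128))
        l, t, r, b = el["bounds"]
        x0, x1 = int(l * width), int(r * width)
        y0, y1 = int(t * height), int(b * height)
        # Draw the element as a solid color-coded box.
        canvas[y0:y1, x0:x1] = color
    return canvas

# Example: a screen with one text label and one button.
screen = [
    {"type": "TextView", "bounds": (0.1, 0.05, 0.9, 0.15)},
    {"type": "Button",   "bounds": (0.3, 0.80, 0.7, 0.90)},
]
silhouette = render_silhouette(screen)  # array fed to the autoencoder
```

The resulting silhouette array would then be compressed by the autoencoder into a vector representation before classification.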
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.