Improving Android User Interface Testing Through Automated Screen Understanding


  • Alyssa McGowan (Aspiring Scientists’ Summer Internship Program Intern)
  • Safwat Khan (Aspiring Scientists’ Summer Internship Program Co-mentor)
  • Dr. Kevin Moran (Aspiring Scientists’ Summer Internship Program Primary Mentor)
  • Dr. Wing Lam (Aspiring Scientists’ Summer Internship Program Primary Mentor)



Due to the vast number of Android devices, it is difficult for developers to thoroughly test applications across different hardware configurations. Incompatibility issues can arise, motivating the need to automate software testing. Current automated testing approaches typically rely on random or model-based input generation, which can become trapped in “tarpits”: screens that the tool is unable to progress past. We propose a learning-based approach that classifies screen states based on extracted visual and textual elements and can identify and handle tarpit screens. Our classifiers are trained on screenshots and metadata from the Rico dataset, which represent UI configurations likely to appear in an app. To manage variability in UI designs, our technique abstracts each screenshot into a “silhouette” image of color-coded boxes representing the bounds of its UI elements. These silhouettes are encoded into vector representations by an autoencoder, and the resulting vectors are used by our classifier to assign a label to each screen. Afterward, the app is tested using per-screen execution heuristics. The combined classifier achieves 60% accuracy, compared to 52% and 54% for the exclusively visual and textual element classifiers, respectively. Our approach identifies tarpit screens with approximately 75% accuracy. In future work, we hope to show that automated testing techniques using our approach achieve superior code coverage compared to existing model-based techniques.
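As a rough illustration, the silhouette abstraction described above can be sketched as follows. This is a minimal example, not the tool’s actual implementation: the element categories, colors, and the simplified input format (category plus bounding box, loosely mirroring Rico view-hierarchy metadata) are assumptions made for this sketch.

```python
# Hypothetical sketch of the "silhouette" abstraction: each UI element's
# bounding box is drawn as a solid, color-coded block (one color per
# assumed element category) on a blank canvas, discarding the original
# screenshot's pixels so that visually different apps with similar
# layouts produce similar silhouettes.

CATEGORY_COLORS = {
    "text": (66, 133, 244),    # illustrative color for text elements
    "image": (52, 168, 83),    # illustrative color for image elements
    "button": (234, 67, 53),   # illustrative color for clickable widgets
}

def render_silhouette(elements, width, height):
    """Rasterize UI element bounds into a color-coded silhouette.

    `elements` is a list of dicts with keys "category" and "bounds"
    (left, top, right, bottom) -- a simplified stand-in for the
    Rico view-hierarchy metadata.
    """
    background = (255, 255, 255)
    canvas = [[background] * width for _ in range(height)]
    for element in elements:
        color = CATEGORY_COLORS.get(element["category"], (0, 0, 0))
        left, top, right, bottom = element["bounds"]
        # Clamp the box to the canvas and fill it with the category color.
        for y in range(max(0, top), min(height, bottom)):
            for x in range(max(0, left), min(width, right)):
                canvas[y][x] = color
    return canvas

# Example: a tiny screen with one text label above one button.
silhouette = render_silhouette(
    [
        {"category": "text", "bounds": (2, 1, 8, 3)},
        {"category": "button", "bounds": (2, 5, 8, 7)},
    ],
    width=10,
    height=8,
)
```

In the full pipeline, an image like this (rather than the raw screenshot) would be fed to the autoencoder to produce the vector representation consumed by the classifier.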





College of Engineering and Computing: Department of Computer Science