Can We Interpret a Robot's Predictions While Navigating in an Unknown Environment?
DOI:
https://doi.org/10.13021/jssr2022.3373

Abstract
Recent progress in online behavior correction has improved robot navigation in unknown environments. However, before we can rely on this technology for navigation, we must first ensure that the improvement results from the robot gaining a better understanding of its surrounding environment. We hypothesize that existing tools for neural network interpretability (e.g., GradCAM) can help us measure the robot's level of understanding. We will use such tools to inspect the robot's predictions before and after behavior correction and, by comparing them with the output of a high-performance reference network, study whether these changes indicate that the robot better understands its environment. Comparing preliminary results from neural network A with those from neural network B, we study the robot's understanding in a simulated maze environment.
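To illustrate the kind of interpretability tool the abstract refers to, the sketch below computes a Grad-CAM heatmap by hand. This is only a minimal illustration under stated assumptions: the tiny PyTorch network, its layer names, and the two-way action head ("turn left"/"turn right") are hypothetical stand-ins, not the authors' navigation model.

```python
# Minimal Grad-CAM sketch. Assumption: a small PyTorch CNN stands in for the
# robot's navigation network; the architecture and class labels are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)   # last conv layer to inspect
        self.head = nn.Linear(8, 2)                 # hypothetical actions, e.g. left/right

    def forward(self, x):
        feats = F.relu(self.conv(x))                # feature maps: (N, 8, H, W)
        pooled = feats.mean(dim=(2, 3))             # global average pooling
        return self.head(pooled), feats

def grad_cam(model, image, class_idx):
    """Return an (H, W) heatmap of where the chosen prediction 'looks'."""
    logits, feats = model(image)
    feats.retain_grad()                             # keep gradients on the non-leaf feature map
    logits[0, class_idx].backward()                 # backprop the chosen class score
    weights = feats.grad.mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = F.relu((weights * feats).sum(dim=1))[0]   # weighted combination, then ReLU
    cam = cam / (cam.max() + 1e-8)                  # normalize to [0, 1]
    return cam.detach()

model = TinyNet()
x = torch.randn(1, 3, 16, 16)                       # a dummy camera frame
heatmap = grad_cam(model, x, class_idx=0)
print(heatmap.shape)  # torch.Size([16, 16])
```

Comparing such heatmaps before and after behavior correction, or against those of a reference network, is one concrete way the level of "understanding" described above could be inspected.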
License
Copyright (c) 2022 Anjini Verdia, John Paul Salvatore, Nicole Ortuno, Dr. Gregory Stein
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.