Can We Interpret a Robot's Predictions While Navigating in an Unknown Environment?


  • Anjini Verdia
  • John Paul Salvatore
  • Nicole Ortuno
  • Dr. Gregory Stein



Recent progress has enabled online behavior correction to improve robot navigation in unknown environments. However, before we can rely on this technology for navigation, we must first ensure that the improvement results from a better understanding of the robot's surroundings. We hypothesize that existing tools for neural network interpretability (e.g., Grad-CAM) can help us measure the robot's level of understanding. We will use such tools to inspect the robot's predictions before and after behavior correction and, by comparing them with the output of a high-performance reference network, study whether these changes indicate that the robot better understands its environment. As a preliminary study, we compare the results of neural network A with those of neural network B to evaluate the robot's understanding in a simulated maze environment.
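The abstract does not specify an implementation, but the core of Grad-CAM can be sketched concisely. The sketch below (a minimal NumPy illustration, not the project's actual code) shows the computation: channel weights are obtained by global-average-pooling the gradients of a target score with respect to a convolutional layer's feature maps, and the heatmap is the ReLU of the weighted sum of those maps. The function name and the toy inputs are hypothetical.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from a conv layer's activations and the
    gradients of a target score with respect to those activations.

    activations: (K, H, W) feature maps from the chosen conv layer
    gradients:   (K, H, W) gradients of the target score w.r.t. each map
    returns:     (H, W) non-negative heatmap, normalized to [0, 1]
    """
    # Channel importance weights alpha_k: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))                      # shape (K,)
    # Weighted sum of feature maps; ReLU keeps only positive evidence.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                                  # normalize for display
    return cam

# Toy example: two 4x4 feature maps with synthetic gradients.
acts = np.stack([np.ones((4, 4)), np.zeros((4, 4))])
grads = np.stack([np.full((4, 4), 0.5), np.full((4, 4), -0.5)])
heatmap = grad_cam(acts, grads)
```

In practice the activations and gradients would come from forward and backward hooks on the navigation network, and the resulting heatmap would be overlaid on the robot's input observation to show which regions drive its predictions.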





College of Engineering and Computing: Department of Computer Science