Interpreting neural network predictions of unseen space for a Learning over Subgoals Planner

Authors

  • Ruchi Bondre Department of Computer Science, George Mason University, Fairfax, VA
  • Swathi Guptha Department of Computer Science, George Mason University, Fairfax, VA
  • Gregory J. Stein Department of Computer Science, George Mason University, Fairfax, VA

DOI:

https://doi.org/10.13021/jssr2023.3967

Abstract

Recent research has enabled robots to navigate unknown environments using a variety of predictive learning methods. When predictive models, specifically neural networks, do not perform well, researchers seek to change the robot's behavior. However, if we are to audit the robot's behavior or its predictions about unseen space, we must be able to understand what in the model is driving the good or bad predictions it makes. Neural network interpretability can help us gain this understanding. To understand how neural networks interpret images of our simulated maze environment, we use the Captum tool to identify which parts of an image are responsible for a given prediction. We use this capability to inspect and compare the predictions of two neural networks, each trained in a different environment. By interpreting and evaluating these predictions in simulated experiments in a maze-like space, we study whether a network makes good predictions because it has learned to understand the robot's environment.
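The sketch below illustrates the kind of attribution the abstract describes: using Captum's Integrated Gradients to assign each pixel of a maze image a contribution toward a network's prediction. It is a minimal, hypothetical example, not the authors' code; the network architecture, image size, and output meaning are assumptions made only for illustration.

```python
# Minimal sketch (assumed model and inputs): pixel-level attribution with Captum.
import torch
from captum.attr import IntegratedGradients

# Hypothetical convolutional network standing in for the planner's predictive model;
# the real architecture and output format are not specified in the abstract.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 2),  # e.g., logits for "this region leads to the goal" vs. not
)
model.eval()

# One simulated maze image (batch of 1, 3 channels, 128x128) stands in for the
# robot's observation of partially seen space.
image = torch.rand(1, 3, 128, 128, requires_grad=True)

# Integrated Gradients attributes the chosen output logit back to input pixels.
ig = IntegratedGradients(model)
attributions = ig.attribute(image, target=1)

# Pixels with large absolute attribution are the ones driving the prediction;
# comparing these maps across two differently trained networks is the kind of
# inspection the abstract proposes.
print(attributions.abs().sum().item())
```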

Published

2023-10-27

Section

College of Engineering and Computing: Department of Computer Science
