Interpreting neural network predictions of unseen space for a Learning over Subgoals Planner
DOI:
https://doi.org/10.13021/jssr2023.3967

Abstract
Recent research has enabled robots to navigate unknown environments using a variety of predictive learning methods. When predictive models, specifically neural networks, perform poorly, researchers typically seek to change the robot's behavior. However, if we are to audit the robot's behavior or its predictions about unseen space, we must be able to understand what in the model is driving the good or bad predictions being made. Neural network interpretability can help us gain this understanding. To understand how neural networks interpret images of our simulated maze environment, we use the Captum tool to identify which parts of an image are responsible for a prediction. We use this capability to inspect and compare the predictions of two neural networks, each trained in a different environment. By interpreting and evaluating them, we study whether a network makes better predictions because of the robot's ability to understand its environment, with simulated experiments in a maze-like space.
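As a rough illustration of the kind of attribution analysis described above, the sketch below shows how Captum's Integrated Gradients can attribute a prediction to regions of an input image. The network, input shape, and output classes here are placeholders for illustration only, not the planner's actual model.

# Illustrative sketch only: attributing a prediction to input-image regions
# with Captum's Integrated Gradients. The CNN and input below are hypothetical
# stand-ins, not the subgoal-prediction model used in the study.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Placeholder network standing in for a subgoal-prediction model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # e.g., two hypothetical outputs for a subgoal property
)
model.eval()

# Placeholder maze-environment image (batch of 1, 3 channels, 64x64 pixels).
maze_image = torch.rand(1, 3, 64, 64, requires_grad=True)

# Attribute the score of output 0 to each input pixel.
ig = IntegratedGradients(model)
attributions = ig.attribute(maze_image, target=0)

# Pixels with large attribution magnitudes are the ones the network
# relied on most for this prediction; summing over channels gives a
# per-pixel importance map that can be visualized over the maze image.
importance_map = attributions.abs().sum(dim=1)
print(importance_map.shape)

Comparing such importance maps across the two networks (one per training environment) is one way to judge whether a network's predictions rest on meaningful structure in the environment rather than incidental image features.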
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.