GraphWorld: A Novel Approach for Testing and Visualizing Graph-based Reinforcement Learning Algorithms

Authors

  • Hriday Sainathuni Department of Computer Science, George Mason University, Fairfax, VA
  • Keshav Subramonian Department of Computer Science, George Mason University, Fairfax, VA
  • Manshi Limbu Department of Computer Science, George Mason University, Fairfax, VA
  • Xuesu Xiao Department of Computer Science, George Mason University, Fairfax, VA

DOI:

https://doi.org/10.13021/jssr2023.3993

Abstract

Reinforcement Learning (RL) is a rapidly emerging branch of Machine Learning, and of computer science as a whole, inspired by the inherent reward systems of biological beings. It is becoming increasingly popular to combine RL with engineering, specifically in motion planning for autonomous robots. Cooperation between robotic systems remains a largely unexplored yet important field, as it is essential for integrating autonomous robots into day-to-day life. Specifically, robotic systems can be represented in a graph framework using nodes and edges, creating a more complex and descriptive form of input for neural networks in RL applications. In our study, we create an open-source, graph-based method of simulating agents that can represent real-life robotic systems. With the options of either tuning multiple parameters to fit a given environment or generating completely randomized graph structures, our environment can simulate and dynamically visualize virtually any real-life scenario. Our code therefore offers a useful way for researchers to test the effectiveness and scalability of their graph network architectures.
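The abstract's core idea of a graph-based agent simulator with randomized structure can be illustrated with a minimal sketch. The GraphWorld code itself is not reproduced here; the function and variable names below are hypothetical, and the sketch uses plain Python (an Erdős–Rényi-style random graph as an adjacency list, with agents performing random walks) rather than the authors' actual environment.

```python
import random

def random_graph(n, p, seed=None):
    """Hypothetical sketch: build a randomized undirected graph of n nodes,
    connecting each pair with probability p (adjacency-list representation)."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def step(adj, agents, rng):
    """One simulation step: each agent moves to a random neighboring node
    (an agent on an isolated node stays put)."""
    return {name: rng.choice(sorted(adj[node])) if adj[node] else node
            for name, node in agents.items()}

# Example: two agents walking on an 8-node random graph for 5 steps.
rng = random.Random(0)
adj = random_graph(8, 0.4, seed=0)
agents = {"robot_0": 0, "robot_1": 3}
for _ in range(5):
    agents = step(adj, agents, rng)
print(agents)
```

In a real RL setting, the `step` function would be driven by a learned policy rather than a uniform random choice, and the node/edge structure would be fed to a graph neural network as the observation.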

Published

2023-10-27

Section

College of Engineering and Computing: Department of Computer Science
