GraphWorld: A Novel Approach for Testing and Visualizing Graph-based Reinforcement Learning Algorithms
DOI:
https://doi.org/10.13021/jssr2023.3993
Abstract
Reinforcement Learning (RL) is a rapidly emerging branch of Machine Learning, and of computer science as a whole, inspired by the inherent reward systems of biological beings. It is becoming increasingly popular to combine RL with engineering, specifically for motion planning in autonomous robots. Cooperation between robotic systems is a largely unexplored yet important field, as it is essential for integrating autonomous robots into day-to-day life. In particular, robotic systems can be represented in a graph framework using nodes and edges, providing a richer and more descriptive input representation for the neural networks used in RL applications. In our study, we create an open-source, graph-based method of simulating agents that can represent real-life robotic systems. With the option of either tuning multiple parameters to fit a specific environment or generating completely randomized graph structures, our environment can simulate, and provide dynamic visualizations for, virtually any real-life scenario. Our code therefore offers a useful method for researchers to test the effectiveness and scalability of their graph network architectures.
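To illustrate the kind of interface such a graph-based simulator might expose, the sketch below is a minimal example, not the published GraphWorld code: the class and parameter names (GraphEnv, num_nodes, edge_prob, num_agents) are assumptions. It builds a randomized connected graph with networkx, places agents and goals on nodes, steps agents along edges, and returns a graph-structured observation of the sort a graph neural network could consume.

import random
import networkx as nx


class GraphEnv:
    """Hypothetical graph-based multi-agent environment (illustrative only)."""

    def __init__(self, num_nodes=10, edge_prob=0.3, num_agents=2, seed=None):
        self.rng = random.Random(seed)
        # Randomized graph structure; parameters can be tuned to mimic a scenario.
        self.graph = nx.gnp_random_graph(num_nodes, edge_prob, seed=seed)
        # Connect any isolated components so every goal is reachable.
        if not nx.is_connected(self.graph):
            parts = [list(c) for c in nx.connected_components(self.graph)]
            for a, b in zip(parts, parts[1:]):
                self.graph.add_edge(self.rng.choice(a), self.rng.choice(b))
        self.num_agents = num_agents
        self.reset()

    def reset(self):
        nodes = list(self.graph.nodes)
        self.agent_pos = self.rng.sample(nodes, self.num_agents)
        self.goals = self.rng.sample(nodes, self.num_agents)
        return self._observation()

    def _observation(self):
        # Graph-structured observation: adjacency lists plus per-node features
        # marking agent and goal occupancy.
        features = {n: [0, 0] for n in self.graph.nodes}
        for pos in self.agent_pos:
            features[pos][0] = 1
        for goal in self.goals:
            features[goal][1] = 1
        return {"adjacency": nx.to_dict_of_lists(self.graph), "features": features}

    def step(self, actions):
        # actions[i] is the node agent i moves to; it must be the current node
        # or one of its neighbors.
        reward = 0.0
        for i, target in enumerate(actions):
            if target == self.agent_pos[i] or self.graph.has_edge(self.agent_pos[i], target):
                self.agent_pos[i] = target
            reward += 1.0 if self.agent_pos[i] == self.goals[i] else -0.1
        done = all(p == g for p, g in zip(self.agent_pos, self.goals))
        return self._observation(), reward, done, {}


if __name__ == "__main__":
    env = GraphEnv(num_nodes=8, edge_prob=0.4, num_agents=2, seed=0)
    obs = env.reset()
    # Random-policy rollout for illustration.
    for _ in range(20):
        actions = [random.choice(list(env.graph.neighbors(p)) + [p]) for p in env.agent_pos]
        obs, reward, done, _ = env.step(actions)
        if done:
            break

A learned policy would replace the random action selection above, consuming the adjacency and node-feature observation through a graph network and outputting a neighbor choice per agent.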
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.