Using Motion Capture Technology to Learn 6-DoF Vehicle Kinodynamics for Autonomous Navigation on Vertically Challenging Terrain

Authors

  • Matthew Choulas Department of Computer Science, George Mason University, Fairfax, VA
  • Harry Yu Department of Computer Science, George Mason University, Fairfax, VA
  • Claire Chen Department of Computer Science, George Mason University, Fairfax, VA
  • Aniket Datar Department of Computer Science, George Mason University, Fairfax, VA
  • Chenhui Pan Department of Computer Science, George Mason University, Fairfax, VA
  • Xuesu Xiao Department of Computer Science, George Mason University, Fairfax, VA

Abstract

Traditional autonomous robots limit themselves to flat surfaces and unobstructed navigation trajectories, categorizing their environment into either free space or obstacles and treating all obstacles as untraversable terrain. Recent research has demonstrated that wheeled robots have the potential for off-road autonomous navigation across vertically challenging terrain (rocks, boulders, debris). To navigate such terrain successfully, however, autonomous robots need accurate kinodynamic models to compute motion trajectories and predict vehicle-terrain interactions. Most wheeled robots use simplified models, such as Ackermann steering or differential drive, which assume that vehicle motion is restricted to a 2D plane and do not account for the complex underlying terrain. Machine learning has been used to develop models that can efficiently predict vehicle-terrain dynamics in order to plan trajectories on vertically challenging terrain. Our work aims to improve the accuracy of these models through the use of motion capture technology. First, we set up a four-camera OptiTrack motion capture system surrounding a 3.1 m by 1.3 m rock testbed. Next, we equip a 1/10th-scale vehicle with infrared-reflective markers, allowing its pose to be tracked in real time with ~1 mm accuracy. Finally, by manually driving the vehicle over the rock testbed, we collect a dataset of vehicle-terrain interactions comprising a terrain elevation map, the tracked robot poses, and the corresponding control inputs. This dataset will be used to train a supervised machine learning model of vehicle-terrain dynamics, i.e., to predict future robot poses given the terrain map and control inputs. Additionally, we develop an equivalent tracked vehicle from a Traxxas chassis, an Azure Kinect depth camera, and an NVIDIA Jetson Orin compute module, and use the motion capture system to collect data on this platform as well, verifying the applicability of our technique across multiple platforms. Overall, our work provides a high-quality dataset to improve navigation on challenging terrain and contributes an additional robotic platform to verify the versatility of these navigation techniques across different hardware.
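
As a concrete illustration of the learning problem the abstract describes, the sketch below shows, in PyTorch, one way a supervised 6-DoF kinodynamic model could be structured: a small CNN encodes the terrain elevation patch under the vehicle, and an MLP fuses those features with the current state and control input to regress the next pose change. The architecture, the 40×40 patch size, and names such as KinodynamicModel are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class KinodynamicModel(nn.Module):
        """Predict a 6-DoF pose change from a terrain patch, state, and control.

        Hypothetical architecture: a small CNN encodes the elevation map under
        the robot; an MLP fuses the terrain features with the current state and
        control to regress the next pose delta (x, y, z, roll, pitch, yaw).
        """

        def __init__(self, patch_size: int = 40, state_dim: int = 6, control_dim: int = 2):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Flatten(),
            )
            feat_dim = 32 * (patch_size // 4) ** 2
            self.head = nn.Sequential(
                nn.Linear(feat_dim + state_dim + control_dim, 128), nn.ReLU(),
                nn.Linear(128, 6),  # predicted (dx, dy, dz, droll, dpitch, dyaw)
            )

        def forward(self, patch, state, control):
            features = self.encoder(patch)
            return self.head(torch.cat([features, state, control], dim=-1))

    # One supervised training step: the ~1 mm OptiTrack poses would provide the
    # ground-truth pose deltas, paired with the logged control inputs.
    model = KinodynamicModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    patch = torch.randn(8, 1, 40, 40)    # local elevation map under the robot
    state = torch.randn(8, 6)            # current attitude / velocity estimate
    control = torch.randn(8, 2)          # throttle and steering commands
    target = torch.randn(8, 6)           # mocap-derived ground-truth pose delta

    loss = nn.functional.mse_loss(model(patch, state, control), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Each real training sample would pair a time-synchronized OptiTrack pose delta with the elevation patch under the vehicle and the logged throttle and steering commands; the random tensors above merely stand in for that dataset.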

Citations

  1. A. Datar, C. Pan, M. Nazeri, A. Pokhrel, and X. Xiao, “Terrain-Attentive Learning for Efficient 6-DoF Kinodynamic Modeling on Vertically Challenging Terrain,” in 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2024.
  2. A. Datar, C. Pan, and X. Xiao, “Learning to Model and Plan for Wheeled Mobility on Vertically Challenging Terrain,” arXiv preprint arXiv:2306.11611, 2023.
  3. A. Datar, C. Pan, M. Nazeri, and X. Xiao, “Toward Wheeled Mobility on Vertically Challenging Terrain: Platforms, Datasets, and Algorithms,” in 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024.

Published

2024-10-13

Section

College of Engineering and Computing: Department of Computer Science