Multi-Camera 3D Behavioral Tracking of Rodents using OptiTrack and DANNCE Deep Learning Framework

Authors

  • Kazhal Shafiei, Interdisciplinary Program for Neuroscience, George Mason University, Fairfax, VA
  • Fatemeh Farokhi Moghadam, Department of Bioengineering, George Mason University, Fairfax, VA
  • Holger Dannenberg, Interdisciplinary Program for Neuroscience, George Mason University, Fairfax, VA

Abstract

In experimental neuroscience, understanding how neural activity correlates with behavior during spatial navigation tasks requires precise tracking of animal movement kinematics. This typically involves recording brain signals in animals, such as mice, as they explore mazes, alongside simultaneous video tracking to capture behavioral data. However, there is currently a lack of comprehensive pipelines that integrate camera calibration, labeling procedures, and 3D reconstruction into a seamless workflow.

In this project, we used a multi-camera system with four OptiTrack Flex 13 cameras arranged around a custom-designed experimental arena to capture high-resolution 3D rodent behavioral data. Camera calibration was performed in OptiTrack’s Motive software, using wand-based calibration and ground-plane tools to obtain precise intrinsic and extrinsic parameters for each camera. These parameters were then applied in DANNCE, a markerless video-based 3D tracking system that combines projective geometry with 3D convolutional neural networks (CNNs) to infer animal landmarks across camera views. Frames from approximately 2–3 minutes of synchronized video were manually labeled in MATLAB to identify anatomical keypoints, which were triangulated into accurate 3D coordinates. These labeled frames formed the training dataset for the DANNCE network, enabling it to predict detailed 3D skeletal representations from multi-view video for the remainder of the recordings.
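The triangulation step described above can be sketched as follows. This is a minimal illustration of direct linear transform (DLT) triangulation in Python with NumPy, not the actual Motive or DANNCE code; the projection-matrix construction assumes the standard pinhole model with intrinsic matrix K and extrinsic rotation R and translation t, and all camera parameters in the example are hypothetical.

```python
import numpy as np

def projection_matrix(K, R, t):
    # Pinhole model: P = K [R | t], mapping homogeneous 3D points to image points.
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate_dlt(Ps, pts):
    # Ps: list of 3x4 projection matrices, one per camera view.
    # pts: list of (u, v) pixel coordinates of the same keypoint in each view.
    # Each view contributes two linear constraints on the homogeneous 3D point X.
    A = []
    for P, (u, v) in zip(Ps, pts):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    A = np.asarray(A)
    # Least-squares solution is the right singular vector with smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to (x, y, z)
```

In practice one triangulation of this kind is run per labeled anatomical keypoint per frame, using the calibrated projection matrices of all four cameras; reprojection error of the recovered 3D points gives a per-keypoint quality check.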

Preliminary results demonstrated calibration errors consistently below 0.5 mm, indicating high spatial accuracy. The manual labeling and triangulation process successfully generated reliable 3D keypoints for model training. Initial DANNCE training showed promising accuracy in predicting rodent skeletal postures and movements from unlabeled video data, with further validation underway to enhance model performance and robustness. This integrated 3D tracking system offers a powerful tool for quantitative analysis of rodent behavior, with significant potential to advance neuroscience and behavioral research.

Published

2025-09-25

Section

Interdisciplinary Program in Neuroscience