Influence of Expanded Field-of-View on YOLO-Based Car Detection Performance in Embedded Vision Systems

Authors

  • Rafael Steinbuks, Department of Geography and Geoinformation Science, George Mason University, Fairfax, VA
  • James Gallagher, Department of Geography and Geoinformation Science, George Mason University, Fairfax, VA
  • Edward Oughton, Department of Geography and Geoinformation Science, George Mason University, Fairfax, VA

DOI:

https://doi.org/10.13021/jssr2025.5281

Abstract

Real-time object detection under embedded-vision constraints is central to autonomous navigation and smart-city sensing, yet single-camera systems suffer from a limited field of view (FOV) and degraded performance in low-light scenes. To address this limitation, we evaluate a dual-Picamera2 rig with a combined 180° FOV against a conventional single-camera setup (90° FOV) for car detection on the COCO dataset using YOLOv7-tiny and YOLOv11-nano. We systematically vary ambient-light levels (800 lx, 250 lx, 20 lx) and measure end-to-end latency on a Raspberry Pi 5. Both configurations are trained on identical image subsets and run inference at 640 × 480 px. We analyze the trade-offs between mean average precision (mAP), latency, and power consumption across illumination levels and model variants. We contribute a reproducible data-processing pipeline that helps analysts decide when widening the FOV yields greater benefit than simply upgrading to a newer YOLO variant on resource-constrained edge hardware.
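Below is a minimal sketch of the measurement loop the abstract describes: two Picamera2 cameras capturing at 640 × 480 px, frames stitched side by side to approximate the 180° view, and end-to-end (capture-plus-inference) latency timed around a YOLO car-detection call on the Raspberry Pi 5. The weight file (yolo11n.pt), the side-by-side stitching, and the 100-frame sample size are illustrative assumptions, not the authors' published configuration.

    import time

    import numpy as np
    from picamera2 import Picamera2          # Raspberry Pi camera stack
    from ultralytics import YOLO             # Ultralytics YOLO inference

    RESOLUTION = (640, 480)                  # per-camera capture size from the abstract
    CAR_CLASS = 2                            # COCO class index for "car"

    def open_camera(index: int) -> Picamera2:
        """Configure one camera for 640 x 480 RGB video capture."""
        cam = Picamera2(camera_num=index)
        cam.configure(cam.create_video_configuration(
            main={"size": RESOLUTION, "format": "RGB888"}))
        cam.start()
        return cam

    model = YOLO("yolo11n.pt")               # assumed YOLOv11-nano weights
    cams = [open_camera(0), open_camera(1)]  # dual rig, ~90 deg FOV per camera

    latencies = []
    result = None
    for _ in range(100):
        t0 = time.perf_counter()
        # Stitch the two 90-degree views side by side into one ~180-degree frame.
        frame = np.hstack([cam.capture_array() for cam in cams])
        result = model(frame, classes=[CAR_CLASS], verbose=False)[0]
        latencies.append(time.perf_counter() - t0)  # end-to-end: capture + inference

    print(f"cars detected in last frame: {len(result.boxes)}")
    print(f"median end-to-end latency: {sorted(latencies)[len(latencies) // 2] * 1000:.1f} ms")

For the single-camera baseline, the same loop runs with one camera and no stitching; repeating both runs under each illumination level (800 lx, 250 lx, 20 lx) yields the latency side of the mAP-versus-latency comparison.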

Published

2025-09-25

Section

College of Science: Department of Geography and Geoinformation Science