Influence of Expanded Field-of-View on YOLO-Based Car Detection Performance in Embedded Vision Systems
DOI:
https://doi.org/10.13021/jssr2025.5281

Abstract
Real-time object detection under embedded-vision constraints is central to autonomous navigation and smart-city sensing, yet single-camera systems suffer from a limited field of view (FOV) and degraded performance in low-light scenes. To address this limitation, we evaluate a dual-Picamera2 rig with a combined 180° FOV against a conventional single-camera setup (90° FOV) for car detection on the COCO dataset using YOLOv7-tiny and YOLOv11-nano. We systematically vary ambient-light levels (800 lx, 250 lx, 20 lx) and measure end-to-end latency on a Raspberry Pi 5. Both configurations are trained on identical image subsets, with inference at 640 × 480 px. We analyze the trade-offs between mean average precision (mAP), latency, and power across illumination levels and model variants. We also contribute a reproducible data-processing pipeline that helps analysts decide when widening the FOV yields a greater benefit than simply upgrading to a newer YOLO variant on resource-constrained edge hardware.
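The end-to-end latency measurement described above might be harnessed as in the following minimal sketch. The detector here is a deliberate placeholder (a hypothetical `dummy_detect`); in the actual rig it would wrap a YOLOv7-tiny or YOLOv11-nano model consuming 640 × 480 frames from the Picamera2 pipeline. The warm-up count and frame source are assumptions, not details from the paper.

```python
import time
import numpy as np

def measure_latency_ms(detect, frames, warmup=3):
    """Return per-frame end-to-end latencies (ms) for a detector callable."""
    for f in frames[:warmup]:                 # warm-up passes excluded from timing
        detect(f)
    latencies = []
    for f in frames:
        t0 = time.perf_counter()
        detect(f)                             # the capture-to-detection step under test
        latencies.append((time.perf_counter() - t0) * 1e3)
    return latencies

# Placeholder detector; on the Pi this would invoke the YOLO model instead.
dummy_detect = lambda frame: frame.mean()

# Synthetic 640x480 RGB frames standing in for camera captures.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(10)]
lat = measure_latency_ms(dummy_detect, frames)
print(f"median latency: {sorted(lat)[len(lat) // 2]:.3f} ms")
```

Reporting the median (or a percentile) rather than the mean is a common choice on edge hardware, since occasional scheduler stalls can skew averages.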
License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.