Semantic-driven multi-camera pedestrian detection

Authors:

López-Cifuentes Alejandro, Escudero-Viñolo Marcos, Bescós Jesús, Carballeira Pablo

Abstract

Abstract In the current worldwide situation, pedestrian detection has reemerged as a pivotal tool for intelligent video-based systems aiming to solve tasks such as pedestrian tracking, social distancing monitoring or pedestrian mass counting. Pedestrian detection methods, even the top performing ones, are highly sensitive to occlusions among pedestrians, which dramatically degrades their performance in crowded scenarios. The generalization of multi-camera setups permits to better confront occlusions by combining information from different viewpoints. In this paper, we present a multi-camera approach to globally combine pedestrian detections leveraging automatically extracted scene context. Contrarily to the majority of the methods of the state-of-the-art, the proposed approach is scene-agnostic, not requiring a tailored adaptation to the target scenario–e.g., via fine-tuning. This noteworthy attribute does not require ad hoc training with labeled data, expediting the deployment of the proposed method in real-world situations. Context information, obtained via semantic segmentation, is used (1) to automatically generate a common area of interest for the scene and all the cameras, avoiding the usual need of manually defining it, and (2) to obtain detections for each camera by solving a global optimization problem that maximizes coherence of detections both in each 2D image and in the 3D scene. This process yields tightly fitted bounding boxes that circumvent occlusions or miss detections. The experimental results on five publicly available datasets show that the proposed approach outperforms state-of-the-art multi-camera pedestrian detectors, even some specifically trained on the target scenario, signifying the versatility and robustness of the proposed method without requiring ad hoc annotations nor human-guided configuration.

Funder

Universidad Autónoma de Madrid

Publisher

Springer Science and Business Media LLC

Subject

Artificial Intelligence, Hardware and Architecture, Human-Computer Interaction, Information Systems, Software


Cited by 3 articles.

1. Multi-Camera Detection Framework for Lifelong Broiler Flock Monitoring;2024

2. Artificial intelligence-based spatio-temporal vision sensors: applications and prospects;Frontiers in Materials;2023-12-07

3. Multi-Scale Occluded Pedestrian Detection Based on Deep Learning;2023 International Conference on Evolutionary Algorithms and Soft Computing Techniques (EASCT);2023-10-20
