Exploiting multimodal synthetic data for egocentric human-object interaction detection in an industrial scenario
Published: 2024-05
Volume: 242
Page: 103984
ISSN: 1077-3142
Container-title: Computer Vision and Image Understanding
Language: en
Authors: Leonardi Rosario, Ragusa Francesco, Furnari Antonino, Farinella Giovanni Maria
References: 53 articles (first 5 listed).
1. Bambach, S., Lee, S., Crandall, D.J., Yu, C., 2015. Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions. In: International Conference on Computer Vision. pp. 1949–1957.
2. Benavent-Lledo, M., Oprea, S., Castro-Vargas, J.A., Mulero-Perez, D., Garcia-Rodriguez, J., 2022. Predicting Human-Object Interactions in Egocentric Videos. In: International Joint Conference on Neural Networks. pp. 1–7.
3. Bhatnagar, B.L., Xie, X., Petrov, I., Sminchisescu, C., Theobalt, C., Pons-Moll, G., 2022. BEHAVE: Dataset and Method for Tracking Human Object Interactions. In: Conference on Computer Vision and Pattern Recognition. pp. 15935–15946.
4. Bochkovskiy, A., 2020. YOLOv4: Optimal Speed and Accuracy of Object Detection.
5. Chao, Y.-W., Liu, Y., Liu, X., Zeng, H., Deng, J., 2018. Learning to Detect Human-Object Interactions. In: Winter Conference on Applications of Computer Vision. pp. 381–389.
Cited by: 5 articles.