In the Eye of Transformer: Global–Local Correlation for Egocentric Gaze Estimation and Beyond
Published: 2023-10-18
Issue:
Volume:
Page:
ISSN: 0920-5691
Container-title: International Journal of Computer Vision
Language: en
Short-container-title: Int J Comput Vis
Author:
Lai Bolin (ORCID), Liu Miao, Ryan Fiona, Rehg James M.
Abstract
Predicting human gaze from egocentric videos plays a critical role in understanding human intention in daily activities. In this paper, we present the first transformer-based model to address the challenging problem of egocentric gaze estimation. We observe that the connection between the global scene context and local visual information is vital for localizing the gaze fixation in egocentric video frames. To this end, we design the transformer encoder to embed the global context as one additional visual token and further propose a novel global–local correlation module to explicitly model the correlation between the global token and each local token. We validate our model on two egocentric video datasets – EGTEA Gaze+ and Ego4D. Our detailed ablation studies demonstrate the benefits of our method. In addition, our approach exceeds the previous state-of-the-art model by a large margin. We also apply our model to a novel gaze saccade/fixation prediction task and the traditional action recognition problem. The consistent gains suggest the strong generalization capability of our model. We also provide additional visualizations to support our claim that global–local correlation serves as a key representation for predicting gaze fixation from egocentric videos. More details can be found on our website (https://bolinlai.github.io/GLC-EgoGazeEst).
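The global–local correlation idea described in the abstract can be illustrated with a minimal sketch; this is not the authors' released code (available via the project website above). It assumes a PyTorch setting, and the class name, the mean-pooling used to form the global token, and the tensor shapes are illustrative assumptions only.

# Minimal sketch of a global-local correlation module. Illustrative only;
# the pooling choice, projections, and shapes are assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn

class GlobalLocalCorrelation(nn.Module):
    """Correlate one global token with each local (patch) token and use the
    resulting weights to emphasize tokens relevant to the global context."""

    def __init__(self, dim: int):
        super().__init__()
        self.to_query = nn.Linear(dim, dim)  # projects the global token
        self.to_key = nn.Linear(dim, dim)    # projects the local tokens
        self.scale = dim ** -0.5

    def forward(self, local_tokens: torch.Tensor) -> torch.Tensor:
        # local_tokens: (batch, num_tokens, dim) from a transformer encoder.
        # Form a global token by mean-pooling (an assumption for this sketch).
        global_token = local_tokens.mean(dim=1, keepdim=True)   # (B, 1, D)

        q = self.to_query(global_token)                          # (B, 1, D)
        k = self.to_key(local_tokens)                            # (B, N, D)

        # Scaled dot-product correlation of the global token with every local token.
        corr = (q @ k.transpose(1, 2)) * self.scale              # (B, 1, N)
        weights = corr.softmax(dim=-1).transpose(1, 2)           # (B, N, 1)

        # Reweight local tokens by their correlation with the global context.
        return local_tokens * weights

if __name__ == "__main__":
    tokens = torch.randn(2, 196, 768)  # e.g. 14x14 patches, ViT-Base width
    glc = GlobalLocalCorrelation(dim=768)
    print(glc(tokens).shape)           # torch.Size([2, 196, 768])

Note that in the paper the global context is embedded as an additional visual token inside the transformer encoder rather than pooled afterwards; the sketch only conveys how an explicit global–local correlation can reweight local tokens.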
Funder
National Institutes of Health
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Software
Cited by
4 articles.
1. An Outlook into the Future of Egocentric Vision; International Journal of Computer Vision; 2024-05-28
2. Privacy Preserving Gaze Estimation Via Federated Learning Adapted To Egocentric Video; ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2024-04-14
3. A Survey on Multimodal Large Language Models for Autonomous Driving; 2024 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW); 2024-01-01
4. SwinGaze: Egocentric Gaze Estimation with Video Swin Transformer; 2023 IEEE 16th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC); 2023-12-18