CTA-Net: A gaze estimation network based on dual feature aggregation and attention cross fusion
Published: 2024
Volume: 21
Issue: 3
Pages: 831-850
ISSN: 1820-0214
Container-title: Computer Science and Information Systems
Short-container-title: ComSIS
Language: en
Author:
Xia Chenxing¹, Tao Zhanpeng², Wang Wei³, Zhao Wenjun², Ge Bin², Gao Xiuju³, Li Kuan-Ching⁴, Zhang Yan⁵
Affiliation:
1. College of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, China; Institute of Energy, Hefei Comprehensive National Science Center, Hefei, China; Anhui Purvar Bigdata Technology Co., Ltd., Huainan, China
2. College of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, China
3. Anyang Cigarette Factory, China Tobacco Henan Industrial Co., Anyang, China
4. Department of Computer Science and Information Engineering, Providence University, Taichung City, Taiwan
5. School of Electronics and Information Engineering, Anhui University, Hefei, China
Abstract
Recent work has demonstrated that Transformer models are effective for computer vision tasks. However, the global self-attention mechanism used in Transformers does not adequately capture the local structure and details of images, which can cause a loss of local information and, in turn, lower estimation accuracy on gaze estimation tasks compared with convolutional or sequentially stacked methods. To address this issue, we propose a parallel CNN-Transformer aggregation network (CTA-Net) for gaze estimation, which combines the strength of the Transformer in modeling global context with that of convolutional neural networks (CNNs) in retaining local details. Specifically, a Transformer and a ResNet are deployed to extract facial and eye information, respectively. Additionally, an attention cross fusion (ACFusion) block is embedded in the CNN branch; it decomposes features along the spatial and channel dimensions to recover lost features, suppress noise, and help extract eye features more effectively. Finally, a dual-feature aggregation (DFA) module is proposed to fuse the output features of the two branches with the help of a feature selection mechanism and a residual structure. Experimental results on the MPIIGaze and Gaze360 datasets demonstrate that CTA-Net achieves state-of-the-art results.
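The two-branch layout described in the abstract can be summarized in a short sketch. The PyTorch code below is illustrative only: the module names (ACFusion, DFA) and the parallel Transformer/CNN structure follow the abstract, but the backbone choices (ViT-B/16 and ResNet-18), all layer sizes, and the exact attention and gating arithmetic are assumptions, since the paper's implementation details are not reproduced here.

# Minimal sketch of the CTA-Net layout from the abstract; all concrete
# design choices below are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torchvision.models as models

class ACFusion(nn.Module):
    """Attention cross fusion: one plausible spatial + channel
    decomposition of the CNN features (the paper's design may differ)."""
    def __init__(self, channels: int):
        super().__init__()
        # Channel attention: squeeze-and-excitation style gating.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 8, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 8, channels, 1), nn.Sigmoid(),
        )
        # Spatial attention over pooled channel descriptors.
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel(x)  # reweight channels
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(pooled)  # reweight spatial locations

class DFA(nn.Module):
    """Dual-feature aggregation: gated selection between the two branch
    embeddings plus a residual path (one reading of the abstract)."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.proj = nn.Linear(dim, dim)

    def forward(self, f_face: torch.Tensor, f_eye: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([f_face, f_eye], dim=-1))  # feature selection
        fused = g * f_face + (1 - g) * f_eye
        return fused + self.proj(fused)  # residual structure

class CTANetSketch(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        # Transformer branch on the face image (ViT-B/16 as a stand-in).
        self.face_branch = models.vit_b_16(weights=None)
        self.face_branch.heads = nn.Linear(768, dim)
        # CNN branch on the eye image: ResNet-18 trunk with ACFusion.
        resnet = models.resnet18(weights=None)
        self.eye_trunk = nn.Sequential(*list(resnet.children())[:-2])
        self.acf = ACFusion(512)
        self.eye_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(512, dim))
        self.dfa = DFA(dim)
        self.regressor = nn.Linear(dim, 2)  # gaze as (pitch, yaw)

    def forward(self, face: torch.Tensor, eye: torch.Tensor) -> torch.Tensor:
        f_face = self.face_branch(face)  # global context from the face
        f_eye = self.eye_head(self.acf(self.eye_trunk(eye)))  # local eye detail
        return self.regressor(self.dfa(f_face, f_eye))

A forward pass takes a face crop for the Transformer branch and an eye crop for the CNN branch, e.g. CTANetSketch()(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224)), and returns a 2-D gaze direction; the gated sum in DFA lets the network select per-dimension between global face context and local eye detail before the residual projection.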
Publisher
National Library of Serbia