Abstract
In comparison to convolutional neural networks (CNNs), the recently introduced vision transformer (ViT) has demonstrated impressive results in human pose estimation (HPE). However, (1) the complexity of the traditional ViT grows quadratically with image size, which makes it unsuitable for scaling, and (2) the attention mechanism in both the transformer encoder and decoder adds substantial computational cost to the detector’s overall processing time. Motivated by this, we propose Going shallow and deeper with vIsion Transformers for human Pose estimation (GITPose), a novel architecture that requires no CNN backbone for feature extraction. In particular, we introduce a hierarchical transformer in which multilayer perceptrons encode rich local feature tokens in the initial (i.e., shallow) stages, self-attention modules encode long-range relationships in the deeper layers, and a decoder performs keypoint detection. In addition, we offer a learnable deformable token association (DTA) module to non-uniformly and dynamically combine informative keypoint tokens. Comprehensive evaluation on the COCO and MPII benchmark datasets shows that GITPose achieves competitive average precision (AP) for pose estimation compared with state-of-the-art approaches.
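The shallow-versus-deep division of labor described above can be illustrated with a minimal NumPy sketch: a per-token MLP (linear in the number of tokens) stands in for the shallow stages, and a single-head self-attention step (quadratic in the number of tokens) stands in for the deeper stages. This is an illustrative toy, not the authors' implementation; all function names, dimensions, and weight initializations here are assumptions.

```python
import numpy as np

def mlp_block(x, w1, w2):
    # Shallow stage (illustrative): an MLP applied to each token's
    # features independently -- cheap, no pairwise token interactions.
    return np.maximum(x @ w1, 0) @ w2  # ReLU MLP

def self_attention(x):
    # Deep stage (illustrative): single-head self-attention, which
    # models long-range relationships between all token pairs and
    # therefore costs O(N^2) in the token count N.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)               # pairwise token affinities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ x

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 32))   # 16 patch tokens, 32-dim features
w1 = rng.standard_normal((32, 64)) * 0.1
w2 = rng.standard_normal((64, 32)) * 0.1

shallow = mlp_block(tokens, w1, w2)      # linear in token count
deep = self_attention(shallow)           # quadratic in token count
print(shallow.shape, deep.shape)
```

Restricting attention to the deeper, lower-resolution stages is what keeps the quadratic cost manageable: by the time self-attention is applied, the token count has already been reduced by the shallow stages.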
Publisher
Springer Science and Business Media LLC