Abstract
Human parsing is a fine-grained human semantic segmentation task in computer vision. Owing to the challenges of occlusion, diverse poses, and the similar appearance of different body parts and clothing, human parsing requires greater attention to contextual information. Based on this observation, we enhance the learning of both global and local information to obtain more accurate human parsing results. In this paper, we introduce a Global Transformer Module (GTM) that uses a self-attention mechanism to capture long-range dependencies and effectively extract context information. Moreover, we design a Detailed Feature Enhancement (DFE) architecture to exploit spatial semantics for small targets: low-level visual features from intermediate CNN layers are enhanced with channel and spatial attention. In addition, we adopt an edge detection module to refine the prediction. We conducted extensive experiments on three datasets (i.e., LIP, ATR, and Fashion Clothing) to show the effectiveness of our method, which achieves 54.55% mIoU on the LIP dataset, an average F-1 score of 80.26% on the ATR dataset, and an average F-1 score of 55.19% on the Fashion Clothing dataset.
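The Global Transformer Module described in the abstract relies on self-attention to let every spatial position aggregate context from all other positions. As a rough illustration only (not the paper's implementation, which applies learned query/key/value projections to CNN feature maps), a minimal pure-Python sketch of scaled dot-product self-attention with identity projections:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(X):
    """Single-head self-attention over n feature vectors of dimension d.

    Queries, keys, and values are the inputs themselves (identity
    projections) -- a simplification of a transformer-style global
    context module. Each output row is a convex combination of all
    input rows, weighted by scaled dot-product similarity, which is
    how long-range dependencies are captured.
    """
    n, d = len(X), len(X[0])
    scale = math.sqrt(d)
    out = []
    for i in range(n):
        # Similarity of position i to every position j, scaled by sqrt(d).
        scores = [sum(X[i][k] * X[j][k] for k in range(d)) / scale
                  for j in range(n)]
        w = softmax(scores)
        # Weighted sum of value vectors (here, the inputs themselves).
        out.append([sum(w[j] * X[j][k] for j in range(n)) for k in range(d)])
    return out
```

In the full module, this global aggregation is applied to flattened CNN feature maps, so that pixels belonging to, e.g., a partially occluded limb can borrow evidence from distant but visually related regions.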
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
Cited by: 1 article.