Affiliation:
1. Key Laboratory of Optical Engineering, Chinese Academy of Sciences, Chengdu, China
2. School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing, China
3. Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu, China
Abstract
In the field of deep-learning-based landmark detection, most research uses convolutional neural networks to represent landmarks, and rarely adopts Transformers to represent and encode them. Moreover, many works focus on modifying the network structure to improve performance, with little research on the distribution of landmarks. In this article, the authors propose an unsupervised model to extract landmarks of objects in images. First, a Transformer structure is combined with a convolutional neural network to represent and encode the landmarks. Next, positive and negative sample pairs between landmarks are constructed, so that semantically consistent landmarks on the image are pulled closer in the feature space while semantically inconsistent landmarks are pushed farther apart. The authors then concentrate attention on the most active points to distinguish the landmarks of an object from the background. Finally, based on the new contrastive loss, the network reconstructs the image from the landmarks of the object, which are continuously learnt during training. Experiments show that the proposed model outperforms other unsupervised methods on the CelebA, Annotated Facial Landmarks in the Wild (AFLW), and 300W datasets.
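The contrastive objective described in the abstract (consistent landmark pairs pulled together, inconsistent pairs pushed apart) can be sketched as a standard InfoNCE-style loss. This is a minimal illustration only; the function name, cosine-similarity choice, and temperature value are assumptions, not the authors' actual formulation.

```python
# Hypothetical sketch of a contrastive loss over landmark features:
# an anchor landmark should be closer to its semantically consistent
# counterpart (positive) than to other landmarks (negatives).
import numpy as np

def contrastive_landmark_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss. `anchor` and `positive` are feature vectors of
    the same landmark under two views; `negatives` is a (K, D) array of
    features of other landmarks. Lower loss means the anchor sits closer
    to the positive than to the negatives in feature space."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Similarities scaled by temperature; index 0 is the positive pair.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    # Numerically stable log-sum-exp over all candidates.
    m = logits.max()
    log_sum = m + np.log(np.exp(logits - m).sum())
    # Negative log-probability of picking the positive among all candidates.
    return log_sum - logits[0]
```

A semantically consistent pair with dissimilar negatives yields a lower loss than a mismatched pair, which is exactly the pull-closer/push-apart behaviour the abstract describes.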
Publisher
Institution of Engineering and Technology (IET)
Subject
Computer Vision and Pattern Recognition, Software