Cross-modal Contrastive Learning with a Style-mixed Bridge for Single Image 3D Shape Retrieval

Authors:

Dan Song¹, Shumeng Huo¹, Xinwei Fu¹, Chumeng Zhang¹, Wenhui Li¹, An-An Liu¹

Affiliation:

1. School of Electrical and Information Engineering, Tianjin University, China

Abstract

Image-based 3D shape retrieval (IBSR) is a cross-modal matching task that retrieves similar shapes from a 3D repository given a natural image as the query. The topic has received continuous attention, with approaches including joint embedding, adversarial learning, and contrastive learning. The modality gap and the diversity of instance similarities are two obstacles to accurate, fine-grained cross-modal matching. To overcome these obstacles, we propose a style-mixed contrastive learning method (SC-IBSR). On one hand, we propose a style transition module that mixes the styles of images and rendered shape views to form an intermediate style, and injects it into image contents. The resulting style-mixed image features serve as a bridge for subsequent contrastive learning, alleviating the modality gap. On the other hand, the proposed fine-grained consistency constraint targets cross-domain contrast and accounts for the varying importance of negative (and positive) samples. Extensive experiments demonstrate the superiority of style-mixed cross-modal contrastive learning on both instance-level retrieval benchmarks (i.e., Pix3D, Stanford Cars, and CompCars, which annotate shapes to images) and unsupervised category-level retrieval benchmarks (i.e., MI3DOR-1 and MI3DOR-2, with unlabeled 3D shapes). Moreover, experiments on the Office-31 dataset validate the generalization capability of our method. Code and pretrained models will be available at https://github.com/honoria0204/SC-IBSR .
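The abstract does not detail the style transition module. The following minimal PyTorch sketch assumes an AdaIN-style formulation in which a feature map's "style" is its channel-wise mean and standard deviation, and the intermediate style is a convex combination of the image and rendered-view statistics; all names (`style_mixed_bridge`, `alpha`) are illustrative, not the paper's actual API.

```python
# Minimal sketch of style mixing, assuming AdaIN-style statistics.
# The paper's actual style transition module may differ.
import torch

def channel_stats(feat: torch.Tensor, eps: float = 1e-5):
    """Channel-wise mean and std over spatial dims of an (N, C, H, W) map."""
    mean = feat.mean(dim=(2, 3), keepdim=True)
    std = (feat.var(dim=(2, 3), keepdim=True) + eps).sqrt()
    return mean, std

def style_mixed_bridge(img_feat, view_feat, alpha: float = 0.5):
    """Normalize image content, then re-inject a style interpolated
    between the image style and the rendered-view style."""
    mu_i, sigma_i = channel_stats(img_feat)
    mu_v, sigma_v = channel_stats(view_feat)
    # Intermediate style: interpolate the two modalities' statistics.
    mu_mix = alpha * mu_i + (1 - alpha) * mu_v
    sigma_mix = alpha * sigma_i + (1 - alpha) * sigma_v
    # Strip the image's own style, then apply the mixed style.
    content = (img_feat - mu_i) / sigma_i
    return content * sigma_mix + mu_mix

# Example: bridge features for a batch of 8 images and their paired views.
img_feat = torch.randn(8, 256, 14, 14)
view_feat = torch.randn(8, 256, 14, 14)
bridged = style_mixed_bridge(img_feat, view_feat, alpha=0.5)
```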
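Similarly, the fine-grained consistency constraint is only described at a high level. Below is a hedged sketch of a cross-modal InfoNCE loss with per-pair weights, in which harder (more similar) negatives receive larger weights; the actual weighting scheme of SC-IBSR is not specified in the abstract and may differ.

```python
# Hedged sketch: cross-modal contrastive loss with weighted negatives.
import torch
import torch.nn.functional as F

def weighted_cross_modal_nce(img_emb, shape_emb, tau: float = 0.07):
    """img_emb, shape_emb: (N, D); row i of each forms a positive pair."""
    img_emb = F.normalize(img_emb, dim=1)
    shape_emb = F.normalize(shape_emb, dim=1)
    sim = img_emb @ shape_emb.t() / tau            # (N, N) similarity logits
    n = sim.size(0)
    pos_mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    # Illustrative weighting: emphasize harder (more similar) negatives.
    with torch.no_grad():
        w = torch.softmax(sim.masked_fill(pos_mask, float('-inf')), dim=1)
        w = w * (n - 1)                            # keep weights O(1) on average
    exp_sim = sim.exp()
    pos = exp_sim.diagonal()
    neg = (w * exp_sim.masked_fill(pos_mask, 0.0)).sum(dim=1)
    return (-(pos / (pos + neg)).log()).mean()

# Example: 8 paired image/shape embeddings of dimension 128.
loss = weighted_cross_modal_nce(torch.randn(8, 128), torch.randn(8, 128))
```

A symmetric term (shapes as anchors, images as candidates) would typically be averaged in as well; that choice is likewise an assumption here.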

Publisher

Association for Computing Machinery (ACM)

