Transform, Warp, and Dress: A New Transformation-guided Model for Virtual Try-on

Authors:

Matteo Fincato¹, Marcella Cornia¹, Federico Landi¹, Fabio Cesari², Rita Cucchiara¹

Affiliation:

1. University of Modena and Reggio Emilia, Modena, Italy

2. YOOX NET-A-PORTER GROUP, Bologna, Italy

Abstract

Virtual try-on has recently emerged in the computer vision and multimedia communities with the development of architectures that can generate realistic images of a target person wearing a custom garment. This research interest is motivated by the large role that e-commerce and online shopping play in our society. Indeed, the virtual try-on task offers many opportunities to improve the efficiency of preparing fashion catalogs and to enhance the online user experience. The problem, however, is far from being solved: current architectures do not reach sufficient accuracy with respect to manually generated images and can only be trained on image pairs with limited variety. Existing virtual try-on datasets have two main limits: they contain only female models, and all images are available only at low resolution. This not only affects the generalization capabilities of the trained architectures but also makes deployment to real applications impractical. To overcome these issues, we present Dress Code, a new dataset for virtual try-on that contains high-resolution images of a large variety of upper-body clothes and both male and female models. Leveraging this enriched dataset, we propose a new model for virtual try-on capable of generating high-quality and photo-realistic images through a three-stage pipeline. The first two stages perform two different geometric transformations to warp the desired garment and fit it to the target person's pose and body shape. Then, a generative network produces the new image of that same person wearing the try-on garment. We test the proposed solution on the most widely used dataset for this task as well as on our newly collected dataset, and demonstrate its effectiveness compared to current state-of-the-art methods. Through extensive analyses on our Dress Code dataset, we show the adaptability of our model, which can generate try-on images even at a higher resolution.
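The three-stage pipeline described above can be sketched schematically. The function names, the use of an affine transform for the coarse warp, and the non-rigid offset refinement are illustrative assumptions based on common virtual try-on designs (e.g., thin-plate-spline warping), not the authors' actual implementation:

```python
# Hedged sketch of a transformation-guided try-on pipeline.
# All names and signatures are hypothetical; stages 1-2 are geometric
# warps and stage 3 stands in for a generative network.
import numpy as np

def affine_warp(keypoints, matrix):
    """Stage 1 (coarse): apply a 2x3 affine transform to garment
    keypoints so the clothing roughly aligns with the target pose."""
    homogeneous = np.hstack([keypoints, np.ones((keypoints.shape[0], 1))])
    return homogeneous @ matrix.T  # shape (N, 2)

def nonrigid_refine(points, offsets):
    """Stage 2 (fine): a non-rigid refinement; real systems often use a
    thin-plate-spline warp whose parameters are predicted by a CNN."""
    return points + offsets

def generate_tryon(person_repr, warped_garment):
    """Stage 3: a generative network would fuse the person
    representation with the warped garment; here we only concatenate
    features to show the data flow."""
    return np.concatenate([person_repr, warped_garment], axis=-1)

# Toy run on four garment keypoints of a unit square.
garment_pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
A = np.array([[1.0, 0.0, 0.5],   # shift x by 0.5
              [0.0, 1.0, 0.2]])  # shift y by 0.2
coarse = affine_warp(garment_pts, A)
fine = nonrigid_refine(coarse, np.full_like(coarse, 0.01))
output = generate_tryon(np.zeros((4, 2)), fine)
print(output.shape)  # (4, 4)
```

The split into a coarse global transform followed by a fine local one mirrors the abstract's "two different geometric transformations": the first handles overall placement, the second absorbs residual misalignment from pose and body shape.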

Funder

“SUPER—Supercomputing Unified Platform”

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Networks and Communications, Hardware and Architecture

References (72 articles)

1. Robust Cloth Warping via Multi-Scale Patch Adversarial Loss for Virtual Try-On Framework

2. Shane Barratt and Rishi Sharma. 2018. A note on the inception score. In Proceedings of the ICML Workshops.

3. CLOTH3D: Clothed 3D Humans

4. Mikołaj Bińkowski, Dougal J. Sutherland, Michael Arbel, and Arthur Gretton. 2018. Demystifying MMD GANs. In Proceedings of the ICLR.

5. A novel cubic-order algorithm for approximating principal direction vectors

Cited by 11 articles.

1. Appearance and Pose-guided Human Generation: A Survey;ACM Computing Surveys;2024-01-12

2. Limb-Aware Virtual Try-On Network With Progressive Clothing Warping;IEEE Transactions on Multimedia;2024

3. Cloth Interactive Transformer for Virtual Try-On;ACM Transactions on Multimedia Computing, Communications, and Applications;2023-12-11

4. Self-Adaptive Clothing Mapping Based Virtual Try-on;ACM Transactions on Multimedia Computing, Communications, and Applications;2023-10-23

5. Virtual Footwear Try-On in Augmented Reality Using Deep Learning Models;Journal of Computing and Information Science in Engineering;2023-10-09
