HAIGEN: Towards Human-AI Collaboration for Facilitating Creativity and Style Generation in Fashion Design
Published: 2024-08-22
Volume: 8
Issue: 3
Pages: 1-27
ISSN: 2474-9567
Container title: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Short container title: Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.
Language: en
Authors:
Jianan Jiang¹, Di Wu¹, Hanhui Deng¹, Yidan Long², Wenyi Tang², Xiang Li², Can Liu³, Zhanpeng Jin⁴, Wenlei Zhang⁵, Tangquan Qi⁵
Affiliations:
1. Data Intelligence and Service Collaboration (DISCO) Lab, Hunan University, China
2. College of Engineering and Design, Hunan Normal University, China
3. School of Creative Media, City University of Hong Kong, China
4. School of Future Technology, South China University of Technology, China
5. Wondershare Technology, China
Abstract
The process of fashion design usually involves sketching, refining, and coloring, with designers drawing inspiration from various images to fuel their creative endeavors. However, conventional image search methods often yield irrelevant results, impeding the design process. Moreover, creating and coloring sketches can be time-consuming and demanding, acting as a bottleneck in the design workflow. In this work, we introduce HAIGEN (Human-AI Collaboration for GENeration), an efficient Human-AI collaborative fashion design system developed to aid designers. HAIGEN consists of four modules. The T2IM, located in the cloud, generates reference inspiration images directly from text prompts. The other three modules run locally: the I2SM batch-converts an image material library into a sketch material library in a designer's personal style, the SRM recommends similar sketches from the generated library to designers for further refinement, and the STM colors the refined sketch according to the style of the inspiration images. Through our system, any designer can perform local personalized fine-tuning while leveraging the powerful generation capabilities of large models in the cloud, streamlining the entire design process. Because our approach combines cloud and local model deployment, it safeguards design privacy by avoiding the need to upload designers' personalized local data. We validated the effectiveness of each module through extensive qualitative and quantitative experiments, and user surveys confirmed that HAIGEN offers significant advantages in design efficiency, positioning it as a new generation of design-aid tool for designers.
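The abstract describes a four-module pipeline split between a cloud-hosted T2IM and three locally deployed modules (I2SM, SRM, STM). Below is a minimal, purely illustrative Python sketch of that cloud/local workflow; the module names come from the abstract, but all function signatures and placeholder bodies are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of the HAIGEN workflow described in the abstract.
# Placeholder logic stands in for the actual cloud and local models.

from dataclasses import dataclass
from typing import List

@dataclass
class Image:
    data: bytes  # raw image payload (placeholder)

def t2im_cloud(prompt: str) -> List[Image]:
    """Cloud T2IM: generate reference inspiration images from a text prompt
    (placeholder for a remote call to a large text-to-image model)."""
    return [Image(data=b"") for _ in range(4)]

def i2sm_local(materials: List[Image]) -> List[Image]:
    """Local I2SM: batch-convert an image material library into sketches
    in the designer's personal style (placeholder for a fine-tuned model)."""
    return [Image(data=m.data) for m in materials]

def srm_local(query: Image, sketch_library: List[Image], k: int = 3) -> List[Image]:
    """Local SRM: recommend the k sketches most similar to the query
    (placeholder similarity ranking)."""
    return sketch_library[:k]

def stm_local(sketch: Image, style_reference: Image) -> Image:
    """Local STM: color the refined sketch in the style of an inspiration
    image (placeholder style transfer)."""
    return Image(data=sketch.data)

# End-to-end flow: inspiration in the cloud, personalization kept local.
inspirations = t2im_cloud("flowing summer dress with floral motifs")
sketches = i2sm_local(inspirations)
candidates = srm_local(sketches[0], sketches)
colored = stm_local(candidates[0], inspirations[0])
```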
Funders
National Natural Science Foundation of China; Key R&D Program of Hunan Province
Publisher
Association for Computing Machinery (ACM)