Affiliation:
1. City University of Hong Kong, China
2. University of Bath, United Kingdom
Abstract
We propose UniColor, the first unified framework to support colorization in multiple modalities, covering both unconditional and conditional settings such as stroke, exemplar, text, and even a mix of them. Rather than learning a separate model for each type of condition, we introduce a two-stage colorization framework that incorporates various conditions into a single model. In the first stage, multi-modal conditions are converted into a common representation of hint points. In particular, we propose a novel CLIP-based method to convert text into hint points. In the second stage, we propose a Transformer-based network composed of Chroma-VQGAN and Hybrid-Transformer to generate diverse and high-quality colorization results conditioned on hint points. Both qualitative and quantitative comparisons demonstrate that our method outperforms state-of-the-art methods in every control modality and further enables multi-modal colorization that was not feasible before. Moreover, we design an interactive interface that shows the effectiveness of our unified framework in practical usage, including automatic colorization, hybrid-control colorization, local recolorization, and iterative color editing. Our code and models are available at https://luckyhzt.github.io/unicolor.
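As a rough illustration of the first-stage idea of converting a text prompt into hint points, the sketch below (not the authors' code) scores grayscale image patches against the prompt with an off-the-shelf CLIP model and places a hint of the named color at the centers of the best-matching patches. The patch grid, the COLOR_NAMES table, and the text_to_hint_points function are hypothetical simplifications, not the paper's exact procedure.

```python
# Minimal sketch, assuming the openai/CLIP package (pip install git+https://github.com/openai/CLIP).
# Scores fixed-size grayscale patches against a text prompt and returns hint points
# (patch centers) paired with an illustrative RGB value for the named color.
import torch
import clip
from PIL import Image

# Hypothetical color lookup; the real method derives colors from the text differently.
COLOR_NAMES = {"red": (234, 51, 35), "blue": (0, 102, 204), "green": (60, 160, 60)}

def text_to_hint_points(gray_img: Image.Image, prompt: str, color: str,
                        patch: int = 64, top_k: int = 3):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)
    text = clip.tokenize([prompt]).to(device)

    scores, centers = [], []
    w, h = gray_img.size
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            crop = gray_img.crop((x, y, x + patch, y + patch)).convert("RGB")
            image = preprocess(crop).unsqueeze(0).to(device)
            with torch.no_grad():
                # CLIP's forward returns (logits_per_image, logits_per_text).
                sim = model(image, text)[0]
            scores.append(sim.item())
            centers.append((x + patch // 2, y + patch // 2))

    # Keep the k patches that best match the prompt and attach the named color.
    best = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:top_k]
    return [(centers[i], COLOR_NAMES[color]) for i in best]

# Example usage (hypothetical file name):
# hints = text_to_hint_points(Image.open("car.jpg").convert("L"), "a red car", "red")
```

The resulting (position, color) pairs play the role of hint points that the second-stage Chroma-VQGAN and Hybrid-Transformer network conditions on; that stage is not sketched here.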
Funder
Hong Kong Research Grants Council (RGC) GRF Scheme
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design
Cited by
16 articles.
1. Two-stage image colorization via color codebook. Expert Systems with Applications, 2024-09.
2. Multimodal Semantic-Aware Automatic Colorization with Diffusion Prior. 2024 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), 2024-07-15.
3. Versatile Vision Foundation Model for Image and Video Colorization. Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers '24, 2024-07-13.
4. Shadow-aware image colorization. The Visual Computer, 2024-06-04.
5. Real-Time User-guided Adaptive Colorization with Vision Transformer. 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024-01-03.