Matting Algorithm with Improved Portrait Details for Images with Complex Backgrounds
Published: 2024-02-27
Journal: Applied Sciences, Volume 14, Issue 5, Page 1942
ISSN: 2076-3417
Language: en
Authors:
Li Rui 1,2,3,4 (ORCID); Zhang Dan 1,4; Geng Sheng-Ling 1,3,4; Zhou Ming-Quan 1,3,4
Affiliations:
1. School of Computer Science, Qinghai Normal University, Xining 810000, China
2. School of Computer and Software, Nanyang Institute of Technology, Nanyang 473000, China
3. Academy of Plateau Science and Sustainability, People’s Government of Qinghai Province & Beijing Normal University, Haihu, Xining 810004, China
4. The State Key Laboratory of Tibetan Intelligent Information Processing and Application, Qinghai Normal University, Hutai, Xining 810008, China
Abstract
With the continuous development of virtual reality and digital image applications, video of complex scenes is increasingly in demand, and portrait matting has accordingly become a popular research topic. In this paper, a new matting algorithm with improved portrait details for images with complex backgrounds (MORLIPO) is proposed. This work combines a background restoration module (BRM) and a fine-grained matting module (FGMatting) to achieve high-detail matting for images with complex backgrounds. The background is recovered from a single input image or video and serves as a prior that helps generate a more accurate alpha matte. The main framework builds on the image matting model MODNet, the lightweight MobileNetV2 network, and the background restoration module, which both preserves the background information of the current image and, for video, uses the background prior of the previous frame to predict the alpha matte of the current frame more accurately. The fine-grained matting module extracts and retains fine-grained foreground details and is combined with the semantic module to achieve more accurate matting. Our design allows end-to-end training on a single NVIDIA 3090 GPU, and experiments are conducted on publicly available datasets. Experimental validation shows that our method performs well on both visual quality and objective evaluation metrics.
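A minimal sketch of the standard matting composition that alpha-matte prediction inverts (this is the textbook matting equation, not the paper's code; the function names are illustrative). Each pixel of an image I is modeled as a blend of foreground F and background B weighted by the alpha matte, I = alpha * F + (1 - alpha) * B, which is why a recovered background, as from the BRM, makes solving for alpha better constrained:

```python
def composite(fg, bg, alpha):
    """Matting composition for one pixel value: I = alpha*F + (1-alpha)*B."""
    return alpha * fg + (1.0 - alpha) * bg

def solve_alpha(i, fg, bg, eps=1e-6):
    """Closed-form alpha for one (single-channel) pixel when both the
    foreground and the background values are known: alpha = (I-B)/(F-B).
    eps guards against division by zero where F == B; the result is
    clamped to the valid matte range [0, 1]."""
    a = (i - bg) / (fg - bg + eps)
    return min(1.0, max(0.0, a))

# Toy round trip: composite a pixel with alpha = 0.5, then recover it
# from the image given the known foreground and background values.
i = composite(1.0, 0.0, 0.5)
recovered = solve_alpha(i, 1.0, 0.0)
```

In real matting F and B are unknown per pixel, so networks such as MODNet predict alpha directly; a background prior replaces the unknown B above with an estimate, reducing the ambiguity.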
Funders:
Qinghai Province Key R&D and Transformation Programme; National Key R&D Plan; National Natural Science Foundation of China; Independent Project Fund of the State Key Laboratory of Tibetan Intelligent Information Processing and Application