Progressive-Augmented-Based DeepFill for High-Resolution Image Inpainting

Authors:

Cui Muzi 1, Jiang Hao 2, Li Chaozhuo 3

Affiliation:

1. College of Cyber Security, Jinan University, Guangzhou 511436, China

2. Stevens Institute of Technology, Hoboken, NJ 07030, USA

3. Microsoft Research Asia, Beijing 100080, China

Abstract

Image inpainting aims to synthesize missing regions of an image so that they are coherent with the existing visual content. Generative adversarial networks have driven significant progress in image inpainting. However, existing approaches rely heavily on the pixels surrounding a hole while ignoring that its boundaries may be uninformative or noisy, which leads to blurred results. As a complement, global visual features drawn from remote image contexts capture the overall structure and texture of the original image and help generate pixels that blend seamlessly with the existing visual elements. In this paper, we propose a novel model, PA-DeepFill, for high-resolution image inpainting. The generator follows a progressive learning paradigm: it starts from low-resolution images and gradually increases the resolution by stacking additional layers. A novel attention-based module, the gathered attention block, is integrated into the generator to adaptively learn the importance of different distant visual components. In addition, we design a local discriminator tailored to image inpainting, a multi-task-guided, mask-level local discriminator based on PatchGAN, which guides the model to distinguish, at a finer granularity, between regions taken from the original image and regions completed by the model. This local discriminator captures more detailed local information, strengthening the model's discriminative ability and yielding more realistic and natural inpainted images. Our proposal is extensively evaluated on popular datasets, and the experimental results demonstrate its superiority.
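The abstract gives no implementation details, so the following is a minimal PyTorch sketch, not the authors' code, of the general idea behind a mask-level, PatchGAN-style local discriminator: each spatial output score corresponds to an image patch and is supervised against the downsampled inpainting mask, so the network learns to tell original regions from completed ones at patch granularity. All layer widths, kernel sizes, the names MaskLevelPatchDiscriminator and mask_level_loss, and the loss formulation are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskLevelPatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator emitting one logit per local patch."""
    def __init__(self, in_channels=3, base_channels=64):
        super().__init__()
        layers = []
        channels = [in_channels, base_channels, base_channels * 2, base_channels * 4]
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [
                nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ]
        # 1-channel score map: one logit per receptive-field patch.
        layers.append(nn.Conv2d(channels[-1], 1, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, image):
        # Input (B, 3, H, W) -> patch logits (B, 1, H/8, W/8).
        return self.net(image)

def mask_level_loss(disc, completed, mask):
    """Label generated patches (mask == 1) as fake and untouched
    patches (mask == 0) as real; mask is a float tensor in {0, 1}."""
    logits = disc(completed)
    # Downsample the binary mask to the patch-score resolution.
    target = F.interpolate(mask, size=logits.shape[-2:], mode="nearest")
    # 1 - target: original regions form the "real" class.
    return F.binary_cross_entropy_with_logits(logits, 1.0 - target)

In a full GAN setup this patch-level, mask-supervised objective would be combined with a global discriminator and the progressive generator described above; the sketch only illustrates the finer-granularity discrimination the abstract refers to.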

Publisher

MDPI AG

Subject

Information Systems

Cited by 2 articles.
