Abstract
Over the last few years, deep-learning-based approaches have achieved outstanding improvements in natural image matting. Many of these methods can generate visually plausible alpha estimations, but typically yield blurry structures or textures in the semitransparent area. This is due to the local ambiguity of transparent objects. One possible solution is to leverage far-surrounding information to estimate the local opacity. Traditional affinity-based methods often suffer from high computational complexity and are not suitable for high-resolution alpha estimation. Inspired by affinity-based methods and the success of contextual attention in inpainting, we develop a novel end-to-end approach for natural image matting with a guided contextual attention module, which is specifically designed for image matting. The guided contextual attention module directly propagates high-level opacity information globally based on the learned low-level affinity. The proposed method can mimic the information flow of affinity-based methods while simultaneously utilizing the rich features learned by deep neural networks. Experimental results on the Composition-1k testing set and the alphamatting.com benchmark dataset demonstrate that our method outperforms state-of-the-art approaches in natural image matting. Code and models are available at https://github.com/Yaoyi-Li/GCA-Matting.
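As a rough illustration of the core idea in the abstract (not the authors' implementation, which is available at the repository above), the sketch below shows affinity-guided propagation: a pairwise affinity matrix is computed from low-level guidance features and used to mix high-level features globally across spatial locations. The function name and the softmax-over-dot-products affinity are simplifying assumptions for illustration only.

```python
import numpy as np

def guided_propagation(guidance, features):
    """Hypothetical sketch of affinity-guided feature propagation.

    guidance: (N, Cg) low-level guidance features, one row per spatial location
    features: (N, Cf) high-level features to be propagated

    The affinity between locations is taken as a row-wise softmax over dot
    products of L2-normalized guidance features -- a simplified stand-in for
    the learned affinity described in the abstract.
    """
    g = guidance / (np.linalg.norm(guidance, axis=1, keepdims=True) + 1e-8)
    logits = g @ g.T                                  # (N, N) pairwise similarity
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    affinity = np.exp(logits)
    affinity /= affinity.sum(axis=1, keepdims=True)   # row-wise softmax
    return affinity @ features                        # global propagation

# Toy usage: 4 spatial locations, 3-dim guidance, 2-dim features
rng = np.random.default_rng(0)
out = guided_propagation(rng.standard_normal((4, 3)),
                         rng.standard_normal((4, 2)))
print(out.shape)  # (4, 2)
```

Because each affinity row sums to one, every output location is a convex combination of the input features, weighted by how similar its guidance features are to those of other locations; in the paper this mechanism lets confident opacity estimates flow into ambiguous semitransparent regions.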
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
80 articles.
1. Multi-behavior recommendation with SVD Graph Neural Networks;Expert Systems with Applications;2024-09
2. Text-Guided Portrait Image Matting;IEEE Transactions on Artificial Intelligence;2024-08
3. Hand Enhanced Video Matting;Proceedings of the 2024 5th International Conference on Computing, Networks and Internet of Things;2024-05-24
4. Color subspace exploring for natural image matting;IET Image Processing;2024-04-24
5. KD-Former: Transformer Knowledge Distillation for Image Matting;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14