Decoupled Cross-Modal Transformer for Referring Video Object Segmentation
Authors:
Wu Ao 1, Wang Rong 1,2, Tan Quange 1, Song Zhenfeng 1
Affiliations:
1. School of Information and Cyber Security, People’s Public Security University of China, Beijing 100038, China; 2. Key Laboratory of Security Prevention Technology and Risk Assessment of Ministry of Public Security, Beijing 100038, China
Abstract
Referring video object segmentation (R-VOS) is a fundamental vision-language task that aims to segment, in all video frames, the target referred to by a language expression. Existing query-based R-VOS methods have explored the interaction and alignment between visual and linguistic features in depth, but they fail to transfer information from the two modalities to the query vectors with balanced intensity. Furthermore, most traditional approaches suffer from severe information loss during multi-scale feature fusion, resulting in inaccurate segmentation. In this paper, we propose DCT, an end-to-end decoupled cross-modal transformer for referring video object segmentation, to better exploit multi-modal and multi-scale information. Specifically, we first design a Language-Guided Visual Enhancement module (LGVE) that transmits discriminative linguistic information to visual features at all levels, performing an initial filtering of irrelevant background regions. Then, we propose a decoupled transformer decoder that uses a set of object queries to gather entity-related information from the visual and linguistic features independently, mitigating the attention bias caused by differences in feature size. Finally, a Cross-layer Feature Pyramid Network (CFPN) is introduced to preserve more visual details by establishing direct cross-layer communication. Extensive experiments are carried out on A2D-Sentences, JHMDB-Sentences and Ref-YouTube-VOS. The results show that DCT achieves competitive segmentation accuracy compared with state-of-the-art methods.
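The decoupled decoding idea summarized above can be illustrated with a minimal sketch: object queries attend to the visual and linguistic token sets in two separate cross-attention branches before fusion, so the much larger visual token set cannot dominate a single joint attention. The module name, tensor shapes, and layer composition below are assumptions made for illustration only, not the authors' released implementation.

```python
# Minimal sketch of a decoupled decoder layer, assuming PyTorch.
# Names (DecoupledDecoderLayer, visual_feats, text_feats) are hypothetical.
import torch
import torch.nn as nn


class DecoupledDecoderLayer(nn.Module):
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Separate cross-attention branches, one per modality.
        self.vis_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.txt_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model)
        )
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(3)])

    def forward(self, queries, visual_feats, text_feats):
        # queries:      (B, Nq, C) learnable object queries
        # visual_feats: (B, HW, C) flattened multi-scale visual tokens
        # text_feats:   (B, L,  C) word-level linguistic tokens
        q = self.norms[0](queries + self.self_attn(queries, queries, queries)[0])
        # Gather entity-related cues from each modality independently,
        # then fuse by summation instead of concatenating both modalities
        # into one attention, which would bias attention toward the
        # (much larger) visual feature set.
        v = self.vis_attn(q, visual_feats, visual_feats)[0]
        t = self.txt_attn(q, text_feats, text_feats)[0]
        q = self.norms[1](q + v + t)
        return self.norms[2](q + self.ffn(q))


# Example usage with dummy tensors:
# layer = DecoupledDecoderLayer()
# out = layer(torch.zeros(2, 5, 256), torch.randn(2, 1024, 256), torch.randn(2, 10, 256))
```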
Funder
Double First-Class Innovation Research Project for the People’s Public Security University of China