SSTNet: Saliency sparse transformers network with tokenized dilation for salient object detection
-
Published:2023-07-29
Issue:13
Volume:17
Pages:3759-3776
-
ISSN:1751-9659
-
Container-title:IET Image Processing
-
language:en
-
Short-container-title:IET Image Processing
Author:
Yang Mo 1,
Liu Ziyan 1,2 (ORCID),
Dong Wen 1,
Wu Ying 1
Affiliation:
1. College of Big Data and Information Engineering, Guizhou University, Guiyang, China
2. State Key Laboratory of Public Big Data, Guizhou University, Guiyang, China
Abstract
The vision Transformer structure performs better in salient object detection than convolutional neural network (CNN)‐based approaches. A vision Transformer predicts saliency by modelling long‐range dependencies from sequence to sequence without convolution. However, the extraction of irrelevant contextual information makes it challenging to locate salient objects and recover structural details. A novel saliency sparse Transformer network (SSTNet) is proposed that exploits sparse attention to guide saliency prediction. The convolution‐like operation in the token‐to‐token (T2T) module is replaced with a dilated variant to capture relationships over larger regions and improve contextual information fusion. An adaptive position bias module is designed for the vision Transformer so that the position bias suits variable‐sized RGB images. A saliency sparse Transformer module concentrates attention on the global context by selecting the Top‐k most relevant segments, further improving the detection results. Besides, a cross‐modality fusion module (CMF) exploits the complementary RGB image features and spatial depth information to enhance feature fusion performance. Extensive experiments on multiple benchmark datasets demonstrate the method's effectiveness and superiority: it is comparable to state‐of‐the‐art RGB and RGB‐D saliency methods.
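The Top‐k selection described in the abstract can be illustrated as follows. This is a minimal single‐head sketch, not the authors' implementation: the function name, shapes, and the choice of NumPy are illustrative assumptions. For each query, only the k largest attention scores survive the softmax, so every output row is a mixture of at most k (plus ties) value vectors.

```python
import numpy as np

def topk_sparse_attention(Q, K, V, k):
    """Single-head scaled dot-product attention that keeps only the
    Top-k most relevant keys per query before the softmax.
    (Illustrative sketch of the Top-k idea, not the paper's code.)"""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n_q, n_k) relevance
    # Threshold each row at its k-th largest score; mask the rest.
    kth = np.sort(scores, axis=-1)[:, -k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    # Numerically stable softmax over the surviving scores only.
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query tokens, dim 8
K = rng.normal(size=(6, 8))   # 6 key/value tokens
V = rng.normal(size=(6, 8))
out = topk_sparse_attention(Q, K, V, k=2)
```

Setting k equal to the number of keys recovers ordinary dense attention; smaller k sparsifies the attention map, which is the mechanism the abstract credits with concentrating attention on the most relevant global context.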
Funder
Science and Technology Program of Guizhou Province
Publisher
Institution of Engineering and Technology (IET)
Subject
Electrical and Electronic Engineering,Computer Vision and Pattern Recognition,Signal Processing,Software
Cited by
1 article.
1. Transformer technology in molecular science;WIREs Computational Molecular Science;2024-07