Abstract
Hand gesture segmentation is an essential step in recognizing hand gestures for human–robot interaction. However, complex backgrounds and the variety of gesture shapes lead to low semantic segmentation accuracy in existing lightweight methods, owing to imprecise features and an imbalance between branches. To remedy these problems, we propose a new segmentation structure for hand gestures. Based on this structure, a novel tri-branch lightweight segmentation network (BLSNet) is proposed for gesture segmentation. Corresponding to the parts of the structure, three branches are employed to extract local features, boundaries, and semantic hand features. In the boundary branch, to extract multi-scale features of hand gesture contours, a novel multi-scale depth-wise strip convolution (MDSC) module is proposed that exploits the directionality of gesture boundaries. For hand boundary details, we propose a new boundary weight (BW) module based on boundary attention. To identify the hand location against complex backgrounds, a semantic branch with continuous downsampling is used. The Ghost bottleneck serves as the building block throughout BLSNet. To verify the effectiveness of the proposed network, experiments were conducted on the OUHANDS and HGR1 datasets, and the results demonstrate that the proposed method outperforms the comparison methods.
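The core idea behind the MDSC module, as the abstract describes it, is to run depth-wise strip convolutions (a 1×k pass followed by a k×1 pass, per channel) at several kernel lengths and combine the results, so that elongated boundary features are captured at multiple scales. A minimal NumPy sketch of that idea follows; the function names, the averaging kernels, and the scale set (3, 5, 7) are illustrative assumptions, not the paper's implementation, which would use learned weights inside a deep network.

```python
import numpy as np

def strip_conv(x, kh, kv):
    """Depth-wise strip convolution: per-channel 1xk then kx1 pass, 'same' padding.
    x: array of shape (C, H, W); kh, kv: 1-D kernels (odd length)."""
    C, H, W = x.shape
    out = np.zeros_like(x)
    ph, pv = kh.size // 2, kv.size // 2
    for c in range(C):
        # horizontal 1 x k pass
        padded = np.pad(x[c], ((0, 0), (ph, ph)))
        tmp = np.zeros((H, W))
        for i, w in enumerate(kh):
            tmp += w * padded[:, i:i + W]
        # vertical k x 1 pass
        padded = np.pad(tmp, ((pv, pv), (0, 0)))
        res = np.zeros((H, W))
        for i, w in enumerate(kv):
            res += w * padded[i:i + H, :]
        out[c] = res
    return out

def mdsc(x, scales=(3, 5, 7)):
    """Multi-scale depth-wise strip convolution: sum strip convs over kernel lengths.
    Averaging kernels stand in for the learned weights a real network would use."""
    y = np.zeros_like(x)
    for k in scales:
        kh = np.full(k, 1.0 / k)
        kv = np.full(k, 1.0 / k)
        y += strip_conv(x, kh, kv)
    return y

# Example: a 4-channel 16x16 feature map keeps its shape through the module.
features = np.random.rand(4, 16, 16)
fused = mdsc(features)
```

Factoring a k×k depth-wise convolution into a 1×k and a k×1 strip pass reduces the per-channel cost from k² to 2k multiplications per pixel, which is what makes a multi-scale stack of such convolutions cheap enough for a lightweight network.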
Funder
Scientific Research Foundation of Hebei University for Distinguished Young Scholars
Scientific Research Foundation of Colleges and Universities in Hebei Province
Science and Technology Program of Hebei Province
Central Government Guides Local Science and Technology Development Fund Projects
Publisher
Springer Science and Business Media LLC
Subject
Computational Mathematics, Engineering (miscellaneous), Information Systems, Artificial Intelligence