Author:
Zhang Shixin, Shan Jianhua, Sun Fuchun, Fang Bin, Yang Yiyong
Abstract
Purpose
The purpose of this paper is to present a novel tactile sensor and a visual-tactile recognition framework to reduce the uncertainty of the visual recognition of transparent objects.
Design/methodology/approach
A multitask learning model recognizes intuitive appearance attributes other than texture in the visual mode. The tactile mode adopts a novel vision-based tactile sensor with a level-regional feature extraction network (LRFE-Net) recognition framework to acquire high-resolution texture information and temperature information. Finally, the attribute results of the two modes are integrated according to a set of integration rules.
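The rule-based integration step described above could be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the attribute names and the rule that tactile results take precedence for texture and temperature (the attributes the abstract assigns to the tactile mode) are assumptions for illustration only.

```python
# Hypothetical sketch of rule-based visual-tactile attribute integration.
# Assumption: visual mode supplies appearance attributes (style, handle,
# transparency); tactile mode supplies texture and temperature, and its
# results take precedence for those attributes.

TACTILE_ATTRIBUTES = {"texture", "temperature"}

def integrate(visual_preds: dict, tactile_preds: dict) -> dict:
    """Merge per-attribute predictions from the two recognition modes."""
    fused = dict(visual_preds)  # start from visual-mode predictions
    for attr, value in tactile_preds.items():
        if attr in TACTILE_ATTRIBUTES:
            fused[attr] = value  # tactile mode overrides for its attributes
    return fused

visual = {"style": "goblet", "handle": "yes",
          "transparency": "high", "texture": "unknown"}
tactile = {"texture": "ribbed", "temperature": "cold"}
print(integrate(visual, tactile))
```

In this sketch each mode contributes the attributes it measures best, so the fused result carries the tactile texture and temperature alongside the visual appearance attributes.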
Findings
The recognition accuracy for attributes such as style, handle, transparency and temperature approaches 100%, and texture recognition accuracy reaches 98.75%. The experimental results demonstrate that the proposed framework with a vision-based tactile sensor improves attribute recognition.
Originality/value
Transparency and visual differences make the texture of transparent glass hard to recognize. A vision-based tactile sensor can improve texture recognition and acquire additional attributes. Integrating visual and tactile information helps acquire a complete set of attribute features.
Subject
Industrial and Manufacturing Engineering, Computer Science Applications, Control and Systems Engineering
Cited by: 9 articles.