Authors:
Han Dong, Nie Hong, Chen Jinbao, Chen Meng, Deng Zhen, Zhang Jianwei
Abstract
Purpose
This paper aims to improve the diversity and richness of haptic perception by recognizing multi-modal haptic images.
Design/methodology/approach
First, the multi-modal haptic data collected by BioTac sensors from different objects are pre-processed, and then combined into haptic images. Second, a multi-class and multi-label deep learning model is designed, which can simultaneously learn four haptic features (hardness, thermal conductivity, roughness and texture) from the haptic images, and recognize objects based on these features. The haptic images with different dimensions and modalities are provided for testing the recognition performance of this model.
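The abstract does not specify how the pre-processed signals are combined, but the step of fusing several BioTac data streams into a single multi-channel "haptic image" can be sketched as follows. This is a minimal illustration with NumPy; the modality names (pressure, electrode, temperature) and the image size are assumptions, not details from the paper.

```python
import numpy as np

def to_haptic_image(modalities, size=(32, 32)):
    """Stack 1-D signal windows from several modalities into one
    multi-channel 'haptic image' of shape (H, W, C).

    Each modality is resized to fill one H*W plane and min-max
    normalized to [0, 1] so channels are comparable in scale.
    """
    h, w = size
    channels = []
    for sig in modalities:
        sig = np.resize(np.asarray(sig, dtype=float), h * w)
        lo, hi = sig.min(), sig.max()
        norm = (sig - lo) / (hi - lo) if hi > lo else np.zeros(h * w)
        channels.append(norm.reshape(h, w))
    return np.stack(channels, axis=-1)

# Example: three hypothetical BioTac-like streams
# (pressure, electrode impedances, temperature).
rng = np.random.default_rng(0)
img = to_haptic_image([rng.normal(size=1024) for _ in range(3)])
print(img.shape)  # (32, 32, 3)
```

Such an image can then be fed to a standard convolutional network with one output head per haptic attribute (hardness, thermal conductivity, roughness, texture), which is one common way to realize the multi-class, multi-label design described above.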
Findings
The results show that multi-modal data fusion outperforms single-modal data in tactile understanding, and that haptic images of larger dimensions enable more accurate haptic measurement.
Practical implications
The proposed method has important potential applications in unknown-environment perception, dexterous grasping and manipulation, and other intelligent-robotics domains.
Originality/value
This paper proposes a new deep learning model for extracting multiple haptic features and recognizing objects from multi-modal haptic images.
Subject
Electrical and Electronic Engineering, Industrial and Manufacturing Engineering
Cited by 12 articles.