Affiliation:
1. SUSTech Institute of Robotics, Southern University of Science and Technology, Shenzhen, China
2. Ocean Science and Engineering, Southern University of Science and Technology, Shenzhen, China
3. School of Design, Southern University of Science and Technology, Shenzhen, China
4. Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen, China
Abstract
Incorporating authentic tactile interactions into virtual environments remains a notable challenge for the emerging field of soft robotic metamaterials. In this study, a vision-based approach is introduced for learning proprioceptive interactions by simultaneously reconstructing the shape and touch of a soft robotic metamaterial (SRM) during physical engagement. The SRM design is optimized to the size of a finger with enhanced adaptability in 3D interactions and incorporates a see-through viewing field inside, which can be visually captured by a miniature camera underneath to provide a rich set of image features for touch digitization. The proprioceptive process is modeled with aggregated multi-handles through constrained geometric optimization, enabling real-time, precise, and realistic estimation of the finger's mesh deformation within a virtual environment. A data-driven learning model is also proposed to estimate touch positions, achieving reliable results with R² scores of 0.9681, 0.9415, and 0.9541 along the x, y, and z axes. Furthermore, the robust performance of the proposed methods is demonstrated in touch-based human–cybernetic interfaces and human–robot collaborative grasping. This study opens the door to future applications in touch-based digital twin interactions through vision-based soft proprioception.
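To illustrate the evaluation protocol implied by the abstract, the minimal sketch below fits a regressor from image-derived features to 3D touch positions and reports per-axis R² scores, the metric quoted above. The feature representation, the model choice (ridge regression), and the synthetic data are assumptions for demonstration only; the paper's actual learning model is not specified here.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Placeholder data: image-derived feature vectors -> 3D touch positions (x, y, z).
# In the paper's setting, features would come from the miniature camera's view
# of the SRM's internal viewing field; here they are synthetic.
X = rng.normal(size=(2000, 128))
W = rng.normal(size=(128, 3))
y = X @ W + 0.1 * rng.normal(size=(2000, 3))   # synthetic ground-truth positions

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Ridge natively supports multi-output targets, so one model covers all axes.
model = Ridge(alpha=1.0).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Per-axis R², matching how the abstract reports accuracy along x, y, and z.
for axis, score in zip("xyz", r2_score(y_test, y_pred, multioutput="raw_values")):
    print(f"R² along {axis}: {score:.4f}")
```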
Funder
National Natural Science Foundation of China