Abstract
The game of Jenga is a benchmark for developing innovative manipulation solutions for complex tasks, as it encourages the study of novel robotic methods for successfully extracting blocks from a tower. A Jenga game involves many traits of complex industrial and surgical manipulation tasks: it requires a multi-step strategy, the combination of visual and tactile data, and highly precise motion of a robotic arm to perform a single block extraction. In this work, we propose a novel, cost-effective architecture for playing Jenga with e.DO, a 6-DOF anthropomorphic manipulator manufactured by Comau, a standard depth camera, and an inexpensive monodirectional force sensor. Our solution focuses on a visual-based control strategy to accurately align the end-effector with the desired block, enabling block extraction by pushing. To this end, we trained an instance segmentation deep learning model on a custom synthetic dataset to segment each piece of the Jenga tower, allowing visual tracking of the desired block's pose during the motion of the manipulator. We integrated the visual-based strategy with a 1D force sensor to detect whether the block could be safely removed by identifying a force threshold value. Our experiments show that our low-cost solution allows e.DO to precisely reach removable blocks and perform up to 14 consecutive extractions.
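The force-threshold decision described in the abstract can be sketched as follows. This is an illustrative reading only, not the paper's implementation: the function name, sampling interface, and the 2.0 N threshold are assumptions (the paper identifies its threshold value empirically).

```python
def is_block_removable(force_samples_n, threshold_n=2.0):
    """Decide whether a block can be safely extracted by pushing.

    force_samples_n: sequence of 1D force readings (newtons) collected
        from the monodirectional sensor while the end-effector pushes.
    threshold_n: hypothetical safety threshold; 2.0 N is illustrative,
        not the value used in the paper.

    Returns True if the peak resistance stays below the threshold,
    i.e. the block is loose enough to remove without toppling the tower.
    """
    if not force_samples_n:
        raise ValueError("no force readings collected")
    return max(force_samples_n) < threshold_n
```

In such a scheme, the push is aborted as soon as a reading crosses the threshold, so in practice the check would run incrementally inside the control loop rather than on a completed buffer.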
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by 1 article.