Authors:
Shen Jinglin, Gans Nicholas
Abstract
This paper presents a novel system for human–robot interaction in object-grasping applications. Consisting of an RGB-D camera, a projector and a robot manipulator, the proposed system provides intuitive information to the human by analyzing the scene, detecting graspable objects and projecting numbers or symbols directly in front of the objects. Objects are detected using a visual attention model that incorporates color, shape and depth information. The positions and orientations of the projected numbers are based on the shapes, positions and orientations of the corresponding objects. Users select a grasping target by indicating the corresponding number. Projected arrows are then created on the fly to guide a robotic arm to grasp the selected object using visual servoing and deliver it to the human user. Experimental results are presented to demonstrate how the system is used in robot grasping tasks.
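The abstract does not give implementation details of the attention model; the following is a minimal sketch, assuming per-channel saliency maps for color, shape and depth have already been computed, of how such cues could be fused into a single attention map for finding candidate graspable regions. The function name, channel weights and threshold are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (assumption, not the paper's implementation): fuse
# color, shape and depth saliency channels into one attention map.
import numpy as np

def attention_map(color_saliency, shape_saliency, depth_saliency,
                  weights=(0.4, 0.3, 0.3)):
    """Fuse per-pixel saliency channels (each an HxW float array) into one map."""
    def normalize(m):
        # Rescale a channel to [0, 1]; flat channels become all zeros.
        m = m.astype(np.float64)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    channels = [normalize(c) for c in (color_saliency, shape_saliency, depth_saliency)]
    fused = sum(w * c for w, c in zip(weights, channels))
    return fused / fused.max() if fused.max() > 0 else fused

# Usage: pixels above an (assumed) threshold mark candidate graspable regions.
# salient_mask = attention_map(color, shape, depth) > 0.6
```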
Publisher
Cambridge University Press (CUP)
Subject
Computer Science Applications, General Mathematics, Software, Control and Systems Engineering
Cited by
14 articles.