Integrated vision-based system for efficient, semi-automated control of a robotic manipulator
Author:
Hairong Jiang, Juan P. Wachs, Bradley S. Duerstock
Abstract
Purpose
– The purpose of this paper is to develop an integrated, computer vision-based system to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In addition, a gesture recognition interface incorporating object tracking and face recognition was developed specifically for individuals with upper-level spinal cord injuries to function as an efficient, hands-free WMRM controller.
Design/methodology/approach
– Two Kinect® cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was used to interpret hand gestures and locate the operator's face for object positioning, and to send these as commands to control the WMRM. The other sensor was used to automatically recognize the daily living objects selected by the subjects. An object recognition module employing the Speeded Up Robust Features (SURF) algorithm was implemented, and recognition results were sent as commands for “coarse positioning” of the robotic arm near the selected object. Automatic face detection was provided as a shortcut, enabling objects to be positioned close to the subject's face.
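The abstract does not give implementation details of the SURF-based recognition and “coarse positioning” step. As a purely illustrative sketch (not the authors' implementation, and substituting a simplified Harris-corner/patch-descriptor pipeline for SURF), the underlying idea of locating a known object in a scene by matching local features and averaging the matched positions can be expressed as:

```python
import numpy as np

def harris_corners(img, k=0.04, win=2, max_pts=50):
    """Detect interest points with a minimal Harris corner response."""
    gy, gx = np.gradient(img.astype(float))
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy

    def box(a):  # sum gradient products over a (2*win+1)^2 window
        pad = np.pad(a, win)
        out = np.zeros_like(a)
        for dy in range(2 * win + 1):
            for dx in range(2 * win + 1):
                out += pad[dy : dy + a.shape[0], dx : dx + a.shape[1]]
        return out

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    r = sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2
    m = win + 3  # margin so descriptor patches stay in-bounds
    r[:m, :] = r[-m:, :] = -np.inf
    r[:, :m] = r[:, -m:] = -np.inf
    idx = np.argsort(r, axis=None)[::-1][:max_pts]
    ys, xs = np.unravel_index(idx, r.shape)
    return list(zip(ys.tolist(), xs.tolist()))

def descriptor(img, y, x, p=4):
    """Mean-free, unit-norm patch descriptor around (y, x)."""
    patch = img[y - p : y + p + 1, x - p : x + p + 1].astype(float)
    patch = patch - patch.mean()
    n = np.linalg.norm(patch)
    return (patch / n).ravel() if n > 0 else patch.ravel()

def locate_object(ref, scene):
    """Estimate the (row, col) scene position of the reference object
    as the mean of ratio-test-filtered descriptor matches."""
    ref_pts = harris_corners(ref)
    sc_pts = harris_corners(scene, max_pts=300)
    sc_desc = np.array([descriptor(scene, y, x) for y, x in sc_pts])
    matched = []
    for y, x in ref_pts:
        dists = np.linalg.norm(sc_desc - descriptor(ref, y, x), axis=1)
        j = int(np.argmin(dists))
        second = np.partition(dists, 1)[1] if len(dists) > 1 else np.inf
        if dists[j] < 0.8 * second:  # Lowe-style ratio test
            matched.append(sc_pts[j])
    if not matched:
        return None
    ys, xs = zip(*matched)
    return float(np.mean(ys)), float(np.mean(xs))
```

In a real system, SURF keypoints and descriptors (e.g. an OpenCV build with the `xfeatures2d` contrib module) would replace the toy detector and descriptor above; the averaging of matched locations mirrors the “coarse positioning” role described in the abstract.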
Findings
– The gesture recognition interface incorporated hand detection, tracking and recognition algorithms, and yielded a recognition accuracy of 97.5 percent for an eight-gesture lexicon. Task completion times were measured to compare manual (gestures only) and semi-manual (gestures, automatic face detection, and object recognition) WMRM control modes. The use of automatic face and object detection significantly reduced the completion times for retrieving a variety of daily living objects.
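The abstract does not specify the gesture recognition algorithms. As a purely illustrative sketch (not the authors' method), a trajectory-based gesture classifier in the spirit of the time-warping work cited in the reference list (Aach and Church, 2001) could compare a tracked hand path against a gesture lexicon using dynamic time warping; the gesture names below are hypothetical placeholders:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 2-D point trajectories."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            # extend the cheapest of match / insertion / deletion
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return float(cost[n, m])

def classify_gesture(trajectory, templates):
    """Label a hand trajectory with the nearest template under DTW."""
    return min(templates, key=lambda name: dtw_distance(trajectory, templates[name]))

# Hypothetical two-gesture lexicon: a rightward sweep and an upward sweep.
templates = {
    "sweep_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
    "sweep_up":    [(0, 0), (0, 1), (0, 2), (0, 3)],
}
```

DTW tolerates the uneven sampling and speed variation typical of tracked hand paths, which is why warping-based matching is a common baseline for trajectory gestures; a deployed eight-gesture lexicon would simply add more templates.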
Originality/value
– Three computer vision modules were integrated to construct an effective, hands-free interface enabling individuals with upper-limb mobility impairments to control a WMRM.
Subject
General Computer Science
References (33 articles)
1. Aach, J. and Church, G.M. (2001), “Aligning gene expression time series with time warping algorithms”, Bioinformatics, Vol. 17 No. 6, pp. 495-508.
2. Amat, J. (1998), “Intelligent wheelchairs and assistant robots”, in de Almeida, A.T. and Khatib, O. (Eds), Autonomous Robotic Systems, Springer, London, pp. 211-221.
3. Bailey, M., Chanler, A., Maxwell, B., Micire, M., Tsui, K. and Yanco, H. (2007), “Development of vision-based navigation for a robotic wheelchair”, IEEE 10th International Conference on Rehabilitation Robotics (ICORR), pp. 951-957.
4. Bay, H., Tuytelaars, T. and Gool, L.V. (2006), “SURF: speeded up robust features”, in Leonardis, A., Bischof, H. and Pinz, A. (Eds), Computer Vision – ECCV 2006, Springer, Berlin and Heidelberg, pp. 404-417.
5. Black, M.J. and Jepson, A.D. (1998), “A probabilistic framework for matching temporal trajectories: condensation-based recognition of gestures and expressions”, in Burkhardt, H. and Neumann, B. (Eds), Computer Vision – ECCV’98, Springer, Berlin and Heidelberg, pp. 909-924.
Cited by
5 articles.