Authors:
Cruz Patricio J., Vásconez Juan Pablo, Romero Ricardo, Chico Alex, Benalcázar Marco E., Álvarez Robin, Barona López Lorena Isabel, Valdivieso Caraguay Ángel Leonardo
Abstract
Hand gesture recognition (HGR) based on electromyography (EMG) and inertial measurement unit (IMU) signals has been investigated for human-machine applications in recent years. The information obtained from HGR systems has the potential to help control machines such as video games, vehicles, and even robots. The key idea of an HGR system is therefore to identify both the moment at which a hand gesture is performed and its class. Several state-of-the-art human-machine approaches build HGR systems with supervised machine learning (ML) techniques. However, the use of reinforcement learning (RL) to build HGR systems for human-machine interfaces is still an open problem. This work presents an RL approach to classify EMG-IMU signals obtained with a Myo Armband sensor. For this, we create an agent based on the Deep Q-learning algorithm (DQN) that learns a policy from online experiences to classify EMG-IMU signals. The proposed HGR system reaches accuracies of up to $$97.45 \pm 1.02\%$$ for classification and $$88.05 \pm 3.10\%$$ for recognition, with an average inference time of 20 ms per window observation, and we demonstrate that our method outperforms other approaches in the literature. We then test the HGR system by controlling two robotic platforms: a three-degree-of-freedom (DOF) tandem helicopter test bench and a virtual six-DOF UR5 robot. We employ the designed HGR system and the IMU integrated into the Myo sensor to command and control the motion of both platforms, whose movement is governed by a PID control scheme. Experimental results show the effectiveness of the proposed DQN-based HGR system in controlling both platforms with a fast and accurate response.
Publisher
Springer Science and Business Media LLC
Cited by: 9 articles.