Abstract
Growing evidence shows the potential benefits of robot-assisted therapy for children with Autism Spectrum Disorder (ASD). However, when developing new robotic technologies, it must be considered that this condition often causes increased anxiety in unfamiliar settings. Children with ASD have difficulty accepting changes, such as the introduction of multiple new technological devices into their routines; therefore, embedded solutions should be preferred. Moreover, robots used in this context should be small, as children find larger ones intimidating, and this in turn limits the computing resources available onboard, since the robots are powered by small batteries. This article presents a study on gesture recognition using video recorded only by the camera embedded in a NAO robot while it was leading a clinical procedure. The video is 2D and of low quality because of the limits of the NAO's embedded computing resources. Recognition is made more challenging by the robot's movements, which alter the view by moving the camera and occasionally obstruct it with the robot's arms for short periods. Despite these challenging real-world conditions, in our experiments we tuned and improved state-of-the-art algorithms to achieve a gesture-classification accuracy above $$90\%$$, with a best accuracy of $$94\%$$. This level of accuracy is suitable for evaluating the children's performance and providing information for the diagnosis and continuous assessment of the therapy. We also considered the performance improvement offered by a low-power embedded GPU AI accelerator, which could be included in future robots to enable gesture analysis during the therapy, allowing it to be adapted to the child's performance.
Funder
Horizon 2020 Framework Programme
Engineering and Physical Sciences Research Council
Ministero dell’Università e della Ricerca
Università di Catania
Publisher
Springer Science and Business Media LLC