Affiliation:
1. Department of Informatics - UFPR and UTFPR, Pato Branco-PR, Brazil
2. Department of Informatics - UFPR, Brazil
Abstract
Augmentative and Alternative Communication (AAC) aims to complement or replace spoken language to compensate for expression difficulties faced by people with speech impairments. Computing systems have been developed to support AAC; however, partly due to technical problems, poor interfaces, and limited interaction functions, AAC systems are not widely adopted and used, and therefore reach a limited audience. This article proposes a methodology to support AAC for people with motor impairments, using computer vision and machine learning techniques to allow for personalized gestural interaction. The methodology was applied in a pilot system used both by volunteers without disabilities and by volunteers with motor and speech impairments to create datasets with personalized gestures. The created datasets and a public dataset were used to evaluate the technologies employed for gesture recognition, namely the Support Vector Machine (SVM) and a Convolutional Neural Network (using Transfer Learning), and for motion representation, namely the conventional Motion History Image (MHI) and the Optical Flow-Motion History Image (OF-MHI). Results obtained from the estimation of prediction error using K-fold cross-validation suggest that the SVM associated with OF-MHI presents slightly better results for gesture recognition. Results indicate the technical feasibility of the proposed methodology, which uses a low-cost approach, and reveal the challenges and specific needs observed during the experiment with the target audience.
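To make the evaluation pipeline described in the abstract concrete, the sketch below shows one plausible way to combine a motion representation with an SVM scored by K-fold cross-validation. It is not the authors' code: it uses a simple frame-difference MHI rather than the paper's OF-MHI variant, and the function names, tau, threshold, and feature layout are assumptions for illustration only.

```python
# Hypothetical sketch of an MHI + SVM gesture-recognition evaluation,
# assuming grayscale frame sequences and integer gesture labels as input.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def motion_history_image(frames, tau=30, diff_threshold=25):
    """Accumulate a Motion History Image over a sequence of H x W uint8 frames."""
    mhi = np.zeros(frames[0].shape, dtype=np.float32)
    for prev, curr in zip(frames, frames[1:]):
        # Motion mask from simple frame differencing (the paper uses optical flow).
        motion = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > diff_threshold
        # Classic MHI update: moving pixels are set to tau, the rest decay by one step.
        mhi = np.where(motion, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi / tau  # normalize to [0, 1] before classification

def evaluate_gestures(gesture_clips, labels, k=5):
    """Estimate prediction error with K-fold cross-validation, as in the abstract."""
    X = np.stack([motion_history_image(clip).ravel() for clip in gesture_clips])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, np.asarray(labels), cv=k)
    return scores.mean(), scores.std()
```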
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Science Applications, Human-Computer Interaction
Cited by
13 articles.