Empowering Deaf-Hearing Communication: Exploring Synergies between Predictive and Generative AI-Based Strategies towards (Portuguese) Sign Language Interpretation
Published: 2023-10-25
Volume: 9
Issue: 11
Page: 235
ISSN: 2313-433X
Container-title: Journal of Imaging
Language: en
Short-container-title: J. Imaging
Author:
Telmo Adão 1,2 (ORCID), João Oliveira 3, Somayeh Shahrabadi 3, Hugo Jesus 3, Marco Fernandes 4, Ângelo Costa 5, Vânia Ferreira 5, Martinho Gonçalves 4, Miguel Lopéz 2,6 (ORCID), Emanuel Peres 1,7,8 (ORCID), Luís Magalhães 2
Affiliation:
1. Department of Engineering, School of Sciences and Technology, University of Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal
2. ALGORITMI Research Centre/LASI, University of Minho, 4800-058 Guimarães, Portugal
3. Centro de Computação Gráfica-CCG/zgdv, University of Minho, Campus de Azurém, Edifício 14, 4800-058 Guimarães, Portugal
4. Polytechnic Institute of Bragança, School of Communication, Administration and Tourism, Campus do Cruzeiro, 5370-202 Mirandela, Portugal
5. Associação Portuguesa de Surdos (APS), 1600-796 Lisboa, Portugal
6. Instituto Politécnico de Setúbal, Escola Superior de Tecnologia de Setúbal, 2914-508 Setúbal, Portugal
7. Centre for the Research and Technology of Agro-Environmental and Biological Sciences, University of Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal
8. Institute for Innovation, Capacity Building and Sustainability of Agri-Food Production, University of Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal
Abstract
Communication between Deaf and hearing individuals remains a persistent challenge requiring attention to foster inclusivity. Despite notable efforts in the development of digital solutions for sign language recognition (SLR), several issues persist, such as cross-platform interoperability and strategies for tokenizing signs to enable continuous conversations and coherent sentence construction. To address such issues, this paper proposes a non-invasive Portuguese Sign Language (Língua Gestual Portuguesa or LGP) interpretation system-as-a-service, leveraging skeletal posture sequence inference powered by long short-term memory (LSTM) architectures. To address the scarcity of examples during machine learning (ML) model training, dataset augmentation strategies are explored. Additionally, a buffer-based interaction technique is introduced to facilitate the tokenization of LGP terms. This technique provides real-time feedback to users, allowing them to gauge the time remaining to complete a sign, which aids in the construction of grammatically coherent sentences based on inferred terms/words. To support human-like conditioning rules for interpretation, a large language model (LLM) service is integrated. Experiments reveal that LSTM-based neural networks, trained with 50 LGP terms and subjected to data augmentation, achieved accuracy levels ranging from 80% to 95.6%. Users unanimously found the buffer-based interaction strategy for term/word tokenization highly intuitive. Furthermore, tests with an LLM (specifically, ChatGPT) demonstrated promising semantic correlation between generated and expected sentences.
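The buffer-based tokenization described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name `SignBuffer`, the window size, and the per-frame keypoint vectors are assumptions introduced here to make the idea concrete — frames of skeletal keypoints accumulate in a fixed-size buffer, the user is shown how many frames remain to complete the current sign, and a full buffer is emitted as one sign "token" for LSTM inference.

```python
from collections import deque


class SignBuffer:
    """Hypothetical sketch of a buffer-based sign tokenizer.

    Per-frame keypoint vectors are pushed in; once the buffer fills,
    the whole sequence is returned as one sign 'token' and buffering
    restarts for the next sign. The `remaining` property models the
    real-time countdown feedback shown to the user.
    """

    def __init__(self, window=30):  # e.g. ~1 s of frames at 30 fps (assumed)
        self.window = window
        self.frames = deque(maxlen=window)

    def push(self, keypoints):
        """Add one frame; return the completed sequence, or None if still filling."""
        self.frames.append(keypoints)
        if len(self.frames) == self.window:
            token = list(self.frames)  # sequence handed to the classifier
            self.frames.clear()        # start buffering the next sign
            return token
        return None

    @property
    def remaining(self):
        """Frames still needed before the current sign is considered complete."""
        return self.window - len(self.frames)


# Usage: feed placeholder 'keypoint' frames until one token is emitted.
buf = SignBuffer(window=5)
emitted = None
for i in range(5):
    emitted = buf.push([float(i)])     # stand-in for a skeletal keypoint vector
print(len(emitted), buf.remaining)     # prints "5 5": full token emitted, buffer reset
```

A real pipeline would feed the emitted sequence to the LSTM classifier and pass the resulting term stream to the LLM for sentence construction; those stages are omitted here.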
Funder
Portugal 2020, under the Competitiveness and Internationalization Operational Program; RRP (Recovery and Resilience Plan) and the European NextGenerationEU Funds; National Funds from the FCT (Portuguese Foundation for Science and Technology)
Subject
Electrical and Electronic Engineering; Computer Graphics and Computer-Aided Design; Computer Vision and Pattern Recognition; Radiology, Nuclear Medicine and Imaging
References (29 articles):
1. Virtual Sign—A Real Time Bidirectional Translator of Portuguese Sign Language; Escudeiro; Procedia Comput. Sci., 2015.
2. Mayea, C., Garcia, D., Guevara Lopez, M.A., Peres, E., Magalhães, L., and Adão, T. (2022, January 3–4). Building Portuguese Sign Language Datasets for Computational Learning Purposes. Proceedings of the 2022 International Conference on Graphics and Interaction (ICGI), Aveiro, Portugal.
3. Podder, K.K., Chowdhury, M.E.H., Tahir, A.M., Mahbub, Z.B., Khandakar, A., Hossain, M.S., and Kadir, M.A. (2022). Bangla Sign Language (BdSL) Alphabets and Numerals Classification Using a Deep Learning Model. Sensors, 22.
4. Abraham, E., Nayak, A., and Iqbal, A. (2019, January 18–20). Real-Time Translation of Indian Sign Language Using LSTM. Proceedings of the 2019 Global Conference for Advancement in Technology (GCAT), Bengaluru, India.
5. Vision-Based Hand Gesture Recognition for Indian Sign Language Using Convolution Neural Network; Gangrade; IETE J. Res., 2023.