Abstract
Human beings usually rely on communication to express their feelings and ideas and to resolve disputes among themselves. A major component required for effective communication is language. Language can take different forms, including written symbols, gestures, and vocalizations. It is usually essential for all communicating parties to be fully conversant with a common language. However, to date this has not been the case between speech-impaired people who use sign language and people who use spoken languages. A number of studies have pointed out significant gaps between these two groups which can limit the ease of communication. Therefore, this study aims to develop an efficient deep learning model that can be used to predict British Sign Language in an attempt to narrow this communication gap between speech-impaired and non-speech-impaired people in the community. Two models were developed in this research, a CNN and an LSTM, and their performance was evaluated using a multi-class confusion matrix. The CNN model emerged with the highest performance, attaining training and testing accuracies of 98.8% and 97.4%, respectively. In addition, the model achieved average weighted precision and recall of 97% and 96%, respectively. In contrast, the LSTM model's performance was quite poor, with maximum training and testing accuracies of only 49.4% and 48.7%, respectively. Our research concluded that the CNN model was the best for recognizing and determining British Sign Language.
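The weighted precision and recall reported above can be derived directly from a multi-class confusion matrix by averaging the per-class scores, weighted by each class's support. The following is a minimal sketch of that computation, not the paper's code; the 3x3 matrix is purely illustrative.

```python
# Hedged sketch: weighted precision and recall from a multi-class
# confusion matrix, as used to evaluate the CNN and LSTM models.
# Rows are true classes, columns are predicted classes.

def weighted_precision_recall(cm):
    n_classes = len(cm)
    total = sum(sum(row) for row in cm)  # total number of samples
    precision = recall = 0.0
    for c in range(n_classes):
        tp = cm[c][c]                                      # true positives
        pred_c = sum(cm[r][c] for r in range(n_classes))   # column sum: predicted as c
        true_c = sum(cm[c])                                # row sum: support of class c
        p_c = tp / pred_c if pred_c else 0.0               # per-class precision
        r_c = tp / true_c if true_c else 0.0               # per-class recall
        # weight each class by its share of the true samples
        precision += (true_c / total) * p_c
        recall += (true_c / total) * r_c
    return precision, recall

cm = [[1, 1, 0],   # illustrative confusion matrix, not the paper's data
      [0, 3, 0],
      [1, 0, 2]]
p, r = weighted_precision_recall(cm)
print(round(p, 3), round(r, 3))  # → 0.781 0.75
```

This weighted averaging is what makes the reported precision and recall comparable across sign classes with unequal sample counts.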
Subject
Electrical and Electronic Engineering, Computer Graphics and Computer-Aided Design, Computer Vision and Pattern Recognition, Radiology, Nuclear Medicine and Imaging
Cited by
16 articles.
1. Deep Learning Framework for Sign Language Recognition Using Inception V3 with Transfer Learning;2024 Third International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE);2024-04-26
2. Action Detection for Sign Language using Machine Learning Algorithms;2024 IEEE 13th International Conference on Communication Systems and Network Technologies (CSNT);2024-04-06
3. Hybrid InceptionNet Based Enhanced Architecture for Isolated Sign Language Recognition;IEEE Access;2024
4. Sign Language Interpreter via Gesture Detection;2023 Third International Conference on Smart Technologies, Communication and Robotics (STCR);2023-12-09
5. Real Time Sign Language Translator for Deaf and Mute;2023 International Conference on Emerging Research in Computational Science (ICERCS);2023-12-07