Affiliations:
1. Department of Software Engineering, Faculty of Engineering and Information Technology, Taiz University, Taiz 6803, Yemen
2. Department of Computer Science, Faculty of Computer Science and Information Technology, International University of Technology Twintech, Sana’a 7201, Yemen
3. Department of Informatics for Business, College of Business, King Khalid University, Abha 61421, Saudi Arabia
4. Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
5. Department of Information Systems, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
Abstract
Sign language is widely used to facilitate communication between deaf people and their surrounding environment. Like most other languages, sign language is complex and cannot be mastered easily. Technology can therefore serve as an assistive tool to address the difficulties and challenges that deaf people face when interacting with society. In this study, an automatic bidirectional translation framework for Arabic Sign Language (ArSL) is designed to help both deaf and ordinary people communicate and express themselves easily. The framework comprises two main modules: one translates Arabic sign images into text using different transfer learning models, and the other translates input text into Arabic sign images. A prototype was implemented based on the proposed framework using several pre-trained convolutional neural network (CNN) deep learning models, namely DenseNet121, ResNet152, MobileNetV2, Xception, InceptionV3, NASNetLarge, VGG19, and VGG16. A fuzzy string matching score method was employed, as a novel concept, to translate the input text from ordinary people into the appropriate sign language images. The dataset was constructed under specific criteria, yielding 7030 images across 14 classes captured locally from both deaf and ordinary people. Experiments were conducted on the collected ArSL dataset with the prototype and the selected CNN models, and the results were evaluated using standard metrics: accuracy, precision, recall, and F1-score. The performance and efficiency of the ArSL prototype were assessed on the test set of an 80:20 split. Ranked from highest to lowest accuracy, with average classification time in seconds, the models achieved: VGG16 (98.65%, 72.5 s), MobileNetV2 (98.51%, 100.19 s), VGG19 (98.22%, 77.16 s), DenseNet121 (98.15%, 80.44 s), Xception (96.44%, 72.54 s), NASNetLarge (96.23%, 84.96 s), InceptionV3 (94.31%, 76.98 s), and ResNet152 (47.23%, 98.51 s). The fuzzy matching score was validated mathematically by computing the distance between the input word and the associated dictionary words. The results show that the prototype successfully translates Arabic sign images into Arabic text and vice versa with high accuracy, demonstrating the feasibility of a robust and efficient real-time bidirectional ArSL translation system built on deep learning models and the fuzzy string matching score method.
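The image-to-text module described in the abstract adapts pre-trained CNN backbones to the 14 ArSL classes. The following Python sketch shows one plausible transfer-learning setup with a frozen VGG16 backbone and a new classification head; the input size, head layers, optimizer, and other hyperparameters are illustrative assumptions, not the authors' exact configuration.

from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 14          # 14 ArSL classes, as stated in the abstract
IMG_SIZE = (224, 224)     # assumed input resolution for the VGG16 backbone

# Frozen ImageNet backbone: only the new classification head is trained.
backbone = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
backbone.trainable = False

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # illustrative head size
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Unfreezing some top backbone layers after an initial training phase is a common variant of this setup; the accuracies reported in the abstract come from the authors' own training procedure, not from this sketch.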
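The text-to-sign module maps an input word to the closest entry in a sign dictionary using a fuzzy string matching score. Below is a minimal Python sketch of that lookup step, using difflib's SequenceMatcher as a stand-in for the paper's fuzzy matching score; the dictionary words, image paths, and threshold are illustrative assumptions, not the paper's actual sign vocabulary.

from difflib import SequenceMatcher

# Hypothetical dictionary mapping Arabic words to sign image files.
SIGN_DICTIONARY = {
    "شكرا": "signs/thanks.png",   # "thank you"
    "سلام": "signs/hello.png",    # "hello"
    "أهلا": "signs/welcome.png",  # "welcome"
}

def fuzzy_score(a: str, b: str) -> float:
    """Similarity score in [0, 1] between two words."""
    return SequenceMatcher(None, a, b).ratio()

def lookup_sign(word: str, threshold: float = 0.6):
    """Return the best-matching sign image path, or None if no entry scores above the threshold."""
    best_label, best_score = None, 0.0
    for label in SIGN_DICTIONARY:
        score = fuzzy_score(word, label)
        if score > best_score:
            best_label, best_score = label, score
    return SIGN_DICTIONARY[best_label] if best_score >= threshold else None

print(lookup_sign("شكراً"))  # a minor spelling variant still resolves to "signs/thanks.png"

In a full system, the selected image (or a no-match prompt) would then be displayed to the deaf user in the prototype's interface.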
Cited by: 1 article.