An Expert System for Indian Sign Language Recognition using Spatial Attention based Feature and Temporal Feature

Authors:

Das Soumen¹, Biswas Saroj Kr.¹, Purkayastha Biswajit¹

Affiliation:

1. National Institute of Technology Silchar, Silchar, India

Abstract

Sign Language (SL) is the primary means of communication for hearing-impaired people. Hearing people often have difficulty understanding SL, which creates a communication barrier between the hearing-impaired and the hearing community. Sign Language Recognition Systems (SLRSs) help bridge this gap. Many SLRSs have been proposed for recognizing SL; however, only a limited number of works address Indian Sign Language (ISL). Most existing SLRSs focus on global features rather than the Region of Interest (ROI); concentrating on the hand region and extracting local features from the ROI improves recognition accuracy. The attention mechanism is a widely used technique for emphasizing the ROI, yet only a few SLRSs have adopted it: they employed the Convolutional Block Attention Module (CBAM) and temporal attention, but Spatial Attention (SA) has not been utilized in previous SLRSs. Therefore, a novel SA-based SLRS named the Spatial Attention-based Sign Language Recognition Module (SASLRM) is proposed to recognize ISL words for emergency situations. SASLRM recognizes ISL words by combining convolutional features from a pretrained VGG-19 model with attention features from an SA module. The proposed model achieved an average accuracy of 95.627% on the ISL dataset. SASLRM was further validated on the LSA64, WLASL, and Cambridge Hand Gesture Recognition (HGR) datasets, where it reached accuracies of 97.84%, 98.86%, and 98.22%, respectively. The results indicate the effectiveness of the proposed SLRS in comparison with existing SLRSs.
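
As a rough illustration of the architecture described in the abstract, the following PyTorch sketch applies a CBAM-style spatial attention mask to VGG-19 feature maps and fuses the plain and attention-weighted features for classification. The concatenation-based fusion, the classifier head, and the class count are illustrative assumptions, not the authors' exact SASLRM design; temporal feature handling is omitted.

# Minimal sketch: spatial attention over pretrained VGG-19 features (assumed design).
import torch
import torch.nn as nn
from torchvision import models

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: pool across channels, learn a 2-D mask."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_pool = torch.mean(x, dim=1, keepdim=True)        # (B, 1, H, W)
        max_pool, _ = torch.max(x, dim=1, keepdim=True)      # (B, 1, H, W)
        mask = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * mask                                       # attention-weighted features

class SASLRMSketch(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
        self.backbone = vgg.features                          # convolutional feature extractor
        self.attention = SpatialAttention()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Fuse plain and attention-weighted features by concatenation (assumption).
        self.classifier = nn.Linear(512 * 2, num_classes)

    def forward(self, x):
        feats = self.backbone(x)                              # (B, 512, H', W')
        att = self.attention(feats)
        fused = torch.cat([self.pool(feats), self.pool(att)], dim=1).flatten(1)
        return self.classifier(fused)

# Usage example; the number of classes is a placeholder, not taken from the paper.
model = SASLRMSketch(num_classes=8)
logits = model(torch.randn(2, 3, 224, 224))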

Publisher

Association for Computing Machinery (ACM)

Subject

General Computer Science
