Authors:
Maurya Sohan, Doshi Sparsh, Jaiswar Harsh, Karale Sahil, Burnase Sneha, N. Sonar Poonam
Abstract
Individuals with hearing impairments communicate primarily through sign language. Our goal was to create an American Sign Language (ASL) recognition dataset and use it to train neural network-based machine learning models that interpret hand gestures and positions as natural language. In this study, we applied SVM, CNN, and ResNet-18 models to the new dataset, which accounts for constraints such as lighting and distance, to improve prediction accuracy when interpreting ASL signs. We also compare all implemented models, evaluated under invariant conditions, against our proposed CNN model. The proposed CNN achieved 95.10% accuracy with minimal loss (0.545) despite variations in test data and scene configuration, demonstrating strong potential for future image recognition systems built on deep learning techniques. These advances could also benefit fields such as speech-language therapy, helping people overcome challenges associated with deafness and supporting improved social integration.
Publisher
International Journal of Innovative Science and Research Technology
Cited by
1 article.
1. Device Closure in Multiple Atrial Septal Defect Secundum Concomitant with Atrial Flutter;International Journal of Innovative Science and Research Technology (IJISRT);2024-05-17