Authors:
Jibin Joy, N Meenakshi, Thejas Vinodh, Abel Thomas, Shifil S
Abstract
Sign language display software converts text and speech into animated sign language to support users with special needs, aiming to enhance communication comfort, health, and productivity. Advances in computing enable innovative solutions for the unique needs of these individuals, potentially improving their mental well-being. Using Python and natural language processing (NLP), a pipeline has been devised that detects typed text and live speech and converts it into animated sign language in real time. Blender is used for animation and video processing, while datasets and NLP techniques are employed to train the system and map text to animation. The project aims to serve a diverse range of users across countries where different sign languages are prevalent. By bridging linguistic and cultural differences, such software not only facilitates communication but also serves as an educational tool. Overall, it offers a cost-effective and widely applicable solution for promoting inclusivity and accessibility.
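The core text-to-animation step described above can be sketched as a lookup from tokenized words to pre-rendered animation clips, with letter-by-letter fingerspelling as a fallback for unknown words. The mapping, file names, and fallback strategy below are illustrative assumptions, not the paper's actual implementation:

```python
import re

# Hypothetical mapping from words to pre-rendered Blender animation clips.
# In a real system these would be video files exported from Blender.
SIGN_CLIPS = {
    "hello": "clips/hello.mp4",
    "thank": "clips/thank.mp4",
    "you": "clips/you.mp4",
}

def text_to_sign_sequence(text):
    """Convert input text into an ordered list of animation clip paths.

    Words without a dedicated sign fall back to fingerspelling
    (one clip per letter), a common strategy in text-to-sign systems.
    """
    tokens = re.findall(r"[a-z]+", text.lower())
    sequence = []
    for word in tokens:
        if word in SIGN_CLIPS:
            sequence.append(SIGN_CLIPS[word])
        else:
            # Fingerspelling fallback: one letter clip per character.
            sequence.extend(f"clips/letters/{ch}.mp4" for ch in word)
    return sequence

print(text_to_sign_sequence("Hello, thank you!"))
# ['clips/hello.mp4', 'clips/thank.mp4', 'clips/you.mp4']
```

In the full pipeline, the resulting clip sequence would be concatenated and played back; live speech would first pass through a speech-to-text stage before reaching this function.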