Affiliation:
1. Vels Institute of Science, Technology & Advanced Studies (VISTAS), Chennai, Tamil Nadu
Abstract
Communication is essential for expressing and receiving information, knowledge, ideas, and views among people, but it has long been an obstacle for people with hearing and speech disabilities. Sign language is one method of communicating with deaf people, yet it is difficult for non-signers to interpret and understand. Moreover, the performance of existing sign language recognition approaches is typically limited. An assistive device that translates sign language into a readable format would help deaf and mute individuals communicate easily with the general public. Recent advancements in deep learning, particularly deep neural networks such as temporal convolutional networks (TCNs), have provided solutions for the communication needs of deaf and mute individuals. The main objective of this project is to design the Deaf Companion System: to develop the SignNet model, which provides two-way communication for deaf individuals, and to implement an automatic speaking system for deaf and mute people. The system supports two-way communication among all classes of people (deaf and mute, hard of hearing, visually impaired, and non-signers) and can be scaled commercially. The proposed system consists of three modules: the sign recognition module (SRM), which recognizes the signs of a deaf individual using a TCN; the speech recognition and synthesis module (SRSM), which processes the speech of a hearing individual using a Hidden Markov Model and converts it to text; and an Avatar module (AM), which generates and performs the sign corresponding to the hearing individual's speech. These modules are integrated into the Deaf Companion System to facilitate communication from the deaf to the hearing and vice versa. The proposed model is trained on Indian Sign Language, and a web-based user interface was developed to deploy the SignNet model for ease of use. Experimental results on the MNIST sign language recognition dataset validate the superiority of the proposed framework; the TCN model achieves an accuracy of 98.5%.
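
For illustration, the following is a minimal sketch of a TCN-style classifier of the kind described above, written with TensorFlow/Keras. It assumes the Sign Language MNIST layout (28x28 grayscale images covering 24 static letter classes, with each image read row by row as a 28-step sequence); the layer sizes, dilation rates, and other hyperparameters are illustrative assumptions, not the authors' exact SignNet configuration.

```python
# Minimal TCN-style classifier sketch for Sign Language MNIST.
# Assumptions (not from the paper): 28x28 images treated as 28-step
# sequences of 28 features, 24 classes, filter counts and dilations below.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_tcn(num_classes=24, seq_len=28, features=28):
    inputs = layers.Input(shape=(seq_len, features))  # each image row is one time step
    x = inputs
    # Stack of dilated causal convolutions with residual connections,
    # the core building block of a temporal convolutional network.
    for dilation in (1, 2, 4, 8):
        shortcut = x
        x = layers.Conv1D(64, kernel_size=3, padding="causal",
                          dilation_rate=dilation, activation="relu")(x)
        x = layers.Dropout(0.2)(x)
        if shortcut.shape[-1] != x.shape[-1]:
            # 1x1 convolution to match channel counts before the residual add
            shortcut = layers.Conv1D(64, kernel_size=1)(shortcut)
        x = layers.Add()([shortcut, x])
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_tcn()
model.summary()
```

The dilated causal convolutions give the network an exponentially growing receptive field over the input sequence, which is what lets a TCN capture long-range temporal structure without recurrence; continuous sign video would use longer sequences and richer per-step features than this image-based sketch.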