Author:
Aliyev Samir, Almisreb Ali Abd, Turaev Sherzod
Abstract
Sign language recognition is an essential research area for improving the integration of people with speech and hearing impairments into society. The core idea is to detect the hand gestures of signers and convert them into accessible formats, such as text, using advanced approaches. In this paper, we present our contribution to Azerbaijani Sign Language (AzSL): real-time recognition of the static signs of the AzSL alphabet. Our method treats the task as object classification and recognition using pre-trained lightweight convolutional neural network models. First, a dataset of nearly 1,000 images was collected, and the objects of interest in each image were annotated with bounding boxes. The TensorFlow Object Detection API with Python was employed to build, train, evaluate, and deploy the model, with a pre-trained MobileNet v2 network serving as the backbone. In a trial experiment with four sign classes (A, B, C, E) trained for 5,000 steps, a training loss of 15.2% and an evaluation mean average precision (mAP) of 83% were obtained. In subsequent deployment experiments covering all 24 static AzSL signs, training runs of 49,700 and 27,700 steps (180 and 100 epochs, respectively) yielded training losses of 6.4% and 18.2% and mAP scores of 66.5% and 71.6%, respectively.
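To make the described pipeline concrete, the following is a minimal sketch (not the authors' code) of running a TensorFlow Object Detection API model exported as a SavedModel, such as an SSD MobileNet v2 checkpoint, for real-time recognition from a webcam. The model path, the label map (shown here with the four trial classes A, B, C, E), the 0.5 confidence threshold, and the camera index are all assumptions for illustration.

# Hypothetical real-time inference loop for a TF Object Detection API
# SavedModel; paths and labels below are placeholders, not from the paper.
import cv2
import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_model/saved_model")  # assumed export path
labels = {1: "A", 2: "B", 3: "C", 4: "E"}  # assumed label map (trial classes)

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # The exported model expects a uint8 batch of shape [1, H, W, 3] in RGB order.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    detections = detect_fn(tf.convert_to_tensor(rgb[np.newaxis, ...]))
    scores = detections["detection_scores"][0].numpy()
    classes = detections["detection_classes"][0].numpy().astype(int)
    boxes = detections["detection_boxes"][0].numpy()  # normalized [ymin, xmin, ymax, xmax]
    h, w = frame.shape[:2]
    for box, cls, score in zip(boxes, classes, scores):
        if score < 0.5:  # assumed confidence threshold
            continue
        y1, x1, y2, x2 = (box * [h, w, h, w]).astype(int)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f"{labels.get(cls, '?')} {score:.2f}", (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("AzSL recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

This mirrors the standard TF2 Object Detection API inference pattern: the loaded SavedModel is called directly on a uint8 tensor, and the returned dictionary provides batched detection boxes, class indices, and scores.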
Subject
General Physics and Astronomy
Cited by
5 articles.