Abstract
An increasing amount of research is currently directed at solving tasks with computer vision libraries and artificial intelligence tools. The most common approaches apply machine learning and deep learning models of artificial neural networks, trained with supervised learning and deep methods for processing sequential data, to recognize gestures of the Kazakh sign language. The research object is the Kazakh sign language alphabet, aimed at facilitating communication for people with disabilities. The research subject comprises machine learning methods and artificial neural network and deep learning models for gesture classification and recognition. The research areas encompass Machine Learning, Deep Learning, Neural Networks, and Computer Vision.
The main challenge lies in recognizing dynamic hand gestures. The Kazakh sign language alphabet contains 42 letters, 12 of which are dynamic, and capturing, processing, and recognizing a gesture while it is in motion is a highly complex task. It therefore requires modern technologies and unconventional approaches that combine several recognition methods and algorithms into a hybrid neural network model for gesture recognition. Gesture recognition is a classification task and thus one of the directions of pattern recognition, whose theoretical foundation is the theory of pattern recognition. The paper discusses pattern recognition systems, the environments and application areas of these systems, and the requirements for their development and improvement, presenting tasks such as license plate recognition, facial recognition, and gesture recognition. The role of computer vision in image recognition, specifically of hand gestures, is also addressed. The developed software will make it possible to test the trained model's effectiveness and to apply it for laboratory purposes, allowing adjustments that improve the model.
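The hybrid idea outlined above can be illustrated with a minimal sketch: a per-frame feature extractor feeds an LSTM that accumulates temporal context over a gesture's frames, and a softmax layer maps the final hidden state to gesture classes. This is a toy NumPy forward pass under assumed dimensions (randomly initialized weights, a linear stand-in for the convolutional feature extractor, 12 classes for the dynamic letters), not the paper's actual architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates are computed from the frame features x
    and the previous hidden state h; c is the cell state."""
    H = h.shape[0]
    z = W @ x + U @ h + b            # stacked pre-activations, shape (4H,)
    i = sigmoid(z[0:H])              # input gate
    f = sigmoid(z[H:2 * H])          # forget gate
    o = sigmoid(z[2 * H:3 * H])      # output gate
    g = np.tanh(z[3 * H:4 * H])      # candidate cell update
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def classify_gesture(frames, hidden=16, n_classes=12, seed=0):
    """Toy pipeline: per-frame features -> LSTM over time -> softmax.
    `frames` has shape (T, D): T video frames, D features each.
    Weights are random placeholders for an untrained model."""
    rng = np.random.default_rng(seed)
    T, D = frames.shape
    Wf = rng.standard_normal((D, D)) * 0.1          # stand-in for a CNN extractor
    W = rng.standard_normal((4 * hidden, D)) * 0.1
    U = rng.standard_normal((4 * hidden, hidden)) * 0.1
    b = np.zeros(4 * hidden)
    Wo = rng.standard_normal((n_classes, hidden)) * 0.1
    h, c = np.zeros(hidden), np.zeros(hidden)
    for frame in frames:
        feat = np.tanh(Wf @ frame)                  # per-frame feature extraction
        h, c = lstm_step(feat, h, c, W, U, b)       # temporal accumulation
    logits = Wo @ h
    p = np.exp(logits - logits.max())
    return p / p.sum()                              # class probabilities

# A fake 30-frame gesture with 8 features per frame.
probs = classify_gesture(np.random.default_rng(1).standard_normal((30, 8)))
```

In a real system the linear extractor `Wf` would be a trained convolutional network, and the weights would be fit by supervised learning on labeled gesture sequences; the sketch only shows how frame-level and sequence-level components compose.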