Authors:
Wang Hsuan-min, Huang Ming-Han, Sun Chuen-Tsai
Abstract
Past research on sign language recognition has mostly relied on physical information obtained via wearable devices or depth cameras. However, both types of devices are costly and inconvenient to carry, making widespread adoption by potential users difficult. This research uses recently developed deep learning technology to build a recognition model for Taiwanese Sign Language, restricting both training and recognition to RGB images. It is hoped that this work, which relies only on lightweight devices such as mobile phones and webcams, will make a significant contribution to meeting the communication needs of deaf and hard-of-hearing (DHH) individuals.
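The RGB-only approach described above typically proceeds in two stages: extract hand keypoints from each video frame, then classify the keypoint sequence as a sign. The sketch below is purely illustrative and is not the authors' architecture; the extractor is a stub (a real system would use a hand-landmark model such as a CNN), and the classifier is a toy nearest-template match standing in for a learned sequence model.

```python
# Hypothetical sketch of an RGB-only sign recognition pipeline.
# All stage names, shapes, and templates below are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple, Dict

@dataclass
class Frame:
    """One RGB video frame, reduced to 2D hand keypoints."""
    keypoints: List[Tuple[float, float]]  # e.g. 21 (x, y) pairs per hand

def extract_keypoints(frame_pixels) -> Frame:
    """Stand-in for a hand-landmark extractor run on raw RGB pixels.
    Here it just returns fixed dummy keypoints."""
    return Frame(keypoints=[(0.5, 0.5)] * 21)

def classify_sequence(frames: List[Frame],
                      templates: Dict[str, Tuple[float, float]]) -> str:
    """Toy classifier: average the keypoint trajectory and pick the
    nearest per-sign template by squared Euclidean distance. A real
    system would feed the keypoint sequence to a recurrent or
    transformer model instead."""
    n = sum(len(f.keypoints) for f in frames)
    mx = sum(x for f in frames for x, _ in f.keypoints) / n
    my = sum(y for f in frames for _, y in f.keypoints) / n
    best, best_d = None, float("inf")
    for sign, (tx, ty) in templates.items():
        d = (mx - tx) ** 2 + (my - ty) ** 2
        if d < best_d:
            best, best_d = sign, d
    return best

# Usage with dummy data: a 10-frame clip and two hypothetical signs.
video = [extract_keypoints(None) for _ in range(10)]
templates = {"hello": (0.5, 0.5), "thanks": (0.9, 0.1)}
print(classify_sequence(video, templates))
```

The point of the two-stage split is that keypoints, unlike raw pixels, are largely invariant to lighting and background, which is what makes webcam-quality RGB input workable.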