Affiliation:
1. School of Integrated Technology, Yonsei University, Yeonsu-Gu, Incheon, South Korea
2. Department of Computer Science and Engineering, Seoul National University, Gwanak-Gu, Seoul, South Korea
Abstract
In this work we present SUGO, a depth video-based system for translating sign language to text using a smartphone's front camera. While exploiting depth-only videos offers benefits, such as being less privacy-invasive than RGB videos, it introduces new challenges, including low video resolution and the sensors' sensitivity to user motion. We overcome these challenges by diversifying our sign language video dataset via data augmentation so that it is robust to various usage scenarios, and by designing a set of schemes that emphasize human gestures in the input images for effective sign detection. The inference engine of SUGO is based on a 3-dimensional convolutional neural network (3DCNN) that classifies a sequence of video frames as one of the pre-trained words. Furthermore, the overall operations are designed to be lightweight so that sign language translation takes place in real time using only the resources available on a smartphone, without help from cloud servers or external sensing components. To train and test SUGO, we collect sign language data from 20 individuals for 50 Korean Sign Language words, yielding a dataset of ~5,000 sign gestures, and we collect additional in-the-wild data to evaluate SUGO in real-world usage scenarios with varying lighting conditions and daily activities. Our extensive evaluations show that SUGO classifies sign words with an accuracy of up to 91% and suggest that the system is suitable, in terms of resource usage, latency, and environmental robustness, to enable a fully mobile solution for sign language translation.
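To make the classification setup concrete, the sketch below shows a minimal 3D CNN of the general kind the abstract describes: it takes a clip of single-channel depth frames and outputs logits over 50 sign-word classes. This is not the authors' SUGO network; the class name DepthSign3DCNN, the clip length, frame resolution, and channel widths are illustrative assumptions only.

```python
# Minimal sketch of a 3D CNN classifier over depth-frame clips (illustrative only;
# not the actual SUGO architecture). Assumes PyTorch is available.
import torch
import torch.nn as nn


class DepthSign3DCNN(nn.Module):
    def __init__(self, num_classes: int = 50):
        super().__init__()
        self.features = nn.Sequential(
            # Input: (batch, 1 depth channel, frames, height, width)
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),  # pool spatially, keep all frames
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),          # pool over time and space
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),              # global average pool to (64, 1, 1, 1)
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        x = self.features(clip)
        return self.classifier(x.flatten(1))      # logits over sign-word classes


if __name__ == "__main__":
    # Hypothetical example: a batch of 2 clips, each 16 depth frames of 112x112 pixels.
    clips = torch.randn(2, 1, 16, 112, 112)
    logits = DepthSign3DCNN()(clips)
    print(logits.shape)                           # torch.Size([2, 50])
```

A deployable on-device variant would typically be trained offline on the augmented dataset and then exported (e.g., to a mobile inference runtime) so that classification runs entirely on the smartphone, as the abstract describes.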
Funder
National Research Foundation of Korea
Yonsei University
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Networks and Communications, Hardware and Architecture, Human-Computer Interaction
References
60 articles.
Cited by
27 articles.