WearSign

Authors:

Zhang Qian1, Jing JiaZhen1, Wang Dong1, Zhao Run1

Affiliation:

1. Shanghai Jiao Tong University, China

Abstract

Sign language translation (SLT) is considered the core technology for breaking the communication barrier between deaf and hearing people. However, most studies focus only on recognizing the sequence of sign gestures (sign language recognition, SLR), ignoring the significant differences in linguistic structure between sign language and spoken language. In this paper, we approach SLT as a spatio-temporal machine translation task and propose a wearable-based system, WearSign, to enable direct translation from sign-induced sensory signals into spoken texts. WearSign leverages a smartwatch and an armband of ElectroMyoGraphy (EMG) sensors to capture sophisticated sign gestures. In the design of the translation network, considering the significant modality and linguistic gap between sensory signals and spoken language, we design a multi-task encoder-decoder framework that uses sign glosses (sign gesture labels) for intermediate supervision to guide the end-to-end training. In addition, due to the lack of sufficient training data, the performance of prior studies usually degrades drastically on sentences with complex structures or sentences unseen in the training set. To tackle this, we borrow the idea of back-translation and leverage the more readily available spoken language data to synthesize paired sign language data. We include the synthetic pairs in the training process, which enables the network to learn better sequence-to-sequence mappings and to generate more fluent spoken language sentences. We construct an American Sign Language (ASL) dataset consisting of 250 commonly used sentences gathered from 15 volunteers. WearSign achieves 4.7% and 8.6% word error rate (WER) in user-independent tests and unseen sentence tests, respectively. We also implement a real-time version of WearSign that runs fully on a smartphone with low latency and energy overhead.
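The word error rate (WER) reported above is the standard word-level edit distance between the predicted and reference sentences, normalized by the reference length. A minimal sketch of how this metric is computed (not the authors' evaluation code, just the conventional definition):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by
    the number of words in the reference."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table: d[i][j] = edit distance between
    # the first i reference words and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub_cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,          # deletion
                d[i][j - 1] + 1,          # insertion
                d[i - 1][j - 1] + sub_cost,  # substitution / match
            )
    return d[len(ref)][len(hyp)] / len(ref)


# One word dropped out of five -> WER = 1/5 = 0.2
print(wer("i want to drink water", "i want drink water"))
```

A WER of 4.7%, as in the user-independent tests, means fewer than one word in twenty of the translated output needs to be inserted, deleted, or substituted to match the reference sentence.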

Funder

Science and Technology Commission of Shanghai Municipality

NSFC

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Networks and Communications, Hardware and Architecture, Human-Computer Interaction


Cited by 13 articles.

1. Breadth and orientation of pie menus for mid-air interaction: effects on upper extremity biomechanics, performance, and subjective assessment;Behaviour & Information Technology;2024-01-29

2. American Sign Language Recognition and Translation Using Perception Neuron Wearable Inertial Motion Capture System;Sensors;2024-01-11

3. Sign Language Recognition Using the Electromyographic Signal: A Systematic Literature Review;Sensors;2023-10-09

4. Sign-to-911: Emergency Call Service for Sign Language Users with Assistive AR Glasses;Proceedings of the 29th Annual International Conference on Mobile Computing and Networking;2023-10-02

5. SignRing;Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies;2023-09-27
