A new framework for sign language alphabet hand posture recognition using geometrical features through artificial neural network (part 1)

Author:

Kolivand Hoshang, Joudaki Saba, Sunar Mohd Shahrizal, Tully David

Abstract

Hand pose tracking is essential in sign languages. Automatic recognition of performed hand signs enables a number of applications, in particular helping people with speech impairment communicate with hearing people. This framework, called ASLNN, proposes a new hand posture recognition technique for the American Sign Language alphabet based on a neural network that operates on geometrical features extracted from the hand. A user's hand is captured by a three-dimensional depth-based sensor camera, and the hand is then segmented according to depth analysis features. The proposed system, named depth-based geometrical sign language recognition (DGSLR), adopts a simpler hand segmentation approach that can be reused in other segmentation applications. The proposed geometrical feature extraction framework improves recognition accuracy because the features are invariant to hand orientation, in contrast to the discrete cosine transform and moment invariants. The experimental iterations demonstrate that combining the extracted features results in improved accuracy rates. An artificial neural network is then used to derive the desired outcomes. ASLNN is proficient at hand posture recognition and achieves accuracy of up to 96.78%, which will be discussed further in a companion paper by the authors in this journal.
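The pipeline the abstract describes (depth-based segmentation, orientation-invariant geometrical features, then a neural-network classifier) can be sketched as follows. This is a minimal illustration, not the authors' actual DGSLR implementation: the depth thresholds, the three example features, and the single-hidden-layer network are all assumptions made for the sketch.

```python
import numpy as np

def segment_hand(depth_map, near=400, far=600):
    """Segment the hand by depth thresholding: keep pixels whose depth
    (in mm) falls inside an assumed hand range in front of the camera."""
    return (depth_map >= near) & (depth_map <= far)

def geometric_features(mask):
    """Three illustrative geometrical features of a binary hand mask,
    chosen to be invariant to in-plane rotation of the hand:
    fill ratio, normalized mean radius, and radial spread."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    if area == 0:
        return np.zeros(3)
    cy, cx = ys.mean(), xs.mean()
    r = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)  # distance to centroid
    return np.array([
        area / mask.size,             # fraction of frame covered
        r.mean() / np.sqrt(area),     # mean radius, scale-normalized
        r.std() / (r.mean() + 1e-9),  # spread of radii (scale-free)
    ])

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer network: tanh hidden units,
    softmax output over the 26 alphabet classes."""
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())  # stable softmax
    return e / e.sum()
```

Because every feature is computed from distances to the mask centroid rather than from pixel coordinates directly, rotating the hand in the image plane leaves the feature vector essentially unchanged, which is the property the abstract contrasts with DCT- and moment-based descriptors.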

Funder

Liverpool John Moores University

Publisher

Springer Science and Business Media LLC

Subject

Artificial Intelligence, Software

References (63 articles):

1. Garg P, Aggarwal N, Sofat S (2009) Vision based hand gesture recognition. World Acad Sci Eng Technol 49:972–977

2. Chai X, Li G, Lin Y, Xu Z, Tang Y, Chen X, Zhou M (2013) Sign language recognition and translation with Kinect

3. Kishore P, Kumar PR (2012) Segment, track, extract, recognize and convert sign language videos to voice/text. Int J. https://doi.org/10.14569/IJACSA.2012.030608

4. Zhu Q-S, Xie Y-Q, Wang L (2010) Video object segmentation by fusion of spatio-temporal information based on Gaussian mixture model. Bull Adv Technol Res 5:38–43

5. Prasad MVD, Raghava PC, Rahul R (2015) 4-Camera model for sign language recognition using elliptical Fourier descriptors and ANN. SPACES-2015, Department of ECE, K L University

Cited by 21 articles:

1. Evaluation of load-settlement behavior of shallow footings using hybrid MLP-evolutionary AI approach with ER-WCA optimization;Innovative Infrastructure Solutions;2024-05-14

2. A Review of Sign Language Systems;2023 16th International Conference on Developments in eSystems Engineering (DeSE);2023-12-18

3. A survey on sign language literature;Machine Learning with Applications;2023-12

4. Double handed dynamic Turkish Sign Language recognition using Leap Motion with meta learning approach;Expert Systems with Applications;2023-10

5. Indian Sign Language to Speech Conversion Using Deep Learning;Advances in Systems Analysis, Software Engineering, and High Performance Computing;2023-09-07

