A Wearable Visually Impaired Assistive System Based on Semantic Vision SLAM for Grasping Operation

Authors:

Fei Fei 1, Xian Sifan 1, Yang Ruonan 1, Wu Changcheng 1, Lu Xiong 1

Affiliation:

1. College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211100, China

Abstract

Because of the absence of visual perception, visually impaired individuals encounter various difficulties in their daily lives. This paper proposes a visual aid system designed specifically for visually impaired individuals, aiming to assist and guide them in grasping target objects within a tabletop environment. The system employs a visual perception module that incorporates a semantic visual SLAM algorithm, achieved through the fusion of ORB-SLAM2 and YOLO V5s, enabling the construction of a semantic map of the environment. In the human–machine cooperation module, a depth camera is integrated into a wearable device worn on the hand, while a vibration array feedback device conveys directional information of the target to visually impaired individuals for tactile interaction. To enhance the system’s versatility, a Dobot Magician manipulator is also employed to aid visually impaired individuals in grasping tasks. The performance of the semantic visual SLAM algorithm in terms of localization and semantic mapping was thoroughly tested. Additionally, several experiments were conducted to simulate visually impaired individuals’ interactions in grasping target objects, effectively verifying the feasibility and effectiveness of the proposed system. Overall, this system demonstrates its capability to assist and guide visually impaired individuals in perceiving and acquiring target objects.
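The abstract describes the core fusion only at a high level: YOLO V5s provides object detections, ORB-SLAM2 provides camera poses, and the two are combined with depth data to place labelled objects in a map. The short Python sketch below illustrates one common way such a fusion can be organised, by back-projecting the centre of a detection's bounding box through the depth image and the current camera pose into the world frame. It is not the authors' implementation; the function names, camera intrinsics, and all numeric inputs are illustrative assumptions.

import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    # Back-project a pixel (u, v) with metric depth into the camera frame.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def add_detection_to_map(semantic_map, label, bbox, depth_image, T_wc, intrinsics):
    # Attach one detector output to the map as a 3D labelled landmark.
    # bbox: (u_min, v_min, u_max, v_max) in pixels, e.g. a YOLO V5s detection.
    # T_wc: 4x4 camera-to-world pose, e.g. the current ORB-SLAM2 keyframe pose.
    fx, fy, cx, cy = intrinsics
    u = int((bbox[0] + bbox[2]) / 2)           # bounding-box centre in pixels
    v = int((bbox[1] + bbox[3]) / 2)
    d = float(depth_image[v, u])               # metric depth from the RGB-D camera
    if d <= 0:                                 # invalid depth reading, skip it
        return semantic_map
    p_cam = np.append(backproject(u, v, d, fx, fy, cx, cy), 1.0)
    p_world = (T_wc @ p_cam)[:3]               # transform into the world/map frame
    semantic_map.setdefault(label, []).append(p_world)
    return semantic_map

# Usage with made-up numbers: one "cup" detection on a 640x480 RGB-D frame.
intrinsics = (525.0, 525.0, 319.5, 239.5)      # fx, fy, cx, cy (assumed values)
depth_image = np.full((480, 640), 0.8)          # placeholder depth map in metres
T_wc = np.eye(4)                                # identity pose for illustration
semantic_map = {}
semantic_map = add_detection_to_map(semantic_map, "cup", (300, 200, 360, 260),
                                    depth_image, T_wc, intrinsics)
print(semantic_map["cup"][0])                   # 3D position of the cup landmark

In a complete system of the kind the abstract outlines, landmarks accumulated this way could then be queried by label, and the offset between the hand-mounted camera and the chosen landmark used to drive the vibration feedback array or the manipulator.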

Funder

China Postdoctoral Science Foundation

Natural Science Foundation of the Jiangsu Higher Education Institutions of China

Graduate Research and Practical Innovation Program at Nanjing University of Aeronautics and Astronautics

Natural Science Foundation of Jiangsu Province

Fundamental Research Funds for the Central Universities

Experimental Technology Research and Development Project at Nanjing University of Aeronautics and Astronautics

Publisher

MDPI AG

