A Wearable Visually Impaired Assistive System Based on Semantic Vision SLAM for Grasping Operation
Authors:
Fei Fei 1, Xian Sifan 1, Yang Ruonan 1, Wu Changcheng 1, Lu Xiong 1
Affiliation:
1. College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211100, China
Abstract
Owing to the absence of visual perception, visually impaired individuals face numerous difficulties in daily life. This paper proposes a visual aid system designed to assist and guide visually impaired individuals in grasping target objects in a tabletop environment. The system's visual perception module runs a semantic visual SLAM algorithm, built by fusing ORB-SLAM2 with YOLO V5s, to construct a semantic map of the environment. In the human–machine cooperation module, a depth camera is integrated into a wearable device worn on the hand, while a vibration-array feedback device conveys the target's direction to the user through tactile interaction. To broaden the system's applicability, a Dobot Magician manipulator is also employed to aid in grasping tasks. The localization and semantic-mapping performance of the semantic visual SLAM algorithm was thoroughly tested, and several experiments simulating visually impaired users' interactions in grasping target objects verified the feasibility and effectiveness of the proposed system. Overall, the system can assist and guide visually impaired individuals in perceiving and acquiring target objects.
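The core idea summarized in the abstract is attaching semantic labels from an object detector (YOLO V5s) to 3D points estimated by the SLAM front end (ORB-SLAM2). A minimal sketch of that labeling step is shown below, assuming a pinhole camera model and a known camera-to-world pose; all names, the `Detection` type, and the intrinsics are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A hypothetical 2D detection (e.g., from YOLO V5s): label + pixel center."""
    label: str
    u: int
    v: int

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of a pixel with known depth into camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def label_map_point(det, depth, pose, intrinsics):
    """Attach a semantic label to the 3D world point implied by a detection.

    pose is a 3x4 [R|t] camera-to-world transform given as nested lists;
    intrinsics is (fx, fy, cx, cy).
    """
    fx, fy, cx, cy = intrinsics
    pc = backproject(det.u, det.v, depth, fx, fy, cx, cy)
    # world point = R @ pc + t
    pw = tuple(sum(pose[i][j] * pc[j] for j in range(3)) + pose[i][3]
               for i in range(3))
    return {"label": det.label, "xyz": pw}
```

With an identity pose and a detection at the principal point, the labeled map point lies on the optical axis at the measured depth; in the full system, such labeled points would be accumulated across keyframes to form the semantic map.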
Funders:
China Postdoctoral Science Foundation; Natural Science Foundation of the Jiangsu Higher Education Institutions of China; Graduate Research and Practical Innovation Program at Nanjing University of Aeronautics and Astronautics; Natural Science Foundation of Jiangsu Province; the Fundamental Research Funds for the Central Universities; Experimental Technology Research and Development Project at Nanjing University of Aeronautics and Astronautics