Abstract
In recent years, with the development of mobile robotics technology, robotic vacuum cleaners have become part of people's daily lives. These robots can autonomously perceive their environment and navigate rooms for cleaning, which gives them significant practical value. However, in the absence of highly detailed maps, enabling a robot to autonomously identify its current location and accurately reach a specified destination remains an open problem. The emergence of vision-and-language navigation methods has made it possible for robots to receive natural language instructions describing a navigation path and to move autonomously to the target location under the guidance of those instructions. Effectively fusing information across the visual and language modalities, however, remains a significant challenge. To achieve a deep integration of natural language and visual information, this research introduces a multimodal fusion neural network model that combines visual information (RGB images and depth maps) with language information (natural language navigation instructions). First, we use Faster R-CNN and ResNet50 to extract image features and apply an attention mechanism to further distill the relevant information. Second, a GRU model is used to extract language features. Finally, another GRU model fuses the visual and language features and retains the history information, from which the next action instruction is issued to the robot. Experimental results demonstrate that the proposed method effectively addresses the localization and decision-making challenges for robotic vacuum cleaners.
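The following is a minimal sketch, assuming PyTorch, of the fusion pipeline outlined in the abstract: ResNet50 visual features and GRU-encoded instruction features are combined by a second GRU that carries history across navigation steps. The module names, dimensions, and action space are illustrative assumptions, not the authors' exact configuration; the Faster R-CNN region features, depth-map branch, and attention module are omitted for brevity.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class VLNFusionSketch(nn.Module):
    """Illustrative multimodal fusion model: visual GRU + language GRU + history GRU."""

    def __init__(self, instr_vocab=1000, embed_dim=256, hidden_dim=512, n_actions=6):
        super().__init__()
        # Visual branch: ResNet50 backbone without its classification head
        # (Faster R-CNN region features and the attention step are omitted here).
        backbone = models.resnet50(weights=None)
        self.visual_encoder = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 2048, 1, 1)
        self.visual_proj = nn.Linear(2048, hidden_dim)
        # Language branch: token embedding followed by a GRU over the instruction.
        self.embedding = nn.Embedding(instr_vocab, embed_dim)
        self.lang_gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Fusion branch: a second GRU cell that retains history across time steps.
        self.fusion_gru = nn.GRUCell(2 * hidden_dim, hidden_dim)
        self.action_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, rgb, instr_tokens, h_prev):
        # Encode the current RGB observation into a single feature vector.
        v = self.visual_proj(self.visual_encoder(rgb).flatten(1))        # (B, hidden)
        # Encode the instruction; keep the final hidden state as its summary.
        _, l = self.lang_gru(self.embedding(instr_tokens))               # (1, B, hidden)
        # Fuse vision and language, updating the history state.
        h = self.fusion_gru(torch.cat([v, l.squeeze(0)], dim=-1), h_prev)
        return self.action_head(h), h                                    # action logits, new history


# Example step: one observation, one instruction, previous history state.
model = VLNFusionSketch()
logits, h = model(torch.randn(1, 3, 224, 224),
                  torch.randint(0, 1000, (1, 12)),
                  torch.zeros(1, 512))
```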