Abstract
Automatic analysis of our daily lives and activities through a first-person lifelog camera offers opportunities to improve our life rhythms and to support our limited visual memories. In particular, the task of generating captions from first-person lifelog images, as a way to express visual experiences, has been actively studied in recent years. First-person images capture scenes approximating what users actually see; however, their visual cues are often insufficient to express the user's context because the field of view is constrained by his/her intention. Our challenge is to generate lifelog captions using a meta-perspective called "fourth-person vision," a novel concept that exploits visual information from the first-, second-, and third-person perspectives in a complementary manner. First, we assume human–robot symbiotic scenarios that provide a second-person perspective from a camera mounted on the robot and a third-person perspective from a camera fixed in the symbiotic room. To validate our approach in this scenario, we collect perspective-aware lifelog videos and corresponding caption annotations. Subsequently, we propose a multi-perspective image captioning model composed of an image-wise salient region encoder, an attention module that adaptively fuses the salient regions, and a caption decoder that generates scene descriptions. We demonstrate that our proposed model based on the fourth-person concept greatly improves captioning performance compared with single- and double-perspective models.
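The abstract describes a three-stage architecture: a salient-region encoder per perspective, an adaptive attention-based fusion module, and a caption decoder. The sketch below illustrates one plausible reading of that pipeline; it is not the authors' published implementation. The class name, feature dimensions, additive-attention formulation, and LSTM decoder are all assumptions introduced for illustration, and region features are assumed to come from an upstream detector.

```python
import torch
import torch.nn as nn

class PerspectiveFusionCaptioner(nn.Module):
    """Minimal sketch (assumed design): fuse salient-region features from
    first-, second-, and third-person views with additive attention, then
    decode a caption with an LSTM. Region feature extraction is assumed to
    happen upstream (e.g., an object/region detector per image)."""

    def __init__(self, feat_dim=2048, hidden_dim=512, vocab_size=10000, embed_dim=300):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden_dim)          # project per-region features
        self.att_w = nn.Linear(hidden_dim * 2, hidden_dim)   # additive attention scoring
        self.att_v = nn.Linear(hidden_dim, 1)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTMCell(embed_dim + hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def attend(self, regions, h):
        # regions: (B, R, H) salient-region features pooled over the three views
        # h:       (B, H)    current decoder hidden state
        h_exp = h.unsqueeze(1).expand_as(regions)
        scores = self.att_v(torch.tanh(self.att_w(torch.cat([regions, h_exp], dim=-1))))
        alpha = torch.softmax(scores, dim=1)                 # adaptive weights over all regions
        return (alpha * regions).sum(dim=1)                  # fused context vector

    def forward(self, view_feats, captions):
        # view_feats: list of 3 tensors (B, R_i, feat_dim), one per perspective
        # captions:   (B, T) token ids used for teacher forcing
        regions = torch.cat([self.proj(v) for v in view_feats], dim=1)
        B = captions.size(0)
        h = regions.new_zeros(B, self.lstm.hidden_size)
        c = regions.new_zeros(B, self.lstm.hidden_size)
        logits = []
        for t in range(captions.size(1)):
            ctx = self.attend(regions, h)                    # fuse regions for this time step
            h, c = self.lstm(torch.cat([self.embed(captions[:, t]), ctx], dim=-1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                    # (B, T, vocab_size)
```

In this reading, the attention weights let the decoder choose, token by token, which perspective's regions to rely on, which is one way the fourth-person fusion described above could be realized.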
Funder
Japan Society for the Promotion of Science
Core Research for Evolutional Science and Technology
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Control and Optimization, Mechanical Engineering, Instrumentation, Modeling and Simulation
Cited by
1 article.