Authors:
Nagano Masatoshi, Nakamura Tomoaki, Nagai Takayuki, Mochihashi Daichi, Kobayashi Ichiro
Abstract
In this study, we propose HcVGH, a method that learns spatio-temporal categories by segmenting first-person-view (FPV) videos captured by mobile robots. Humans perceive continuous, high-dimensional information by dividing it into meaningful segments and categorizing them, and this unsupervised segmentation capability is considered important for mobile robots to acquire spatial knowledge. HcVGH combines a convolutional variational autoencoder (cVAE) with HVGH, a previously proposed hierarchical Dirichlet process variational autoencoder Gaussian process hidden semi-Markov model that integrates deep generative and statistical models. In the experiments, FPV videos of an agent moving through a simulated maze environment were used; because FPV videos contain spatial information, spatial knowledge can be learned by segmenting them. On this FPV-video dataset, the segmentation performance of the proposed model was compared with that of two baselines: HVGH and a hierarchical recurrent state-space model. HcVGH achieved an average segmentation F-measure of 0.77, outperforming both baselines. Furthermore, the experimental results showed that parameters representing the movability of the maze environment can be learned.
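The pipeline summarized in the abstract can be illustrated with a minimal sketch, assuming PyTorch and 64x64 RGB frames; the class names, dimensions, and the encode_video helper below are illustrative assumptions, not the authors' implementation. A convolutional VAE compresses each FPV frame into a low-dimensional latent vector, and the resulting latent sequence is what an HVGH-style GP-HSMM segmenter would then partition into spatio-temporal categories (that inference step is only indicated by a comment, since its details are not given here).

# Illustrative sketch only: per-frame cVAE feature extraction for an FPV video,
# prior to GP-HSMM-style segmentation. Not the authors' code.
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    """Convolutional VAE mapping 64x64 RGB frames to a low-dimensional latent."""
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

def encode_video(vae: ConvVAE, frames: torch.Tensor) -> torch.Tensor:
    """Compress a (T, 3, 64, 64) FPV video into a (T, latent_dim) latent sequence."""
    with torch.no_grad():
        h = vae.encoder(frames)
        return vae.fc_mu(h)  # use the posterior mean as the per-frame feature

if __name__ == "__main__":
    vae = ConvVAE()
    video = torch.rand(100, 3, 64, 64)   # 100 synthetic FPV frames
    latents = encode_video(vae, video)   # (100, 16) latent sequence
    # This latent sequence is the input that a GP-HSMM-based segmenter such as
    # HVGH would divide into segments and assign to spatio-temporal categories.
    print(latents.shape)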
Subjects:
Artificial Intelligence, Computer Science Applications
Cited by: 2 articles.