Authors:
Ricardo Ribeiro, Alina Trifan, António J. R. Neves
Abstract
Global positioning system (GPS) data play a crucial role in understanding an individual's life, as they provide geographic positions and timestamps. However, identifying the transportation mode used during a trajectory is challenging due to the large amount of spatiotemporal data generated and the distinct spatial characteristics exhibited. This paper introduces a novel approach to transportation mode identification that transforms trajectory features into image representations and uses these images to train a neural network based on vision transformer architectures. Existing approaches require predefined temporal intervals or trajectory sizes, limiting their adaptability to real-world scenarios characterized by varying trajectory lengths and inconsistent sampling intervals. The proposed approach avoids segmenting or altering trajectories and extracts features directly from the data. By mapping trajectory features to pixel locations generated with a dimensionality reduction technique, images are created to train a deep learning model that predicts five transport modes. Experimental results demonstrate a state-of-the-art accuracy of 92.96% on the Microsoft GeoLife dataset. Additionally, a comparative analysis was performed against a traditional machine learning approach and other neural network architectures. The proposed method offers accurate and reliable transport mode identification applicable in real-world scenarios, facilitating the understanding of individuals' mobility.
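The abstract only outlines the pipeline (per-point features extracted from raw GPS fixes, then placed at 2D pixel locations via a dimensionality reduction step to form an image). The sketch below illustrates that idea under stated assumptions: the feature set (speed, acceleration), the image size, and the use of a fixed linear 2D projection as a stand-in for the paper's unspecified dimensionality reduction technique are all hypothetical, not taken from the paper.

```python
import numpy as np

def features_from_trajectory(points):
    """points: array-like of (lat, lon, t) rows -> per-point features.

    Feature choice (speed, acceleration) is an assumption for illustration;
    the paper does not enumerate its features in the abstract.
    """
    pts = np.asarray(points, dtype=float)
    d = np.diff(pts[:, :2], axis=0)            # displacement between fixes
    dt = np.diff(pts[:, 2])                    # time deltas (s)
    dt[dt == 0] = 1e-6                         # guard against zero intervals
    speed = np.linalg.norm(d, axis=1) / dt     # crude speed proxy
    accel = np.diff(speed, prepend=speed[0]) / dt
    return np.stack([speed, accel], axis=1)

def features_to_image(feats, size=32):
    """Place each feature vector at a pixel via a fixed random 2D projection
    (a hypothetical stand-in for the paper's dimensionality reduction)."""
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((feats.shape[1], 2))
    xy = feats @ proj
    # normalize projected coordinates into pixel indices
    xy = (xy - xy.min(axis=0)) / (np.ptp(xy, axis=0) + 1e-9)
    ij = np.clip((xy * (size - 1)).astype(int), 0, size - 1)
    img = np.zeros((size, size), dtype=float)
    for (i, j), f in zip(ij, feats):
        img[i, j] += np.linalg.norm(f)         # accumulate feature magnitude
    return img
```

Because the image is built from whole-trajectory features rather than fixed-length windows, trajectories of any length yield a fixed-size input, which is the property the abstract highlights over segmentation-based approaches.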
Publisher
Springer Science and Business Media LLC