Affiliation:
1. Department of Artificial Intelligence, University of Science and Technology, Daejeon, Republic of Korea
2. Field Robotics Research Section, Electronics and Telecommunications Research Institute, Daejeon, Republic of Korea
3. Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg, Luxembourg, Luxembourg
Abstract
Determining whether an autonomous self‐driving agent is in the middle of an intersection can be extremely difficult when relying on visual input taken from a single camera. In such a problem setting, a wider range of views is essential, which motivates us to use three cameras positioned at the front, left, and right of an agent for better intersection recognition. However, collecting adequate training data with three cameras poses several practical difficulties; hence, we propose using data collected from one camera to train a three‐camera model, which enables us to more easily compile a variety of training data and endow our model with improved generalizability. In this work, we provide three separate fusion methods (feature, early, and late) for combining the information from the three cameras. Extensive pedestrian‐view intersection classification experiments show that our feature fusion model achieves an area under the curve and an F1‐score of 82.00 and 46.48, respectively, considerably outperforming contemporary three‐ and one‐camera models.
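The feature fusion approach named in the abstract can be illustrated with a minimal sketch: each camera view is passed through a shared per-camera encoder, the resulting feature vectors are concatenated, and a classifier head predicts the intersection probability. The encoder, weights, and dimensions below are purely hypothetical stand-ins, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image):
    # Hypothetical stand-in encoder: global average pooling over
    # spatial dimensions, yielding one feature per channel.
    return image.mean(axis=(0, 1))

def feature_fusion(front, left, right, w, b):
    # Concatenate per-camera features, then apply a linear head
    # with a sigmoid to obtain an "in intersection" probability.
    fused = np.concatenate([encode(front), encode(left), encode(right)])
    logit = fused @ w + b
    return 1.0 / (1.0 + np.exp(-logit))

# Toy 8x8 RGB frames standing in for the front, left, and right views.
frames = [rng.random((8, 8, 3)) for _ in range(3)]
w = rng.standard_normal(9)  # 3 cameras x 3-dim features (illustrative only)
b = 0.0
p = feature_fusion(*frames, w, b)
print(0.0 <= p <= 1.0)
```

In contrast, early fusion would combine the raw camera inputs (e.g., stacking the three images) before a single encoder, while late fusion would run a full classifier per camera and merge the three predictions.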
Funder
Ministry of Science and ICT, South Korea
Subject
Electrical and Electronic Engineering; General Computer Science; Electronic, Optical and Magnetic Materials