A Comprehensive Exploration of Fidelity Quantification in Computer-Generated Images
Authors:
Alexandra Duminil¹, Sio-Song Ieng¹, Dominique Gruyer¹
Affiliation:
1. Department of Components and Systems (COSYS)/Perceptions, Interactions, Behaviour and Simulations of Road and Street Users Laboratory (PICS-L)/Gustave Eiffel University, F-77454 Marne-la-Vallée, France
Abstract
Generating realistic road scenes is crucial for advanced driving systems, particularly for training and validating deep learning methods. Numerous efforts aim to create larger, more realistic synthetic datasets using graphics engines or synthetic-to-real domain adaptation algorithms. In the realm of computer-generated images (CGIs), assessing fidelity is challenging and involves both objective and subjective aspects. Unlike existing methods, which are predominantly application-specific (likely because of the complexity of the data and the wide range of situations and conditions encountered), our study adopts a comprehensive conceptual framework for quantifying the fidelity of RGB images. In this paper, a set of distinct metrics assessing the fidelity of virtual RGB images is proposed. To quantify image fidelity, we analyze texture from both local and global perspectives together with the high-frequency content of images. Our focus is on the statistical characteristics of real and synthetic road datasets, covering over 28,000 images from at least 10 datasets. Through this examination, we aim to reveal how texture patterns and high-frequency components contribute to the objective perception of data realism in road scenes. This study, which explores image fidelity under both virtual and real conditions, takes the perspective of an embedded camera rather than the human eye. The results, including a pioneering set of objective scores applied to real, virtual, and improved virtual data, offer crucial insights and constitute an asset for the scientific community in quantifying fidelity levels.
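The abstract does not spell out the exact formulations of these metrics, so the snippet below is only a minimal sketch of the two families it names: a spectral high-frequency score and global texture statistics. The FFT cutoff, the GLCM parameters, and both function names are illustrative assumptions, not the authors' implementation.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def high_frequency_ratio(gray, cutoff=0.25):
    # Fraction of spectral power above a normalized radial cutoff
    # (assumed metric): higher values mean more fine detail.
    # `gray` is a 2-D grayscale array.
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(spectrum) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    return power[radius > cutoff].sum() / power.sum()

def glcm_texture_stats(gray, levels=64):
    # Global texture descriptors from a grey-level co-occurrence
    # matrix, averaged over horizontal and vertical pixel pairs.
    q = np.round(gray / gray.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {p: float(graycoprops(glcm, p).mean())
            for p in ("contrast", "homogeneity", "energy")}

Comparing the distributions of such per-image scores across a real dataset and a synthetic one (e.g., via histograms or a distributional distance) is one plausible way to turn them into a dataset-level fidelity assessment of the kind the paper reports.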
Funder
European AUGMENTED_CCAM project; PRISSMA project