A Framework for Realistic Virtual Representation for Immersive Training Environments
Author:
Plumb Caolan (1), Pour Rahimian Farzad (1), Pandit Diptangshu (1), Thomas Hannah (2), Clark Nigel (2)
Affiliation:
1. Teesside University, GB; 2. The Faraday Centre LTD, GB
Abstract
As extended reality (XR) technology becomes more widely available, virtually simulated training scenarios have shown great potential for enhancing training effectiveness. Realistic virtual representation plays a crucial role in creating immersive experiences that closely mimic real-world scenarios. With reference to previous methodological developments in the creation of information-rich digital reconstructions, this paper proposes a framework encompassing key components of the 3D scanning pipeline. While 3D scanning techniques have advanced significantly, several challenges persist in the field, including data acquisition, noise reduction, mesh and texture optimisation, and separation of components for independent interaction. These complexities motivate an optimised framework that addresses these challenges and provides practical solutions for creating realistic virtual representations in immersive training environments. The following exploration acknowledges and addresses challenges presented by the photogrammetry and laser-scanning pipeline, seeking to prepare scanned assets for real-time virtual simulation in a game engine. The methodology employs both a camera and a handheld laser scanner for accurate data acquisition. RealityCapture is used to combine the geometric data and surface detail of the equipment. To clean the scanned asset, Blender is used for mesh retopology and reprojection of scanned textures, with attention given to correct lighting detail and normal mapping, thus preparing the equipment for interaction by Virtual Reality (VR) users within Unreal Engine. By combining these elements, the proposed framework enables realistic representation of industrial equipment for the creation of training scenarios that closely resemble real-world contexts.
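As an illustration of the Blender clean-up and normal-mapping stage described in the abstract, the following is a minimal sketch (not the authors' published scripts) of how a scanned asset might be decimated, have its fine surface detail baked from the dense scan onto a low-poly normal map, and be exported for Unreal Engine using Blender's Python API. The object names, ratio, and cage-extrusion values are assumptions, and the low-poly mesh is assumed to be UV-unwrapped with a bake-target image node already selected in its material.

    # Minimal sketch, assuming Blender 3.x, a dense scan "Scan_HP" imported from
    # RealityCapture, and a retopologised low-poly target "Scan_LP".
    # Names and parameter values are illustrative, not taken from the paper.
    import bpy

    scene = bpy.context.scene
    high = bpy.data.objects["Scan_HP"]   # dense photogrammetry/laser-scan mesh
    low = bpy.data.objects["Scan_LP"]    # clean, game-ready topology

    # 1. Reduce the remaining polygon budget on the low-poly asset.
    decimate = low.modifiers.new(name="Decimate", type='DECIMATE')
    decimate.ratio = 0.5                 # keep ~50% of faces; tune per asset
    bpy.context.view_layer.objects.active = low
    bpy.ops.object.modifier_apply(modifier=decimate.name)

    # 2. Bake surface detail from the scan onto the low-poly normal map
    #    (selected-to-active), so in-engine lighting reads the fine detail.
    scene.render.engine = 'CYCLES'
    scene.render.bake.use_selected_to_active = True
    scene.render.bake.cage_extrusion = 0.02   # metres; adjust to scan scale
    high.select_set(True)
    low.select_set(True)
    bpy.context.view_layer.objects.active = low
    bpy.ops.object.bake(type='NORMAL')

    # 3. Export the cleaned asset for import into Unreal Engine.
    bpy.ops.export_scene.fbx(filepath="//Scan_LP_clean.fbx", use_selection=True)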
Publisher
Firenze University Press