Saliency-Guided Point Cloud Compression for 3D Live Reconstruction
Published: 2024-05-03
Volume: 8, Issue: 5, Page: 36
ISSN: 2414-4088
Container-title: Multimodal Technologies and Interaction
Short-container-title: MTI
Language: en
Author:
Pietro Ruiu 1, Lorenzo Mascia 1, Enrico Grosso 1
Affiliation:
1. Department of Biomedical Sciences, University of Sassari, 07100 Sassari, Italy
Abstract
3D modeling and reconstruction are critical to creating immersive XR experiences, providing realistic virtual environments, objects, and interactions that increase user engagement and enable new forms of content manipulation. Today, 3D data can be easily captured using off-the-shelf, specialized headsets; very often, these tools provide real-time, albeit low-resolution, integration of continuously captured depth maps. This approach is generally suitable for basic AR and MR applications, where users can easily direct their attention to points of interest and benefit from a fully user-centric perspective. However, it proves less effective in more complex scenarios such as multi-user telepresence or telerobotics, where real-time transmission of the local surroundings to remote users is essential. Two primary questions emerge: (i) what strategies are available for achieving real-time 3D reconstruction in such systems, and (ii) how can the effectiveness of real-time 3D reconstruction methods be assessed? This paper explores various approaches to the challenge of live 3D reconstruction from typical point cloud data. It first introduces some common data flow patterns that characterize virtual reality applications and shows that high-speed data transmission and efficient data compression are critical to maintaining visual continuity and ensuring a satisfactory user experience. The paper then introduces the concept of saliency-driven compression/reconstruction and compares it with alternative state-of-the-art approaches.
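The abstract introduces saliency-driven compression only at the concept level. The sketch below is a minimal, hypothetical illustration of the general idea in its simplest form: points in regions judged salient are kept at high density, while the rest of the cloud is subsampled aggressively before transmission. Everything here is an illustrative assumption rather than the authors' implementation: the function saliency_subsample, the parameters hi_frac, lo_frac, and thresh are invented for this example, and the per-point saliency scores are assumed to come from some external visual-attention or gaze model.

```python
# Hypothetical sketch: saliency-guided subsampling of a point cloud.
# Not the paper's pipeline; names and parameters are illustrative.
import numpy as np

def saliency_subsample(points, saliency, hi_frac=0.9, lo_frac=0.2,
                       thresh=0.5, rng=None):
    """Keep a larger fraction of points where saliency is high.

    points   : (N, 3) float array of XYZ coordinates
    saliency : (N,) float array in [0, 1], assumed to come from an
               external attention/gaze model
    """
    rng = np.random.default_rng() if rng is None else rng
    # Per-point keep probability: high in salient regions, low elsewhere.
    keep_prob = np.where(saliency >= thresh, hi_frac, lo_frac)
    # Stochastic subsampling against the per-point probabilities.
    mask = rng.random(points.shape[0]) < keep_prob
    return points[mask]

# Toy usage: a 100k-point cloud with random coordinates and saliency.
pts = np.random.rand(100_000, 3).astype(np.float32)
sal = np.random.rand(100_000)
compressed = saliency_subsample(pts, sal)
print(f"kept {compressed.shape[0]} of {pts.shape[0]} points")
```

In a live pipeline, the retained subset would then feed an ordinary point cloud codec, so a saliency step of this kind trades reconstruction detail in unattended regions for a smaller bitstream and lower transmission latency, which is the trade-off the abstract points to.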
Funder
Italian Ministry for Research and Education; National Recovery and Resilience Plan; European Union
Cited by: 3 articles.