Abstract
Immersive video stored in the multiview video-plus-depth format can provide viewers with vivid immersive experiences. However, rendering such video in real time in immersive environments remains challenging due to the high resolution and refresh rate demanded by recent extended reality displays. An essential issue in this immersive rendering is the disocclusion problem that inevitably occurs when virtual views are synthesized via the de facto standard 3D warping technique. In this paper, we present a novel virtual view synthesis framework that, from a live immersive video stream, renders stereoscopic images in real time for a freely moving virtual viewer. The main difference from previous approaches is that the background environment surrounding the immersive video’s virtual scene is progressively reproduced on the fly, directly in 3D space, while the input stream is being rendered. To enable this, we propose a new 3D background modeling scheme that, based on GPU-accelerated real-time ray tracing, efficiently and incrementally builds the background model as a compact 3D triangular mesh. We then demonstrate that this 3D background environment effectively alleviates the critical disocclusion problem in immersive rendering, ultimately reducing spatial and temporal aliasing artifacts. We also show that the 3D representation of the background environment makes it possible to extend the virtual environment of immersive video by interactively adding 3D visual effects during rendering.
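For context, the 3D warping step named in the abstract is the standard depth-image-based rendering operation: each source pixel is back-projected to 3D using its depth and re-projected into the virtual camera. The sketch below is a minimal illustration of that standard technique, not the paper's implementation; the pinhole intrinsics K_src/K_dst and relative pose (R, t) are illustrative placeholders. Pixels for which no source pixel lands in the virtual view are exactly where disocclusion holes appear.

```python
# Minimal sketch of standard 3D warping (depth-image-based rendering).
# All symbols (K_src, K_dst, R, t) are illustrative assumptions, not
# values or APIs from the paper.
import numpy as np

def warp_to_virtual_view(depth, K_src, K_dst, R, t):
    """Forward-warp every source pixel into the virtual view.

    depth: (H, W) per-pixel depth in the source camera frame.
    Returns (H, W, 2) pixel coordinates in the virtual view; points that
    project behind the camera or outside the frame are marked NaN.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Back-project to 3D in the source camera frame: X = depth * K^-1 * pix.
    X_src = np.linalg.inv(K_src) @ pix * depth.reshape(1, -1)

    # Rigid transform into the virtual camera frame, then re-project.
    X_dst = R @ X_src + t.reshape(3, 1)
    proj = K_dst @ X_dst
    z = proj[2]
    with np.errstate(divide="ignore", invalid="ignore"):
        uv = proj[:2] / z  # perspective division

    # Invalidate points behind the virtual camera or outside the image;
    # regions of the virtual view left uncovered are disocclusion holes.
    bad = (z <= 0) | (uv[0] < 0) | (uv[0] >= W) | (uv[1] < 0) | (uv[1] >= H)
    uv[:, bad] = np.nan
    return uv.T.reshape(H, W, 2)
```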
Funder
National Research Foundation of Korea
Publisher
Springer Science and Business Media LLC
Cited by
1 article.