Abstract
Free-viewpoint rendering has long been one of the key motivations of image-based rendering and has broad application prospects in virtual reality and augmented reality (VR/AR). Existing methods mainly adopt traditional image-based rendering or learning-based frameworks, which suffer from limited viewpoint freedom and poor runtime performance. In this paper, the cube surface light field is used to encode scenes implicitly, and an interactive free-viewpoint rendering method is proposed to solve both problems simultaneously. The core of this method is a purely ray-based representation built on the cube surface light field. A fast single-layer ray casting algorithm computes each light ray's parameters, and rendering is achieved by GPU-based three-dimensional (3D) compressed texture mapping that converts the corresponding light rays into the desired image. Experimental results show that the proposed method renders novel views in real time at arbitrary viewpoints outside the cube surface while preserving high image quality. This research provides a valid experimental basis for the potential of content generation in VR/AR.
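The abstract describes parameterizing each viewing ray by where it crosses the cube surface. As a minimal illustrative sketch (not the paper's implementation), the following computes a ray's intersection with an axis-aligned cube via the standard slab method and maps the hit point to a (face, u, v) coordinate; the cube center, half-extent, and face/uv conventions here are assumptions of this sketch:

```python
import numpy as np

def ray_cube_surface_param(origin, direction, half=1.0):
    """Intersect a ray with the surface of an axis-aligned cube centered
    at the origin (slab method) and return a (face, u, v) parameterization
    of the hit point. Face indices 0..5 stand for +X,-X,+Y,-Y,+Z,-Z in
    this sketch. Returns None if the ray misses the cube."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    inv = 1.0 / direction  # assumes no zero components; clamp in practice
    t0 = (-half - origin) * inv
    t1 = (half - origin) * inv
    tmin = np.max(np.minimum(t0, t1))  # latest entry across the three slabs
    tmax = np.min(np.maximum(t0, t1))  # earliest exit across the three slabs
    if tmax < max(tmin, 0.0):
        return None                    # ray misses the cube entirely
    t = tmin if tmin > 0.0 else tmax   # nearest hit in front of the origin
    p = origin + t * direction
    axis = int(np.argmax(np.abs(p)))   # dominant axis selects the cube face
    face = 2 * axis + (0 if p[axis] > 0 else 1)
    # the two remaining axes give texture coordinates in [0, 1]
    others = [a for a in range(3) if a != axis]
    u = (p[others[0]] / half + 1.0) * 0.5
    v = (p[others[1]] / half + 1.0) * 0.5
    return face, u, v
```

In a full pipeline, the returned (face, u, v) would index into per-face light field textures on the GPU; this CPU sketch only shows the geometric step.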
Funder
Zhejiang Provincial Science and Technology Program
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
Cited by
2 articles.