QuadStream

Authors:

Jozef Hladky¹, Michael Stengel², Nicholas Vining³, Bernhard Kerbl⁴, Hans-Peter Seidel⁵, Markus Steinberger⁶

Affiliations:

1. Max Planck Institute for Informatics, Germany and NVIDIA, Germany

2. NVIDIA

3. University of British Columbia, Canada and NVIDIA, Canada

4. TU Wien, Austria

5. Max Planck Institute for Informatics, Germany

6. Graz University of Technology, Austria

Abstract

Streaming rendered 3D content over a network to a thin client device, such as a phone or a VR/AR headset, brings high-fidelity graphics to platforms where it would not otherwise be possible due to thermal, power, or cost constraints. Streamed 3D content must be transmitted with a representation that is robust to both latency and potential network dropouts. Transmitting a video stream and reprojecting it to correct for changing viewpoints fails in the presence of disocclusion events; streaming scene geometry and performing high-quality rendering on the client is not possible on limited-power mobile GPUs. To balance the competing goals of disocclusion robustness and minimal client workload, we introduce QuadStream, a new streaming content representation that reduces motion-to-photon latency by allowing clients to efficiently render novel views without artifacts caused by disocclusion events. Motivated by traditional macroblock approaches to video codec design, we decompose the scene seen from positions in a view cell into a series of quad proxies, i.e., view-aligned quads from multiple views. Because it operates on a rasterized G-Buffer, our approach is independent of the representation used for the scene itself; the resulting QuadStream is an approximate geometric representation of the scene that can be reconstructed by a thin client to render both the current view and nearby adjacent views. Our technical contributions are an efficient parallel quad generation, merging, and packing strategy for proxy views covering potential client movement in a scene; a packing and encoding strategy that allows masked quads with depth information to be transmitted as a frame-coherent stream; and an efficient approach for rendering our QuadStream representation into entirely novel views on thin clients. We show that our approach achieves superior quality compared to both video-based and geometry-based streaming methods.
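To make the representation concrete, the sketch below shows one hypothetical way a quad proxy and a per-frame stream packet could be laid out. All struct names, fields, and the toy packing pass are assumptions for illustration only; the paper's actual data layout, quad generation, and video-codec-based encoding pipeline are not reproduced here.

```cpp
// Hypothetical sketch of a quad-proxy record and per-frame stream packet.
// Field names and the packing routine are illustrative assumptions,
// not the paper's actual wire format.
#include <cstdint>
#include <vector>

// One view-aligned quad proxy cut from a rasterized G-Buffer tile.
struct QuadProxy {
    uint16_t tileX, tileY;        // macroblock-style tile coordinates in the proxy view
    uint8_t  viewId;              // which proxy view inside the view cell produced it
    float    depthMin, depthMax;  // depth bounds used to place the quad in 3D
    uint64_t coverageMask;        // 8x8 bitmask of pixels the quad actually covers
    uint32_t atlasOffset;         // where the quad's color texels live in the packed atlas
};

// A frame's worth of QuadStream data sent to the thin client.
struct QuadStreamPacket {
    uint32_t frameId;
    std::vector<QuadProxy> quads;  // geometric proxies (depth + coverage masks)
    std::vector<uint8_t>   atlas;  // packed color texels; video-encoded in practice
};

// Toy packing pass: keep only quads with non-empty coverage and assign
// each a running offset into the color atlas.
QuadStreamPacket packFrame(uint32_t frameId,
                           const std::vector<QuadProxy>& candidates,
                           uint32_t texelsPerQuad) {
    QuadStreamPacket packet;
    packet.frameId = frameId;
    uint32_t offset = 0;
    for (QuadProxy q : candidates) {
        if (q.coverageMask == 0) continue;  // contributes nothing to any proxy view
        q.atlasOffset = offset;
        offset += texelsPerQuad;
        packet.quads.push_back(q);
    }
    packet.atlas.resize(static_cast<size_t>(offset) * 4, 0);  // RGBA8 placeholder
    return packet;
}
```

On the client, a sketch like this would be consumed by rendering each quad at its stored depth with its coverage mask applied, so that nearby novel views expose the extra quads rather than disocclusion holes.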

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Graphics and Computer-Aided Design

Cited by 3 articles:

1. SRSSIS: Super-Resolution Screen Space Irradiance Sampling for Lightweight Collaborative Web3D Rendering Architecture. Computer-Aided Design and Computer Graphics, 2024.

2. A Blind Streaming System for Multi-client Online 6-DoF View Touring. Proceedings of the 31st ACM International Conference on Multimedia, 2023-10-26.

3. Effect-based Multi-viewer Caching for Cloud-native Rendering. ACM Transactions on Graphics, 2023-07-26.
