Abstract
Understanding and perceiving three-dimensional scientific visualizations, such as volume renderings, benefits from the visual cues produced by shading models. Conventional approaches use local shading models, which are computationally inexpensive and straightforward to implement. However, local shading models do not always provide adequate visual cues because they do not sufficiently account for non-local information. Global illumination models yield better visual cues but are often computationally expensive. Alternative illumination models, such as ambient occlusion, multidirectional shading, and shadows, have been shown to provide good perceptual cues. Although these models improve upon local shading, they still require expensive preprocessing, extra GPU memory, and high computational cost, which prevents interactive transfer function manipulation and light position changes. In this paper, we propose an approximate image-space multidirectional occlusion shading model for volume rendering. Our model is computationally less expensive than global illumination models and requires no preprocessing; interactive transfer function manipulation and light position changes remain possible. It simulates a wide range of shading behaviors, such as ambient occlusion and soft and hard shadows, and can be applied with little effort to existing rendering systems such as direct volume rendering. We show that the proposed model enhances visual cues at a modest computational cost.
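The abstract describes modulating direct volume rendering with an occlusion term. As a rough illustration only (this is not the paper's actual algorithm), the sketch below shows front-to-back ray compositing where each sample is darkened by the opacity already accumulated along the ray, a crude stand-in for occlusion shading; the function name, the `(color, alpha)` sample representation, and the `occlusion_strength` parameter are all assumptions for this example.

```python
def composite_ray(samples, occlusion_strength=0.8):
    """Front-to-back compositing of one ray with a simple occlusion term.

    samples: list of (color, alpha) pairs ordered front to back.
    Each sample's color is attenuated by the opacity accumulated in
    front of it -- a crude approximation of occlusion-based shading,
    not the multidirectional model proposed in the paper.
    """
    color_acc = 0.0   # composited color along the ray
    alpha_acc = 0.0   # composited opacity along the ray
    light_acc = 0.0   # opacity accumulated toward the light (here: the eye)
    for c, a in samples:
        # Darken this sample in proportion to how much material occludes it.
        occlusion = 1.0 - occlusion_strength * min(light_acc, 1.0)
        shaded = c * occlusion
        # Standard front-to-back "over" compositing.
        color_acc += (1.0 - alpha_acc) * a * shaded
        alpha_acc += (1.0 - alpha_acc) * a
        light_acc += a
    return color_acc, alpha_acc
```

With two identical semi-transparent samples, the second sample contributes less color than the first both because of compositing and because it is partially occluded, which is the qualitative behavior occlusion shading adds over purely local models.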
Funder
National Research Foundation of Korea
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
Cited by
3 articles.