Affiliation:
1. University of Cambridge, UK
2. University of Liverpool, UK
3. Reality Labs
Abstract
A contrast sensitivity function, or CSF, is a cornerstone of many visual models. It explains whether a contrast pattern is visible to the human eye. Existing CSFs typically account for only a subset of the dimensions that describe a stimulus, limiting such functions to either static or foveal content but not both. In this paper, we propose a unified CSF, stelaCSF, which accounts for all major dimensions of the stimulus: spatial and temporal frequency, eccentricity, luminance, and area. To model this 5-dimensional space of contrast sensitivity, we combined data from 11 papers, each of which studied a subset of the space. While previously proposed CSFs were fitted to a single dataset, stelaCSF can predict the data from all these studies using the same set of parameters. Its predictions are accurate in the entire domain, including low frequencies. In addition, stelaCSF relies on psychophysical models and experimental evidence to explain the major interactions between the 5 dimensions of the CSF. We demonstrate the utility of our new CSF in a flicker detection metric and in foveated rendering.
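To make the shape of such a model concrete, the sketch below shows what a query against a 5-dimensional CSF looks like: sensitivity as a function of spatial frequency, temporal frequency, eccentricity, luminance, and area. This is a minimal, hypothetical Python illustration; the function name, the functional forms, and every constant are placeholder assumptions chosen to mimic the qualitative trends described in the abstract, not the fitted stelaCSF components or parameters.

```python
import numpy as np

def toy_csf(rho, omega, ecc, lum, area):
    """Toy 5-D contrast sensitivity sketch (NOT the published stelaCSF).

    rho   -- spatial frequency in cycles per degree (cpd)
    omega -- temporal frequency in Hz
    ecc   -- retinal eccentricity in degrees
    lum   -- background luminance in cd/m^2
    area  -- stimulus area in deg^2
    Returns an illustrative sensitivity value (1 / detection contrast).
    """
    # Band-pass spatial tuning: a log-parabola peaking near 3 cpd.
    spatial = np.exp(-0.5 * ((np.log2(rho) - np.log2(3.0)) / 1.2) ** 2)
    # Low-pass temporal falloff above roughly 10 Hz.
    temporal = 1.0 / (1.0 + (omega / 10.0) ** 2)
    # Sensitivity drops with eccentricity, faster for high spatial frequencies.
    periph = np.exp(-0.05 * ecc * rho)
    # Sensitivity rises with luminance and saturates (Weber-like regime).
    lum_gain = lum / (lum + 10.0)
    # Spatial summation: larger stimuli are easier to see, saturating
    # beyond a critical area.
    area_gain = np.sqrt(area / (area + 1.0))
    return 500.0 * spatial * temporal * periph * lum_gain * area_gain

# Example query: a 2 cpd pattern flickering at 8 Hz, seen at 10 deg
# eccentricity on a 100 cd/m^2 background, covering 4 deg^2.
print(toy_csf(2.0, 8.0, 10.0, 100.0, 4.0))
```

A single callable over all five dimensions is what lets one model serve both static foveal tasks (set omega near 0 and ecc to 0) and peripheral flicker tasks (large omega and ecc) with the same parameters, which is the unification the paper argues for.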
Funder
European Research Council
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design
Cited by
23 articles.