Abstract
Our capacity to become aware of visual stimuli is limited. Investigating these limits, Cohen et al. (2015, Journal of Cognitive Neuroscience) found that certain object categories (e.g., faces) were more effective in blocking awareness of other categories (e.g., buildings) than other combinations (e.g., cars/chairs) in the continuous flash suppression (CFS) task. They also found that greater category-pair representational similarity in higher visual cortex was related to longer category-pair breakthrough times, suggesting that the high-level representational architecture acts as a bottleneck for visual awareness. As the cortical representations of hands and tools overlap, these categories are ideal for testing this account further. We conducted CFS experiments and predicted longer breakthrough times for hands/tools compared to other pairs. In contrast to these predictions, participants were generally faster at detecting targets masked by hands or tools compared to other mask categories, whether giving manual (Experiment 1) or vocal responses (Experiment 2). Furthermore, we found the same inefficient mask effect for hands in the context of the categories used by Cohen et al. (2015), and we observed a behavioural pattern similar to that of the original paper (Experiment 3). Exploring potential low-level explanations, we found that the category average for edges (e.g., hands have less detail compared to cars) was the best predictor of the data. However, these category-specific image characteristics could not completely account for the Cohen et al. (2015) category pattern or for the hand/tool effects. Thus, several low- and high-level object category-specific limits on visual awareness are plausible, and more investigations are needed to further tease these apart.
Publisher
Cold Spring Harbor Laboratory