Abstract
When we internally generate mental images, we need to combine multiple features into a whole. Direct evidence for such feature integration during visual imagery is still lacking. Moreover, cognitive control mechanisms, including memory and attention, exert top-down influences on the perceptual system during mental image generation. However, it is unclear whether such top-down processing is content-specific. Feature integration and top-down processing involve short-range connectivity within visual areas and long-range connectivity between control and visual areas, respectively. Here, we used a minimally constrained experimental paradigm in which imagery categories were prompted using visual word cues only, and we decoded face versus place imagery based on the underlying connectivity patterns. Our results show that face and place imagery can be decoded from both short-range and long-range connections. These findings suggest that feature integration does not require an external stimulus but also occurs for purely internally generated images. Furthermore, control and visual areas exchange information specifically tailored to imagery content.
Teaser
Decoding visual imagery from brain connectivity reveals a content-specific interconnected neural code for internal image generation.
Publisher
Cold Spring Harbor Laboratory