Abstract
After hearing the words Little Red Riding Hood, many humans instantly visualize a girl wearing a red hood in the woods. However, whether nonhuman primates also evoke such visual imagery from sounds remains an open question. We explored this question using direct behavioral measurements from two rhesus macaques trained in a delayed crossmodal equivalence task. In each trial, the monkeys listened to a sound, such as a monkey vocalization or a word, and three seconds later selected its visual equivalent from a pool of two to four pictures appearing on a touchscreen. We show that monkeys can be trained to discriminate perceptual objects with numerous properties and, furthermore, that they perceive different versions of the learned sounds as invariant. We propose two potential mechanisms by which the brain could solve this task: acoustic memory or visual imagery. After analyzing the monkeys' choice accuracies and reaction times in the task, we find that they experience visual imagery when listening to sounds. The ability of rhesus monkeys to perceive crossmodal equivalences between learned categories thus makes them an ideal model organism for studying high-order cognitive processes, such as semantics and conceptual thinking, at the single-neuron level.
Publisher
Cold Spring Harbor Laboratory