Abstract
Statistical learning (SL) refers to the ability to extract statistical regularities from the environment. Previous research has suggested that regularity extraction is modality-specific, occurring within but not between sensory modalities (Frost et al., 2015). The present study investigates the circumstances under which SL can occur between modalities. In the first experiment, participants were presented with a stream of meaningless visual fractals and synthetic sounds while performing an oddball detection task. Stimuli were grouped into unimodal (AA, VV) or crossmodal (VA, AV) pairs based on higher transitional probabilities between their elements. Using implicit and explicit measures of SL, we found that participants learned only the unimodal pairs. In a second experiment, we presented the pairs in separate unimodal (VVVV, AAAA) and crossmodal (AVAV, VAVA) blocks, allowing participants to anticipate which modality would be presented next. We found that SL for crossmodal pairs outperformed SL for unimodal pairs. This result suggests that modality predictability facilitates the correct deployment of crossmodal attention, which is crucial for learning crossmodal transitional probabilities. Finally, a third experiment demonstrated that participants can explicitly learn the statistical regularities between crossmodal pairs even when the upcoming modality is not predictable, as long as the pairs contain semantic information. This finding suggests that SL between crossmodal pairs can occur when sensory-level limitations are bypassed and learning can unfold at a supramodal level of representation. This study demonstrates that SL is not a modality-specific mechanism and compels revision of the current neurobiological model of SL, in which learning of statistical regularities between low-level stimulus features relies on hard-wired learning computations that take place in their respective sensory cortices.
Publisher
Cold Spring Harbor Laboratory