Abstract
The application of multivariate pattern analysis (MVPA) to electroencephalography (EEG) data allows neuroscientists to track neural representations at temporally fine-grained scales. This approach has been leveraged to study the locus and evolution of long-term memory contents in the brain, but a limiting factor is that decoding performance remains low. A key reason for this is that processes like encoding and retrieval are intrinsically dynamic across trials and participants, and this runs in tension with MVPA and other techniques that rely on consistently unfolding neural codes to generate predictions about memory contents. The presentation of visually perturbing stimuli may experimentally regularize brain dynamics, making neural codes more stable across measurements and thereby enhancing representational readouts. Such enhancements, which have repeatedly been demonstrated in working memory contexts, remain to our knowledge unexplored in long-term memory tasks. In this study, we evaluated whether visual perturbations, or "pings," improve our ability to predict the category of retrieved images from EEG activity during cued recall. Overall, our findings suggest that while pings evoked a prominent neural response, they did not reliably produce improvements in MVPA-based classification across several analyses. We discuss possibilities that could explain these results, including the role of experimental and analysis parameter choices and mechanistic differences between working and long-term memory.
Publisher
Cold Spring Harbor Laboratory