Abstract
The perception of two (or more) simultaneous musical notes, depending on their pitch interval(s), can be broadly categorized as consonant or dissonant. Previous studies have suggested that musicians and non-musicians adopt different strategies when discerning musical intervals: frequency ratios (e.g., perfect fifth vs. tritone) for the former, and frequency differences (e.g., roughness vs. non-roughness) for the latter. To replicate and extend this finding, in this follow-up study we reran the electroencephalography (EEG) experiment and separately collected functional magnetic resonance imaging (fMRI) data with the same protocol. The behavioral results replicated our previous finding that musicians used pitch intervals, and nonmusicians roughness, for consonance judgments. The event-related potential (ERP) amplitude differences between groups, in both the frequency-ratio and frequency-difference conditions, were primarily around the N1 and P2 periods along the midline channels. The fMRI results, jointly analyzed with univariate, multivariate, and connectivity approaches, further reinforce the involvement of midline and related brain regions in consonance/dissonance judgments. Additional representational similarity analysis (RSA), and a final spatio-temporal searchlight RSA (ss-RSA), combined the fMRI and EEG data into the same representational space, providing converging support for the neural substrates of these neurophysiological signatures. Together, these analyses not only exemplify the importance of replication, confirming that musicians rely more on top-down knowledge for consonance/dissonance perception, but also demonstrate the advantages of multiple analyses in mutually constraining the findings from EEG and fMRI.

Significance Statement

In this study, the neural correlates of consonance and dissonance perception were revisited with both EEG and fMRI. The behavioral results of the current study closely replicated the pattern of our earlier work (Kung et al., 2014), and the ERP results, though showing that musicians and nonmusicians processed rough vs. non-rough notes similarly, still supported top-down modulation in musicians, likely acquired through long-term practice. The fMRI results, combining univariate (GLM contrast and functional connectivity) and multivariate (MVPA searchlight, plus RSA at the voxel, connectivity, and spatio-temporal searchlight levels) analyses, converge on lateralized and midline regions, at different time windows, as the core brain networks that underpin both musicians' and nonmusicians' consonance/dissonance perception.
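For readers unfamiliar with the fusion step named above, the following is a minimal Python sketch of the generic logic of RSA-based EEG-fMRI fusion: build a representational dissimilarity matrix (RDM) per modality, then correlate the fMRI RDM with time-resolved EEG RDMs. All variable names, data shapes, and condition counts are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of EEG-fMRI fusion via representational similarity
# analysis (RSA). Data here are random placeholders; shapes and names
# are assumptions for illustration only.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions = 8               # e.g., interval conditions (assumed)
n_voxels = 500                 # voxels in one fMRI ROI (assumed)
n_channels, n_times = 64, 300  # EEG montage and time points (assumed)

# Condition-by-feature activity patterns (placeholders for real data).
fmri_patterns = rng.standard_normal((n_conditions, n_voxels))
eeg_patterns = rng.standard_normal((n_conditions, n_channels, n_times))

# One RDM per modality: pairwise correlation distances across conditions
# (pdist returns the condensed upper-triangle vector of the RDM).
fmri_rdm = pdist(fmri_patterns, metric="correlation")

# Time-resolved fusion: correlate the fMRI RDM with the EEG RDM at each
# time point; peaks mark when the ROI's representational geometry
# emerges in the EEG signal.
fusion = np.empty(n_times)
for t in range(n_times):
    eeg_rdm_t = pdist(eeg_patterns[:, :, t], metric="correlation")
    fusion[t] = spearmanr(fmri_rdm, eeg_rdm_t)[0]

print("peak fMRI-EEG correspondence at time index", int(fusion.argmax()))
```

A spatio-temporal searchlight RSA (ss-RSA) repeats this comparison for many local neighborhoods of voxels and time windows rather than a single ROI and time course.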
Publisher
Cold Spring Harbor Laboratory