Abstract
Speaking elicits a suppressed neural response compared to listening to others’ speech, a phenomenon known as speaker-induced suppression (SIS). Previous research has investigated SIS at constrained levels of linguistic representation, such as the individual phoneme and word levels. Here we present scalp EEG data from a dual speech perception and production task in which participants read sentences aloud and then listened to playback of themselves reading those sentences. Playback was separated into predictable repetition of the previous trial and unpredictable, randomized repetition of an earlier trial to investigate the role predictive processing plays in SIS. Concurrent EMG was recorded to control for movement artifact during speech production. In line with previous research, event-related potential analyses at the sentence level demonstrated suppression of early auditory components of the EEG for production compared to perception. To evaluate whether specific neural representations contribute to SIS (as opposed to a global gain change), we fit linear encoding models that predicted scalp EEG from phonological features, EMG activity, and task condition. We found that phonological features were encoded similarly during production and perception; however, this similarity was observed only when movement was controlled for by including the EMG response as an additional regressor. Our results suggest that, at the representational level, SIS reflects a global gain change between perception and production rather than suppression of specific characteristics of the neural response. We also detail some important considerations for analyzing EEG during continuous speech production.
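The abstract mentions linear encoding models predicting scalp EEG from phonological features, EMG, and task condition. The sketch below is a minimal illustration of that general approach (a time-lagged ridge regression on one EEG channel), not the authors' actual pipeline; all data, dimensions, and parameter values are synthetic placeholders chosen for the example.

```python
# Minimal sketch (not the authors' pipeline): a time-lagged linear encoding
# model predicting one EEG channel from phonological-feature, EMG, and
# condition regressors. All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 128                        # assumed sampling rate (Hz)
n_samples = fs * 60             # one minute of data
n_phon_features = 14            # e.g., binary phonological features
lags = np.arange(0, int(0.4 * fs))  # 0-400 ms encoding lags (assumed window)

# Regressors: phonological features, rectified EMG, and a condition flag
phon = rng.integers(0, 2, size=(n_samples, n_phon_features)).astype(float)
emg = np.abs(rng.standard_normal((n_samples, 1)))
condition = np.ones((n_samples, 1))  # 1 = production, 0 = perception
X = np.hstack([phon, emg, condition])

# Time-delayed design matrix (np.roll wraps at the edges; fine for a sketch)
X_lagged = np.hstack([np.roll(X, lag, axis=0) for lag in lags])
y = rng.standard_normal(n_samples)   # stand-in for one EEG channel

X_tr, X_te, y_tr, y_te = train_test_split(X_lagged, y, shuffle=False)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```

In this framing, comparing feature weights or prediction accuracy between production and perception (with and without the EMG regressor) is what distinguishes feature-specific suppression from a global gain change.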
Publisher
Cold Spring Harbor Laboratory