Abstract
We typically encounter objects in a context, for example, a sofa in a living room or a car in the street, and this context influences how we recognise objects. Objects that are congruent with a scene context are recognised faster and more accurately than objects that are incongruent. Furthermore, objects that are incongruent with a scene elicit a stronger negativity in the N300/N400 EEG components compared to objects that are congruent with the scene. However, exactly how context modulates access to semantic object information is unknown. Here, we used a modelling-based approach with EEG to directly test how context influences the processing of semantic object information. Using representational similarity analysis, we first asked whether EEG patterns dissociated objects in congruent and incongruent scenes, finding that representational differences between the conditions emerged around 300 ms. Next, we tested the relationship between EEG patterns and a semantic model based on property norms, revealing that the processing of semantic information for both conditions started around 150 ms, while after around 275 ms, semantic effects were stronger and lasted longer for objects in incongruent scenes than for objects in congruent scenes. The timing of these effects overlapped with the known N300/N400 congruency effects, suggesting that previous congruency effects might be explained by differences in the processing of semantic object information. This suggests that scene contexts can provide a prior expectation about what kind of objects could appear, which might allow for more efficient semantic processing when an object is congruent with the scene, and which extends semantic effects for incongruent objects.
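As a point of reference for the analysis logic described above, the following is a minimal sketch of representational similarity analysis: build a representational dissimilarity matrix (RDM) from EEG patterns, build another from a semantic feature model, and rank-correlate the two. The synthetic data, matrix sizes, variable names, and the choice of correlation distance are illustrative assumptions, not the authors' pipeline; in the actual study, the EEG RDM would be computed separately at each time point to trace the time course of semantic effects.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Toy data: EEG patterns (objects x features, e.g. channel amplitudes at one
# time point) and a semantic feature matrix (objects x property-norm features).
n_objects = 20
eeg_patterns = rng.standard_normal((n_objects, 64))        # stand-in for real EEG
semantic_features = rng.standard_normal((n_objects, 100))  # stand-in for property norms

# Representational dissimilarity matrices (condensed upper triangles):
# 1 - Pearson correlation between each pair of object patterns.
eeg_rdm = pdist(eeg_patterns, metric="correlation")
model_rdm = pdist(semantic_features, metric="correlation")

# RSA: rank-correlate the two RDMs. A reliable positive correlation indicates
# that the EEG patterns carry the structure predicted by the semantic model.
rho, p = spearmanr(eeg_rdm, model_rdm)
print(f"EEG-model RSA: Spearman rho = {rho:.3f}, p = {p:.3f}")
```

Repeating this correlation at every time point, separately for congruent and incongruent trials, would yield the kind of time-resolved semantic effects the abstract reports.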
Publisher: Cold Spring Harbor Laboratory