Abstract
Facial repetition suppression, a well-studied phenomenon characterized by decreased neural responses to repeated faces in visual cortices, remains a subject of ongoing debate regarding its underlying neural mechanisms. Recently, deep convolutional neural networks (DCNNs) have achieved human-level performance in face recognition. In the present study, we compared brain activation patterns derived from human electroencephalogram (EEG) data with those generated by DCNNs, using reverse-engineering techniques to offer a novel perspective on the neural mechanisms underlying facial repetition suppression. We first applied brain decoding methods to investigate how face representations change with familiarity in the human brain. We then constructed two models of repetition suppression within DCNNs: the Fatigue model, which posits that stronger activation leads to greater suppression, and the Sharpening model, which suggests that weaker activation results in more pronounced suppression. To elucidate the neural mechanisms at play, we conducted cross-modal representational similarity analysis (RSA) comparisons between human EEG signals and DCNN activations. The results revealed a striking similarity between human brain representations and those of the Fatigue DCNN, favoring the Fatigue model over the Sharpening hypothesis as an explanation of the facial repetition suppression effect. These representational analyses, bridging the human brain and DCNNs, offer a promising tool for simulating brain activity and making inferences about the neural mechanisms underpinning complex human behaviors.
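The cross-modal RSA comparison described above can be illustrated with a minimal sketch. This is not the authors' code: the data are random stand-ins for EEG patterns and DCNN activations, and the function name `rdm` and the use of 1 minus Pearson correlation as the dissimilarity measure are illustrative assumptions; the key idea is that RDMs from two modalities with incommensurable feature spaces can be compared via rank correlation of their off-diagonal entries.

```python
# Illustrative RSA sketch (assumed setup, not the paper's pipeline):
# build a representational dissimilarity matrix (RDM) per modality,
# then correlate the two RDMs' lower triangles.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def rdm(patterns):
    """Condition x condition dissimilarity matrix (1 - Pearson correlation)."""
    return 1.0 - np.corrcoef(patterns)

n_conditions = 8
eeg_patterns = rng.normal(size=(n_conditions, 50))    # stand-in EEG patterns
dcnn_patterns = rng.normal(size=(n_conditions, 100))  # stand-in DCNN activations

brain_rdm = rdm(eeg_patterns)
model_rdm = rdm(dcnn_patterns)

# RDMs are symmetric with a zero diagonal, so compare lower triangles only.
tril = np.tril_indices(n_conditions, k=-1)
rho, p = spearmanr(brain_rdm[tril], model_rdm[tril])
print(f"brain-model RDM similarity (Spearman rho) = {rho:.3f}")
```

Because the two RDMs are compared by rank correlation rather than by matching features directly, the EEG and DCNN representations need not share dimensionality, which is what makes the brain-to-network comparison possible.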
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.