Abstract
Recent evidence suggests that readers optimize low-level visual information following the principles of predictive coding. Based on a transparent neurocognitive model, we postulated that readers optimize their percept by removing redundant visual signals, which allows them to focus on the informative aspects of the sensory input, i.e., the orthographic prediction error (oPE). Here, we test alternative oPE implementations by assuming all-or-nothing signaling units based on multiple thresholds and compare them to the original oPE implementation. For model evaluation, we implemented the comparison based on behavioral and electrophysiological data (EEG at 230 and 430 ms). We found the highest model fit for the oPE with a 50% threshold integrating multiple prediction units for behavior and the late EEG component. The early EEG component was still explained best by the original hypothesis. In the final evaluation, we used image representations of both oPE implementations as input to a deep neural network model (DNN). We compared the lexical decision performance of the DNN in two tasks (words vs. consonant strings; words vs. pseudowords) to the performance after training with unaltered word images and found better DNN performance when trained with the 50% oPE representations in both tasks. Thus, the new formulation is adequate for late but not early neuronal signals and for lexical decision behavior in humans and machines. The change from early to late neuronal processing likely reflects a transformation in the representational structure over time that relates to accessing the meaning of words.
Publisher
Cold Spring Harbor Laboratory
Cited by
2 articles.