Abstract
Computational models and metrics, including measures such as surprisal and semantic relevance, have been developed to predict and explain language comprehension and processing. However, their effectiveness is limited by inadequate integration of contextual information. Drawing inspiration from the attention mechanism in transformers and the human forgetting mechanism, this study introduces an attention-aware method that thoroughly incorporates contextual information, updating surprisal and semantic relevance into attention-aware metrics, respectively. Furthermore, by employing the quantum superposition principle, the study proposes an enhanced approach for integrating and encoding diverse information sources based on the two attention-aware metrics. Both the attention-aware and enhanced metrics outperform currently available metrics, yielding improved predictions of eye movements during naturalistic discourse reading across 13 languages. The proposed approaches can also facilitate the simulation and evaluation of existing reading models and theories of language processing. The metrics they compute are highly interpretable and generalize across languages in predicting language comprehension. The computational methods proposed in this study hold great potential to enhance our understanding of human working memory mechanisms, human reading behavior, and cognitive modeling in language processing. Moreover, they may advance ongoing research in computational cognition for language processing, offering valuable insights for computational neuroscience, quantum cognition, and the design of AI systems.
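To make the abstract's two key ideas concrete, the sketch below illustrates (i) an attention-aware weighting of contextual surprisal and (ii) a superposition-style combination of two metrics. It is a minimal illustration under assumptions: the softmax attention weights, the exponential forgetting decay, the `decay` and `phase` parameters, and the function names are all hypothetical stand-ins, not the authors' exact formulation.

```python
import numpy as np

def attention_aware_surprisal(surprisals, similarities, decay=0.9):
    """Hedged sketch of an attention-aware surprisal for the current word.

    `surprisals` holds the surprisal of each context word plus the current
    word (last element); `similarities` holds the semantic similarity of each
    context word to the current word. Context surprisals are weighted by
    softmax-normalized similarities (attention-like weights) and an
    exponential forgetting decay over distance, then added to the current
    word's own surprisal. All parameter choices are illustrative assumptions.
    """
    surprisals = np.asarray(surprisals, dtype=float)
    similarities = np.asarray(similarities, dtype=float)
    context_s = surprisals[:-1]
    n = len(context_s)
    if n == 0:
        return float(surprisals[-1])
    # Attention-like weights from semantic similarity (softmax normalization).
    attn = np.exp(similarities - similarities.max())
    attn /= attn.sum()
    # Forgetting mechanism: more distant context words contribute less.
    forget = decay ** np.arange(n, 0, -1)
    weights = attn * forget
    # Attention-aware metric: own surprisal plus weighted contextual surprisal.
    return float(surprisals[-1] + np.dot(weights, context_s))

def superposed_combination(m1, m2, phase=0.0):
    """Hedged illustration of a superposition-style integration of two metrics.

    The metrics are treated as squared amplitudes that interfere, giving the
    classical sum plus an interference term; `phase` is an assumed parameter
    standing in for an unknown interaction between the information sources.
    """
    a1, a2 = np.sqrt(abs(m1)), np.sqrt(abs(m2))
    return a1 ** 2 + a2 ** 2 + 2 * a1 * a2 * np.cos(phase)
```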
Publisher: Cold Spring Harbor Laboratory