Abstract
One of the most impressive achievements of the AI revolution is the development of large language models that can generate meaningful text and respond to instructions in plain English, with no additional training necessary. Here we show that language models can also serve as a scientific instrument for studying human memory for meaningful material. We developed a pipeline for designing large-scale memory experiments and analyzing the resulting data. We conducted online memory experiments with a large number of participants and collected recognition and recall data for narratives of varying lengths. We found that both recall and recognition performance scale linearly with narrative length. Furthermore, to investigate the role of narrative comprehension in memory, we repeated these experiments using scrambled versions of the presented stories. We found that even though recall performance declined significantly, recognition remained largely unaffected. Interestingly, recalls in the scrambled condition tended to follow the original narrative order rather than the scrambled presentation order, pointing to a contextual reconstruction of the story in memory.
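The abstract's central quantitative claim is that recall and recognition performance scale linearly with narrative length. As a minimal sketch of how such a claim could be checked (the paper's actual analysis code is not shown here; the function name and the numbers below are illustrative assumptions, not data from the study), one could regress per-narrative performance on length:

```python
import numpy as np
from scipy import stats

def fit_linear_scaling(lengths, scores):
    """Fit a linear model of memory performance vs. narrative length.

    lengths: narrative lengths (e.g., number of words or clauses)
    scores:  per-narrative mean recall or recognition performance
    Returns the slope, intercept, and Pearson r of the linear fit.
    """
    result = stats.linregress(lengths, scores)
    return result.slope, result.intercept, result.rvalue

# Illustrative usage with synthetic placeholder values (replace with
# experimentally measured lengths and scores):
lengths = np.array([20, 50, 100, 200])      # hypothetical narrative lengths
recall = np.array([8.0, 19.5, 41.0, 79.0])  # hypothetical recall scores
slope, intercept, r = fit_linear_scaling(lengths, recall)
print(f"slope={slope:.2f}, intercept={intercept:.2f}, r={r:.3f}")
```

An r value close to 1 under such a fit would be consistent with the linear-scaling claim; the paper's own pipeline presumably involves additional steps (e.g., scoring free recall against the narrative) that are not reproduced in this sketch.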
Publisher
Cold Spring Harbor Laboratory