Abstract
Objective:
Brain areas implicated in semantic memory can be damaged in patients with epilepsy (PWE). However, it is challenging to delineate semantic processing deficits from acoustic, linguistic, and other verbal aspects in current neuropsychological assessments. We developed a new Visual-based Semantic Association Task (ViSAT) to evaluate nonverbal semantic processing in PWE.
Method:
The ViSAT was adapted from similar predecessors (the Pyramids and Palm Trees test, PPT; the Camels and Cactus Test, CCT) and comprises 100 unique trials using real-life color pictures that avoid demographic, cultural, and other potential confounds. We obtained performance data from 23 PWE participants and 24 control participants (Control), along with crowdsourced normative data from 54 Amazon Mechanical Turk (MTurk) workers.
Results:
The ViSAT reached >90% consensus in 91.3% of trials, compared to 83.6% in the PPT and 82.9% in the CCT. A deep learning model demonstrated that visual features of the stimulus images (color, shape; i.e., non-semantic features) did not influence top answer choices (p = 0.577). The PWE group had lower accuracy than the Control group (p = 0.019). The PWE group also had longer response times than the Control group overall, and this difference was amplified during the semantic processing (trial answer) stage (both p < 0.001).
Conclusions:
This study demonstrated performance impairments in PWE that may reflect dysfunction of nonverbal semantic memory circuits, such as when seizure onset zones overlap with key semantic regions (e.g., the anterior temporal lobe). The ViSAT paradigm avoids common confounds, supports repeated/longitudinal administration, captures behavioral data, and is open-source; we therefore propose it as a strong alternative for clinical and research assessment of nonverbal semantic memory.
Publisher
Cambridge University Press (CUP)