SHAP value-based ERP analysis (SHERPA): Increasing the sensitivity of EEG signals with explainable AI methods

Authors:

Sophia Sylvester, Merle Sagehorn, Thomas Gruber, Martin Atzmueller, Benjamin Schöne

Abstract

Conventionally, event-related potential (ERP) analysis relies on the researcher to identify the sensors and time points at which an effect is expected. However, this approach is prone to bias and may limit the ability to detect unexpected effects or to investigate the full range of the electroencephalography (EEG) signal. Data-driven approaches circumvent this limitation; however, the multiple comparison problem and the statistical corrections it requires affect both the sensitivity and the specificity of the analysis. In this study, we present SHERPA, a novel approach based on explainable artificial intelligence (XAI) designed to provide the researcher with a straightforward and objective method for finding relevant latency ranges and electrodes. SHERPA comprises a convolutional neural network (CNN) for classifying the conditions of the experiment and SHapley Additive exPlanations (SHAP) as a post hoc explainer for identifying the important temporal and spatial features. A classical EEG face perception experiment is employed to validate the approach by comparing it with the established researcher- and data-driven approaches. In line with these, SHERPA identified an occipital cluster close to the expected temporal coordinates of the N170 effect. Most importantly, SHERPA allows the relevance of an ERP for a psychological mechanism to be quantified by calculating an "importance score". On this basis, SHERPA suggests the presence of a negative selection process at both early and later stages of processing. In conclusion, our new method not only offers an analysis approach suitable for situations with limited prior knowledge of the effect in question, but also increased sensitivity capable of distinguishing neural processes with high precision.
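
To make the described pipeline concrete, the sketch below illustrates a SHERPA-style analysis in Python with PyTorch and the shap library. The epoch dimensions, CNN architecture, choice of explainer variant (GradientExplainer), and the aggregation of absolute SHAP values into an "importance score" are all illustrative assumptions; the abstract specifies only that a CNN and SHAP are used.

```python
# Minimal, hypothetical sketch of a SHERPA-style analysis. Shapes, layers, and
# the importance-score definition are placeholders, not the authors' exact method.
import numpy as np
import torch
import torch.nn as nn
import shap

N_CHANNELS, N_TIMES = 64, 256                        # assumed: 64 electrodes, 256 samples/epoch

class EpochCNN(nn.Module):
    """Small CNN classifying single-trial EEG epochs (batch, 1, channels, time)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 16)),            # temporal filters
            nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=(N_CHANNELS, 1)),   # spatial filters across electrodes
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=(1, 8)),
            nn.Flatten(),
            nn.Linear(16 * 30, 2),                           # (256 - 16 + 1) // 8 = 30 time bins
        )

    def forward(self, x):
        return self.net(x)

# Placeholder random data standing in for preprocessed epochs of two conditions;
# in practice the CNN would first be trained on real, labelled epochs.
X = torch.randn(200, 1, N_CHANNELS, N_TIMES)
model = EpochCNN().eval()

# Post hoc explanation. GradientExplainer is one SHAP variant that works with
# arbitrary differentiable PyTorch models; the paper specifies only "SHAP".
explainer = shap.GradientExplainer(model, X[:100])   # background distribution
shap_vals = explainer.shap_values(X[100:120])

# Older shap versions return a list with one array per class; newer ones may
# stack classes along a trailing axis. Take the attributions for class 0.
sv = shap_vals[0] if isinstance(shap_vals, list) else shap_vals[..., 0]

# Aggregate into a channel-by-time relevance map; the per-electrode "importance
# score" here (mean absolute SHAP value) is an assumed, illustrative definition.
relevance = np.abs(sv).mean(axis=0).squeeze()        # shape: (N_CHANNELS, N_TIMES)
importance = relevance.mean(axis=1)                  # one score per electrode
peak_channel = int(importance.argmax())
peak_sample = int(relevance[peak_channel].argmax())  # latency (in samples) of peak relevance
print(f"most relevant electrode: {peak_channel}, peak latency sample: {peak_sample}")
```

In a relevance map of this kind, a contiguous cluster of high attribution over occipito-temporal electrodes around 170 ms post-stimulus would correspond to the N170 effect the authors recovered.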

Funder

MWK Niedersachsen and the VolkswagenStiftung

Publisher

Springer Science and Business Media LLC

Cited by 1 article.