Selective peripheral nerve recording using simulated human median nerve activity and convolutional neural networks

Authors

Taseen Jawad, Ryan G. L. Koh, José Zariffa

Abstract

Background: It is difficult to create intuitive methods of controlling prosthetic limbs, often resulting in abandonment. Peripheral nerve interfaces can be used to convert motor intent into commands to a prosthesis. The Extraneural Spatiotemporal Compound Action Potentials Extraction Network (ESCAPE-NET) is a convolutional neural network (CNN) that has previously been demonstrated to be effective at discriminating neural sources in rat sciatic nerves. ESCAPE-NET was designed to operate on data from multi-channel nerve cuff arrays and to use the resulting spatiotemporal signatures to classify individual naturally evoked compound action potentials (nCAPs) according to their source fascicles. The applicability of this approach to larger and more complex nerves is not well understood. To support future translation to humans, the objective of this study was to characterize the performance of this approach in a computational model of the human median nerve.

Methods: Using a cross-sectional immunohistochemistry image of a human median nerve, a finite-element model was generated and used to simulate extraneural recordings. ESCAPE-NET was used to classify nCAPs based on source location, for varying numbers of sources and noise levels. The performance of ESCAPE-NET was also compared to that of ResNet-50 and MobileNet-V2 in the context of classifying human nerve cuff data.

Results: Classification accuracy was found to be inversely related to the number of nCAP sources for ESCAPE-NET (3-class: 97.8% ± 0.1%; 10-class: 89.3% ± 5.4% in low-noise conditions; 3-class: 70.3% ± 0.1%; 10-class: 52.5% ± 0.3% in high-noise conditions). ESCAPE-NET overall outperformed both MobileNet-V2 (3-class: 96.5% ± 1.1%; 10-class: 84.9% ± 1.7% in low-noise conditions; 3-class: 86.0% ± 0.6%; 10-class: 41.4% ± 0.9% in high-noise conditions) and ResNet-50 (3-class: 71.2% ± 18.6%; 10-class: 40.1% ± 22.5% in low-noise conditions; 3-class: 81.3% ± 4.4%; 10-class: 31.9% ± 4.4% in high-noise conditions).

Conclusion: All three networks learned to differentiate nCAPs from different sources, as evidenced by performance well above chance in all cases. ESCAPE-NET showed the most robust performance, despite decreasing accuracy as the number of classes increased and as noise was varied. These results provide valuable translational guidelines for designing neural interfaces for human use.
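To make the classification setup concrete, the sketch below shows a minimal CNN that maps a spatiotemporal signature (a contact-by-time "image" recorded from a multi-contact nerve cuff) to one of several candidate source classes, in the spirit of the approach described in the abstract. The layer sizes, input dimensions (1 × 8 × 56 here), and class count are illustrative assumptions only, not the published ESCAPE-NET architecture.

```python
# Minimal sketch, assuming PyTorch and an 8 x 56 spatiotemporal signature per nCAP.
# This is NOT the published ESCAPE-NET architecture; it only illustrates the idea of
# classifying cuff-electrode spatiotemporal patterns by source location with a CNN.
import torch
import torch.nn as nn


class SpatiotemporalCNN(nn.Module):
    def __init__(self, n_classes: int = 10, in_shape=(1, 8, 56)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_shape[0], 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Infer the flattened feature size from a dummy forward pass.
        with torch.no_grad():
            n_feat = self.features(torch.zeros(1, *in_shape)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_feat, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),  # logits over candidate source locations
        )

    def forward(self, x):
        return self.classifier(self.features(x))


if __name__ == "__main__":
    # Example: a batch of 4 simulated nCAP signatures (1 channel, 8 x 56 samples).
    model = SpatiotemporalCNN(n_classes=10)
    dummy = torch.randn(4, 1, 8, 56)
    print(model(dummy).shape)  # torch.Size([4, 10])
```

In practice, such a network would be trained on labeled nCAP signatures (here, simulated from the finite-element model) and compared against off-the-shelf architectures such as ResNet-50 and MobileNet-V2, as the study does.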

Funder

Natural Sciences and Engineering Research Council of Canada

Publisher

Springer Science and Business Media LLC

Subject

Radiology, Nuclear Medicine and Imaging; Biomedical Engineering; General Medicine; Biomaterials; Radiological and Ultrasound Technology
