Author:
Gholinezhad Shima, Farina Dario, Dosen Strahinja, Dideriksen Jakob
Abstract
Bidirectional human–machine interfaces involve commands from the central nervous system to an external device and feedback characterizing the device state. Such feedback may be elicited by electrical stimulation of somatosensory nerves, where a task-relevant variable is encoded in stimulation amplitude or frequency. Recently, concurrent modulation of amplitude and frequency (multimodal encoding) was proposed. We hypothesized that feedback with multimodal encoding may effectively be processed by the central nervous system as two independent inputs encoded in amplitude and frequency, respectively, thereby increasing the quality of the state estimate in accordance with maximum-likelihood estimation. Using an adaptation paradigm, we tested this hypothesis during a grasp-force matching task in which subjects received electrotactile feedback encoding instantaneous force in amplitude, frequency, or both, in addition to their natural force feedback. The results showed that adaptations in grasp force with multimodal encoding could be accurately predicted as the integration of three independent inputs according to maximum-likelihood estimation: amplitude-modulated electrotactile feedback, frequency-modulated electrotactile feedback, and natural force feedback (r² = 0.73). These findings show that multimodal electrotactile feedback carries an intrinsic advantage for state-estimation accuracy with respect to single-variable modulation and suggest that this scheme should be the preferred strategy for bidirectional human–machine interfaces with electrotactile feedback.
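The maximum-likelihood integration model referenced in the abstract combines independent, noisy estimates of the same variable by weighting each in proportion to its reliability. As a minimal sketch (assuming independent Gaussian noise on each cue; the symbols and per-cue variances below are generic placeholders, not values reported in the paper), the combined grasp-force estimate from the amplitude-modulated cue (A), the frequency-modulated cue (F), and natural force feedback (N) would be

\[
\hat{F} = w_A \hat{F}_A + w_F \hat{F}_F + w_N \hat{F}_N,
\qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_A^2 + 1/\sigma_F^2 + 1/\sigma_N^2},
\]

with combined variance

\[
\sigma_{\hat{F}}^2 = \left(\frac{1}{\sigma_A^2} + \frac{1}{\sigma_F^2} + \frac{1}{\sigma_N^2}\right)^{-1}
\le \min\!\left(\sigma_A^2, \sigma_F^2, \sigma_N^2\right).
\]

Because the combined variance can never exceed that of the most reliable single cue, treating amplitude and frequency as two separate inputs predicts a more precise state estimate than either modulation alone, which is the intrinsic advantage the abstract describes.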
Funder
Danmarks Frie Forskningsfond
Publisher
Springer Science and Business Media LLC
Cited by
1 article.