Affiliation:
1. Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
Abstract
Sighted blindfolded individuals can successfully classify basic facial expressions of emotion (FEEs) by manually exploring simple 2-D raised-line drawings (Lederman et al, 2008, IEEE Transactions on Haptics 1 27–38). We assessed the effect of training on classification accuracy with sixty sighted blindfolded participants (experiment 1) and with three adventitiously blind participants (experiment 2). We further investigated whether the underlying learning process(es) constituted token-specific learning and/or generalization. A hybrid learning paradigm comprising pre/post and old/new test comparisons was used. For both participant groups, classification accuracy for old (ie trained) drawings markedly increased over study trials (mean improvement = 76% and 88%, respectively). Response time (RT) also decreased, by a mean of 30% for the sighted and 31% for the adventitiously blind. Learning was mostly token-specific, but some generalization was also observed in both groups. With training, the sighted classified novel drawings of all six FEEs faster (mean RT decrease = 20%); accuracy also improved significantly (mean improvement = 20%), but this gain was restricted to two FEEs (anger and sadness). Two of the three adventitiously blind participants classified new drawings more accurately (mean improvement = 30%); however, RTs for this group did not reflect generalization. Given the limited number of blind participants, our results tentatively suggest that adventitiously blind individuals learn to haptically classify FEEs as well as, or even better than, sighted persons.
Subject
Artificial Intelligence, Sensory Systems, Experimental and Cognitive Psychology, Ophthalmology
Cited by
5 articles.