Abstract
The lack of suitable assistive technology is a roadblock that prevents students and scientists who are blind or visually impaired (BVI) from advancing in careers in science, technology, engineering, and mathematics (STEM) fields. It is challenging for persons who are BVI to interpret the real-time visual scientific data commonly generated during laboratory experimentation, such as when performing light microscopy or spectrometry, or when observing chemical reactions. To address this problem, a real-time multimodal image perception system was developed that allows standard laboratory blood smear images to be perceived by BVI individuals through a combination of auditory, haptic, and vibrotactile feedback. These sensory feedback modalities convey visual information through alternative perceptual channels, creating a palette of multimodal sensory information. Two sets of image features of interest (primary and peripheral features) were used to characterize the images, and a Bayesian network was applied to model the causal relations between these two groups of features. Two methods were devised to match primary features with sensory modalities. Experimental results confirmed that this real-time approach produced higher accuracy in recognizing and analyzing objects within images than conventional tactile images.
Funder
National Institutes of Health Director's ARRA Pathfinder Award to Promote Diversity in the Scientific Workplace
State of Indiana through the Center for Paralysis Research at Purdue University
Purdue Discovery Park
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Science Applications, Human-Computer Interaction
Cited by
6 articles.