Abstract
A fundamental challenge in neuroengineering is determining the proper input to a sensory system that yields the desired functional output. In neuroprosthetics, this process is known as sensory encoding, and it plays a crucial role in prosthetic devices that restore sensory perception in individuals with disabilities. For example, in visual prostheses, one key aspect of image encoding is down-sampling the images captured by a camera to a size matching the number of inputs and the resolution of the prosthesis. Here, we show that down-sampling an image using the inherent computation of the retinal network yields better performance than a learning-free down-sampling encoding. We validated a learning-based approach (the actor-model framework) that exploits the signal transformation from photoreceptors to retinal ganglion cells measured in explanted retinas. The actor-model framework generates down-sampled images that elicit, both in-silico and ex-vivo, a neuronal response matching the response to the original images more reliably than a learning-free approach (i.e., pixel averaging) does. In addition, the actor-model learned that contrast is a crucial feature for effective down-sampling. This methodological approach could serve as a template for future image encoding strategies. Ultimately, it could be exploited to improve encoding strategies in visual prostheses or in other sensory prostheses, such as cochlear implants or limb prostheses.