Abstract
Both biological and artificial neural networks inherently balance performance with operational cost, which characterizes their computational abilities. Typically, an efficient neuromorphic neural network is one that learns representations that reduce the redundancy and dimensionality of its input. For instance, in sparse coding (SC), representations derived from natural images are heterogeneous, both in their sampling of input features and in the variance of those features. Here, we focused on this notion and sought correlations between the structure of natural images, particularly oriented features, and their corresponding sparse codes. We show that representing input features across multiple levels of variance substantially improves the sparseness and resilience of sparse codes, at the cost of reconstruction performance. This echoes the structure of the model's input, allowing it to account for the heterogeneously aleatoric structures of natural images. We demonstrate that learning kernels from natural images produces heterogeneity by balancing approximate and dense representations, which improves all reconstruction metrics. Using a parametrized control of kernel heterogeneity in a convolutional SC algorithm, we show that heterogeneity emphasizes sparseness, while homogeneity improves representation granularity. In a broader context, this encoding strategy can serve as input to deep convolutional neural networks. We show that such variance-encoded sparse image datasets enhance computational efficiency, underscoring the benefits of kernel heterogeneity for leveraging naturalistic, variable input structures, with possible applications for improving the throughput of neuromorphic hardware.
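To make the central manipulation concrete, the following is a minimal sketch, not the authors' implementation, of comparing a homogeneous versus a heterogeneous oriented-kernel bank within a simple convolutional sparse coding loop. It assumes numpy and scipy; the helper names `gabor` and `ista_csc`, the kernel parameters, and the random "image" are all hypothetical stand-ins for the paper's learned kernels and natural-image inputs.

```python
# Hypothetical sketch: convolutional sparse coding (plain ISTA) with Gabor banks
# whose Gaussian-envelope variances are either homogeneous or heterogeneous.
import numpy as np
from scipy.signal import fftconvolve

def gabor(size, sigma, theta, freq=0.25):
    """Oriented Gabor kernel with a Gaussian envelope of standard deviation `sigma`."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)
    return g / np.linalg.norm(g)

def ista_csc(image, kernels, lam=0.1, n_iter=100):
    """ISTA for argmin_a 0.5 * ||image - sum_k conv(a_k, D_k)||^2 + lam * ||a||_1."""
    # Crude Lipschitz bound via the kernels' Fourier transforms, to pick a safe step size.
    L = np.max(sum(np.abs(np.fft.fft2(k, image.shape)) ** 2 for k in kernels))
    step = 1.0 / L
    coeffs = [np.zeros_like(image) for _ in kernels]
    for _ in range(n_iter):
        recon = sum(fftconvolve(a, k, mode="same") for a, k in zip(coeffs, kernels))
        resid = image - recon
        for i, k in enumerate(kernels):
            grad = fftconvolve(resid, k[::-1, ::-1], mode="same")  # correlation = adjoint of conv
            a = coeffs[i] + step * grad
            coeffs[i] = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold
    return coeffs

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))  # stand-in for a whitened natural-image patch

thetas = np.linspace(0, np.pi, 4, endpoint=False)
homogeneous = [gabor(15, sigma=2.0, theta=t) for t in thetas]                       # single envelope variance
heterogeneous = [gabor(15, sigma=s, theta=t) for t in thetas for s in (1.0, 2.0, 4.0)]  # spread of variances

for name, bank in [("homogeneous", homogeneous), ("heterogeneous", heterogeneous)]:
    coeffs = ista_csc(image, bank)
    active = np.mean([np.mean(a != 0) for a in coeffs])
    print(f"{name}: {len(bank)} kernels, fraction of active coefficients = {active:.3f}")
```

Under these assumptions, the fraction of active coefficients gives a rough proxy for the sparseness comparison described above; the paper's actual experiments use learned kernels and natural images rather than this toy setup.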