Authors
Machidon, Alina L.; Pejović, Veljko
Abstract
In this paper we present a novel data-driven subsampling method that can be seamlessly integrated into any neural network architecture to identify the most informative subset of samples within the original acquisition domain, for a variety of tasks that rely on deep learning inference from sampled signals. In contrast to existing methods, which require signal transformation into a sparse basis and expensive signal reconstruction as an intermediate step, and which support only a single predefined sampling rate, our approach allows the sampling–inference pipeline to adapt to multiple sampling rates directly in the original signal domain. The key innovations enabling such operation are a custom subsampling layer and a novel training mechanism. Extensive experiments with four data sets and four different network architectures demonstrate a simple yet powerful sampling strategy that allows a given network to be used efficiently at any sampling rate, with inference accuracy degrading smoothly and gradually as the sampling rate is reduced. Experimental comparison with state-of-the-art sparse sensing and learning techniques shows competitive inference accuracy at different sampling rates, coupled with a significant improvement in computational efficiency and the crucial ability to operate at arbitrary sampling rates without retraining.
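The abstract refers to a custom subsampling layer that selects informative samples directly in the acquisition domain and can be queried at different sampling rates. The following minimal PyTorch sketch illustrates one plausible shape such a layer could take; the class name ScoreSubsample, the per-position importance-score parameterization, and the masking strategy are illustrative assumptions and not the layer or training mechanism described in the paper.

```python
# A minimal sketch of a learnable subsampling layer, assuming a per-position
# importance-score parameterization; this is NOT the paper's actual layer or
# training mechanism. The name ScoreSubsample and all parameters are illustrative.
import torch
import torch.nn as nn


class ScoreSubsample(nn.Module):
    """Keep only the k highest-scoring acquisition-domain samples of the input."""

    def __init__(self, signal_length: int):
        super().__init__()
        # One learnable importance score per sample position in the original domain.
        self.scores = nn.Parameter(torch.randn(signal_length))

    def forward(self, x: torch.Tensor, sampling_rate: float) -> torch.Tensor:
        # x: (batch, signal_length). Zero out all but the k top-scoring positions,
        # so the downstream network always receives an input of fixed length.
        k = max(1, int(sampling_rate * x.shape[-1]))
        top_idx = torch.topk(self.scores, k).indices
        mask = torch.zeros_like(self.scores)
        mask[top_idx] = 1.0
        # Note: hard top-k selection is not differentiable with respect to the
        # scores; an actual training mechanism would need a relaxation of this step.
        return x * mask


if __name__ == "__main__":
    layer = ScoreSubsample(signal_length=128)
    x = torch.randn(4, 128)
    # The same layer can be queried at several sampling rates without retraining.
    for rate in (0.5, 0.25, 0.1):
        y = layer(x, sampling_rate=rate)
        print(f"rate={rate}: kept {(y[0] != 0).sum().item()} of {x.shape[-1]} samples")
```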
Funder
Javna Agencija za Raziskovalno Dejavnost RS (Slovenian Research Agency)
Publisher
Springer Science and Business Media LLC