Abstract
Reservoir computing (RC) can efficiently process time-series data by mapping the input signal into a high-dimensional space via a randomly connected recurrent neural network (RNN), referred to as a reservoir. The high-dimensional representation of time-series data in the reservoir simplifies subsequent learning tasks. Although this simple architecture enables fast learning and facile physical implementation, its learning performance is inferior to that of other state-of-the-art RNN models. In this study, to improve the learning ability of RC, we propose self-modulated RC (SM-RC), which extends RC with a self-modulation mechanism. SM-RC can perform attention tasks in which input information is retained or discarded depending on the input signal. We find that a chaotic state can emerge as a result of learning in SM-RC. Furthermore, we demonstrate that SM-RC outperforms RC in the NARMA and Lorenz-model tasks. Because SM-RC requires only two additional gates, it is as amenable to physical implementation as RC, thereby providing a direction for realizing edge artificial intelligence.
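To make the architecture concrete, the following is a minimal sketch of a reservoir update with self-modulation gates. The gate form (scalar factors `a_in` and `a_rec` multiplying the input and recurrent terms), the weight scales, and all variable names are illustrative assumptions, not the paper's exact equations; in plain RC both factors are fixed at 1, whereas in SM-RC they would be produced by the two learned gates described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

N, D = 100, 1  # reservoir size and input dimension (illustrative choices)
W_in = rng.uniform(-0.1, 0.1, (N, D))        # random, untrained input weights
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # random, untrained recurrent weights


def step(x, u, a_in=1.0, a_rec=1.0):
    """One reservoir update.

    Plain RC: a_in = a_rec = 1 (no modulation).
    SM-RC (hypothetical form): a_in and a_rec come from two learned
    gates that modulate input and recurrent strength based on the signal.
    """
    return np.tanh(a_rec * (W @ x) + a_in * (W_in @ u))


# Drive the reservoir with a toy input signal.
x = np.zeros(N)
for t in range(10):
    u = np.array([np.sin(0.1 * t)])
    x = step(x, u)
```

In RC, only a linear readout trained on the reservoir states `x` is learned; the sketch above leaves `W` and `W_in` random, which is what makes the approach cheap to train and to implement physically.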
Funder
MEXT | Japan Science and Technology Agency
Secom Science and Technology Foundation
MEXT | Japan Society for the Promotion of Science
Japan Agency for Medical Research and Development
Institute of AI and Beyond of UTokyo
The International Research Center for Neurointelligence (WPI-IRCN) at The University of Tokyo Institutes for Advanced Study
Publisher
Springer Science and Business Media LLC
Subject
General Physics and Astronomy