Abstract
Physical neuromorphic computing, exploiting the complex dynamics of physical systems, has seen rapid advancements in sophistication and performance. Physical reservoir computing, a subset of neuromorphic computing, faces limitations due to its reliance on single systems. This constrains output dimensionality and dynamic range, limiting performance to a narrow range of tasks. Here, we engineer a suite of nanomagnetic array physical reservoirs and interconnect them in parallel and series to create a multilayer neural network architecture. The output of one reservoir is recorded, scaled and virtually fed as input to the next reservoir. This networked approach increases output dimensionality, internal dynamics and computational performance. We demonstrate that a physical neuromorphic system can achieve an overparameterised state, facilitating meta-learning on small training sets and yielding strong performance across a wide range of tasks. Our approach’s efficacy is further demonstrated through few-shot learning, where the system rapidly adapts to new tasks.
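The interconnection scheme described above (record one reservoir's output, rescale it, feed it as input to the next, and concatenate the states of all layers to raise output dimensionality) can be sketched in software. The code below is a minimal illustrative sketch, not the authors' method: it stands in for each physical nanomagnetic reservoir with a small echo-state-style network, and the names `SimReservoir` and `series_network`, along with all parameter values, are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class SimReservoir:
    """Software stand-in for one physical reservoir: a small
    leaky echo-state network with tanh node dynamics."""
    def __init__(self, n_in, n_nodes=40, leak=0.5):
        self.W_in = rng.uniform(-1, 1, (n_nodes, n_in))
        W = rng.uniform(-0.5, 0.5, (n_nodes, n_nodes))
        # Rescale recurrent weights to spectral radius 0.9 so the
        # reservoir has fading memory of past inputs.
        self.W = 0.9 * W / max(abs(np.linalg.eigvals(W)))
        self.leak = leak

    def run(self, u):
        """u: (T, n_in) input series -> (T, n_nodes) state series."""
        x = np.zeros(self.W.shape[0])
        states = []
        for ut in u:
            x = (1 - self.leak) * x + self.leak * np.tanh(
                self.W_in @ ut + self.W @ x)
            states.append(x.copy())
        return np.array(states)

def series_network(reservoirs, u, scale=1.0):
    """Chain reservoirs in series: each layer's recorded output is
    rescaled and fed as input to the next, and every layer's states
    are kept, increasing the dimensionality of the final readout."""
    signal = u
    all_states = []
    for r in reservoirs:
        states = r.run(signal)
        all_states.append(states)
        # Normalise and rescale before injecting into the next layer.
        signal = scale * (states - states.mean()) / (states.std() + 1e-9)
    return np.hstack(all_states)
```

A linear readout (e.g. ridge regression) would then be trained on the concatenated state matrix, as is standard in reservoir computing; with two 40-node layers, a length-T input series yields a (T, 80) feature matrix.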
Funder
RCUK | Engineering and Physical Sciences Research Council
Royal Academy of Engineering
Publisher
Springer Science and Business Media LLC
References (64 articles)