Authors:
Ababei, Razvan V.; Ellis, Matthew O. A.; Vidamour, Ian T.; Devadasan, Dhilan S.; Allwood, Dan A.; Vasilaki, Eleni; Hayward, Thomas J.
Abstract
Machine learning techniques are commonly used to model complex relationships, but implementations on digital hardware are relatively inefficient due to poor matching between conventional computer architectures and the structures of the algorithms they are required to simulate. Neuromorphic devices, and in particular reservoir computing architectures, utilize the inherent properties of physical systems to implement machine learning algorithms and so have the potential to be much more efficient. In this work, we demonstrate that the dynamics of individual domain walls in magnetic nanowires are suitable for implementing the reservoir computing paradigm in hardware. We modelled the dynamics of a domain wall placed between two anti-notches in a nickel nanowire using both a 1D collective coordinates model and micromagnetic simulations. When driven by an oscillating magnetic field, the domain wall exhibits non-linear dynamics within the potential well created by the anti-notches that are analogous to those of the Duffing oscillator. We exploit the domain wall dynamics for reservoir computing by modulating the amplitude of the applied magnetic field to inject time-multiplexed input signals into the reservoir, and show how this allows us to perform machine learning tasks including the classification of (1) sine and square waves; (2) spoken digits; and (3) non-temporal 2D toy data and handwritten digits. Our work lays the foundation for the creation of nanoscale neuromorphic devices in which individual magnetic domain walls are used to perform complex data analysis tasks.
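The scheme the abstract describes can be illustrated numerically: a Duffing oscillator (a stand-in for the domain wall in the anti-notch potential well) is driven by a field whose amplitude is modulated by a time-multiplexed input stream, and the sampled oscillator states serve as reservoir features for a linear readout. The sketch below is a conceptual toy, not the paper's model; all parameter values (`delta`, `alpha`, `beta`, `omega`, `gamma0`, `scale`) are illustrative assumptions rather than the nickel-nanowire parameters used in the work.

```python
import numpy as np

def duffing_reservoir(inputs, delta=0.3, alpha=-1.0, beta=1.0,
                      omega=1.2, gamma0=0.35, scale=0.15,
                      steps_per_input=200, dt=0.01):
    """Toy reservoir: a driven Duffing oscillator
        x'' + delta*x' + alpha*x + beta*x**3 = gamma(t)*cos(omega*t),
    where the drive amplitude gamma(t) is modulated by the
    time-multiplexed input stream. Returns one sampled state
    (position, velocity) per input value, to be fed to a trained
    linear readout. Parameter values here are illustrative only.
    """
    x, v, t = 0.0, 0.0, 0.0
    states = []
    for u in inputs:
        gamma = gamma0 * (1.0 + scale * u)  # amplitude modulation by input
        for _ in range(steps_per_input):
            # explicit Euler step of the Duffing equation of motion
            accel = (gamma * np.cos(omega * t)
                     - delta * v - alpha * x - beta * x ** 3)
            v += accel * dt
            x += v * dt
            t += dt
        states.append((x, v))  # sample reservoir state after each symbol
    return np.array(states)
```

In a reservoir computing setup, only the readout is trained (e.g. by ridge regression on these sampled states); the non-linear dynamics themselves are fixed, which is what makes physical substrates such as the domain wall attractive.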
Funder
Leverhulme Trust
Engineering and Physical Sciences Research Council
Publisher
Springer Science and Business Media LLC
Cited by: 32 articles.