Abstract
Unsupervised machine learning models build an internal representation of their training data without the need for explicit human guidance or feature engineering. This learned representation provides insights into which features of the data are relevant for the task at hand. In the context of quantum physics, training models to describe quantum states without human intervention offers a promising approach to gaining insight into how machines represent complex quantum states. The ability to interpret the learned representation may offer a new perspective on non-trivial features of quantum systems and their efficient representation. We train a generative model on two-qubit density matrices generated by a parameterized quantum circuit. In a series of computational experiments, we investigate the learned representation of the model and its internal understanding of the data. We observe that the model learns an interpretable representation which relates the quantum states to their underlying entanglement characteristics. In particular, our results demonstrate that the latent representation of the model is directly correlated with the entanglement measure concurrence. The insights from this study represent proof of concept toward interpretable machine learning of quantum states. Our approach offers insight into how machines learn to represent small-scale quantum systems autonomously.
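The entanglement measure referenced above, concurrence, has a closed form for two-qubit density matrices (the Wootters formula). The following is a minimal sketch, not the authors' code, showing how concurrence can be computed for the kind of states the model is trained on; the function name and the Bell-state example are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): Wootters concurrence for a
# two-qubit density matrix, the entanglement measure the abstract reports as
# correlated with the model's latent representation.
import numpy as np

def concurrence(rho: np.ndarray) -> float:
    """Wootters concurrence C(rho) = max(0, l1 - l2 - l3 - l4)."""
    sigma_y = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sigma_y, sigma_y)
    rho_tilde = yy @ rho.conj() @ yy                    # spin-flipped state
    eigvals = np.linalg.eigvals(rho @ rho_tilde)
    # Square roots of the (real, non-negative) eigenvalues, sorted descending.
    lam = np.sort(np.sqrt(np.abs(eigvals.real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Example: a Bell state is maximally entangled (concurrence 1),
# while any product state has concurrence 0.
bell = np.zeros((4, 4), dtype=complex)
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
print(round(concurrence(bell), 3))  # -> 1.0
```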
Funder
Dutch National Growth Fund
Subject
Artificial Intelligence,Human-Computer Interaction,Software
Cited by
1 article.