Author:
Nguyen Nga T. T., Kenyon Garrett T., Yoon Boram
Abstract
We propose a regression algorithm that utilizes a learned dictionary optimized for sparse inference on a D-Wave quantum annealer. In this regression algorithm, we concatenate the independent and dependent variables into a combined vector and encode the high-order correlations between them in a dictionary optimized for sparse reconstruction. On a test dataset, the dependent variable is initialized to its average value, and a sparse reconstruction of the combined vector is then obtained in which the dependent variable is typically shifted closer to its true value, as in a standard inpainting or denoising task. Here, a quantum annealer, which can presumably exploit a fully entangled initial state to better explore the complex energy landscape, is used to solve the highly non-convex sparse coding optimization problem. The regression algorithm is demonstrated on lattice quantum chromodynamics simulation data using a D-Wave 2000Q quantum annealer, and good prediction performance is achieved. The regression test is performed with six different numbers of fully connected logical qubits, between 20 and 64, and the scaling results indicate that a larger number of qubits gives better prediction accuracy.
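The following Python sketch illustrates the regression-by-sparse-inference idea summarized above, under stated assumptions; it is not the authors' implementation. The dictionary here is random rather than learned, the sizes and penalty (n_indep, n_atoms, lam) are arbitrary illustrative values, and the QUBO encoding of the binary sparse coding problem is minimized by exhaustive search as a classical stand-in for the D-Wave annealer.

# Minimal classical sketch of regression via sparse inpainting of a combined
# (independent, dependent) vector. Names, sizes, and the random dictionary are
# illustrative assumptions; the paper uses a learned dictionary and a D-Wave 2000Q.
import itertools
import numpy as np

rng = np.random.default_rng(0)

n_indep = 8           # number of independent variables (assumed for illustration)
n_vars = n_indep + 1  # combined vector = independent variables + 1 dependent variable
n_atoms = 12          # number of dictionary atoms / logical qubits (paper uses 20-64)
lam = 0.1             # L0 sparsity penalty (illustrative)

# Stand-in "learned" dictionary with unit-norm columns; in the paper it is
# optimized for sparse reconstruction of concatenated training vectors.
D = rng.normal(size=(n_vars, n_atoms))
D /= np.linalg.norm(D, axis=0)

def sparse_code_qubo(x, D, lam):
    """QUBO matrix for min_a ||x - D a||^2 + lam * sum(a), with a in {0,1}^n_atoms."""
    G = D.T @ D                    # atom-atom Gram matrix -> off-diagonal couplings
    Q = G.copy()
    # Diagonal absorbs the linear terms (a_i^2 = a_i for binary variables).
    np.fill_diagonal(Q, np.diag(G) - 2.0 * (D.T @ x) + lam)
    return Q

def solve_qubo_exhaustive(Q):
    """Brute-force QUBO solver: classical stand-in for the quantum annealer,
    feasible only for small n_atoms."""
    n = Q.shape[0]
    best_a, best_e = None, np.inf
    for bits in itertools.product((0, 1), repeat=n):
        a = np.array(bits, dtype=float)
        e = a @ Q @ a
        if e < best_e:
            best_a, best_e = a, e
    return best_a

def predict(x_indep, y_mean):
    """Inpaint the dependent variable: initialize it to the training average,
    sparsely reconstruct the combined vector, and read off the last entry."""
    x = np.concatenate([x_indep, [y_mean]])
    a = solve_qubo_exhaustive(sparse_code_qubo(x, D, lam))
    return (D @ a)[-1]

# Toy usage: predict the dependent variable for one test point.
x_test = rng.normal(size=n_indep)
print(predict(x_test, y_mean=0.0))

In the paper's setting, the dictionary would be trained on the concatenated training vectors and the QUBO would be submitted to the annealer instead of the exhaustive solver, but the inpainting step follows the same pattern: initialize the dependent entry to its training average and read off the reconstructed value.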
Funder
Department of Energy, Office of Science
Los Alamos National Laboratory, United States
Publisher
Springer Science and Business Media LLC
Cited by
5 articles.