Abstract
Constructing more expressive ansätze has been a primary focus of quantum Monte Carlo, aimed at more accurate ab initio calculations. However, with more powerful ansätze, e.g. the various recently developed models based on neural-network architectures, training becomes more difficult and expensive, which can have a counterproductive effect on the accuracy of the calculation. In this work, we propose to make use of the training data to perform empirical variance extrapolation when using neural-network ansätze in variational Monte Carlo. We show that this approach can speed up convergence and go beyond the limitations of the ansatz to obtain an improved estimate of the energy. Moreover, variance extrapolation greatly enhances error cancellation, resulting in significantly improved relative energies, which are key to chemistry and physics problems.
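As a rough illustration of the variance-extrapolation idea described above (a minimal sketch, not the authors' actual implementation), the standard zero-variance relation E ≈ E_exact + k·σ² suggests fitting the energies and variances recorded during training and extrapolating to σ² = 0; the numbers below are made-up placeholders, not data from the paper.

```python
# Minimal sketch of empirical variance extrapolation (illustrative only).
# For an approximate ansatz, E ~ E_exact + k * sigma^2, so a linear fit of
# energy against variance, extrapolated to sigma^2 = 0, estimates the exact energy.
import numpy as np

# Hypothetical (variance, energy) pairs recorded at several points during VMC training.
variances = np.array([0.080, 0.055, 0.034, 0.021, 0.015])            # Var[E_L] (Ha^2)
energies  = np.array([-75.912, -75.927, -75.940, -75.948, -75.952])  # <E_L> (Ha)

# Least-squares linear fit E = k * sigma^2 + E0; the intercept E0 is the
# zero-variance (extrapolated) energy estimate.
k, e0 = np.polyfit(variances, energies, deg=1)
print(f"slope k = {k:.3f} Ha^-1, extrapolated energy E0 = {e0:.4f} Ha")
```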
Funder
Strategic Priority Research Program of Chinese Academy of Sciences
National Natural Science Foundation of China
Cited by
4 articles.