Abstract
Quantum computing promises advantages over classical computing for many problems. Nevertheless, noise in quantum devices prevents most quantum algorithms from achieving a quantum advantage. Quantum error mitigation provides a variety of protocols to handle such noise using minimal qubit resources. While some of these protocols have been implemented in experiments with a few qubits, it remains unclear whether error mitigation will be effective in quantum circuits with tens to hundreds of qubits. In this paper, we apply statistical principles to quantum error mitigation and analyse the scaling behaviour of its intrinsic error. We find that the error increases linearly, $O(\epsilon N)$, with the gate number $N$ before mitigation and sublinearly, $O(\epsilon' N^{\gamma})$, after mitigation, where $\gamma \approx 0.5$, $\epsilon$ is the error rate of a quantum gate, and $\epsilon'$ is a protocol-dependent factor. The $\sqrt{N}$ scaling is a consequence of the law of large numbers, and it indicates that error mitigation can suppress the error by a larger factor in larger circuits. To obtain this result, we propose importance Clifford sampling as a key technique for error mitigation in large circuits.
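To illustrate the claimed scalings, the following minimal Python sketch compares the unmitigated error, which grows as $\epsilon N$, with the mitigated error, which grows as $\epsilon' N^{\gamma}$ with $\gamma \approx 0.5$. The numerical values of $\epsilon$ and $\epsilon'$ below are assumed for illustration only and are not taken from the paper.

```python
# Illustrative sketch (assumed parameters, not from the paper): compare the
# unmitigated error scaling O(eps * N) with the mitigated scaling
# O(eps_prime * N**gamma), gamma ~ 0.5.

def unmitigated_error(eps: float, n_gates: int) -> float:
    """Intrinsic error before mitigation, growing linearly in the gate number."""
    return eps * n_gates

def mitigated_error(eps_prime: float, n_gates: int, gamma: float = 0.5) -> float:
    """Intrinsic error after mitigation, growing sublinearly as N**gamma."""
    return eps_prime * n_gates ** gamma

# Assumed, purely illustrative parameters: a per-gate error rate of 1e-3 and
# a protocol-dependent factor of the same order.
eps, eps_prime = 1e-3, 1e-3
for n in (10, 100, 1000):
    before = unmitigated_error(eps, n)
    after = mitigated_error(eps_prime, n)
    # The suppression factor (eps / eps_prime) * sqrt(N) grows with circuit size,
    # reflecting the claim that mitigation helps more in larger circuits.
    print(f"N={n:5d}  before={before:.3e}  after={after:.3e}  factor={before / after:.1f}")
```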
Funder
National Natural Science Foundation of China
National Science Foundation of China | NSAF Joint Fund
Publisher
Springer Science and Business Media LLC
Subject
Computational Theory and Mathematics, Computer Networks and Communications, Statistical and Nonlinear Physics, Computer Science (miscellaneous)
Cited by
11 articles.