Abstract
Over the last decade, quantum machine learning has provided fascinating and fundamental improvements to supervised, unsupervised and reinforcement learning (RL). In RL, a so-called agent is challenged to solve a task given by some environment. The agent learns to solve the task by exploring the environment and exploiting the rewards it receives from it. For some classical task environments, an analogue quantum environment can be constructed which allows rewards to be found quadratically faster by applying quantum algorithms. In this paper, we analytically study the behavior of a hybrid agent which combines this quadratic speedup in exploration with the policy update of a classical agent. This combination leads the hybrid agent to learn faster than the classical agent. We demonstrate that if the classical agent needs on average ⟨J⟩ rewards and ⟨T⟩cl epochs to learn how to solve the task, the hybrid agent will take
⟨T⟩q ⩽ αs αo √(⟨T⟩cl ⟨J⟩) epochs on average. Here, αs and αo denote constants which depend on details of the quantum search and are independent of the problem size. Additionally, we prove that if the environment allows for at most αo kmax sequential coherent interactions, e.g. due to noise effects, an improvement given by ⟨T⟩q ≈ αo ⟨T⟩cl/(4 kmax) is still possible.
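To make the quoted scalings concrete, the following minimal numerical sketch (not from the paper; all parameter values are hypothetical) evaluates both bounds for an example problem. The function names hybrid_bound and noise_limited_bound are illustrative, and the values of αs, αo, ⟨J⟩ and kmax are assumptions chosen only to show the orders of magnitude.

import math

def hybrid_bound(T_cl, J, alpha_s, alpha_o):
    # Bound quoted in the abstract: <T>_q <= alpha_s * alpha_o * sqrt(<T>_cl * <J>)
    return alpha_s * alpha_o * math.sqrt(T_cl * J)

def noise_limited_bound(T_cl, k_max, alpha_o):
    # Approximate epoch count when at most alpha_o * k_max sequential
    # coherent interactions are possible: <T>_q ~ alpha_o * <T>_cl / (4 * k_max)
    return alpha_o * T_cl / (4 * k_max)

# Hypothetical numbers, chosen only to illustrate the scaling.
T_cl, J = 1_000_000, 100      # classical agent: 10^6 epochs, 100 rewards needed
alpha_s, alpha_o = 2.0, 1.5   # search-dependent constants (assumed values)
k_max = 10                    # assumed limit on sequential coherent interactions

print(f"classical           : {T_cl:>8.0f} epochs")
print(f"hybrid bound        : {hybrid_bound(T_cl, J, alpha_s, alpha_o):>8.0f} epochs")
print(f"noise-limited bound : {noise_limited_bound(T_cl, k_max, alpha_o):>8.0f} epochs")

With these assumed numbers, the hybrid bound evaluates to 3.0 × 10^4 epochs and the noise-limited bound to 3.75 × 10^4 epochs, i.e. even a modest coherence limit still leaves a large improvement over the 10^6 classical epochs.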
Subject
General Physics and Astronomy
Cited by
2 articles.