Abstract
We consider an infinite horizon optimal control problem for a pure jump Markov process X, taking values in a complete and separable metric space I, with noise-free partial observation. The observation process is defined as Yt = h(Xt), t ≥ 0, where h is a given map defined on I. The observation is noise-free in the sense that the only source of randomness is the process X itself. The aim is to minimize a discounted cost functional. In the first part of the paper we write down an explicit filtering equation and characterize the filtering process as a Piecewise Deterministic Process. In the second part, after transforming the original partially observed control problem into one with complete observation (the separated problem) by means of the filtering equation, we prove the equivalence of the original and separated problems through an explicit formula linking their respective value functions. The value function of the separated problem is also characterized as the unique fixed point of a suitably defined contraction mapping.
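The fixed-point characterization in the last sentence rests on the Banach fixed-point theorem: a discounted dynamic-programming operator is a contraction in the sup norm, so iterating it converges to the unique value function. As a hedged illustration only, the sketch below applies this idea to a toy discounted Bellman operator on a finite state space; the paper's actual operator acts on functions of filtering measures, and the names `bellman_operator` and `fixed_point` are our own.

```python
import numpy as np

# Toy illustration of the contraction fixed-point argument.
# NOT the paper's operator: here T acts on vectors over a finite
# state space, with discount factor beta < 1, so that
# ||T v - T w||_inf <= beta * ||v - w||_inf  (a contraction),
# and Picard iteration converges to the unique fixed point.

def bellman_operator(v, cost, P, beta):
    """Apply (T v)(x) = min_a [ cost(x, a) + beta * sum_y P(a, x, y) v(y) ].

    cost has shape (n_states, n_actions);
    P has shape (n_actions, n_states, n_states), rows summing to 1.
    """
    q = cost + beta * np.einsum('axy,y->xa', P, v)
    return q.min(axis=1)

def fixed_point(cost, P, beta, tol=1e-10, max_iter=10_000):
    """Iterate T from v = 0 until the sup-norm increment is below tol."""
    v = np.zeros(cost.shape[0])
    for _ in range(max_iter):
        v_new = bellman_operator(v, cost, P, beta)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v
```

For instance, with unit running cost, a single action, uniform transitions over two states, and beta = 0.9, the iteration converges to the constant value 1/(1 - 0.9) = 10 in each state.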
Funder
Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni
Ministero dell’Istruzione, dell’Università e della Ricerca
Subject
Computational Mathematics, Control and Optimization, Control and Systems Engineering
Cited by
5 articles.