Author:
Jonathan Ferrer-Mestres, Thomas G. Dietterich, Olivier Buffet, Iadine Chadès
Abstract
In conservation of biodiversity, natural resource management and behavioural ecology, stochastic dynamic programming and its mathematical framework, Markov decision processes (MDPs), are used to inform sequential decision-making under uncertainty. Models and solutions of Markov decision problems should be interpretable to derive useful guidance for managers and applied ecologists. However, MDP solutions with thousands of states are often difficult to understand. Solutions that are difficult to interpret are unlikely to be applied, and thus we are missing an opportunity to improve decision-making. One way of increasing interpretability is to decrease the number of states.

Building on recent artificial intelligence advances, we introduce a novel approach that computes more compact representations of MDP models and solutions as an attempt at improving interpretability. This approach reduces the number of states to at most K while minimising the loss of performance relative to the original, larger model. The reduced MDP is called a K-MDP. We present an algorithm to compute K-MDPs and assess its performance on three case studies of increasing complexity from the literature. We provide the code as a MATLAB package along with a set of illustrative problems.

We found that K-MDPs can achieve a substantial reduction of the number of states with a small loss of performance for all case studies. For example, for a conservation problem involving Northern Abalone and Sea Otters, we reduced the number of states from 819 to 5 while incurring a loss of performance of only 1%. For a dynamic reserve selection problem with seven dimensions, an impressive reduction in the number of states was achieved, but interpreting the optimal solutions remained challenging.

Modelling problems as Markov decision processes requires experience. While several models may represent the same problem, reducing the number of states is likely to make solutions and models more interpretable and facilitate the extraction of meaningful recommendations. We hope that this approach will contribute to the uptake of stochastic dynamic programming applications and stimulate further research to increase the interpretability of stochastic dynamic programming solutions.
Publisher:
Cold Spring Harbor Laboratory