Affiliation:
1. Centrum Wiskunde & Informatica, Amsterdam 1098 XG, Netherlands;
2. Department of Mathematics, London School of Economics and Political Science, London WC2A 2AE, United Kingdom
Abstract
We present an accelerated, or “look-ahead,” version of the Newton–Dinkelbach method, a well-known technique for solving fractional and parametric optimization problems. This acceleration halves the Bregman divergence between the current iterate and the optimal solution within every two iterations. Using the Bregman divergence as a potential in conjunction with combinatorial arguments, we obtain strongly polynomial algorithms in three application domains. (i) For linear fractional combinatorial optimization, we show a convergence bound of O(m log m) iterations; the previous best bound was O(m^2 log m) by Wang, Yang, and Zhang from 2006. (ii) We obtain a strongly polynomial label-correcting algorithm for solving linear feasibility systems with two variables per inequality (2VPI). For a 2VPI system with n variables and m constraints, our algorithm runs in O(mn) iterations. Every iteration takes O(mn) time for general 2VPI systems and O(m + n log n) time for the special case of deterministic Markov decision processes (DMDPs). This extends and strengthens a previous result by Madani from 2002 that showed a weakly polynomial bound for a variant of the Newton–Dinkelbach method for solving DMDPs. (iii) We give a simplified variant of the parametric submodular function minimization result by Goemans, Gupta, and Jaillet from 2017. Funding: This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme [Grants 757481-ScaleOpt and 805241-QIP].
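To illustrate the underlying technique, below is a minimal Python sketch of the classic (unaccelerated) Newton–Dinkelbach iteration for linear fractional combinatorial optimization: maximizing (c·x)/(d·x) over a finite feasible set with d·x > 0. The explicit list `feasible` is a toy stand-in for the combinatorial maximization oracle and is purely an assumption for illustration; the paper's look-ahead acceleration and Bregman-divergence analysis are not reproduced here.

```python
from fractions import Fraction

def newton_dinkelbach(feasible, c, d):
    """Classic Newton-Dinkelbach iteration for maximizing (c.x)/(d.x)
    over a finite feasible set where d.x > 0 for every feasible x.
    `feasible` plays the role of the maximization oracle: here it is
    an explicit list of vectors, a toy stand-in for a combinatorial
    solver (e.g., a shortest-path or matching routine)."""
    dot = lambda u, v: sum(Fraction(a) * b for a, b in zip(u, v))
    x = feasible[0]
    delta = dot(c, x) / dot(d, x)       # ratio of the initial solution
    while True:
        # Oracle step: maximize the parametric objective c.x - delta * d.x,
        # i.e., evaluate f(delta) = max_x (c.x - delta * d.x).
        x = max(feasible, key=lambda y: dot(c, y) - delta * dot(d, y))
        gap = dot(c, x) - delta * dot(d, x)
        if gap == 0:                    # f(delta) = 0  =>  delta is the optimal ratio
            return delta, x
        delta = dot(c, x) / dot(d, x)   # Newton step: move to the new ratio

# Toy instance: three feasible 0/1 vectors.
feasible = [(1, 0, 1), (0, 1, 1), (1, 1, 0)]
c, d = (3, 1, 2), (2, 2, 1)
print(newton_dinkelbach(feasible, c, d))  # -> (Fraction(5, 3), (1, 0, 1))
```

The ratio delta strictly increases in each iteration until the parametric value f(delta) reaches zero, which is what the paper's Bregman-divergence potential argument sharpens: the look-ahead variant guarantees the divergence to the optimum halves every two iterations.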
Publisher
Institute for Operations Research and the Management Sciences (INFORMS)
Subject
Management Science and Operations Research, Computer Science Applications, General Mathematics
Cited by
2 articles.