Authors:
Jan Tönshoff, Martin Ritzert, Hinrikus Wolf, Martin Grohe
Abstract
Many combinatorial optimization problems can be phrased in the language of constraint satisfaction problems. We introduce a graph neural network architecture for solving such optimization problems. The architecture is generic: it works for all binary constraint satisfaction problems. Training is unsupervised, and it is sufficient to train on relatively small instances; the resulting networks perform well on much larger instances (at least 10 times larger). We experimentally evaluate our approach on a variety of problems, including Maximum Cut and Maximum Independent Set. Despite being generic, our approach matches or surpasses most greedy and semidefinite-programming-based algorithms and sometimes even outperforms state-of-the-art heuristics designed for the specific problems.
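The unsupervised training signal mentioned in the abstract can be illustrated on Maximum Cut: relax each node's side of the cut to a probability and maximize the expected number of cut edges by gradient ascent, so no labeled solutions are needed. The sketch below is a hedged, minimal illustration of that idea in plain Python, not the paper's actual network architecture; the graph, step count, and learning rate are illustrative choices.

```python
import math
import random

def train_max_cut(edges, n, steps=500, lr=0.5, seed=0):
    """Maximize the expected cut size of a graph with n nodes.

    Each node i gets a logit z[i]; p[i] = sigmoid(z[i]) is the probability
    that node i lies on side 1 of the cut. The expected number of cut edges
    is sum over edges (u, v) of p[u]*(1-p[v]) + p[v]*(1-p[u]), which we
    ascend directly -- an unsupervised objective, no solved instances needed.
    """
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 0.1) for _ in range(n)]  # small random logits
    for _ in range(steps):
        p = [1.0 / (1.0 + math.exp(-zi)) for zi in z]  # soft assignments
        grad = [0.0] * n
        for u, v in edges:
            # d(expected cut)/dp[u] = sum over neighbours v of (1 - 2*p[v])
            grad[u] += 1.0 - 2.0 * p[v]
            grad[v] += 1.0 - 2.0 * p[u]
        for i in range(n):
            # chain rule through the sigmoid: dp/dz = p * (1 - p)
            z[i] += lr * grad[i] * p[i] * (1.0 - p[i])
    # Round the relaxed assignment to a discrete cut.
    return [int(zi > 0.0) for zi in z]

# 4-cycle: the optimum alternates sides and cuts all 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
sides = train_max_cut(edges, n=4)
cut = sum(sides[u] != sides[v] for u, v in edges)
```

On this bipartite example the relaxation polarizes into the alternating assignment; the same differentiable-relaxation principle extends to other binary constraint satisfaction problems by swapping in the appropriate per-constraint satisfaction probability.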
Funder
Deutsche Forschungsgemeinschaft
Cited by: 13 articles.