Following the analysis given by Alan Turing in 1951, one must expect that AI capabilities will eventually exceed those of humans across a wide range of real-world decision-making scenarios. Should this be a cause for concern, as Turing, Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue that the problem is real: we have to work out how to design AI systems that are far more powerful than ourselves while ensuring that they never have power over us. I believe the technical aspects of this problem are solvable. Whereas the standard model of AI proposes to build machines that optimize known, exogenously specified objectives, a preferable approach would be to build machines that are of provable benefit to humans. I introduce assistance games as a formal class of problems whose solution, under certain assumptions, has the desired property.
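For concreteness, a minimal sketch of the formal structure follows, using the cooperative inverse reinforcement learning (CIRL) formulation under which assistance games are commonly defined; the specific notation here is illustrative rather than fixed by this abstract. An assistance game is a two-player game of partial information between a human and a machine:
\[
M \;=\; \bigl\langle\, S,\; \{A^{H}, A^{R}\},\; T,\; \Theta,\; R,\; P_{0},\; \gamma \,\bigr\rangle,
\]
where $S$ is the state space, $A^{H}$ and $A^{R}$ are the human's and the machine's action sets, $T(s' \mid s, a^{H}, a^{R})$ is the transition model, $\Theta$ parameterizes the reward function $R(s, a^{H}, a^{R}; \theta)$, $P_{0}$ is a prior over the initial state and $\theta$, and $\gamma$ is the discount factor. Both agents are rewarded according to the same $R$, but only the human observes $\theta$: the machine must act, and defer to the human, under explicit uncertainty about the objective it is helping to pursue, which is the structural feature that yields the desired property.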