We consider a class of elliptic partial differential equations (PDEs) that can be understood as the Euler–Lagrange equations of an associated convex optimization problem. Discretizing this optimization problem, we present a solution strategy based on the popular primal-dual hybrid gradient (PDHG) method: we reformulate the optimization as a saddle-point problem with a dual variable for the quadratic term, write down the PDHG iteration, and analytically eliminate the dual variable. The resulting scheme resembles explicit gradient descent; however, the eliminated dual variable reappears as a boosting term that substantially accelerates the scheme. We introduce the proposed strategy for a simple Laplace problem and then illustrate the technique on a variety of more complicated and relevant PDEs, both on Cartesian domains and on graphs. The proposed numerical method is easy to implement, computationally efficient, and applicable to relevant computing tasks across science and engineering.
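To make the flavor of the resulting scheme concrete, the sketch below solves a 1D Laplace model problem, -u'' = f on (0,1) with homogeneous Dirichlet boundary conditions, by explicit gradient descent on the discrete energy augmented with a momentum-style boosting term. This is a hedged illustration only: the grid size, step size tau, and coefficient beta are ad-hoc choices, and the heavy-ball update stands in for (but is not identical to) the PDHG-derived boost described above.

```python
import numpy as np

# Illustrative stand-in for the boosted scheme: gradient descent on
# E(u) = 1/2 u^T A u - f^T u with a heavy-ball (momentum) term.
# All parameter choices (n, tau, beta) are ad hoc, not from the paper.
n = 99                      # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.pi**2 * np.sin(np.pi * x)   # manufactured source; exact u = sin(pi x)

def apply_A(u):
    """Apply the standard 3-point Laplacian stencil, scaled by 1/h^2."""
    Au = 2.0 * u
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return Au / h**2

u = np.zeros(n)
u_prev = np.zeros(n)
tau = h**2 / 4.0            # stable explicit step for this stencil
beta = 0.94                 # momentum ("boosting") coefficient, chosen ad hoc
for _ in range(5000):
    grad = apply_A(u) - f          # gradient of the discrete energy
    u_new = u - tau * grad + beta * (u - u_prev)
    u_prev, u = u, u_new

err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Without the boosting term (beta = 0), plain explicit descent on this problem needs on the order of 1/h^2 times more iterations to reach the same accuracy, which is the kind of acceleration gap the abstract alludes to.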