Journal Papers

Fast Distributed Gradient Methods

Abstract:
We study distributed optimization problems where N nodes minimize the sum of their individual costs subject to a common vector variable. The costs are convex, have Lipschitz continuous gradients (with constant L), and bounded gradients. We propose two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establish their convergence rates in terms of the per-node communications K and the per-node gradient evaluations k. Our first method, Distributed Nesterov Gradient, achieves rates O(log K / K) and O(log k / k). Our second method, Distributed Nesterov gradient with Consensus iterations, assumes that all nodes know L and μ(W) – the second largest singular value of the doubly stochastic weight matrix W. It achieves rates O(1/K^(2−ξ)) and O(1/k^2) (ξ > 0 arbitrarily small). Further, for both methods we give the explicit dependence of the convergence constants on N and μ(W). Simulation examples illustrate our findings.
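
To make the flavor of such methods concrete, below is a minimal sketch of a Nesterov-style distributed gradient iteration in the spirit of the first method: each node mixes its neighbors' iterates through a doubly stochastic weight matrix W, takes a local gradient step, and adds a momentum correction. The quadratic local costs, the ring network, the step-size schedule c/(k+1), and the momentum coefficient k/(k+3) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10                      # number of nodes
b = rng.normal(size=N)      # node i holds local cost f_i(x) = 0.5 * (x - b[i])**2

# Doubly stochastic weights on a ring: each node averages with its two neighbors.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

x = np.zeros(N)             # node iterates
y = np.zeros(N)             # auxiliary (momentum) iterates
c = 1.0                     # step-size constant (assumed)

for k in range(200):
    grad = y - b                              # local gradients of f_i at y_i
    x_new = W @ y - (c / (k + 1)) * grad      # consensus mixing + gradient step
    y = x_new + (k / (k + 3)) * (x_new - x)   # Nesterov-style extrapolation
    x = x_new

x_star = b.mean()           # minimizer of the sum of the local costs
print("max deviation from optimum:", np.max(np.abs(x - x_star)))
```

In this toy setup the minimizer of the sum of the local costs is the average of the b[i], so the printed deviation indicates how close the network has come to consensus on the optimum after 200 iterations.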
Impact factor:

IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 59, NO. 5, MAY 2014