Conference abstracts

Session B4 - Learning Theory - Semi-plenary talk

July 13, 14:30 ~ 15:20 - Room B6

Optimal algorithms for smooth and strongly convex distributed optimization in networks

Francis Bach

INRIA - ENS, France - francis.bach@ens.fr

In this work, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e. \emph{master/slave}) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision $\varepsilon>0$ in time $O(\sqrt{\kappa_g}(1+\Delta\tau)\ln(1/\varepsilon))$, where $\kappa_g$ is the condition number of the (global) function to optimize, $\Delta$ is the diameter of the network, and $\tau$ (resp. $1$) is the time needed to communicate values between two neighbors (resp. perform local computations). For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, that achieves a precision $\varepsilon>0$ in time $O(\sqrt{\kappa_l}(1+\frac{\tau}{\sqrt{\gamma}})\ln(1/\varepsilon))$, where $\kappa_l$ is the condition number of the local functions and $\gamma$ is the (normalized) eigengap of the gossip matrix used for communication between nodes. We then verify the efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and classification by logistic regression.
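
As a concrete illustration of the centralized setting, the Python sketch below simulates a master/slave round on a synthetic least-squares problem: the master broadcasts the current iterate, each worker returns the gradient of its local objective, and the master takes a Nesterov accelerated step whose momentum is set by the global condition number $\kappa_g$. This is a minimal sketch, not the authors' implementation; the number of workers, the data $(A_i, b_i)$, and the regularization parameter mu are illustrative assumptions.

# Minimal sketch of the centralized (master/slave) scheme: each "worker"
# holds a local regularized least-squares objective, and the master runs
# Nesterov's accelerated gradient descent on the averaged objective.
# All problem data below are synthetic assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)
n_workers, n_samples, dim, mu = 5, 20, 10, 0.1

# Worker i holds f_i(x) = 1/(2 n_samples) ||A_i x - b_i||^2 + mu/2 ||x||^2.
A = [rng.standard_normal((n_samples, dim)) for _ in range(n_workers)]
b = [rng.standard_normal(n_samples) for _ in range(n_workers)]

def local_gradient(i, x):
    """Gradient of worker i's local objective at the point x."""
    return A[i].T @ (A[i] @ x - b[i]) / n_samples + mu * x

# Smoothness and strong-convexity constants of the global (averaged) objective.
H = sum(Ai.T @ Ai / n_samples for Ai in A) / n_workers + mu * np.eye(dim)
eigs = np.linalg.eigvalsh(H)          # ascending eigenvalues of the Hessian
L, m = eigs[-1], eigs[0]
kappa_g = L / m                       # global condition number
beta = (np.sqrt(kappa_g) - 1) / (np.sqrt(kappa_g) + 1)  # Nesterov momentum

x = np.zeros(dim)
y = x.copy()
for t in range(300):
    # Communication round: master broadcasts y, workers reply with gradients.
    grad = sum(local_gradient(i, y) for i in range(n_workers)) / n_workers
    x_new = y - grad / L              # gradient step with step size 1/L
    y = x_new + beta * (x_new - x)    # momentum extrapolation
    x = x_new

print("final global gradient norm:",
      np.linalg.norm(sum(local_gradient(i, x) for i in range(n_workers)) / n_workers))

In this simulation each loop iteration costs one broadcast and one gradient aggregation, which is the communication pattern behind the $O(\sqrt{\kappa_g}(1+\Delta\tau)\ln(1/\varepsilon))$ bound stated above.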

Joint work with Kevin Scaman (INRIA), Sébastien Bubeck (Microsoft Research), Yin-Tat Lee (Microsoft Research) and Laurent Massoulié (INRIA).



