Incremental gradient-free method for nonsmooth distributed optimization

This research was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) through GCRC-SOP (No. 2011-0030013), by the Natural Science Foundation of China (11501070, 11401064, 11471062 and 61473326), by the Natural Science Foundation Project of Chongqing (cstc2015jcyjA00011, cstc2013jjB00001 and cstc2013jcyjA00029), by the Chongqing Municipal Education Commission under Grants KJ1500301 and KJ1500302, and by the Chongqing Normal University Research Foundation under Grant 15XLB005.
• In this paper we consider the minimization of a sum of local convex component functions distributed over a multi-agent network. We first extend Nesterov's random gradient-free method to the incremental setting. We then propose incremental gradient-free methods that select the component function in either a cyclic or a randomized order. We provide convergence and iteration-complexity analyses of the proposed methods under suitable stepsize rules. To illustrate the proposed methods, extensive numerical results on a distributed $l_1$-regression problem are presented. Compared with existing incremental subgradient-based methods, our methods require only evaluations of function values rather than subgradients, which may be preferable in practical engineering applications.
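To make the scheme concrete, the following is a minimal sketch of the randomized-order variant applied to a distributed $l_1$-regression problem, as described in the abstract. All names (`f_i`, `gf_oracle`, `rigf`) and the problem data are illustrative assumptions, not the paper's code; the zeroth-order step uses a Nesterov-style Gaussian-smoothing difference of function values in place of a subgradient.

```python
import numpy as np

# Hypothetical l_1-regression instance: minimize F(x) = sum_i |a_i^T x - b_i|.
# Each f_i is nonsmooth, so only function values (no subgradients) are used.
rng = np.random.default_rng(0)
m, d = 100, 2                              # number of agents, dimension
A = rng.standard_normal((m, d))
b = A @ np.array([1.0, -2.0]) + 0.1 * rng.standard_normal(m)

def f_i(x, i):
    """Local component function of agent i: f_i(x) = |a_i^T x - b_i|."""
    return abs(A[i] @ x - b[i])

def gf_oracle(x, i, mu=1e-4):
    """Random gradient-free oracle for component i (Gaussian smoothing):
    g = (f_i(x + mu*u) - f_i(x)) / mu * u, with u ~ N(0, I)."""
    u = rng.standard_normal(x.size)
    return (f_i(x + mu * u, i) - f_i(x, i)) / mu * u

def rigf(x0, alpha=1e-3, n_iters=20000):
    """Randomized incremental gradient-free method: each iteration draws
    one component index uniformly and takes a gradient-free step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        i = rng.integers(m)                # randomized component selection
        x = x - alpha * gf_oracle(x, i)
    return x

x_hat = rigf(np.zeros(d))
```

The cyclic-order variant replaces the random index draw with a fixed sweep `i = 1, ..., m` per cycle; a diminishing stepsize (e.g. $\alpha(k) = 1/k$) can be substituted for the constant `alpha` as in the stepsize rules analyzed in the paper.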

Mathematics Subject Classification: Primary: 47N10; Secondary: 49J52.

Figure 1.  Function value error versus number of cycles $K$ with a constant stepsize $\alpha=0.001$ for algorithms CIGF and CISG

Figure 2.  Function value error versus number of iterations $N$ with a constant stepsize $\alpha=0.001$ for algorithms RIGF and RISG

Figure 3.  (a) Function value error versus number of cycles $K$ with diminishing stepsize choices: $\alpha_{1}(k)={1}/{(m(k-1)+i)}, ~ \alpha_{2}(k)={1}/{(m(k-1)+i)^{\frac{2}{3}}}, k=0, 1, \ldots, i=1, \ldots, m$; (b) Function value error versus number of iterations $N$ with diminishing stepsize choices: $\theta_{1}(k)={1}/{k}, ~\theta_{2}(k)={0.1}/{k^{\frac{2}{3}}}, k=1, 2, \ldots$

Figure 4.  For a fixed target accuracy $\epsilon=0.01$ and a constant stepsize $\alpha=0.001$, comparisons between algorithms CIGF and CISG: (a) number of iterations $N$ versus dimensions of the agent $d$ for fixed $m=100$; (b) number of iterations $N$ versus number of agents $m$ for fixed $d=2$
