January  2016, 12(1): 389-402. doi: 10.3934/jimo.2016.12.389

On convergence analysis of dual proximal-gradient methods with approximate gradient for a class of nonsmooth convex minimization problems

1. Department of Mathematics and Physics, Shanghai Dianji University, Shanghai, 200240, China
2. School of Electric Engineering, Shanghai Dianji University, Shanghai, 200240, China
3. School of Mathematics and Information Science, Shandong Institute of Business and Technology, Yantai, 264005, China

Received January 2014; Revised February 2015; Published April 2015.

In this paper, we consider the problem of minimizing a nonsmooth convex objective that is the sum of two terms: a proper, closed, nonsmooth, strongly convex extended real-valued function, and a proper, closed, nonsmooth, convex extended real-valued function given by the composition of a proper, closed, convex function with a nonzero affine map. We first establish the dual problem, which consists of minimizing the sum of a smooth convex function and a proper, closed, nonsmooth convex function. We then apply a first-order proximal-gradient method to the dual problem, where an error is present in the calculation of the gradient of the smooth term, and thereby obtain a dual proximal-gradient method with approximate gradient. We show that when the errors are summable, although the sequence of best dual objective values generated by the inexact proximal-gradient method converges to the optimal value at the rate $O(\frac{1}{k})$, the rate of convergence of the primal sequence is of the order $O(\frac{1}{\sqrt{k}})$.
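To make the dual construction concrete, here is a hedged sketch in illustrative notation (the symbols $f$, $g$, $A$, $\sigma$ are ours, not the paper's): write the primal problem as $\min_{x}\, f(x) + g(Ax)$, where $f$ is proper, closed, and $\sigma$-strongly convex, $g$ is proper, closed, and convex, and $A$ is a nonzero linear map (the affine case is analogous). Fenchel duality, in the spirit of Beck and Teboulle's dual proximal-gradient framework, then yields a dual problem whose objective splits into a smooth term $F$ and a nonsmooth, proximable term $G$:
$$\min_{y}\; F(y) + G(y), \qquad F(y) := f^{*}(A^{\top}y), \qquad G(y) := g^{*}(-y),$$
$$\nabla F(y) = A\,\arg\min_{x}\bigl\{ f(x) - \langle A^{\top}y,\, x\rangle \bigr\}, \qquad \|\nabla F(y_{1}) - \nabla F(y_{2})\| \le \frac{\|A\|^{2}}{\sigma}\,\|y_{1} - y_{2}\|.$$
Strong convexity of $f$ is exactly what makes $\nabla F$ Lipschitz, so a proximal-gradient step on the dual needs only the prox of $G$ and evaluations of $\nabla F$, which may be computed inexactly.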
Citation: Sanming Liu, Zhijie Wang, Chongyang Liu. On convergence analysis of dual proximal-gradient methods with approximate gradient for a class of nonsmooth convex minimization problems. Journal of Industrial & Management Optimization, 2016, 12 (1) : 389-402. doi: 10.3934/jimo.2016.12.389
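Below is a minimal numerical sketch, in Python, of a dual proximal-gradient iteration with approximate gradients under the assumptions of the sketch above. The instance is entirely illustrative and not taken from the paper: it uses $f(x) = \frac{1}{2}\|x-b\|^{2}$, $g = \lambda\|\cdot\|_{1}$, a random matrix $A$, and gradient errors proportional to $1/k^{2}$ so that they are summable, as the abstract requires; with these choices the dual gradient and the prox of $G$ have closed forms.

import numpy as np

# Illustrative instance (not from the paper): f(x) = 0.5*||x - b||^2 (1-strongly convex),
# g(z) = lam*||z||_1, so the primal problem is  min_x 0.5*||x - b||^2 + lam*||A x||_1.
# Dual smooth part F(y) = f*(A^T y) has gradient A (b + A^T y) and Lipschitz constant ||A||_2^2;
# G(y) = g*(-y) is the indicator of the l_inf ball of radius lam, whose prox is clipping.
rng = np.random.default_rng(0)
m, n = 20, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(n)
lam = 0.5

step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L with L = ||A||_2^2 (sigma = 1 here)

def primal_value(x):
    return 0.5 * np.sum((x - b) ** 2) + lam * np.sum(np.abs(A @ x))

y = np.zeros(m)
for k in range(1, 501):
    x = b + A.T @ y                           # primal point attached to the current dual iterate
    grad = A @ x                              # exact gradient of the smooth dual term F
    grad += rng.standard_normal(m) / k ** 2   # approximate gradient: summable errors, as in the abstract
    y = np.clip(y - step * grad, -lam, lam)   # prox step = projection onto {||y||_inf <= lam}

print("primal objective at the recovered point:", primal_value(b + A.T @ y))

The $1/k^{2}$ perturbation stands in for any summable error sequence, and the step size $1/L$ with $L = \|A\|_{2}^{2}$ matches the Lipschitz constant from the sketch above (here $\sigma = 1$); the recovered primal point is $x_{k} = b + A^{\top}y_{k}$.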
