Inverse Problems & Imaging, 2012, 6(2): 357-372. doi: 10.3934/ipi.2012.6.357

Strongly convex programming for exact matrix completion and robust principal component analysis

1. Department of Mathematics and Systems Science, National University of Defense Technology, Changsha, Hunan 410073, China

2. Department of Mathematics, University of Iowa, Iowa City, IA 52242, United States

Received: July 2011. Revised: January 2012. Published: May 2012.

The common task in matrix completion (MC) and robust principal component analysis (RPCA) is to recover a low-rank matrix from a given data matrix. These problems have recently received considerable attention across the applied sciences, especially after the publication of the pioneering works of Candès et al. One fundamental result in MC and RPCA is that nuclear-norm-based convex optimization leads to exact low-rank matrix recovery under suitable conditions. In this paper, we extend this result by showing that strongly convex optimizations can guarantee exact low-rank matrix recovery as well. Our result not only provides sufficient conditions under which strongly convex models lead to exact low-rank matrix recovery, but also guides the choice of suitable parameters in practical algorithms.
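For concreteness, the strongly convex programs referred to above differ from the standard nuclear-norm models only by an added Frobenius-norm term. A representative form (the precise parameterization in the paper may differ) is

    Matrix completion:  $\min_X \; \tau\|X\|_* + \tfrac{1}{2}\|X\|_F^2$  subject to  $\mathcal{P}_\Omega(X) = \mathcal{P}_\Omega(M)$,
    RPCA:               $\min_{L,S} \; \tau(\|L\|_* + \lambda\|S\|_1) + \tfrac{1}{2}(\|L\|_F^2 + \|S\|_F^2)$  subject to  $L + S = M$,

where $\mathcal{P}_\Omega$ keeps the observed entries and $\tau > 0$ sets the strength of the strongly convex term; the sufficient conditions of the paper relate exact recovery to a suitable choice of $\tau$. The matrix-completion program above is the one targeted by the singular value thresholding (SVT) iteration of [2], and the following is a minimal sketch of that iteration. The function name, step size, and stopping rule are illustrative choices, not prescriptions from the paper.

    import numpy as np

    def svt_complete(M, mask, tau, delta, max_iter=1000, tol=1e-4):
        """Sketch of the SVT iteration [2] for
        min tau*||X||_* + 0.5*||X||_F^2  s.t.  P_Omega(X) = P_Omega(M).
        Parameter choices here are illustrative only."""
        Y = np.zeros_like(M)                        # dual variable
        obs_norm = np.linalg.norm(mask * M)
        for _ in range(max_iter):
            # Proximal step: soft-threshold the singular values of Y by tau.
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            X = (U * np.maximum(s - tau, 0.0)) @ Vt
            # Dual ascent step: push X toward M on the observed set Omega.
            residual = mask * (M - X)
            Y += delta * residual
            if np.linalg.norm(residual) <= tol * obs_norm:
                break
        return X

    # Illustrative usage on a synthetic rank-2 matrix with roughly half the entries observed.
    rng = np.random.default_rng(0)
    n, r = 50, 2
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
    mask = (rng.random((n, n)) < 0.5).astype(float)
    X_hat = svt_complete(M, mask, tau=5 * n, delta=1.9)   # modest step size keeps the iteration stable
    print("relative recovery error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))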
Citation: Hui Zhang, Jian-Feng Cai, Lizhi Cheng, Jubo Zhu. Strongly convex programming for exact matrix completion and robust principal component analysis. Inverse Problems & Imaging, 2012, 6 (2): 357-372. doi: 10.3934/ipi.2012.6.357
References:
[1] A. Argyriou, T. Evgeniou and M. Pontil, Multi-task feature learning, Advances in Neural Information Processing Systems (NIPS), 2007.
[2] J.-F. Cai, E. J. Candès and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. on Optimization, 20 (2010), 1956. doi: 10.1137/080738970.
[3] J.-F. Cai, S. Osher and Z. Shen, Linearized Bregman iterations for compressed sensing, Math. Comp., 78 (2009), 1515. doi: 10.1090/S0025-5718-08-02189-3.
[4] J.-F. Cai, S. Osher and Z. Shen, Convergence of the linearized Bregman iteration for $\ell_1$-norm minimization, Math. Comp., 78 (2009), 2127. doi: 10.1090/S0025-5718-09-02242-X.
[5] J.-F. Cai, S. Osher and Z. Shen, Linearized Bregman iterations for frame-based image deblurring, SIAM J. Imaging Sciences, 2 (2009), 226.
[6] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Foundations of Computational Mathematics, 9 (2009), 717.
[7] E. J. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Transactions on Information Theory, 56 (2010), 2053. doi: 10.1109/TIT.2010.2044061.
[8] E. J. Candès and Y. Plan, Matrix completion with noise, Proceedings of the IEEE, 98 (2010), 925. doi: 10.1109/JPROC.2009.2035722.
[9] E. J. Candès, X. Li, Y. Ma and J. Wright, Robust principal component analysis?, Journal of the ACM, 58 (2011), 1.
[10] E. J. Candès and T. Tao, Decoding by linear programming, IEEE Transactions on Information Theory, 51 (2006), 4203.
[11] E. J. Candès, J. Romberg and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, 52 (2006), 489. doi: 10.1109/TIT.2005.862083.
[12] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo and A. S. Willsky, Rank-sparsity incoherence for matrix decomposition, SIAM Journal on Optimization, 21 (2011), 572. doi: 10.1137/090761793.
[13] S. S. Chen, D. L. Donoho and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Sci. Comput., 20 (1998), 33. doi: 10.1137/S1064827596304010.
[14] D. L. Donoho, Compressed sensing, IEEE Trans. Inform. Theory, 52 (2006), 1289. doi: 10.1109/TIT.2006.871582.
[15] M. Fazel, "Matrix Rank Minimization with Applications," Ph.D. thesis, 2002.
[16] M. P. Friedlander and P. Tseng, Exact regularization of convex programs, SIAM J. Optim., 18 (2007), 1326. doi: 10.1137/060675320.
[17] H. Gao, J.-F. Cai, Z. Shen and H. Zhao, Robust principal component analysis-based four-dimensional computed tomography, Physics in Medicine and Biology, 56 (2011), 3181.
[18] D. Gross, Recovering low-rank matrices from few coefficients in any basis, IEEE Trans. Inform. Theory, 57 (2011), 1548. doi: 10.1109/TIT.2011.2104999.
[19] H. Ji, S. Huang, Z. Shen and Y. Xu, Robust video restoration by joint sparse and low rank matrix approximation, SIAM Journal on Imaging Sciences, 4 (2011), 1122.
[20] H. Ji, C. Liu, Z. Shen and Y. Xu, Robust video denoising using low rank matrix completion, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2010, 1791.
[21] Z. Liu and L. Vandenberghe, Interior-point method for nuclear norm approximation with application to system identification, SIAM Journal on Matrix Analysis and Applications, 31 (2009), 1235. doi: 10.1137/090755436.
[22] Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen and Y. Ma, Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix, Technical Report UILU-ENG-09-2214, 2010.
[23] Z. Lin, M. Chen, L. Wu and Y. Ma, The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices, Technical Report UILU-ENG-09-2215, 2010.
[24] K. Min, Z. D. Zhang, J. Wright and Y. Ma, Decomposing background topic from keywords by principal component pursuit, Proceedings of the 19th ACM International Conference on Information and Knowledge Management, 2010, 269.
[25] J. Meng, W. Yin, E. Hossain and Z. Han, Collaborative spectrum sensing from sparse observations using matrix completion for cognitive radio networks, Proceedings of ICASSP, 2010, 3114.
[26] S. Ma, D. Goldfarb and L. Chen, Fixed point and Bregman iterative methods for matrix rank minimization, Mathematical Programming Series A, 128 (2011), 321. doi: 10.1007/s10107-009-0306-5.
[27] M. Mesbahi and G. P. Papavassilopoulos, On the rank minimization problem over a positive semidefinite linear matrix inequality, IEEE Transactions on Automatic Control, 42 (1997), 239. doi: 10.1109/9.554402.
[28] Y. Nesterov, Smooth minimization of non-smooth functions, Math. Program. Ser. A, 103 (2005), 127.
[29] S. Osher, Y. Mao, B. Dong and W. Yin, Fast linearized Bregman iteration for compressive sensing and sparse denoising, Comm. in Math. Sciences, 1 (2010), 93.
[30] B. Recht, M. Fazel and P. Parrilo, Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization, SIAM Review, 52 (2010), 471. doi: 10.1137/070697835.
[31] B. Recht, W. Xu and B. Hassibi, Necessary and sufficient conditions for success of the nuclear norm heuristic for rank minimization, Proceedings of the 47th IEEE Conference on Decision and Control, 2008, 3065.
[32] B. Recht, W. Xu and B. Hassibi, Null space conditions and thresholds for rank minimization, Mathematical Programming, 127 (2011), 175. doi: 10.1007/s10107-010-0422-2.
[33] B. Recht, A simpler approach to matrix completion, J. Machine Learning Res., 12 (2011), 3413.
[34] R. T. Rockafellar, "Convex Analysis," Princeton University Press, 1970.
[35] K. C. Toh and S. Yun, An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems, Pacific Journal of Optimization, 6 (2009), 615.
[36] C. Tomasi and T. Kanade, Shape and motion from image streams under orthography: A factorization method, International Journal of Computer Vision, 9 (1992), 137.
[37] J. Wright, A. Ganesh, S. Rao and Y. Ma, Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization, submitted to Journal of the ACM, 2009.
[38] W. Yin, Analysis and generalizations of the linearized Bregman method, SIAM Journal on Imaging Sciences, 3 (2010), 856.
[39] W. Yin, S. Osher, D. Goldfarb and J. Darbon, Bregman iterative algorithms for $\ell_1$ minimization with applications to compressed sensing, SIAM J. Imaging Sciences, 1 (2008), 143.
[40] Z. D. Zhang, X. Liang, A. Ganesh and Y. Ma, TILT: Transform invariant low-rank textures, Computer Vision - ACCV, 2010, 314.
[41] H. Zhang and L. Z. Cheng, Projected Landweber iteration for matrix completion, Journal of Computational and Applied Mathematics, 235 (2010), 593. doi: 10.1016/j.cam.2010.06.010.
[42] H. Zhang, L. Z. Cheng and W. Zhu, A lower bound guaranteeing exact matrix completion via singular value thresholding algorithm, Applied and Computational Harmonic Analysis, 31 (2011), 454. doi: 10.1016/j.acha.2011.04.004.
[43] H. Zhang, L. Z. Cheng and W. Zhu, Nuclear norm regularization with low-rank constraint for matrix completion, Inverse Problems, 26 (2010). doi: 10.1088/0266-5611/26/11/115009.