# American Institute of Mathematical Sciences

May 2012, 6(2): 357-372. doi: 10.3934/ipi.2012.6.357

## Strongly convex programming for exact matrix completion and robust principal component analysis

Hui Zhang¹, Jian-Feng Cai², Lizhi Cheng¹, Jubo Zhu¹

1. Department of Mathematics and Systems Science, National University of Defense Technology, Changsha, Hunan 410073, China
2. Department of Mathematics, University of Iowa, Iowa City, IA 52242, United States

Received: July 2011. Revised: January 2012. Published: May 2012.

The common task in matrix completion (MC) and robust principal component analysis (RPCA) is to recover a low-rank matrix from a given data matrix. These problems have recently attracted great attention in various areas of the applied sciences, especially since the publication of the pioneering works of Candès et al. One fundamental result in MC and RPCA is that nuclear-norm-based convex optimization leads to exact low-rank matrix recovery under suitable conditions. In this paper, we extend this result by showing that strongly convex optimization can guarantee exact low-rank matrix recovery as well. The result not only provides sufficient conditions under which strongly convex models lead to exact low-rank matrix recovery, but also guides the choice of suitable parameters in practical algorithms.
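The strongly convex models studied in the paper add a multiple of the squared Frobenius norm to the nuclear-norm objective; this is the objective that the singular value thresholding (SVT) algorithm of Cai, Candès and Shen [2] minimizes for matrix completion. As an illustrative sketch only (not the paper's own code; the threshold `tau` and step size `delta` below are the standard heuristics from the SVT paper), a minimal NumPy implementation might look like:

```python
import numpy as np

def svt_complete(M, mask, tau=None, delta=None, n_iter=200, tol=1e-4):
    """Singular value thresholding (SVT) for matrix completion.

    Approximately solves the strongly convex program
        min  tau * ||X||_*  +  (1/2) * ||X||_F^2
        s.t. X_ij = M_ij on the observed entries (mask == 1),
    whose solution is close to the minimum-nuclear-norm completion
    when tau is large.
    """
    m, n = M.shape
    if tau is None:
        tau = 5 * np.sqrt(m * n)            # threshold heuristic from the SVT paper
    if delta is None:
        delta = 1.2 * M.size / mask.sum()   # step size heuristic from the SVT paper
    Y = np.zeros_like(M)                    # dual variable
    for _ in range(n_iter):
        # X_k = shrink(Y_{k-1}): soft-threshold the singular values by tau
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # gradient ascent on the dual, restricted to observed entries
        resid = mask * (M - X)
        Y += delta * resid
        if np.linalg.norm(resid) <= tol * np.linalg.norm(mask * M):
            break
    return X
```

On a small synthetic test (a random rank-2 matrix with 70% of entries observed), the iteration recovers the hidden low-rank matrix to small relative error, consistent with the exact-recovery guarantees discussed in the abstract.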
Citation: Hui Zhang, Jian-Feng Cai, Lizhi Cheng, Jubo Zhu. Strongly convex programming for exact matrix completion and robust principal component analysis. Inverse Problems & Imaging, 2012, 6 (2) : 357-372. doi: 10.3934/ipi.2012.6.357
##### References:
[1] A. Argyriou, T. Evgeniou and M. Pontil, Multi-task feature learning, Advances in Neural Information Processing Systems (NIPS), (2007).
[2] J.-F. Cai, E. J. Candès and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. on Optimization, 20 (2010), 1956. doi: 10.1137/080738970.
[3] J.-F. Cai, S. Osher and Z. Shen, Linearized Bregman iterations for compressed sensing, Math. Comp., 78 (2009), 1515. doi: 10.1090/S0025-5718-08-02189-3.
[4] J.-F. Cai, S. Osher and Z. Shen, Convergence of the linearized Bregman iteration for ℓ$_1$-norm minimization, Math. Comp., 78 (2009), 2127. doi: 10.1090/S0025-5718-09-02242-X.
[5] J.-F. Cai, S. Osher and Z. Shen, Linearized Bregman iterations for frame-based image deblurring, SIAM J. Imaging Sciences, 2 (2009), 226.
[6] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Foundations of Computational Mathematics, 9 (2009), 717.
[7] E. J. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Transactions on Information Theory, 56 (2010), 2053. doi: 10.1109/TIT.2010.2044061.
[8] E. J. Candès and Y. Plan, Matrix completion with noise, Proceedings of the IEEE, 98 (2010), 925. doi: 10.1109/JPROC.2009.2035722.
[9] E. J. Candès, X. Li, Y. Ma and J. Wright, Robust principal component analysis?, Journal of the ACM, 58 (2011), 1.
[10] E. J. Candès and T. Tao, Decoding by linear programming, IEEE Transactions on Information Theory, 51 (2005), 4203.
[11] E. J. Candès, J. Romberg and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, 52 (2006), 489. doi: 10.1109/TIT.2005.862083.
[12] V. Chandrasekaran, S. Sanghavi, P. Parrilo and A. Willsky, Rank-sparsity incoherence for matrix decomposition, SIAM Journal on Optimization, 21 (2011), 572. doi: 10.1137/090761793.
[13] S. S. Chen, D. L. Donoho and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Sci. Comput., 20 (1998), 33. doi: 10.1137/S1064827596304010.
[14] D. L. Donoho, Compressed sensing, IEEE Trans. Inform. Theory, 52 (2006), 1289. doi: 10.1109/TIT.2006.871582.
[15] M. Fazel, "Matrix Rank Minimization with Applications," Ph.D. thesis, (2002).
[16] M. P. Friedlander and P. Tseng, Exact regularization of convex programs, SIAM J. Optim., 18 (2007), 1326. doi: 10.1137/060675320.
[17] H. Gao, J.-F. Cai, Z. Shen and H. Zhao, Robust principal component analysis-based four-dimensional computed tomography, Physics in Medicine and Biology, 56 (2011), 3181.
[18] D. Gross, Recovering low-rank matrices from few coefficients in any basis, IEEE Trans. Inform. Theory, 57 (2011), 1548. doi: 10.1109/TIT.2011.2104999.
[19] H. Ji, S. Huang, Z. Shen and Y. Xu, Robust video restoration by joint sparse and low rank matrix approximation, SIAM Journal on Imaging Sciences, 4 (2011), 1122.
[20] H. Ji, C. Liu, Z. W. Shen and Y. Xu, Robust video denoising using low rank matrix completion, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), (2010), 1791.
[21] Z. Liu and L. Vandenberghe, Interior-point method for nuclear norm approximation with application to system identification, SIAM Journal on Matrix Analysis and Applications, 31 (2009), 1235. doi: 10.1137/090755436.
[22] Z. Lin, A. Ganesh, J. Wright, L. Wu, M. Chen and Y. Ma, Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix, Technical Report UILU-ENG-09-2214, (2010).
[23] Z. Lin, M. Chen, L. Wu and Y. Ma, The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices, Technical Report UILU-ENG-09-2215, (2010).
[24] K. Min, Z. D. Zhang, J. Wright and Y. Ma, Decomposing background topics from keywords by principal component pursuit, Proceedings of the 19th ACM International Conference on Information and Knowledge Management, (2010), 269.
[25] J. Meng, W. Yin, E. Hossain and Z. Han, Collaborative spectrum sensing from sparse observations using matrix completion for cognitive radio networks, Proceedings of ICASSP, (2010), 3114.
[26] S. Ma, D. Goldfarb and L. Chen, Fixed point and Bregman iterative methods for matrix rank minimization, Mathematical Programming Series A, 128 (2011), 321. doi: 10.1007/s10107-009-0306-5.
[27] M. Mesbahi and G. P. Papavassilopoulos, On the rank minimization problem over a positive semidefinite linear matrix inequality, IEEE Transactions on Automatic Control, 42 (1997), 239. doi: 10.1109/9.554402.
[28] Y. Nesterov, Smooth minimization of non-smooth functions, Math. Program. Series A, 103 (2005), 127.
[29] S. Osher, Y. Mao, B. Dong and W. Yin, Fast linearized Bregman iteration for compressive sensing and sparse denoising, Comm. in Math. Sciences, 1 (2010), 93.
[30] B. Recht, M. Fazel and P. Parrilo, Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization, SIAM Review, 52 (2010), 471. doi: 10.1137/070697835.
[31] B. Recht, W. Xu and B. Hassibi, Necessary and sufficient conditions for success of the nuclear norm heuristic for rank minimization, Proceedings of the 47th IEEE Conference on Decision and Control, (2008), 3065.
[32] B. Recht, W. Xu and B. Hassibi, Null space conditions and thresholds for rank minimization, Mathematical Programming, 127 (2011), 175. doi: 10.1007/s10107-010-0422-2.
[33] B. Recht, A simpler approach to matrix completion, J. Machine Learning Res., 12 (2011), 3413.
[34] R. T. Rockafellar, "Convex Analysis," Princeton University Press, (1970).
[35] K. C. Toh and S. Yun, An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems, Pacific Journal of Optimization, 6 (2009), 615.
[36] C. Tomasi and T. Kanade, Shape and motion from image streams under orthography: A factorization method, International Journal of Computer Vision, 9 (1992), 137.
[37] J. Wright, A. Ganesh, S. Rao and Y. Ma, Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization, submitted to Journal of the ACM, (2009).
[38] W. Yin, Analysis and generalizations of the linearized Bregman method, SIAM Journal on Imaging Sciences, 3 (2010), 856.
[39] W. Yin, S. Osher, D. Goldfarb and J. Darbon, Bregman iterative algorithms for $l_1$ minimization with applications to compressed sensing, SIAM J. Imaging Sciences, 1 (2008), 143.
[40] Z. D. Zhang, X. Liang, A. Ganesh and Y. Ma, TILT: Transform invariant low-rank textures, Computer Vision-ACCV, (2010), 314.
[41] H. Zhang and L. Z. Cheng, Projected Landweber iteration for matrix completion, Journal of Computational and Applied Mathematics, 235 (2010), 593. doi: 10.1016/j.cam.2010.06.010.
[42] H. Zhang, L. Z. Cheng and W. Zhu, A lower bound guaranteeing exact matrix completion via singular value thresholding algorithm, Applied and Computational Harmonic Analysis, 31 (2011), 454. doi: 10.1016/j.acha.2011.04.004.
[43] H. Zhang, L. Z. Cheng and W. Zhu, Nuclear norm regularization with low-rank constraint for matrix completion, Inverse Problems, 26 (2010). doi: 10.1088/0266-5611/26/11/115009.


