
An alternating minimization method for matrix completion problems

  • * Corresponding author

The first author is supported by NSFC grants 11401295 and 11726618, by the Major Program of the National Social Science Foundation of China under grant 12&ZD114, by the National Social Science Foundation of China under grants 15BGL158 and 17BTQ063, by the Qinglan Project of Jiangsu Province, and by the Social Science Foundation of Jiangsu Province under grant 18GLA002. The second author is supported by NSFC grants 11622112, 11471325, 91530204 and 11688101, the National Center for Mathematics and Interdisciplinary Sciences, CAS, and the Key Research Program of Frontier Sciences QYZDJ-SSW-SYS010, CAS.

  • Matrix completion problems have applications in various domains such as information theory, statistics, and engineering. Solving them, however, is not an easy task, since the nonconvex and nonsmooth rank operator is involved. Existing approaches fall into two classes. The first replaces the rank operator with the nuclear norm, so that any convex optimization algorithm can be applied to the reformulated problem; the limitation of this class is that handling the nuclear norm requires singular value decompositions (SVDs), which significantly increase the computational cost. The second factorizes the target matrix into the product of two thin matrices. Fast algorithms for the resulting nonconvex optimization problem usually lack global convergence guarantees, while algorithms with guaranteed convergence require restrictive stepsizes. In this paper, we consider the matrix factorization model for matrix completion problems and propose an alternating minimization method for solving it. Global convergence to a stationary point or a local minimizer is guaranteed under mild conditions. We compare the proposed algorithm with several state-of-the-art algorithms on a collection of test problems. The numerical results illustrate the efficiency and great potential of our algorithm.

    Mathematics Subject Classification: 15A18, 65F15, 65K05, 90C06.
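To make the factorization model concrete, the following is a generic alternating least-squares sketch for $\min_{X,Y} \|P_\Omega(XY^T - M)\|_F^2$: fix one factor and solve a small regularized least-squares problem for each row of the other over its observed entries. This is an illustration of the model class only, not the paper's specific update rules or stepsize strategy; the function name and parameters are ours.

```python
import numpy as np

def als_matrix_completion(M_obs, mask, r, n_iters=50, reg=1e-8, seed=0):
    """Generic alternating least squares for min ||P_Omega(X @ Y.T - M)||_F^2.

    M_obs : observed matrix (unobserved entries may hold anything, e.g. 0)
    mask  : boolean array, True where an entry is observed
    r     : target rank of the factorization
    """
    m, n = M_obs.shape
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((m, r))
    Y = rng.standard_normal((n, r))
    for _ in range(n_iters):
        # Fix Y: row X[i] solves a small regularized least-squares problem
        # over the entries observed in row i.
        for i in range(m):
            cols = np.flatnonzero(mask[i])
            if cols.size == 0:
                continue
            A, b = Y[cols], M_obs[i, cols]
            X[i] = np.linalg.solve(A.T @ A + reg * np.eye(r), A.T @ b)
        # Fix X: symmetric update for each row of Y.
        for j in range(n):
            rows = np.flatnonzero(mask[:, j])
            if rows.size == 0:
                continue
            A, b = X[rows], M_obs[rows, j]
            Y[j] = np.linalg.solve(A.T @ A + reg * np.eye(r), A.T @ b)
    return X, Y

# Usage: recover a random rank-2 matrix from roughly 40% of its entries.
rng = np.random.default_rng(1)
L_star = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 50))
mask = rng.random(L_star.shape) < 0.4
X, Y = als_matrix_completion(L_star * mask, mask, r=2)
err = np.linalg.norm((X @ Y.T - L_star) * mask) / np.linalg.norm(L_star * mask)
```

Each subproblem here is an $r \times r$ linear solve, so no SVD is needed; this avoidance of per-iteration SVDs is exactly the motivation the abstract gives for the factorization class.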


  • Figure 1.  Relative error vs. iteration number, $m = n = 2000$

    Figure 2.  Computing time vs. sampling ratio, $m = n = 2000$

    Figure 3.  Computing time vs. dimension

    Table 4.1.  Results of the speed performance test for synthetic data with $m = n = 2000$
    (err denotes $\hbox{err}_{\Omega}(L)$)

    sampling ratio  rank($L^*$)   SVT: err, iter, time        LMaFit: err, iter, time    New Algorithm: err, iter, time
    20%             5             8.947e-13, 126.0, 10.288    8.396e-13, 67.7, 1.746     8.880e-13, 42.7, 1.218
    20%             10            1.183e-7,  190.2, 22.593    9.005e-13, 66.8, 1.892     8.525e-13, 57.3, 1.821
    40%             5             8.494e-13, 87.0,  12.286    7.022e-13, 33.3, 1.376     6.705e-13, 24.2, 1.100
    40%             10            8.702e-13, 101.7, 19.319    7.169e-13, 39.2, 1.722     7.001e-13, 29.9, 1.468
    60%             5             8.470e-13, 72.3,  14.429    6.895e-13, 33.2, 1.821     5.106e-13, 18.0, 1.096
    60%             10            8.465e-13, 80.4,  18.717    7.573e-13, 39.1, 2.237     5.467e-13, 21.0, 1.334
    80%             5             8.765e-13, 62.3,  15.112    6.556e-13, 23.1, 1.547     3.881e-13, 15.0, 1.100
    80%             10            8.180e-13, 67.5,  19.641    6.421e-13, 25.1, 1.801     4.308e-13, 16.7, 1.322
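For context on the comparison above: the SVT baseline in Table 4.1 applies a singular value shrinkage step at every iteration, which is where the per-iteration SVD cost noted in the abstract comes from. A minimal sketch of that shrinkage operator (the function name is ours):

```python
import numpy as np

def svt_shrink(Z, tau):
    # Soft-threshold the singular values: D_tau(Z) = U * max(S - tau, 0) * V^T.
    # One full SVD per call -- the dominant cost of nuclear-norm methods.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Shrinking a diagonal matrix by tau = 1 maps singular values (5, 2, 0.5)
# to (4, 1, 0), so the output has rank 2.
W = svt_shrink(np.diag([5.0, 2.0, 0.5]), 1.0)
```

Because the smallest singular values are set exactly to zero, each shrinkage step produces a low-rank iterate, but the full SVD it requires explains the longer SVT running times in the table.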
  • [1] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), 183-202.  doi: 10.1137/080716542.
    [2] J. Cai, E. J. Candès and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20 (2010), 1956-1982.  doi: 10.1137/080738970.
    [3] E. J. Candès and Y. Plan, Matrix completion with noise, Proceedings of the IEEE, 98 (2010), 925-936. 
    [4] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math., 9 (2009), 717-772.  doi: 10.1007/s10208-009-9045-5.
    [5] E. J. Candès and J. Romberg, Quantitative robust uncertainty principles and optimally sparse decompositions, Found. Comput. Math., 6 (2006), 227-254.  doi: 10.1007/s10208-004-0162-x.
    [6] E. J. Candès, J. Romberg and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, 52 (2006), 489-509.  doi: 10.1109/TIT.2005.862083.
    [7] E. J. Candès and T. Tao, Near optimal signal recovery from random projections: Universal encoding strategies?, IEEE Trans. Inform. Theory, 52 (2006), 5406-5425.  doi: 10.1109/TIT.2006.885507.
    [8] E. J. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Trans. Inform. Theory, 56 (2010), 2053-2080.  doi: 10.1109/TIT.2010.2044061.
    [9] R. Chartrand, Nonconvex splitting for regularized low-rank + sparse decomposition, IEEE Trans. Signal Process., 60 (2012), 5810-5819.  doi: 10.1109/TSP.2012.2208955.
    [10] C. Chen, B. He, Y. Ye and X. Yuan, The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent, Math. Program., 155 (2016), 57-79.  doi: 10.1007/s10107-014-0826-5.
    [11] W. Dai and O. Milenkovic, SET: An algorithm for consistent matrix completion, CoRR, abs/0909.2705 (2009).
    [12] D. Donoho, Compressed sensing, IEEE Trans. Inform. Theory, 52 (2006), 1289-1306.  doi: 10.1109/TIT.2006.871582.
    [13] M. Elad, Why simple shrinkage is still relevant for redundant representations?, IEEE Trans. Inform. Theory, 52 (2006), 5559-5569.  doi: 10.1109/TIT.2006.885522.
    [14] L. Eldén, Matrix Methods in Data Mining and Pattern Recognition (Fundamentals of Algorithms), Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2007. doi: 10.1137/1.9780898718867.
    [15] D. Goldfarb, S. Ma and Z. Wen, Solving low-rank matrix completion problems efficiently, in Proceedings of the 47th Annual Allerton Conference on Communication, Control, and Computing, Allerton'09, 2009, 1013–1020.
    [16] D. Gross, Recovering Low-Rank Matrices from Few Coefficients in Any Basis, tech. rep., Leibniz University, 2009.
    [17] D. Han and X. Yuan, A note on the alternating direction method of multipliers, J. Optim. Theory Appl., 155 (2012), 227-238.  doi: 10.1007/s10957-012-0003-z.
    [18] M. Hong and Z.-Q. Luo, On the linear convergence of the alternating direction method of multipliers, preprint, http://arXiv.org/abs/1208.3922, 2012. doi: 10.1007/s10107-016-1034-2.
    [19] M. Hong, Z.-Q. Luo and M. Razaviyayn, Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems, SIAM J. Optim., 26 (2016), 337–364, http://arXiv.org/abs/1410.1390. doi: 10.1137/140990309.
    [20] R. H. Keshavan, A. Montanari and S. Oh, Matrix completion from a few entries, IEEE Trans. Inform. Theory, 56 (2010), 2980-2998.  doi: 10.1109/TIT.2010.2046205.
    [21] R. H. Keshavan, A. Montanari and S. Oh, Matrix completion from noisy entries, J. Mach. Learn. Res., 11 (2010), 2057-2078. 
    [22] R. H. Keshavan and S. Oh, A Gradient Descent Algorithm on the Grassman Manifold for Matrix Completion, tech. rep., Dept. of Electrical Engineering, Stanford University, 2009.
    [23] H. Kim and H. Park, Nonnegative matrix factorization based on alternating nonnegativity constrained least squares and active set method, SIAM J. Matrix Anal. Appl., 30 (2008), 713-730.  doi: 10.1137/07069239X.
    [24] J. Kim and H. Park, Sparse Nonnegative Matrix Factorization for Clustering, tech. rep., Georgia Institute of Technology, 2008. Technical Report GT-CSE-08-01.
    [25] K. Lee and Y. Bresler, ADMiRA: Atomic decomposition for minimum rank approximation, IEEE Trans. Inform. Theory, 56 (2010), 4402-4416.  doi: 10.1109/TIT.2010.2054251.
    [26] M. Li, D. Sun and K.-C. Toh, A convergent 3-block semi-proximal ADMM for convex minimization problems with one strongly convex block, Asia Pac. J. Oper. Res., 32 (2015), 1550024, 19 pp. doi: 10.1142/S0217595915500244.
    [27] X. Liu, Z. Wen and Y. Zhang, An efficient Gauss–Newton algorithm for symmetric low-rank product matrix approximations, SIAM J. Optim., 25 (2015), 1571-1608.  doi: 10.1137/140971464.
    [28] Y. Liu, D. Sun and K.-C. Toh, An implementable proximal point algorithmic framework for nuclear norm minimization, Math. Program., 133 (2012), 399-436.  doi: 10.1007/s10107-010-0437-8.
    [29] Z. Liu and L. Vandenberghe, Interior-point method for nuclear norm approximation with application to system identification, SIAM J. Matrix Anal. A., 31 (2009), 1235-1256.  doi: 10.1137/090755436.
    [30] S. Ma, D. Goldfarb and L. Chen, Fixed point and Bregman iterative methods for matrix rank minimization, Math. Program., 128 (2011), 321-353.  doi: 10.1007/s10107-009-0306-5.
    [31] R. Mazumder, T. Hastie and R. Tibshirani, Regularization Methods for Learning Incomplete Matrices, tech. rep., Stanford University, 2009.
    [32] R. Meka, P. Jain and I. S. Dhillon, Guaranteed Rank Minimization Via Singular Value Projection, CoRR, abs/0909.5457 (2009).
    [33] T. Morita and T. Kanade, A sequential factorization method for recovering shape and motion from image streams, IEEE Trans. Pattern Anal. Mach. Intell., 19 (1997), 858-867. 
    [34] Y. Nesterov, Introductory Lectures on Convex Optimization, A basic course. Applied Optimization, 87. Kluwer Academic Publishers, Boston, MA, 2004. doi: 10.1007/978-1-4419-8853-9.
    [35] B. Recht, A simpler approach to matrix completion, J. Mach. Learn. Res., 12 (2011), 3413-3430. 
    [36] B. Recht, M. Fazel and P. A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev., 52 (2010), 471-501.  doi: 10.1137/070697835.
    [37] Y. Shen, Z. Wen and Y. Zhang, Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization, Optim. Methods Softw., 29 (2014), 239-263.  doi: 10.1080/10556788.2012.700713.
    [38] K.-C. Toh and S. Yun, An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems, Pac. J. Optim., 6 (2010), 615-640. 
    [39] C. Tomasi and T. Kanade, Shape and motion from image streams under orthography: A factorization method, Int. J. Comput. Vision, 9 (1992), 137-154. 
    [40] Z. Wen, W. Yin and Y. Zhang, Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm, Math. Program. Comput., 4 (2012), 333-361.  doi: 10.1007/s12532-012-0044-1.
    [41] J. Yang and X. Yuan, An Inexact Alternating Direction Method for Trace Norm Regularized Least Squares Problem, tech. rep., Dept. of Mathematics, Nanjing University, 2010.
    [42] J. Yang, Y. Zhang and W. Yin, An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise, SIAM J. Sci. Comput., 31 (2009), 2842-2865.  doi: 10.1137/080732894.
    [43] X. Yuan and J. Yang, Sparse and low-rank matrix decomposition via alternating direction method, Pac. J. Optim., 9 (2013), 167-180. 
    [44] Z. Zhu, A. M.-C. So and Y. Ye, Fast and Near–Optimal Matrix Completion Via Randomized Basis Pursuit, tech. rep., Stanford University, 2009.
    [45] H. Zou, T. Hastie and R. Tibshirani, Sparse principal component analysis, J. Comput. Graph. Stat., 15 (2006), 265-286.  doi: 10.1198/106186006X113430.



