doi: 10.3934/dcdss.2020103

An alternating minimization method for matrix completion problems

1. School of Applied Mathematics, Nanjing University of Finance & Economics, China

2. State Key Laboratory of Scientific and Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, University of Chinese Academy of Sciences, China

* Corresponding author

Received: September 2018. Revised: November 2018. Published: September 2019.

Fund Project: The first author is supported by NSFC grants 11401295 and 11726618, by the Major Program of the National Social Science Foundation of China under grant 12&ZD114, by the National Social Science Foundation of China under grants 15BGL158 and 17BTQ063, by the Qinglan Project of Jiangsu Province, and by the Social Science Foundation of Jiangsu Province under grant 18GLA002. The second author is supported by NSFC grants 11622112, 11471325, 91530204 and 11688101, by the National Center for Mathematics and Interdisciplinary Sciences, CAS, and by the Key Research Program of Frontier Sciences, CAS, under grant QYZDJ-SSW-SYS010.

Matrix completion problems have applications in various domains such as information theory, statistics and engineering. Solving them is not an easy task, however, since the nonconvex and nonsmooth rank operation is involved. Existing approaches fall into two classes. The first replaces the rank operation with the nuclear norm, so that any convex optimization algorithm can be used to solve the reformulated problem; the limitation of this class is that a singular value decomposition (SVD) is required to handle the nuclear norm, which significantly increases the computational cost. The second factorizes the target matrix into the product of two slim matrices. Fast algorithms for solving the resulting nonconvex optimization problem usually lack global convergence guarantees, while algorithms with guaranteed convergence require restrictive stepsizes. In this paper, we consider the matrix factorization model for matrix completion problems and propose an alternating minimization method for solving it. Global convergence to a stationary point or local minimizer is guaranteed under mild conditions. We compare the proposed algorithm with several state-of-the-art algorithms on a collection of test problems. The numerical results illustrate the efficiency and great potential of our algorithm.

Citation: Yuan Shen, Xin Liu. An alternating minimization method for matrix completion problems. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020103
References:
[1] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), 183-202. doi: 10.1137/080716542.
[2] J. Cai, E. J. Candès and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20 (2010), 1956-1982. doi: 10.1137/080738970.
[3] E. J. Candès and Y. Plan, Matrix completion with noise, Proceedings of the IEEE, 98 (2010), 925-936.
[4] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Found. Comput. Math., 9 (2009), 717-772. doi: 10.1007/s10208-009-9045-5.
[5] E. J. Candès and J. Romberg, Quantitative robust uncertainty principles and optimally sparse decompositions, Found. Comput. Math., 6 (2006), 227-254. doi: 10.1007/s10208-004-0162-x.
[6] E. J. Candès, J. Romberg and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, 52 (2006), 489-509. doi: 10.1109/TIT.2005.862083.
[7] E. J. Candès and T. Tao, Near optimal signal recovery from random projections: Universal encoding strategies?, IEEE Trans. Inform. Theory, 52 (2006), 5406-5425. doi: 10.1109/TIT.2006.885507.
[8] E. J. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Trans. Inform. Theory, 56 (2010), 2053-2080. doi: 10.1109/TIT.2010.2044061.
[9] R. Chartrand, Nonconvex splitting for regularized low-rank + sparse decomposition, IEEE Trans. Signal Process., 60 (2012), 5810-5819. doi: 10.1109/TSP.2012.2208955.
[10] C. Chen, B. He, Y. Ye and X. Yuan, The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent, Math. Program., 155 (2016), 57-79. doi: 10.1007/s10107-014-0826-5.
[11] W. Dai and O. Milenkovic, SET: An algorithm for consistent matrix completion, CoRR, abs/0909.2705 (2009).
[12] D. Donoho, Compressed sensing, IEEE Trans. Inform. Theory, 52 (2006), 1289-1306. doi: 10.1109/TIT.2006.871582.
[13] M. Elad, Why simple shrinkage is still relevant for redundant representations?, IEEE Trans. Inform. Theory, 52 (2006), 5559-5569. doi: 10.1109/TIT.2006.885522.
[14] L. Eldén, Matrix Methods in Data Mining and Pattern Recognition (Fundamentals of Algorithms), Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2007. doi: 10.1137/1.9780898718867.
[15] D. Goldfarb, S. Ma and Z. Wen, Solving low-rank matrix completion problems efficiently, in Proceedings of the 47th Annual Allerton Conference on Communication, Control, and Computing, Allerton'09, 2009, 1013-1020.
[16] D. Gross, Recovering Low-Rank Matrices from Few Coefficients in Any Basis, tech. rep., Leibniz University, 2009.
[17] D. Han and X. Yuan, A note on the alternating direction method of multipliers, J. Optim. Theory Appl., 155 (2012), 227-238. doi: 10.1007/s10957-012-0003-z.
[18] M. Hong and Z.-Q. Luo, On the linear convergence of the alternating direction method of multipliers, preprint, http://arXiv.org/abs/1208.3922, 2012. doi: 10.1007/s10107-016-1034-2.
[19] M. Hong, Z.-Q. Luo and M. Razaviyayn, Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems, SIAM J. Optim., 26 (2016), 337-364, http://arXiv.org/abs/1410.1390. doi: 10.1137/140990309.
[20] R. H. Keshavan, A. Montanari and S. Oh, Matrix completion from a few entries, IEEE Trans. Inform. Theory, 56 (2010), 2980-2998. doi: 10.1109/TIT.2010.2046205.
[21] R. H. Keshavan, A. Montanari and S. Oh, Matrix completion from noisy entries, J. Mach. Learn. Res., 11 (2010), 2057-2078.
[22] R. H. Keshavan and S. Oh, A Gradient Descent Algorithm on the Grassmann Manifold for Matrix Completion, tech. rep., Dept. of Electrical Engineering, Stanford University, 2009.
[23] H. Kim and H. Park, Nonnegative matrix factorization based on alternating nonnegativity constrained least squares and active set method, SIAM J. Matrix Anal. Appl., 30 (2008), 713-730. doi: 10.1137/07069239X.
[24] J. Kim and H. Park, Sparse Nonnegative Matrix Factorization for Clustering, Technical Report GT-CSE-08-01, Georgia Institute of Technology, 2008.
[25] K. Lee and Y. Bresler, ADMiRA: Atomic decomposition for minimum rank approximation, IEEE Trans. Inform. Theory, 56 (2010), 4402-4416. doi: 10.1109/TIT.2010.2054251.
[26] M. Li, D. Sun and K.-C. Toh, A convergent 3-block semi-proximal ADMM for convex minimization problems with one strongly convex block, Asia Pac. J. Oper. Res., 32 (2015), 1550024, 19 pp. doi: 10.1142/S0217595915500244.
[27] X. Liu, Z. Wen and Y. Zhang, An efficient Gauss-Newton algorithm for symmetric low-rank product matrix approximations, SIAM J. Optim., 25 (2015), 1571-1608. doi: 10.1137/140971464.
[28] Y. Liu, D. Sun and K.-C. Toh, An implementable proximal point algorithmic framework for nuclear norm minimization, Math. Program., 133 (2012), 399-436. doi: 10.1007/s10107-010-0437-8.
[29] Z. Liu and L. Vandenberghe, Interior-point method for nuclear norm approximation with application to system identification, SIAM J. Matrix Anal. Appl., 31 (2009), 1235-1256. doi: 10.1137/090755436.
[30] S. Ma, D. Goldfarb and L. Chen, Fixed point and Bregman iterative methods for matrix rank minimization, Math. Program., 128 (2011), 321-353. doi: 10.1007/s10107-009-0306-5.
[31] R. Mazumder, T. Hastie and R. Tibshirani, Regularization Methods for Learning Incomplete Matrices, tech. rep., Stanford University, 2009.
[32] R. Meka, P. Jain and I. S. Dhillon, Guaranteed rank minimization via singular value projection, CoRR, abs/0909.5457 (2009).
[33] T. Morita and T. Kanade, A sequential factorization method for recovering shape and motion from image streams, IEEE Trans. Pattern Anal. Mach. Intell., 19 (1997), 858-867.
[34] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, Applied Optimization, 87, Kluwer Academic Publishers, Boston, MA, 2004. doi: 10.1007/978-1-4419-8853-9.
[35] B. Recht, A simpler approach to matrix completion, J. Mach. Learn. Res., 12 (2011), 3413-3430.
[36] B. Recht, M. Fazel and P. A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev., 52 (2010), 471-501. doi: 10.1137/070697835.
[37] Y. Shen, Z. Wen and Y. Zhang, Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization, Optim. Methods Softw., 29 (2014), 239-263. doi: 10.1080/10556788.2012.700713.
[38] K.-C. Toh and S. Yun, An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems, Pac. J. Optim., 6 (2010), 615-640.
[39] C. Tomasi and T. Kanade, Shape and motion from image streams under orthography: A factorization method, Int. J. Comput. Vision, 9 (1992), 137-154.
[40] Z. Wen, W. Yin and Y. Zhang, Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm, Math. Program. Comput., 4 (2012), 333-361. doi: 10.1007/s12532-012-0044-1.
[41] J. Yang and X. Yuan, An Inexact Alternating Direction Method for Trace Norm Regularized Least Squares Problem, tech. rep., Dept. of Mathematics, Nanjing University, 2010.
[42] J. Yang, Y. Zhang and W. Yin, An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise, SIAM J. Sci. Comput., 31 (2009), 2842-2865. doi: 10.1137/080732894.
[43] X. Yuan and J. Yang, Sparse and low-rank matrix decomposition via alternating direction method, Pac. J. Optim., 9 (2013), 167-180.
[44] Z. Zhu, A. M.-C. So and Y. Ye, Fast and Near-Optimal Matrix Completion via Randomized Basis Pursuit, tech. rep., Stanford University, 2009.
[45] H. Zou, T. Hastie and R. Tibshirani, Sparse principal component analysis, J. Comput. Graph. Stat., 15 (2006), 265-286. doi: 10.1198/106186006X113430.


Figure 1. Relative error vs. iteration number, $m = n = 2000$
Figure 2. Computing time vs. sampling ratio, $m = n = 2000$
Figure 3. Computing time vs. dimension
Table 4.1. Results of the speed performance test for synthetic data with $m = n = 2000$

sampling  rank          SVT                        LMaFit                    New Algorithm
ratio     (L*)   err_Ω(L)    iter   time    err_Ω(L)   iter  time    err_Ω(L)   iter  time
20%        5     8.947e-13  126.0  10.288   8.396e-13  67.7  1.746   8.880e-13  42.7  1.218
          10     1.183e-7   190.2  22.593   9.005e-13  66.8  1.892   8.525e-13  57.3  1.821
40%        5     8.494e-13   87.0  12.286   7.022e-13  33.3  1.376   6.705e-13  24.2  1.100
          10     8.702e-13  101.7  19.319   7.169e-13  39.2  1.722   7.001e-13  29.9  1.468
60%        5     8.470e-13   72.3  14.429   6.895e-13  33.2  1.821   5.106e-13  18.0  1.096
          10     8.465e-13   80.4  18.717   7.573e-13  39.1  2.237   5.467e-13  21.0  1.334
80%        5     8.765e-13   62.3  15.112   6.556e-13  23.1  1.547   3.881e-13  15.0  1.100
          10     8.180e-13   67.5  19.641   6.421e-13  25.1  1.801   4.308e-13  16.7  1.322
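Assuming $\hbox{err}_{\Omega}(L)$ in Table 4.1 denotes the relative error of the recovered matrix $L$ over the sampled index set $\Omega$ (a natural reading of the table, not stated on this page), the synthetic problem setup and the error metric can be sketched as follows; the function names are hypothetical:

```python
import numpy as np

def make_problem(n, r, sampling_ratio, seed=0):
    """Synthetic completion instance in the spirit of Table 4.1:
    a random rank-r target L* of size n x n, with roughly a fraction
    sampling_ratio of its entries observed."""
    rng = np.random.default_rng(seed)
    L_star = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
    mask = rng.random((n, n)) < sampling_ratio   # Omega, the sampled set
    return L_star, mask

def err_omega(L, L_star, mask):
    """Assumed definition of err_Omega(L): the Frobenius-norm relative
    error restricted to the sampled entries."""
    return np.linalg.norm((L - L_star)[mask]) / np.linalg.norm(L_star[mask])
```

Under this reading, the ~1e-13 values in the table mean all three solvers recover the sampled entries essentially to machine precision, so the comparison is decided by iteration counts and computing time.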
