Inverse Problems & Imaging, April 2020, 14(2): 233-265. doi: 10.3934/ipi.2020011

Guarantees of Riemannian optimization for low rank matrix completion

1. School of Data Science, Fudan University, Shanghai, China
2. Department of Mathematics, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong SAR, China
3. Office of the President, King Abdullah University of Science and Technology, Thuwal 23955-6900, Kingdom of Saudi Arabia

* Corresponding author: Jian-Feng Cai

Received January 2019; Revised October 2019; Published February 2020

Fund Project: The first author is supported by the National Natural Science Foundation of China (NSFC) grant 11801088 and the Shanghai Sailing Program 18YF1401600. The second author is supported by the Hong Kong Research Grants Council (HKRGC) General Research Fund (GRF) 16306317.

We establish exact recovery guarantees for a class of Riemannian optimization methods for matrix completion, based on the embedded manifold of low rank matrices. Assume $ m $ entries of an $ n\times n $ rank-$ r $ matrix are sampled independently and uniformly with replacement. We first show that, with high probability, the Riemannian gradient descent and conjugate gradient descent algorithms, initialized by one-step hard thresholding, converge linearly to the measured matrix provided
$ \begin{align*} m\geq C_\kappa n^{1.5}r\log^{1.5}(n), \end{align*} $
where $ C_\kappa $ is a numerical constant depending on the condition number of the measured matrix. The sampling complexity is then further improved to
$ \begin{align*} m\geq C_\kappa nr^2\log^{2}(n) \end{align*} $
via the resampled Riemannian gradient descent initialization. The analysis of the new initialization procedure relies on an asymmetric restricted isometry property of the sampling operator and on the curvature of the low rank matrix manifold. Numerical simulations show that the algorithms recover a low rank matrix from nearly the minimum number of measurements.
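The Riemannian gradient descent method analyzed here can be sketched numerically. The snippet below is a minimal illustrative implementation on the embedded rank-$r$ manifold, including the one-step hard thresholding initialization from the abstract. It is a sketch under simplifying assumptions (a Bernoulli-style sampling mask rather than sampling with replacement, exact line search, and an SVD-based retraction), not the authors' code.

```python
import numpy as np

def rgrad_matrix_completion(M_obs, mask, r, n_iter=500, tol=1e-9):
    """Illustrative Riemannian gradient descent for matrix completion.

    M_obs : observed entries (zeros elsewhere); mask : boolean sampling set.
    The tangent-space projection and retraction follow the standard
    embedded-manifold recipe; names and defaults here are assumptions.
    """
    # One-step hard thresholding initialization: best rank-r
    # approximation of the rescaled observed matrix.
    p = mask.mean()
    U, s, Vt = np.linalg.svd(M_obs / p, full_matrices=False)
    X = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
    for _ in range(n_iter):
        G = mask * (X - M_obs)              # Euclidean gradient of the fit
        # Project G onto the tangent space at X = U S V^T:
        # P_T(G) = P_U G + G P_V - P_U G P_V.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        U, V = U[:, :r], Vt[:r, :].T
        PG = U @ (U.T @ G) + (G @ V) @ V.T - U @ (U.T @ G @ V) @ V.T
        # Exact line search along -PG for the quadratic objective.
        denom = np.sum((mask * PG) ** 2)
        if denom < 1e-30:
            break
        alpha = np.sum(G * PG) / denom
        # Retraction: truncated SVD of the update back onto the manifold.
        U2, s2, Vt2 = np.linalg.svd(X - alpha * PG, full_matrices=False)
        X = U2[:, :r] @ np.diag(s2[:r]) @ Vt2[:r, :]
        if np.linalg.norm(mask * (X - M_obs)) < tol:
            break
    return X
```

Since the update `X - alpha * PG` has rank at most $2r$, the retraction step only needs a truncated SVD of a low rank matrix, which is what keeps the per-iteration cost low in practice.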
Citation: Ke Wei, Jian-Feng Cai, Tony F. Chan, Shingyu Leung. Guarantees of Riemannian optimization for low rank matrix completion. Inverse Problems & Imaging, 2020, 14 (2) : 233-265. doi: 10.3934/ipi.2020011
References:
[1] Y. Amit, M. Fink, N. Srebro and S. Ullman, Uncovering shared structures in multiclass classification, in: Proceedings of the 24th International Conference on Machine Learning, 2007, 17–24. doi: 10.1145/1273496.1273499.
[2] A. Argyriou, T. Evgeniou and M. Pontil, Multi-task feature learning, in: Advances in Neural Information Processing Systems, 2007.
[3] P.-A. Absil, R. Mahony and R. Sepulchre, Optimization Algorithms on Matrix Manifolds, Princeton University Press, 2008. doi: 10.1515/9781400830244.
[4] A. Ahmed, B. Recht and J. Romberg, Blind deconvolution using convex programming, IEEE Transactions on Information Theory, 60 (2014), 1711–1732. doi: 10.1109/TIT.2013.2294644.
[5] R. Ahlswede and A. Winter, Strong converse for identification via quantum channels, IEEE Transactions on Information Theory, 48 (2002), 569–579. doi: 10.1109/18.985947.
[6] S. Bhojanapalli, A. Kyrillidis and S. Sanghavi, Dropping convexity for faster semi-definite optimization, JMLR: Workshop and Conference Proceedings, 49 (2016), 1–53.
[7] J. Blanchard, J. Tanner and K. Wei, CGIHT: Conjugate gradient iterative hard thresholding for compressed sensing and matrix completion, Information and Inference, 4 (2015), 289–327. doi: 10.1093/imaiai/iav011.
[8] N. Boumal and P.-A. Absil, RTRMC: A Riemannian trust-region method for low-rank matrix completion, in: Advances in Neural Information Processing Systems, 2011.
[9] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Foundations of Computational Mathematics, 9 (2009), 717–772. doi: 10.1007/s10208-009-9045-5.
[10] E. J. Candès and Y. Plan, Tight oracle bounds for low-rank matrix recovery from a minimal number of random measurements, IEEE Transactions on Information Theory, 57 (2011), 2342–2359. doi: 10.1109/TIT.2011.2111771.
[11] Y. Chen and M. J. Wainwright, Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees, arXiv: 1509.03025, 2015.
[12] E. J. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Transactions on Information Theory, 56 (2010), 2053–2080. doi: 10.1109/TIT.2010.2044061.
[13] Y. Chen, Incoherence-optimal matrix completion, IEEE Transactions on Information Theory, 61 (2015), 2909–2923. doi: 10.1109/TIT.2015.2415195.
[14] J.-F. Cai, E. J. Candès and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM Journal on Optimization, 20 (2010), 1956–1982. doi: 10.1137/080738970.
[15] E. J. Candès, Y. Eldar, T. Strohmer and V. Voroninski, Phase retrieval via matrix completion, SIAM Journal on Imaging Sciences, 6 (2013), 199–225. doi: 10.1137/110848074.
[16] E. J. Candès, X. Li and M. Soltanolkotabi, Phase retrieval via Wirtinger flow: Theory and algorithms, IEEE Transactions on Information Theory, 61 (2015), 1985–2007. doi: 10.1109/TIT.2015.2399924.
[17] Y. Chen and E. Candès, Solving random quadratic systems of equations is nearly as easy as solving linear systems, Communications on Pure and Applied Mathematics, 70 (2017), 822–883. doi: 10.1002/cpa.21638.
[18] L. Eldén, Matrix Methods in Data Mining and Pattern Recognition, SIAM, 2007. doi: 10.1137/1.9780898718867.
[19] M. Fazel, Matrix Rank Minimization with Applications, Ph.D. dissertation, Stanford University, 2002.
[20] D. Gross, Recovering low-rank matrices from few coefficients in any basis, IEEE Transactions on Information Theory, 57 (2011), 1548–1566. doi: 10.1109/TIT.2011.2104999.
[21] D. Goldfarb and S. Ma, Convergence of fixed-point continuation algorithms for matrix rank minimization, Foundations of Computational Mathematics, 11 (2011), 183–210. doi: 10.1007/s10208-011-9084-6.
[22] R. W. Gerchberg and W. O. Saxton, A practical algorithm for the determination of the phase from image and diffraction plane pictures, Optik, 35 (1972), 237–246.
[23] N. J. A. Harvey, D. R. Karger and S. Yekhanin, The complexity of matrix completion, in: Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2006, 1103–1111. doi: 10.1145/1109557.1109679.
[24] J. P. Haldar and D. Hernando, Rank-constrained solutions to linear matrix equations using PowerFactorization, IEEE Signal Processing Letters, 16 (2009), 584–587. doi: 10.1109/LSP.2009.2018223.
[25] P. Jain, R. Meka and I. Dhillon, Guaranteed rank minimization via singular value projection, in: NIPS, 2010.
[26] P. Jain, P. Netrapalli and S. Sanghavi, Low-rank matrix completion using alternating minimization, in: Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, 2013, 665–674. doi: 10.1145/2488608.2488693.
[27] P. Jain and P. Netrapalli, Fast exact matrix completion with finite samples, JMLR: Workshop and Conference Proceedings, 40 (2015), 1–28.
[28] A. Kyrillidis and V. Cevher, Matrix recipes for hard thresholding methods, Journal of Mathematical Imaging and Vision, 48 (2014), 235–265. doi: 10.1007/s10851-013-0434-7.
[29] R. H. Keshavan, Efficient Algorithms for Collaborative Filtering, Ph.D. dissertation, Stanford University, 2012.
[30] R. H. Keshavan, A. Montanari and S. Oh, Matrix completion from a few entries, IEEE Transactions on Information Theory, 56 (2010), 2980–2998. doi: 10.1109/TIT.2010.2046205.
[31] Z. Liu and L. Vandenberghe, Interior-point method for nuclear norm approximation with application to system identification, SIAM Journal on Matrix Analysis and Applications, 31 (2009), 1235–1256. doi: 10.1137/090755436.
[32] K. Lee and Y. Bresler, ADMiRA: Atomic decomposition for minimum rank approximation, IEEE Transactions on Information Theory, 56 (2010), 4402–4416. doi: 10.1109/TIT.2010.2054251.
[33] S. Ling and T. Strohmer, Blind deconvolution meets blind demixing: Algorithms and performance bounds, IEEE Transactions on Information Theory, 63 (2017), 4497–4520. doi: 10.1109/TIT.2017.2701342.
[34] B. Mishra, K. A. Apuroop and R. Sepulchre, A Riemannian geometry for low-rank matrix completion, arXiv: 1306.2672, 2013.
[35] B. Mishra, G. Meyer, S. Bonnabel and R. Sepulchre, Fixed-rank matrix factorizations and Riemannian low-rank optimization, Computational Statistics, 29 (2014), 591–621. doi: 10.1007/s00180-013-0464-z.
[36] T. Ngo and Y. Saad, Scaled gradients on Grassmann manifolds for matrix completion, in: Advances in Neural Information Processing Systems, 2012.
[37] J. Nocedal and S. J. Wright, Numerical Optimization, Springer, 2006.
[38] B. Recht, M. Fazel and P. A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Review, 52 (2010), 471–501. doi: 10.1137/070697835.
[39] B. Recht, A simpler approach to matrix completion, The Journal of Machine Learning Research, 12 (2011), 3413–3430.
[40] C. D. Sa, K. Olukotun and C. Ré, Global convergence of stochastic gradient descent for some nonconvex matrix problems, in: ICML, 2015.
[41] R. Sun and Z. Luo, Guaranteed matrix completion via non-convex factorization, in: 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS 2015), IEEE Computer Soc., Los Alamitos, CA, 2015, 270–289.
[42] J. Sun, Q. Qu and J. Wright, A geometric analysis of phase retrieval, Foundations of Computational Mathematics, 18 (2018), 1131–1198. doi: 10.1007/s10208-017-9365-9.
[43] J. Tanner and K. Wei, Normalized iterative hard thresholding for matrix completion, SIAM Journal on Scientific Computing, 35 (2013), S104–S125. doi: 10.1137/120876459.
[44] S. Tu, R. Boczar, M. Simchowitz, M. Soltanolkotabi and B. Recht, Low-rank solutions of linear matrix equations via Procrustes flow, in: ICML, 2016.
[45] J. Tanner and K. Wei, Low rank matrix completion by alternating steepest descent methods, Applied and Computational Harmonic Analysis, 40 (2016), 417–429. doi: 10.1016/j.acha.2015.08.003.
[46] L. Vandenberghe and S. Boyd, Semidefinite programming, SIAM Review, 38 (1996), 49–95. doi: 10.1137/1038003.
[47] B. Vandereycken, Low rank matrix completion by Riemannian optimization, SIAM Journal on Optimization, 23 (2013), 1214–1236. doi: 10.1137/110845768.
[48] Z. Wen, W. Yin and Y. Zhang, Solving a low-rank factorization model for matrix completion by a non-linear successive over-relaxation algorithm, Mathematical Programming Computation, 4 (2012), 333–361. doi: 10.1007/s12532-012-0044-1.
[49] C. D. White, S. Sanghavi and R. Ward, The local convexity of solving systems of quadratic equations, Results in Mathematics, 71 (2017), 569–608. doi: 10.1007/s00025-016-0564-5.
[50] K. Wei, J.-F. Cai, T. F. Chan and S. Leung, Guarantees of Riemannian optimization for low rank matrix recovery, SIAM Journal on Matrix Analysis and Applications, 37 (2016), 1198–1222. doi: 10.1137/15M1050525.
[51] K. Wei, Solving systems of phaseless equations via Kaczmarz methods: A proof of concept study, Inverse Problems, 31 (2015), 125008, 23pp. doi: 10.1088/0266-5611/31/12/125008.
[52] Q. Zheng and J. Lafferty, A convergent gradient descent algorithm for rank minimization and semidefinite programming from random linear measurements, in: NIPS, 2015.
[53] T. Zhao, Z. Wang and H. Liu, Nonconvex low rank matrix factorization via inexact first order oracle, in: NIPS, 2015.
[54] Q. Zheng and J. Lafferty, Convergence analysis for rectangular matrix completion using Burer-Monteiro factorization and gradient descent, arXiv: 1605.0705, 2016.

Figure 1.  Geometric comparison between NIHT (a) and RGrad (b). One more projection is introduced in RGrad.
Figure 2.  Empirical phase transition curves for (a) RGrad, (b) RCG, (c) RCG restarted and (d) GD when $ n = 800 $. Horizontal axis: $ p = m/n^2 $; vertical axis: $ q = (2n-r)r/m $. White denotes successful recovery in all ten random tests; black denotes failure in all tests.
Figure 3.  Relative residual (mean and standard deviation over ten random tests) as a function of the number of iterations for $ n = 8000 $, $ r = 100 $, $ 1/ q = 2 $ (a) and $ 1/ q = 3 $ (b). The values after each algorithm name are the average computational times (seconds) for convergence.
Figure 4.  Performance of (a) RGrad, (b) RCG and (c) RCG restarted under different SNRs.
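The quantities on the phase-transition axes of Figure 2 are simple to compute. The helper below (an illustrative snippet, not from the paper's code; the function names are my own) evaluates the sampling ratio $p$ and the inverse oversampling ratio $q$, and also the expected number of distinct observed entries under the sampling-with-replacement model from the abstract, since repeated draws make the number of distinct observations smaller than $m$.

```python
def sampling_ratios(n, r, m):
    """Axes of the Figure 2 phase-transition plots:
    p = fraction of entries sampled, q = degrees of freedom of a
    rank-r n x n matrix, (2n - r)r, divided by the sample count m."""
    p = m / n**2
    q = (2 * n - r) * r / m
    return p, q

def expected_distinct(n, m):
    """Expected number of distinct entries observed after m draws made
    independently and uniformly with replacement from the n^2 entries."""
    return n**2 * (1 - (1 - 1 / n**2) ** m)
```

For example, at the Figure 2 resolution $n = 800$, observing half of all entries ($m = 320000$) gives $p = 0.5$ while the expected number of distinct entries is noticeably below $m$, which is why guarantees under this model are stated in terms of draws rather than distinct samples.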
Table 1.  Comparison of RGrad/RCG and GD on sampling complexity (SC), required assumptions (RA), per-iteration computational cost (PICC), and local convergence rate (LCR). Remarks: 1) (I) denotes RGrad/RCG with the one-step hard thresholding initialization and (II) denotes RGrad/RCG with the resampled Riemannian gradient descent initialization; 2) $v_g$ and $v_{cg}$ are absolute numerical constants less than one.

Algorithm       | SC                                  | RA            | PICC                                   | LCR
RGrad, RCG (I)  | $O(\kappa n^{3/2}r\log^{3/2}(n))$   | $\textbf{A0}$ | $O(|\Omega|r+|\Omega|+nr^2+nr+r^3)$    | $v_{g},~v_{cg}$
RGrad, RCG (II) | $O(\kappa^6 nr^2\log^2(n))$         | $\textbf{A0}$ | $O(|\Omega|r+|\Omega|+nr^2+nr+r^3)$    | $v_{g},~v_{cg}$
GD [54]         | $O(\kappa^2 nr^2\log(n))$           | $\textbf{A0}$ | $O(|\Omega|r+|\Omega|+nr^2+nr)$        | $(1-\frac{C}{\mu^2\kappa^2r^2})^{1/2}$