Guarantees of Riemannian optimization for low rank matrix completion

  • Corresponding author: Jian-Feng Cai

    The first author is supported by National Science Foundation of China (NSFC) 11801088 and Shanghai Sailing Program 18YF1401600. The second author is supported by Hong Kong Research Grant Council (HKRGC) General Research Fund (GRF) 16306317.
Abstract
  • We establish exact recovery guarantees for a class of Riemannian optimization methods, based on the embedded manifold of low rank matrices, for matrix completion. Assume $ m $ entries of an $ n\times n $ rank $ r $ matrix are sampled independently and uniformly with replacement. We first show that, with high probability, the Riemannian gradient descent and conjugate gradient descent algorithms initialized by one-step hard thresholding converge linearly to the measured matrix provided

    $ \begin{align*} m\geq C_\kappa n^{1.5}r\log^{1.5}(n), \end{align*} $

    where $ C_\kappa $ is a numerical constant depending on the condition number of the measured matrix. Then the sampling complexity is further improved to

    $ \begin{align*} m\geq C_\kappa nr^2\log^{2}(n) \end{align*} $

    via the resampled Riemannian gradient descent initialization. The analysis of the new initialization procedure relies on an asymmetric restricted isometry property of the sampling operator and the curvature of the low rank matrix manifold. Numerical simulations show that the algorithms can recover a low rank matrix from nearly the minimum number of measurements.

    Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.

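  • To make the iteration described in the abstract concrete, below is a minimal dense-matrix sketch of a Riemannian gradient descent (RGrad) loop with the one-step hard thresholding initialization. It is illustrative only: the names (`rgrad_complete`, `M_obs`, `mask`) are ours, it forms full matrices and full SVDs for readability, and it therefore costs far more per iteration than the $O(|\Omega|r+|\Omega|+nr^2+nr+r^3)$ reported in Table 1, which requires the compact factored representation used in the paper.

```python
import numpy as np

def rgrad_complete(M_obs, mask, r, n_iters=500, tol=1e-8):
    """Illustrative dense sketch of RGrad for matrix completion.

    M_obs : observed entries of the hidden matrix, zeros elsewhere.
    mask  : boolean array marking the sampled set Omega.
    r     : target rank.
    """
    # One-step hard thresholding initialization: best rank-r
    # approximation of the rescaled sampled matrix p^{-1} P_Omega(M).
    p = mask.mean()  # empirical sampling rate m / n^2
    U, s, Vt = np.linalg.svd(M_obs / p, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    X = (U * s) @ Vt

    for _ in range(n_iters):
        # Negative Euclidean gradient of 0.5 * ||P_Omega(X - M)||_F^2.
        G = np.where(mask, M_obs - X, 0.0)
        # Projection onto the tangent space of the rank-r manifold at
        # X = U diag(s) Vt:  P_T(G) = UU'G + GVV' - UU'GVV'.
        PG = U @ (U.T @ G) + (G @ Vt.T) @ Vt - U @ ((U.T @ G) @ Vt.T) @ Vt
        # Exact line-search stepsize for the quadratic objective
        # along the projected gradient direction.
        denom = np.sum(PG[mask] ** 2)
        if denom == 0.0:
            break
        alpha = np.sum(PG ** 2) / denom
        # Retraction: truncated SVD pulls the update back onto the manifold.
        U, s, Vt = np.linalg.svd(X + alpha * PG, full_matrices=False)
        U, s, Vt = U[:, :r], s[:r], Vt[:r]
        X = (U * s) @ Vt
        # Stop once the relative residual on Omega is small.
        if np.linalg.norm(G) <= tol * np.linalg.norm(M_obs):
            break
    return X
```

    A quick synthetic test: draw Gaussian factors `A, B` of shape `(n, r)`, set `M = A @ B.T`, sample `mask = np.random.rand(n, n) < p`, and call the routine on `M_obs = np.where(mask, M, 0.0)`; for sampling rates above the phase transition of Figure 2, the relative residual decays linearly, as in Figure 3.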
  • Figure 1.  Geometric comparison between NIHT (a) and RGrad (b). One more projection is introduced in RGrad

    Figure 2.  Empirical phase transition curves for (a) RGrad, (b) RCG, (c) RCG restarted and (d) GD when $ n = 800 $. Horizontal axis: $ p = m/n^2 $; vertical axis: $ q = (2n-r)r/m $. White denotes successful recovery in all ten random tests, and black denotes failure in all tests

    Figure 3.  Relative residual (mean and standard deviation over ten random tests) as a function of the number of iterations for $ n = 8000 $, $ r = 100 $, $ 1/ q = 2 $ (a) and $ 1/ q = 3 $ (b). The values after each algorithm are the average computational time (seconds) for convergence

    Figure 4.  Performance of (a) RGrad, (b) RCG and (c) RCG restarted under different SNR

    Table 1.  Comparison of RGrad/RCG and GD on sampling complexity (SC), required assumptions (RA), per-iteration computational cost (PICC), and local convergence rate (LCR). Remarks: 1) (I) denotes RGrad/RCG with the one-step hard thresholding initialization and (II) denotes RGrad/RCG with the resampled Riemannian gradient descent initialization; 2) $v_g$ and $v_{cg}$ are absolute numerical constants, both less than one

    Algorithm | SC | RA | PICC | LCR
    RGrad, RCG (I) | $O(\kappa n^{3/2}r\log^{3/2}(n))$ | $\textbf{A0}$ | $O(|\Omega|r+|\Omega|+nr^2+nr+r^3)$ | $v_{g},~v_{cg}$
    RGrad, RCG (II) | $O(\kappa^6nr^2\log^2(n))$ | $\textbf{A0}$ | $O(|\Omega|r+|\Omega|+nr^2+nr+r^3)$ | $v_{g},~v_{cg}$
    GD [54] | $O(\kappa^2nr^2\log(n))$ | $\textbf{A0}$ | $O(|\Omega|r+|\Omega|+nr^2+nr)$ | $(1-\frac{C}{\mu^2\kappa^2r^2})^{1/2}$
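  • As a worked instance tying together the captions of Figures 2 and 3 above: the vertical axis of Figure 2 is $ q = (2n-r)r/m $, the ratio of the $ (2n-r)r $ degrees of freedom of an $ n\times n $ rank $ r $ matrix to the number of samples, so $ 1/q $ is the oversampling factor. The setting $ n = 8000 $, $ r = 100 $, $ 1/q = 2 $ of Figure 3(a) then corresponds to

    $ \begin{align*} m = \frac{(2n-r)r}{q} = 2\,(2\cdot 8000-100)\cdot 100 = 3.18\times 10^6, \end{align*} $

    that is, a sampling rate of $ p = m/n^2 \approx 0.0497 $, roughly five percent of the entries.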
  • [1] Y. Amit, M. Fink, N. Srebro and S. Ullman, Uncovering shared structures in multiclass classification, in: Proceedings of the 24th International Conference on Machine Learning, 2007, 17–24. doi: 10.1145/1273496.1273499.
    [2] A. Argyriou, T. Evgeniou and M. Pontil, Multi-task feature learning, Advances in Neural Information Processing Systems, 2007.
    [3] P.-A. Absil, R. Mahony and R. Sepulchre, Optimization Algorithms on Matrix Manifolds, Princeton University Press, 2008. doi: 10.1515/9781400830244.
    [4] A. Ahmed, B. Recht and J. Romberg, Blind deconvolution using convex programming, IEEE Transactions on Information Theory, 60 (2014), 1711-1732. doi: 10.1109/TIT.2013.2294644.
    [5] R. Ahlswede and A. Winter, Strong converse for identification via quantum channels, IEEE Transactions on Information Theory, 48 (2002), 569-579.  doi: 10.1109/18.985947.
    [6] S. Bhojanapalli, A. Kyrillidis and S. Sanghavi, Dropping convexity for faster semi-definite optimization, JMLR: Workshop and Conference Proceedings, 49 (2016), 1-53.
    [7] J. Blanchard, J. Tanner and K. Wei, CGIHT: Conjugate gradient iterative hard thresholding for compressed sensing and matrix completion, Information and Inference, 4 (2015), 289-327. doi: 10.1093/imaiai/iav011.
    [8] N. Boumal and P.-A. Absil, RTRMC: A Riemannian trust-region method for low-rank matrix completion, in: Advances in Neural Information Processing Systems, 2011.
    [9] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Foundations of Computational Mathematics, 9 (2009), 717-772.  doi: 10.1007/s10208-009-9045-5.
    [10] E. J. Candès and Y. Plan, Tight oracle bounds for low-rank matrix recovery from a minimal number of random measurements, IEEE Transactions on Information Theory, 57 (2011), 2342-2359.  doi: 10.1109/TIT.2011.2111771.
    [11] Y. Chen and M. J. Wainwright, Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees, arXiv: 1509.03025, 2015.
    [12] E. J. Candès and T. Tao, The power of convex relaxation: Near-optimal matrix completion, IEEE Transactions on Information Theory, 56 (2010), 2053-2080. doi: 10.1109/TIT.2010.2044061.
    [13] Y. Chen, Incoherence-optimal matrix completion, IEEE Transactions on Information Theory, 61 (2015), 2909-2923.  doi: 10.1109/TIT.2015.2415195.
    [14] J.-F. Cai, E. J. Candès and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM Journal on Optimization, 20 (2010), 1956-1982. doi: 10.1137/080738970.
    [15] E. J. Candès, Y. Eldar, T. Strohmer and V. Voroninski, Phase retrieval via matrix completion, SIAM Journal on Imaging Sciences, 6 (2013), 199-225. doi: 10.1137/110848074.
    [16] E. J. Candès, X. Li and M. Soltanolkotabi, Phase retrieval via Wirtinger flow: Theory and algorithms, IEEE Transactions on Information Theory, 61 (2015), 1985-2007. doi: 10.1109/TIT.2015.2399924.
    [17] Y. Chen and E. Candès, Solving random quadratic systems of equations is nearly as easy as solving linear systems, Communications on Pure and Applied Mathematics, 70 (2017), 822-883.  doi: 10.1002/cpa.21638.
    [18] L. Eldén, Matrix Methods in Data Mining and Pattern Recognition, SIAM, 2007. doi: 10.1137/1.9780898718867.
    [19] M. Fazel, Matrix rank minimization with applications, Ph.D. dissertation, Stanford University, 2002.
    [20] D. Gross, Recovering low-rank matrices from few coefficients in any basis, IEEE Transactions on Information Theory, 57 (2011), 1548-1566.  doi: 10.1109/TIT.2011.2104999.
    [21] D. Goldfarb and S. Ma, Convergence of fixed-point continuation algorithms for matrix rank minimization, Foundations of Computational Mathematics, 11 (2011), 183-210.  doi: 10.1007/s10208-011-9084-6.
    [22] R. W. Gerchberg and W. O. Saxton, A practical algorithm for the determination of the phase from image and diffraction plane pictures, Optik, 35 (1972), 237-246. 
    [23] N. J. A. Harvey, D. R. Karger and S. Yekhanin, The complexity of matrix completion, in: Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2006, 1103–1111. doi: 10.1145/1109557.1109679.
    [24] J. P. Haldar and D. Hernando, Rank-constrained solutions to linear matrix equations using PowerFactorization, IEEE Signal Processing Letters, 16 (2009), 584-587.  doi: 10.1109/LSP.2009.2018223.
    [25] P. Jain, R. Meka and I. Dhillon, Guaranteed rank minimization via singular value projection, in: NIPS, 2010.
    [26] P. Jain, P. Netrapalli and S. Sanghavi, Low-rank matrix completion using alternating minimization, in: Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, 2013, 665–674. doi: 10.1145/2488608.2488693.
    [27] P. Jain and P. Netrapalli, Fast exact matrix completion with finite samples, JMLR: Workshop and Conference Proceedings, 40 (2015), 1-28. 
    [28] A. Kyrillidis and V. Cevher, Matrix recipes for hard thresholding methods, Journal of Mathematical Imaging and Vision, 48 (2014), 235-265.  doi: 10.1007/s10851-013-0434-7.
    [29] R. H. Keshavan, Efficient algorithms for collaborative filtering, Ph.D. dissertation, Stanford University, 2012.
    [30] R. H. Keshavan, A. Montanari and S. Oh, Matrix completion from a few entries, IEEE Transactions on Information Theory, 56 (2010), 2980-2998. doi: 10.1109/TIT.2010.2046205.
    [31] Z. Liu and L. Vandenberghe, Interior-point method for nuclear norm approximation with application to system identification, SIAM Journal on Matrix Analysis and Applications, 31 (2009), 1235-1256.  doi: 10.1137/090755436.
    [32] K. Lee and Y. Bresler, ADMiRA: Atomic decomposition for minimum rank approximation, IEEE Transactions on Information Theory, 56 (2010), 4402-4416. doi: 10.1109/TIT.2010.2054251.
    [33] S. Ling and T. Strohmer, Blind deconvolution meets blind demixing: Algorithms and performance bounds, IEEE Transactions on Information Theory, 63 (2017), 4497-4520.  doi: 10.1109/TIT.2017.2701342.
    [34] B. Mishra, K. A. Apuroop and R. Sepulchre, A Riemannian geometry for low-rank matrix completion, arXiv: 1306.2672, 2013.
    [35] B. Mishra, G. Meyer, S. Bonnabel and R. Sepulchre, Fixed-rank matrix factorizations and Riemannian low-rank optimization, Computational Statistics, 29 (2014), 591-621. doi: 10.1007/s00180-013-0464-z.
    [36] T. Ngo and Y. Saad, Scaled gradients on Grassmann manifolds for matrix completion, in: Advances in Neural Information Processing Systems, 2012.
    [37] J. Nocedal and S. J. Wright, Numerical Optimization, Springer, 2006.
    [38] B. Recht, M. Fazel and P. A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Review, 52 (2010), 471-501. doi: 10.1137/070697835.
    [39] B. Recht, A simpler approach to matrix completion, The Journal of Machine Learning Research, 12 (2011), 3413-3430. 
    [40] C. D. Sa, K. Olukotun and C. Ré, Global convergence of stochastic gradient descent for some nonconvex matrix problems, in: ICML, 2015.
    [41] R. Sun and Z. Luo, Guaranteed matrix completion via non-convex factorization, in: 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS 2015), IEEE Computer Society, Los Alamitos, CA, 2015, 270–289.
    [42] J. Sun, Q. Qu and J. Wright, A geometric analysis of phase retrieval, Foundations of Computational Mathematics, 18 (2018), 1131-1198. doi: 10.1007/s10208-017-9365-9.
    [43] J. Tanner and K. Wei, Normalized iterative hard thresholding for matrix completion, SIAM Journal on Scientific Computing, 35 (2013), S104–S125. doi: 10.1137/120876459.
    [44] S. Tu, R. Boczar, M. Simchowitz, M. Soltanolkotabi and B. Recht, Low-rank solutions of linear matrix equations via Procrustes flow, in: ICML, 2016.
    [45] J. Tanner and K. Wei, Low rank matrix completion by alternating steepest descent methods, Applied and Computational Harmonic Analysis, 40 (2016), 417-429.  doi: 10.1016/j.acha.2015.08.003.
    [46] L. Vandenberghe and S. Boyd, Semidefinite programming, SIAM Review, 38 (1996), 49-95.  doi: 10.1137/1038003.
    [47] B. Vandereycken, Low rank matrix completion by Riemannian optimization, SIAM Journal on Optimization, 23 (2013), 1214-1236.  doi: 10.1137/110845768.
    [48] Z. Wen, W. Yin and Y. Zhang, Solving a low-rank factorization model for matrix completion by a non-linear successive over-relaxation algorithm, Mathematical Programming Computation, 4 (2012), 333-361. doi: 10.1007/s12532-012-0044-1.
    [49] C. D. White, S. Sanghavi and R. Ward, The local convexity of solving systems of quadratic equations, Results in Mathematics, 71 (2017), 569-608. doi: 10.1007/s00025-016-0564-5.
    [50] K. Wei, J.-F. Cai, T. F. Chan and S. Leung, Guarantees of Riemannian optimization for low rank matrix recovery, SIAM Journal on Matrix Analysis and Applications, 37 (2016), 1198-1222. doi: 10.1137/15M1050525.
    [51] K. Wei, Solving systems of phaseless equations via Kaczmarz methods: A proof of concept study, Inverse Problems, 31 (2015), 125008, 23pp. doi: 10.1088/0266-5611/31/12/125008.
    [52] Q. Zheng and J. Lafferty, A convergent gradient descent algorithm for rank minimization and semidefinite programming from random linear measurements, in: NIPS, 2015.
    [53] T. Zhao, Z. Wang and H. Liu, Nonconvex low rank matrix factorization via inexact first order oracle, in: NIPS, 2015.
    [54] Q. Zheng and J. Lafferty, Convergence analysis for rectangular matrix completion using Burer-Monteiro factorization and gradient descent, arXiv: 1605.07051, 2016.