
Fast algorithms for robust principal component analysis with an upper bound on the rank

  • * Corresponding author

N. Sha and M. Yan are supported by NSF grants DMS-1621798 and DMS-2012439. L. Shi is supported by NNSFC grant 11631015 and by Shanghai Science and Technology Research Programs 19JC1420101 and 20JC1412700.

Abstract
  • The robust principal component analysis (RPCA) decomposes a data matrix into a low-rank part and a sparse part. There are mainly two types of algorithms for RPCA. The first type applies regularization terms on the singular values of a matrix to obtain the low-rank matrix; however, computing singular values can be very expensive for large matrices. The second type represents the low-rank matrix as the product of two small matrices. These algorithms are faster than the first type because no singular value decomposition (SVD) of the full matrix is required, but they need the rank of the low-rank matrix as input, and an accurate rank estimate is necessary to obtain a reasonable solution. In this paper, we propose algorithms that combine both types: they require only an upper bound on the rank and SVDs of small matrices. First, they are faster than the first type because the cost of SVD on small matrices is negligible. Second, they are more robust than the second type because only an upper bound on the rank, instead of the exact rank, is required. Furthermore, we apply the Gauss-Newton method to increase the speed of our algorithms. Numerical experiments demonstrate the better performance of the proposed algorithms.

    Mathematics Subject Classification: Primary: 65K10, 90C26; Secondary: 65D18.

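    For context, the two model classes contrasted in the abstract can be written out explicitly; the displayed forms below are the standard ones from the literature, not copied from this paper. The first type solves the convex RPCA model of Candès et al. [4],

    $$ \min\limits_{ {\bf{L}}, \, {\bf{S}} } \ \| {\bf{L}} \|_* + \lambda \| {\bf{S}} \|_1 \quad \text{subject to} \quad {\bf{L}} + {\bf{S}} = {\bf{D}}, $$

    where the nuclear norm $ \| \cdot \|_* $ requires SVDs of the full matrix. The second type, as in [20,24], replaces $ {\bf{L}} $ by a product of two thin factors whose inner dimension caps the rank, e.g.

    $$ \min\limits_{ {\bf{U}} \in \mathbb{R}^{m \times p}, \ {\bf{V}} \in \mathbb{R}^{n \times p}, \ {\bf{S}} } \ \tfrac{1}{2} \| {\bf{U}} {\bf{V}}^\top + {\bf{S}} - {\bf{D}} \|_F^2 + \lambda \| {\bf{S}} \|_1, $$

    which avoids full SVDs but behaves well only when $ p $ matches the true rank; the proposed algorithms instead need only an upper bound $ p $ on the rank.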
  • Figure 1.  The contour map of the relative error to $ {\bf{L}}^\star $ for different parameters. In this experiment, we set $ r = 25 $ and $ s = 20 $, and the upper bound of the rank is set to $ p = 30 $.

    Figure 2.  The relative error to the true low-rank matrix vs the rank upper bound $ p $ for Shen et al.'s algorithm [20] and Alg. 2. Alg. 2 is robust to $ p $ as long as $ p $ is not smaller than the true rank 25.

    Figure 3.  The numerical experiment on the 'cameraman' image. (A-C) show that the proposed model performs better than Shen et al.'s [20] both visually and in terms of RE and PSNR. (D) compares the objective values vs time for the general SVD, Alg. 1, and Alg. 2, where $ f^\star $ is the value obtained by Alg. 2 with more iterations; it shows the speedup from the Gauss-Newton approach and from the acceleration. With the Gauss-Newton approach, the computation time of Alg. 1 is reduced to about 1/7 of that with the standard SVD (from 65.11s to 8.43s). The accelerated Alg. 2 needs only 5.2s, and the number of iterations drops from 3194 to 360.

    Figure 4.  The numerical experiment on the 'Barbara' image. (A-C) show that the proposed model performs better than Shen et al.'s [20] both visually and in terms of RE and PSNR. (D) compares the objective values vs time for the general SVD, Alg. 1, and Alg. 2, where $ f^\star $ is the value obtained by Alg. 2 with more iterations; it shows the speedup from the Gauss-Newton approach and from the acceleration. With the Gauss-Newton approach, the computation time of Alg. 1 is reduced to less than 1/3 of that with the standard SVD (from 148.6s to 43.7s). The accelerated Alg. 2 needs only 23.3s, and the number of iterations drops from 3210 to 300.
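    Both image experiments report RE and PSNR. The paper's exact definitions did not survive extraction, so we read them with the usual conventions: the relative error in the Frobenius norm and, for 8-bit images, the peak signal-to-noise ratio

    $$ RE( {\bf{L}}, {\bf{L}}^\star ) = \frac{ \| {\bf{L}} - {\bf{L}}^\star \|_F }{ \| {\bf{L}}^\star \|_F }, \qquad \mathrm{PSNR} = 10 \log_{10} \frac{255^2}{ \frac{1}{mn} \| {\bf{L}} - {\bf{L}}^\star \|_F^2 }, $$

    for an $ m \times n $ image with ground truth $ {\bf{L}}^\star $.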

    Algorithm 1: Proposed Algorithm
    Input: $ {\bf{D}} $, $ \mu $, $ \lambda $, $ p $, $ \mathcal{A} $, stepsize $ t $, stopping criterion $ \epsilon $, maximum number of iterations $ Max\_Iter $, initialization $ {\bf{L}}^0 = \bf{0} $
    Output: $ {\bf{L}} $, $ {\bf{S}} $
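    The body of Algorithm 1 did not survive extraction; only the input/output lines above remain. As a rough, non-authoritative sketch of the kind of iteration the abstract describes, the following alternates a soft-thresholding step for the sparse part with a projected gradient step that keeps the low-rank part at rank at most $ p $. The objective, the plain truncated SVD (standing in for the paper's small-matrix SVD and Gauss-Newton subroutine), and all names are assumptions, not the authors' exact method.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_rank_bound(D, lam, p, mask=None, t=1.0, eps=1e-6, max_iter=500):
    """Hypothetical sketch of an RPCA iteration with a rank upper bound.

    Roughly minimizes 0.5 * ||A(L + S - D)||_F^2 + lam * ||S||_1 subject to
    rank(L) <= p, where A keeps the entries selected by `mask` (all entries
    if mask is None). Not the paper's Algorithm 1, just an illustration.
    """
    if mask is None:
        mask = np.ones_like(D)
    L = np.zeros_like(D)   # initialization L^0 = 0, as in the inputs above
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # Sparse part: exact minimization over S is entrywise soft-thresholding.
        S = soft_threshold((D - L) * mask, lam)
        # Low-rank part: gradient step on the smooth term ...
        G = L - t * ((L + S - D) * mask)
        # ... followed by projection onto {rank <= p} via truncated SVD.
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        L_new = (U[:, :p] * s[:p]) @ Vt[:p, :]
        if np.linalg.norm(L_new - L, "fro") <= eps * max(1.0, np.linalg.norm(L, "fro")):
            L = L_new
            break
        L = L_new
    return L, S
```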
    Algorithm 2: Accelerated algorithm with nonmonotone APG
    Input: $ {\bf{D}} $, $ \mu $, $ \lambda $, $ p $, $ \mathcal{A} $, stepsize $ t $, $ \eta \in [0,1) $, $ \delta > 0 $, stopping criterion $ \epsilon $, maximum number of iterations $ Max\_Iter $, initialization $ {\bf{L}}^0 = {\bf{L}}^1 = {\bf{Z}}^1 = \textbf{0} $, $ t^0 = 0 $, $ t^1 = q^1 = 1 $, $ c^1 = F( {\bf{L}}^1) $
    Output: $ {\bf{L}} $, $ {\bf{S}} $
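    The body of Algorithm 2 is likewise missing, but its initialization ($ \eta \in [0,1) $, $ \delta > 0 $, $ t^0 = 0 $, $ t^1 = q^1 = 1 $, $ c^1 = F( {\bf{L}}^1) $) matches the nonmonotone APG of Li and Lin [14]. In that scheme (our reading, in the notation of the inputs above), the momentum and averaging parameters are updated by

    $$ t^{k+1} = \frac{ \sqrt{ 4 (t^k)^2 + 1 } + 1 }{2}, \qquad q^{k+1} = \eta \, q^k + 1, \qquad c^{k+1} = \frac{ \eta \, q^k c^k + F( {\bf{L}}^{k+1} ) }{ q^{k+1} }, $$

    and a candidate computed from the extrapolated point $ {\bf{Z}}^{k+1} $ is accepted whenever the nonmonotone descent condition $ F( {\bf{L}}^{k+1} ) \le c^k - \delta \, \| {\bf{L}}^{k+1} - {\bf{Z}}^{k+1} \|_F^2 $ holds. Since $ c^k $ is a weighted average of past objective values rather than the last one, the objective may increase occasionally, which is what makes the acceleration applicable to the nonconvex problem here [14].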

    Table 1.  Comparison of three RPCA algorithms. We compare the relative error of their solutions to the true low-rank matrix and the number of iterations. Both Alg. 1 and Alg. 2 perform better than [20] in terms of the relative error and the number of iterations. Alg. 2 takes the fewest iterations, but its relative error can be larger, because the true low-rank matrix is not the optimal solution of the optimization problem and the trajectory of the iterates passes close to $ {\bf{L}}^\star $ before it approaches the optimal solution.

    $ r $  $ s $ | Shen et al.'s [20]                        | Alg. 1                                    | Alg. 2
                 | $ RE( {\bf{L}}, {\bf{L}}^\star) $  # iter | $ RE( {\bf{L}}, {\bf{L}}^\star) $  # iter | $ RE( {\bf{L}}, {\bf{L}}^\star) $  # iter
    25     20    | 0.0745                             1318   | 0.0075                             296    | 0.0075                             68
    50     20    | 0.0496                             1434   | 0.0101                             473    | 0.0088                             77
    25     40    | 0.0990                             2443   | 0.0635                             796    | 0.0915                             187

    Table 2.  Performance of Alg. 2 on low-rank matrix recovery with missing entries. We vary the sparsity level $ s $ of the sparse noise, the standard deviation $ \sigma $ of the Gaussian noise, and the ratio of missing entries.

    $ s $   $ \sigma $   ratio of missing entries   $ RE( {\bf{L}}, {\bf{L}}^\star) $ by Alg. 2
    20      0.05         10%                        0.0079
    20      0.05         20%                        0.0088
    20      0.05         50%                        0.0201
    5       0.01         50%                        0.0015
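    Table 2's setting (sparse corruptions plus Gaussian noise plus missing entries) can be mimicked with the sketch given after Algorithm 1, using a Boolean mask in the role of the sampling operator $ \mathcal{A} $. The sizes, the corruption fraction, and the choice $ \lambda = 0.1 $ below are illustrative only, not the paper's experimental settings.

```python
# Hypothetical data in the spirit of Table 2: rank-25 L*, sparse outliers,
# Gaussian noise with sigma = 0.05, and 20% of the entries missing.
rng = np.random.default_rng(0)
m, n, r = 200, 200, 25
L_star = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
S_star = np.zeros((m, n))
outliers = rng.random((m, n)) < 0.05               # sparse corruptions
S_star[outliers] = 10.0 * rng.standard_normal(outliers.sum())
mask = (rng.random((m, n)) > 0.20).astype(float)   # observe 80% of entries
D = (L_star + S_star + 0.05 * rng.standard_normal((m, n))) * mask

# Recover with the rank upper bound p = 30, larger than the true rank r = 25.
L_hat, S_hat = rpca_rank_bound(D, lam=0.1, p=30, mask=mask)
print(np.linalg.norm(L_hat - L_star, "fro") / np.linalg.norm(L_star, "fro"))
```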
  • [1] E. Amaldi and V. Kann, On the approximability of minimizing nonzero variables or unsatisfied relations in linear systems, Theoretical Computer Science, 209 (1998), 237-260. doi: 10.1016/S0304-3975(97)00115-1.
    [2] T. Bouwmans and E. H. Zahzah, Robust PCA via principal component pursuit: A review for a comparative evaluation in video surveillance, Computer Vision and Image Understanding, 122 (2014), 22-34.
    [3] H. Cai, J.-F. Cai and K. Wei, Accelerated alternating projections for robust principal component analysis, The Journal of Machine Learning Research, 20 (2019), 685-717.
    [4] E. J. Candès, X. Li, Y. Ma and J. Wright, Robust principal component analysis?, Journal of the ACM (JACM), 58 (2011), 1-37. doi: 10.1145/1970392.1970395.
    [5] R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization, IEEE Signal Processing Letters, 14 (2007), 707-710. doi: 10.1109/LSP.2007.898300.
    [6] J. P. Cunningham and Z. Ghahramani, Linear dimensionality reduction: Survey, insights, and generalizations, The Journal of Machine Learning Research, 16 (2015), 2859-2900.
    [7] J. F. P. Da Costa, H. Alonso and L. Roque, A weighted principal component analysis and its application to gene expression data, IEEE/ACM Transactions on Computational Biology and Bioinformatics, 8 (2009), 246-252.
    [8] F. De la Torre and M. J. Black, Robust principal component analysis for computer vision, in Proceedings Eighth IEEE International Conference on Computer Vision (ICCV 2001), vol. 1, IEEE, 2001, 362-369.
    [9] E. Elhamifar and R. Vidal, Sparse subspace clustering: Algorithm, theory, and applications, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35 (2013), 2765-2781. doi: 10.1109/TPAMI.2013.57.
    [10] J. Fan and R. Li, Variable selection via nonconcave penalized likelihood and its oracle properties, Journal of the American Statistical Association, 96 (2001), 1348-1360. doi: 10.1198/016214501753382273.
    [11] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, 2013.
    [12] X.-L. Huang, L. Shi and M. Yan, Nonconvex sorted $\ell_1$ minimization for sparse approximation, Journal of the Operations Research Society of China, 3 (2015), 207-229. doi: 10.1007/s40305-014-0069-4.
    [13] G. Li and T. K. Pong, Global convergence of splitting methods for nonconvex composite optimization, SIAM Journal on Optimization, 25 (2015), 2434-2460. doi: 10.1137/140998135.
    [14] H. Li and Z. Lin, Accelerated proximal gradient methods for nonconvex programming, in Advances in Neural Information Processing Systems, 2015, 379-387.
    [15] Z. Lin, M. Chen and Y. Ma, The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices, arXiv preprint, arXiv:1009.5055, 2010.
    [16] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu and Y. Ma, Robust recovery of subspace structures by low-rank representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35 (2012), 171-184.
    [17] X. Liu, Z. Wen and Y. Zhang, An efficient Gauss-Newton algorithm for symmetric low-rank product matrix approximations, SIAM Journal on Optimization, 25 (2015), 1571-1608. doi: 10.1137/140971464.
    [18] Y. Lou and M. Yan, Fast L1-L2 minimization via a proximal operator, Journal of Scientific Computing, 74 (2018), 767-785. doi: 10.1007/s10915-017-0463-2.
    [19] N. Sha, M. Yan and Y. Lin, Efficient seismic denoising techniques using robust principal component analysis, in SEG Technical Program Expanded Abstracts 2019, Society of Exploration Geophysicists, 2019, 2543-2547.
    [20] Y. Shen, H. Xu and X. Liu, An alternating minimization method for robust principal component analysis, Optimization Methods and Software, 34 (2019), 1251-1276. doi: 10.1080/10556788.2018.1496086.
    [21] M. Tao and X. Yuan, Recovering low-rank and sparse components of matrices from incomplete and noisy observations, SIAM Journal on Optimization, 21 (2011), 57-81. doi: 10.1137/100781894.
    [22] L. N. Trefethen and D. Bau III, Numerical Linear Algebra, vol. 50, SIAM, 1997.
    [23] F. Wen, R. Ying, P. Liu and T.-K. Truong, Nonconvex regularized robust PCA using the proximal block coordinate descent algorithm, IEEE Transactions on Signal Processing, 67 (2019), 5402-5416. doi: 10.1109/TSP.2019.2940121.
    [24] Z. Wen, W. Yin and Y. Zhang, Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm, Mathematical Programming Computation, 4 (2012), 333-361. doi: 10.1007/s12532-012-0044-1.
    [25] J. Wright, A. Ganesh, S. Rao, Y. Peng and Y. Ma, Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization, in Advances in Neural Information Processing Systems, 2009, 2080-2088.
    [26] X. Yuan and J. Yang, Sparse and low-rank matrix decomposition via alternating direction methods, preprint, 12 (2009).
    [27] C.-H. Zhang, Nearly unbiased variable selection under minimax concave penalty, The Annals of Statistics, 38 (2010), 894-942. doi: 10.1214/09-AOS729.