
A penalty decomposition method for nuclear norm minimization with l1 norm fidelity term

The work of Y. Shang is supported by the National Natural Science Foundation of China (No. 11471102). The work of Z.-F. Jin is supported by the National Natural Science Foundation of China (No. 61370220) and the Plan for Scientific Innovation Talent of Henan Province (No. 174200510011). The work of Duo Wang is supported by the National Natural Science Foundation of China (No. 11701150).

  • It is well known that nuclear norm minimization problems arise widely in fields such as machine learning, system identification, and image processing. They have attracted considerable attention, and substantial progress has been made. Many researchers have contributed to the nuclear norm minimization problem with an $ l_{2} $ norm fidelity term, which, however, is only suited to problems corrupted by Gaussian noise. In this paper, we propose an efficient penalty decomposition (PD) method to solve the nuclear norm minimization problem with an $ l_{1} $ norm fidelity term. One subproblem admits a closed-form solution due to its special structure, and the other also admits a closed-form solution once its quadratic term is linearized. Convergence results for the proposed method are established as well. Finally, the effectiveness and efficiency of the proposed method are verified by numerical experiments.

    Mathematics Subject Classification: Primary: 90C06, 90C25, 90C30.
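
    The two closed-form subproblems mentioned in the abstract correspond to the standard proximal maps of the $ l_{1} $ norm (entrywise soft-thresholding) and of the nuclear norm (singular value thresholding), the latter applied after linearizing the quadratic coupling term. The Python sketch below illustrates one alternating scheme of this kind for a matrix completion model with an $ l_{1} $ fidelity term. It is a minimal illustration under an assumed splitting and a fixed penalty parameter, not the authors' exact algorithm for problem (7); the names svt, soft and pd_l1_sketch are introduced here only for illustration.

    import numpy as np

    def svt(Z, tau):
        # Singular value thresholding: proximal map of tau*||.||_* at Z.
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def soft(Z, tau):
        # Entrywise soft-thresholding: proximal map of tau*||.||_1 at Z.
        return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

    def pd_l1_sketch(B, mask, lam=1.0, rho=1.0, iters=300):
        # Alternating minimization of the penalized splitting
        #   min_{X,Y} ||X||_* + lam*||Y||_1 + (rho/2)*||mask.*X - B - Y||_F^2,
        # where B holds the (possibly corrupted) observed entries and
        # mask is the 0/1 observation pattern (an assumed model).
        X = np.zeros_like(B)
        Y = np.zeros_like(B)
        for _ in range(iters):
            # Y-subproblem: closed form via soft-thresholding of the residual.
            Y = soft(mask * X - B, lam / rho)
            # X-subproblem: linearize the quadratic coupling term at the
            # current X (its gradient is Lipschitz with constant rho) and
            # take one singular value thresholding step.
            G = rho * mask * (mask * X - B - Y)
            X = svt(X - G / rho, 1.0 / rho)
        return X

    In a full penalty decomposition method the penalty parameter $ \rho $ would additionally be increased across outer iterations so that the splitting constraint is enforced in the limit.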

  • Figure 1.  Numerical comparisons of models (22) and (23) under three noisy cases (m = n = 500, r = 10, sr = 0.5)

    Figure 2.  Numerical comparisons of models (22) and (23) in the noiseless case (m = n = 500). First row: $ r = 10 $, with $ sr = 0.3, 0.5, 0.7 $ respectively. Second row: $ sr = 0.5 $, with $ r = 5, 15, 30 $ respectively

    Figure 3.  (a): Original gray image ($ 512\times 512 $); (b): corresponding low-rank image with $ r = 30 $; (c): randomly masked image from rank $ 30 $ with $ sr = 70\% $ and $ q = 0.01 $; (e): randomly masked image from rank $ 30 $ with $ sr = 70\% $, $ \sigma = 0.001 $ and $ q = 0.01 $; (d), (f): recovered images by $ PD-l_{1} $, whose "RelErr" values are $ 4.760e-3 $ and $ 4.827e-3 $ respectively

    Figure 4.  Numerical results of $ PD-l_{1} $ for problem (7) with impulse noise: m = n = 500, r = 10, sr = 0.5, 0.7, 0.9, q = 0.01
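
    The test instances described in the figure captions and tables appear to follow the standard random low-rank recovery setup: a rank-$ r $ matrix observed on a random fraction $ sr $ of its entries, with a fraction $ q $ of the samples replaced by impulse noise and, optionally, additive Gaussian noise of level $ \sigma $. The sketch below generates such an instance; the exact noise model and the helper name make_test_problem are assumptions for illustration, not taken from the paper.

    import numpy as np

    def make_test_problem(m=500, n=500, r=10, sr=0.5, q=0.01, sigma=0.0, seed=0):
        # Random rank-r ground truth M = M_L @ M_R^T.
        rng = np.random.default_rng(seed)
        M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
        # Observe a fraction sr of the entries, chosen uniformly at random.
        mask = (rng.random((m, n)) < sr).astype(float)
        # Observed data with optional Gaussian noise of level sigma.
        B = mask * (M + sigma * rng.standard_normal((m, n)))
        # Impulse noise: replace a fraction q of the observed entries by outliers.
        hit = (rng.random((m, n)) < q) & (mask > 0)
        B[hit] = np.abs(M).max() * rng.choice([-1.0, 1.0], size=int(hit.sum()))
        return M, mask, B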

    Table 1.  Numerical results of $PD-l_{1}$ for (7), m = n, sr = 0.5, q = 0.01

    $PD-l_{1}$
    $(m, r)$       p/dr     Iter    Time     RelErr
    $(100, 5)$     5.13     392     2.84     9.905e-004
    $(200, 5)$     10.13    187     4.03     9.881e-004
    $(300, 5)$     15.13    130     6.51     9.916e-004
    $(400, 5)$     20.13    99      9.26     9.947e-004
    $(500, 5)$     25.13    91      14.44    9.709e-004
    $(600, 5)$     30.13    78      19.31    9.853e-004
    $(700, 5)$     35.13    85      36.56    9.984e-004
    $(800, 5)$     40.13    88      54.63    9.889e-004
    $(900, 5)$     45.13    60      51.40    9.793e-004
    $(1000, 5)$    50.13    62      72.48    9.737e-004

    Table 2.  Numerical results of $PD-l_{1}$ for (7), m = n, sr = 0.5

    $PD-l_{1}$
    $(m, r)$       p/dr     q        $\sigma$    Time     RelErr
    $(500, 10)$    12.63    0.01     0.00        21.43    9.905e-004
    $(500, 10)$    12.63    0.01     0.001       21.65    9.822e-004
    $(500, 10)$    12.63    0.01     0.0001      21.77    9.781e-004
    $(500, 10)$    12.63    0.001    0.0001      20.61    8.054e-004
    $(500, 10)$    12.63    0.005    0.0001      20.53    8.659e-004
    $(500, 10)$    12.63    0.00     0.0001      2.15     7.722e-005
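
    A note on the quantities reported in the tables (standard conventions, not stated explicitly in this excerpt): p/dr is the ratio of the number of sampled entries $ p = sr\cdot mn $ to the degrees of freedom $ dr = r(m+n-r) $ of a rank-$ r $ matrix, which reproduces the listed values (e.g. $ 0.5\cdot 500^{2}/(10\cdot 990)\approx 12.63 $); Iter is the number of iterations; Time is presumably CPU time in seconds; and RelErr is presumably the relative recovery error $ \|X_{rec}-M\|_F/\|M\|_F $.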
