December 2019, 8(4): 695-708. doi: 10.3934/eect.2019034

A penalty decomposition method for nuclear norm minimization with l1 norm fidelity term

1. School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, China

2. Henan Joint International Research Laboratory of Cyberspace Security Applications, Luoyang 471023, China

* Corresponding author: jinzhengfen@whu.edu.cn

Received: April 2018. Revised: February 2019. Published: June 2019.

Fund Project: The work of Y. Shang was supported by the National Natural Science Foundation of China (No. 11471102). The work of Z.-F. Jin was supported by the National Natural Science Foundation of China (No. 61370220) and the Plan for Scientific Innovation Talent of Henan Province (No. 174200510011). The work of Duo Wang was supported by the National Natural Science Foundation of China (No. 11701150).

It is well known that nuclear norm minimization problems are widely used in numerous fields such as machine learning, system identification, and image processing; they have attracted considerable attention and seen substantial progress. Many researchers have contributed to the nuclear norm minimization problem with an $ l_{2} $ norm fidelity term, which, however, is only suited to problems corrupted by Gaussian noise. In this paper, we propose an efficient penalty decomposition (PD) method to solve the nuclear norm minimization problem with an $ l_{1} $ norm fidelity term. One subproblem admits a closed-form solution due to its special structure, and the other also admits a closed-form solution once its quadratic term is linearized. Convergence results for the proposed method are given as well. Finally, the effectiveness and efficiency of the proposed method are verified by numerical experiments.
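
This page does not reproduce model (7) or the algorithm itself, so the following is only a minimal sketch of the kind of penalty decomposition iteration the abstract describes, assuming the recovery model $ \min_X \|X\|_* + \lambda\|P_{\Omega}(X)-b\|_{1} $ with $ P_{\Omega} $ a sampling operator; the function names (pd_l1, svt, soft), the penalty update, and all parameter defaults are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(v, tau):
    """Soft thresholding: proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def pd_l1(b, mask, shape, lam=1.0, rho=1.0, rho_max=1e4, inner=20, tol=1e-4):
    """Sketch of a penalty decomposition iteration for
           min_X ||X||_* + lam * ||P_Omega(X) - b||_1,
    where P_Omega keeps the entries selected by the boolean `mask` and
    a splitting variable y ~ P_Omega(X) - b is penalized with weight rho."""
    X = np.zeros(shape)
    y = np.zeros_like(b)
    while rho < rho_max:
        for _ in range(inner):
            # y-subproblem: closed form via soft thresholding
            y = soft(X[mask] - b, lam / rho)
            # X-subproblem: linearize the quadratic coupling term,
            # then the proximal step is solved in closed form by SVT
            grad = np.zeros(shape)
            grad[mask] = rho * (X[mask] - b - y)
            X_new = svt(X - grad / rho, 1.0 / rho)  # step 1/rho since ||P_Omega|| = 1
            if np.linalg.norm(X_new - X, 'fro') <= tol * max(1.0, np.linalg.norm(X, 'fro')):
                X = X_new
                break
            X = X_new
        rho *= 2.0  # outer penalty update
    return X
```

The two inner steps mirror the abstract: the y-step is the closed-form soft-thresholding solution of the $ l_{1} $ subproblem, and the X-step reduces to a closed-form singular value thresholding once the quadratic coupling term is linearized at the current iterate.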

Citation: Duo Wang, Zheng-Fen Jin, Youlin Shang. A penalty decomposition method for nuclear norm minimization with l1 norm fidelity term. Evolution Equations & Control Theory, 2019, 8(4): 695-708. doi: 10.3934/eect.2019034

Figure 1.  Numerical comparisons of models (22) and (23) in three noisy cases (m = n = 500, r = 10, sr = 0.5)
Figure 2.  Numerical comparisons of models (22) and (23) in the noiseless case (m = n = 500). First row: $ r = 10 $, $ sr $ is $ 0.3, 0.5, 0.7 $ respectively. Second row: $ sr = 0.5 $, $ r $ is $ 5, 15, 30 $ respectively
Figure 3.  (a): Original gray image ($ 512\times 512 $); (b): Corresponding low-rank image with $ r = 30 $; (c): Randomly masked version of the rank-$ 30 $ image with $ sr = 70\% $ and $ q = 0.01 $; (e): Randomly masked version of the rank-$ 30 $ image with $ sr = 70\% $, $ \sigma = 0.001 $ and $ q = 0.01 $; (d), (f): Recovered images by $ PD-l_{1} $, whose "RelErr" values are $ 4.760e-3 $ and $ 4.827e-3 $, respectively
Figure 4.  Numerical results of $ PD-l_{1} $ for problem (7) with impulsive noise: m = n = 500, r = 10, sr = 0.5, 0.7, 0.9, q = 0.01
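
The captions and the tables below describe the synthetic instances only through their parameters (m, n, r, sr, q, $ \sigma $); a conventional way to generate such data, given here purely as an assumed sketch (the function name and noise model are not taken from the paper), is:

```python
import numpy as np

def make_instance(m, n, r, sr, q=0.0, sigma=0.0, seed=None):
    """Build one synthetic test problem of the kind described above:
    a random rank-r matrix M, a mask observing a fraction sr of its
    entries, and observations b corrupted by Gaussian noise of level
    sigma plus impulse noise on a fraction q of the samples."""
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank <= r
    mask = rng.random((m, n)) < sr                                 # sampling ratio sr
    b = M[mask].copy()
    if sigma > 0:                                                  # Gaussian noise
        b += sigma * rng.standard_normal(b.shape)
    if q > 0:                                                      # impulse noise
        hit = rng.random(b.shape) < q
        b[hit] = rng.uniform(-np.abs(M).max(), np.abs(M).max(), hit.sum())
    return M, mask, b

# Example: one of the Figure 4 settings with sr = 0.5
M, mask, b = make_instance(500, 500, 10, 0.5, q=0.01)
```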
Table 1.  Numerical results of $PD-l_{1}$ for (7), m = n, sr = 0.5, q = 0.01
$PD-l_{1}$
$(m, r)$ p/dr Iter Time RelErr
$(100, 5)$ 5.13 392 2.84 9.905e-004
$(200, 5)$ 10.13 187 4.03 9.881e-004
$(300, 5)$ 15.13 130 6.51 9.916e-004
$(400, 5)$ 20.13 99 9.26 9.947e-004
$(500, 5)$ 25.13 91 14.44 9.709e-004
$(600, 5)$ 30.13 78 19.31 9.853e-004
$(700, 5)$ 35.13 85 36.56 9.984e-004
$(800, 5)$ 40.13 88 54.63 9.889e-004
$(900, 5)$ 45.13 60 51.40 9.793e-004
$(1000, 5)$ 50.13 62 72.48 9.737e-004
Table 2.  Numerical results of $PD-l_{1}$ for (7), m = n, sr = 0.5
$PD-l_{1}$
$(m, r)$ p/dr q $\sigma$ Time RelErr
$(500, 10)$ 12.63 0.01 0.00 21.43 9.905e-004
$(500, 10)$ 12.63 0.01 0.001 21.65 9.822e-004
$(500, 10)$ 12.63 0.01 0.0001 21.77 9.781e-004
$(500, 10)$ 12.63 0.001 0.0001 20.61 8.054e-004
$(500, 10)$ 12.63 0.005 0.0001 20.53 8.659e-004
$(500, 10)$ 12.63 0.00 0.0001 2.15 7.722e-005
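
The tables report "RelErr" without defining it on this page; in this literature it usually denotes the relative recovery error, assumed here to be
$$ \mathrm{RelErr} = \frac{\|X_{\mathrm{rec}} - M\|_{F}}{\|M\|_{F}}, $$
where $ X_{\mathrm{rec}} $ is the matrix recovered by $ PD-l_{1} $ and $ M $ is the true low-rank matrix.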