June 2020, 13(6): 1757-1772. doi: 10.3934/dcdss.2020103

An alternating minimization method for matrix completion problems

1. School of Applied Mathematics, Nanjing University of Finance & Economics, China

2. State Key Laboratory of Scientific and Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, University of Chinese Academy of Sciences, China

* Corresponding author

Received: September 2018. Revised: November 2018. Published: September 2019.

Fund Project: The first author is supported by NSFC grants 11401295 and 11726618, by the Major Program of the National Social Science Foundation of China under Grant 12&ZD114, by the National Social Science Foundation of China under Grants 15BGL158 and 17BTQ063, by the Qinglan Project of Jiangsu Province, and by the Social Science Foundation of Jiangsu Province under Grant 18GLA002. The second author is supported by NSFC grants 11622112, 11471325, 91530204 and 11688101, by the National Center for Mathematics and Interdisciplinary Sciences, CAS, and by the Key Research Program of Frontier Sciences (QYZDJ-SSW-SYS010), CAS.

Matrix completion problems have applications in various domains such as information theory, statistics, and engineering. Solving them is not an easy task, however, because the nonconvex and nonsmooth rank operation is involved. Existing approaches fall into two classes. The first replaces the rank operation with the nuclear norm, so that any convex optimization algorithm can be applied to the reformulated problem; its limitation is that handling the nuclear norm requires a singular value decomposition (SVD) at each iteration, which significantly increases the computational cost. The second factorizes the target matrix into the product of two slim matrices. For the resulting nonconvex reformulation, fast algorithms usually lack global convergence guarantees, while algorithms with guaranteed convergence require restrictive stepsizes. In this paper, we consider the matrix factorization model for matrix completion problems and propose an alternating minimization method for solving it. Global convergence to a stationary point or a local minimizer is guaranteed under mild conditions. We compare the proposed algorithm with several state-of-the-art algorithms on a collection of test problems; the numerical results illustrate its efficiency and great potential.
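To make the factorization approach described above concrete, the sketch below implements a plain alternating least-squares scheme for the low-rank model $\min_{U,V} \frac{1}{2}\|P_{\Omega}(UV^{\top}-M)\|_F^2$, updating $U$ with $V$ fixed and then $V$ with $U$ fixed. It is only a minimal illustration under assumed choices (the function name als_matrix_completion, the small ridge term lam, and the synthetic test are invented for this sketch); it is not the algorithm proposed in the paper, whose update rules and convergence analysis differ.

```python
import numpy as np

def als_matrix_completion(M_obs, mask, r, n_iters=100, lam=1e-6, seed=0):
    """Generic alternating least squares for the factorization model
    min_{U,V} 0.5 * ||P_Omega(U V^T - M)||_F^2 (with a tiny ridge term for stability).
    `mask` is a boolean array marking the observed entries; `M_obs` holds those entries."""
    m, n = M_obs.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(n_iters):
        # Update each row of U: an r x r least-squares system built from the
        # observed entries in that row only.
        for i in range(m):
            idx = mask[i, :]
            if not idx.any():
                continue
            Vi = V[idx, :]
            A = Vi.T @ Vi + lam * np.eye(r)
            b = Vi.T @ M_obs[i, idx]
            U[i, :] = np.linalg.solve(A, b)
        # Symmetric update for each row of V.
        for j in range(n):
            idx = mask[:, j]
            if not idx.any():
                continue
            Uj = U[idx, :]
            A = Uj.T @ Uj + lam * np.eye(r)
            b = Uj.T @ M_obs[idx, j]
            V[j, :] = np.linalg.solve(A, b)
    return U, V

if __name__ == "__main__":
    # Tiny synthetic example: rank-2 matrix with 50% of the entries observed.
    rng = np.random.default_rng(1)
    m, n, r = 60, 50, 2
    L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    mask = rng.random((m, n)) < 0.5
    M_obs = np.where(mask, L_true, 0.0)
    U, V = als_matrix_completion(M_obs, mask, r)
    rel_err = np.linalg.norm(U @ V.T - L_true) / np.linalg.norm(L_true)
    print(f"relative recovery error: {rel_err:.2e}")
```

Each inner update solves only an r x r linear system assembled from the observed entries of one row or column, which is what makes factorization models attractive when the target rank r is small: no SVD of the full matrix is ever needed.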

Citation: Yuan Shen, Xin Liu. An alternating minimization method for matrix completion problems. Discrete & Continuous Dynamical Systems - S, 2020, 13 (6) : 1757-1772. doi: 10.3934/dcdss.2020103


Figure 1.  Relative error vs. iteration number, $m = n = 2000$
Figure 2.  Computing time vs. sampling ratio, $m = n = 2000$
Figure 3.  Computing time vs. dimension
Table 4.1.  Results of the speed performance test for synthetic data with $m = n = 2000$

sampling ratio | rank($L^*$) | SVT: $\hbox{err}_{\Omega}(L)$ / iter / time | LMaFit: $\hbox{err}_{\Omega}(L)$ / iter / time | New Algorithm: $\hbox{err}_{\Omega}(L)$ / iter / time
20% | 5  | 8.947e-13 / 126.0 / 10.288 | 8.396e-13 / 67.7 / 1.746 | 8.880e-13 / 42.7 / 1.218
20% | 10 | 1.183e-7 / 190.2 / 22.593  | 9.005e-13 / 66.8 / 1.892 | 8.525e-13 / 57.3 / 1.821
40% | 5  | 8.494e-13 / 87.0 / 12.286  | 7.022e-13 / 33.3 / 1.376 | 6.705e-13 / 24.2 / 1.100
40% | 10 | 8.702e-13 / 101.7 / 19.319 | 7.169e-13 / 39.2 / 1.722 | 7.001e-13 / 29.9 / 1.468
60% | 5  | 8.470e-13 / 72.3 / 14.429  | 6.895e-13 / 33.2 / 1.821 | 5.106e-13 / 18.0 / 1.096
60% | 10 | 8.465e-13 / 80.4 / 18.717  | 7.573e-13 / 39.1 / 2.237 | 5.467e-13 / 21.0 / 1.334
80% | 5  | 8.765e-13 / 62.3 / 15.112  | 6.556e-13 / 23.1 / 1.547 | 3.881e-13 / 15.0 / 1.100
80% | 10 | 8.180e-13 / 67.5 / 19.641  | 6.421e-13 / 25.1 / 1.801 | 4.308e-13 / 16.7 / 1.322
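As a reading aid for Table 4.1, $\hbox{err}_{\Omega}(L)$ is presumably the relative residual measured on the sampled entries only; a common definition (an assumption here, since the table does not spell it out) is

$$ \mathrm{err}_{\Omega}(L) = \frac{\|P_{\Omega}(L-M)\|_F}{\|P_{\Omega}(M)\|_F}, $$

where $M$ denotes the data matrix and $P_{\Omega}$ zeroes the unobserved entries.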