# American Institute of Mathematical Sciences

February 2021, 15(1): 159-183. doi: 10.3934/ipi.2020076

## Fast non-convex low-rank matrix decomposition for separation of potential field data using minimal memory

1. Institute of Geophysics & Geomatics, China University of Geosciences (Wuhan), Wuhan, Hubei 430074, China
2. School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ 85287-1804, USA
3. School of Mathematics & Physics, China University of Geosciences (Wuhan), Wuhan, Hubei 430074, China

* Corresponding author: Rosemary A. Renaut

Received March 2020, Revised August 2020, Published November 2020

Fund Project: The first author is supported by the National Key R & D Program of China 2018YFC1503705; the second author is supported by NSF grant DMS 1913136; the third author is supported by the National Key R & D Program of China 2018YFC1503705 and the Hubei Subsurface Multi-scale Imaging Key Laboratory (China University of Geosciences) SMIL-2018-06

A fast non-convex low-rank matrix decomposition method for potential field data separation is presented. The singular value decomposition of the large trajectory matrix, which is also a block Hankel matrix, is obtained using a fast randomized singular value decomposition algorithm in which fast block Hankel matrix-vector multiplications are implemented with minimal memory storage. This fast block Hankel matrix randomized singular value decomposition algorithm is integrated into the $\text{Altproj}$ algorithm, a standard non-convex method for solving the robust principal component analysis optimization problem. This improved estimation of the partial singular value decomposition avoids explicit construction of the trajectory matrix in the robust principal component analysis optimization problem. Hence, large gravity and magnetic data matrices can be processed, and potential field data separation is achieved with improved computational efficiency. The presented algorithm is also robust and, hence, algorithm-dependent parameters are easily determined. The performance of the algorithm, with and without the efficient estimation of the low-rank matrix, is contrasted for the separation of synthetic gravity and magnetic data matrices of different sizes. These results demonstrate that the presented algorithm is not only computationally more efficient but also more accurate. Moreover, it is possible to solve far larger problems. For example, in the adopted computational environment, matrices of size larger than $205 \times 205$ generate "out of memory" exceptions without the improvement, whereas a matrix of size $2001\times 2001$ can now be processed in $1062.29$ s. Finally, the presented algorithm is applied to separate real gravity and magnetic data in the Tongling area, Anhui province, China. Areas which may exhibit mineralizations are inferred based on the separated anomalies.
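The enabling step described above, applying the (never explicitly formed) Hankel-structured trajectory matrix via the FFT, can be illustrated in the simpler one-dimensional case. The following is a minimal NumPy sketch, not the paper's implementation; the function name and FFT padding choices are assumptions. A $K\times L$ Hankel matrix built from a signal $x$ of length $K+L-1$ is applied to a vector as a correlation, using $\mathcal{O}((K+L)\log(K+L))$ operations and only $\mathcal{O}(K+L)$ storage:

```python
import numpy as np

def hankel_matvec(x, v, K):
    """Multiply the K x L Hankel matrix H[i, j] = x[i + j] by v
    without forming H, using FFT-based convolution.
    x has length K + L - 1, v has length L."""
    L = len(v)
    n = K + L - 1
    assert len(x) == n
    # y[i] = sum_j x[i + j] v[j] is a correlation:
    # convolve x with v reversed, then take the central K entries
    full = np.fft.irfft(np.fft.rfft(x, 2 * n) * np.fft.rfft(v[::-1], 2 * n), 2 * n)
    return full[L - 1 : L - 1 + K]

# check against the explicitly formed Hankel matrix
rng = np.random.default_rng(0)
K, L = 6, 4
x = rng.standard_normal(K + L - 1)
v = rng.standard_normal(L)
H = np.array([[x[i + j] for j in range(L)] for i in range(K)])
assert np.allclose(H @ v, hankel_matvec(x, v, K))
```

The block Hankel case in the paper nests this idea over the blocks, using 1D or 2D FFTs, which is what the $\text{FBHMVM}$ and $\text{FBHMMM}$ routines exploit.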

Citation: Dan Zhu, Rosemary A. Renaut, Hongwei Li, Tianyou Liu. Fast non-convex low-rank matrix decomposition for separation of potential field data using minimal memory. Inverse Problems & Imaging, 2021, 15 (1) : 159-183. doi: 10.3934/ipi.2020076
Experiments for Algorithm 3 for $r = 10$ with $q = 0$, $1$, $2$, and increasing $P = Q$. Each experiment is repeated $50$ times for each parameter setting. Let $C_q$ be the computational cost, measured as clock time in seconds, for given $q$. Figure 1(a) shows the mean of $C_q$ over the $50$ experiments and Figure 1(b) shows the ratios $C_1/C_0$, $C_2/C_0$ and $C_2/C_1$
Experiments for Algorithm 3 with $q = 0$, $1$, $2$ and increasing $r$, $r = 1:4:49$, and $\mathbf{X}$ of size $141 \times 141$. Each experiment is repeated $50$ times for each $r$ and $q$. The box plots for the computational times are reported in seconds
Figures 3(a) and 3(b) are the geologic models and the forward magnetic field, respectively
These results show tests of the parameters for Algorithm 4. Figures 4(a) and 4(b) are the $\text{RMSE}$ and the computational times for the separations of the data in Figure 3 with different $\beta$; Figures 4(c) and 4(d) are the $\text{RMSE}$ and the computational times of the separations of the data in Figure 3 with different $r^{*}$. These are results over $50$ runs in each case
Figures 5(a) and 5(b) are the synthetic regional and residual anomalies for the models in Figure 3, respectively; Figures 5(c) and 5(d) are the separated regional and residual anomalies, respectively, for data of size $201 \times 201$ obtained using Algorithm 4 with $\beta = 0.0062$ and $r^* = 6$; Figures 5(e) and 5(f) are the separated regional and residual anomalies, respectively, obtained using $\text{LRMD_PFS}$ with $\alpha = 0.0005$
Figures 6(a) and 6(b) are the geologic models and the forward gravity field, respectively
Figures 7(a) and 7(b) are the synthetic regional and residual anomalies for the models in Figure 6, respectively; Figures 7(c) and 7(d) are the separated regional and residual anomalies, respectively, for data of size $201 \times 201$ obtained using Algorithm 4 with $\beta = 0.0005$ and $r^* = 6$; Figures 7(e) and 7(f) are the separated regional and residual anomalies, respectively, obtained using $\text{LRMD_PFS}$ with $\alpha = 0.0007$
Figures 8(a) and 8(b) are the maps of the Bouguer gravity and the $\text{RTP}$ magnetic anomalies of the study area in Tongling
Figures 9(a) and 9(b) are the separated regional and residual gravity anomalies of the study area, respectively; Figures 9(c) and 9(d) are the separated regional and residual magnetic anomalies of the study area, respectively
Predictions of the distributions of areas that may have skarns or ore bodies based on the separated high-gravity and high-magnetic fields
Demonstrating non-monotonic increase in computational time using Algorithm 1 for $P\times Q$ between $2^{15}$ and $80000$
Computational cost measured in terms of floating point operations and storage of floating point entries at each step of Algorithm 3 implemented with ($\text{FBHMRSVD}$), and without ($\text{RSVD}$), the use of $\text{FBHMMM}$ for multiplications with $\mathbf{T}$. Here $\ell = \mathcal{O}(r)$, and $\ell = 2r$ when $p = r$
| Step | $\text{FBHMRSVD}$: cost in flops | $\text{FBHMRSVD}$: cost in storage | $\text{RSVD}$: cost in flops | $\text{RSVD}$: cost in storage |
| --- | --- | --- | --- | --- |
| 4, 9 | $\mathcal{O}(\ell PQ\log_{2}{PQ})$ | $PQ+\ell (K \hat{K}+L \hat{L})$ | $2\ell K \hat{K} L \hat{L}$ | $K \hat{K} L \hat{L}+\ell (K \hat{K}+L \hat{L})$ |
| 5, 10 | $2\ell^2(L \hat{L}-\ell/3)$ | $2\ell L \hat{L}$ | $2\ell^2(L \hat{L}-\ell/3)$ | $2\ell L \hat{L}$ |
| 7, 13 | $\mathcal{O}(\ell PQ\log_{2}{PQ})$ | $PQ+\ell (K \hat{K}+L \hat{L})$ | $4\ell K \hat{K} L \hat{L}$ | $K \hat{K} L \hat{L}+\ell (K \hat{K}+L \hat{L})$ |
| 8 | $2\ell^2(K \hat{K}-\ell/3)$ | $2\ell K \hat{K}$ | $2\ell^2(K \hat{K}-\ell/3)$ | $2\ell K \hat{K}$ |
| 14 | $\mathcal{O}(\ell^3)$ | $2\ell^2+\ell$ | $\mathcal{O}(\ell^3)$ | $2\ell^2+\ell$ |
| 15 | $\ell r(2\ell+3K \hat{K})$ | $r(K \hat{K}+L \hat{L})+2\ell^2+\ell L \hat{L}+r+\ell$ | $\ell r(2\ell+3K \hat{K})$ | $r(K \hat{K}+L \hat{L})+2\ell^2+\ell L \hat{L}+r+\ell$ |
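For context on the steps costed above, a Halko-style randomized SVD of the kind Algorithm 3 follows touches the operator only through products $\mathbf{T}\mathbf{V}$ and $\mathbf{T}^T\mathbf{U}$, which is why substituting $\text{FBHMMM}$ for dense products changes the flop and storage counts but not the algorithm. Below is a minimal sketch assuming a generic operator interface; the function name, defaults, and test matrix are assumptions, not the paper's Algorithm 3:

```python
import numpy as np

def rsvd(matvec, rmatvec, m, n, r, p=10, q=1, rng=None):
    """Rank-r randomized SVD of an m x n operator A given only
    products A @ V (matvec) and A.T @ U (rmatvec).
    p: oversampling parameter, q: number of power iterations."""
    rng = np.random.default_rng(rng)
    ell = r + p
    Omega = rng.standard_normal((n, ell))   # random test matrix
    Y = matvec(Omega)                       # sample the range of A
    for _ in range(q):                      # power iterations sharpen the spectrum
        Y, _ = np.linalg.qr(Y)
        Z, _ = np.linalg.qr(rmatvec(Y))
        Y = matvec(Z)
    Q, _ = np.linalg.qr(Y)                  # orthonormal range basis
    B = rmatvec(Q).T                        # small ell x n matrix B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :r], s[:r], Vt[:r]

# sanity check on a small dense matrix of exact rank 8
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 40))
U, s, Vt = rsvd(lambda V: A @ V, lambda U: A.T @ U, 50, 40, r=8, q=2, rng=2)
assert np.allclose(U * s @ Vt, A, atol=1e-6)
```

In the paper's setting, `matvec`/`rmatvec` would be the FFT-based block Hankel multiplications, so $\mathbf{T}$ never needs to be stored.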
Comparisons of the rank $r$ relative errors using Algorithm 3 to estimate the rank $r$ partial SVD of $\mathbf{X}$, with the mean reported over $50$ runs in each case
| Matrix size of $\mathbf{X}$ | $q=0$ | $q=1$ | $q=2$ |
| --- | --- | --- | --- |
| $51 \times 51$ | $1.6056$ | $1.0022$ | $1.0000$ |
| $81 \times 81$ | $1.6977$ | $1.0028$ | $1.0002$ |
| $115 \times 115$ | $1.4600$ | $1.0150$ | $1.0006$ |
| $141 \times 141$ | $1.9577$ | $1.0185$ | $1.0006$ |
| $171 \times 171$ | $1.5904$ | $1.0038$ | $1.0001$ |
| $201 \times 201$ | $1.4598$ | $1.0063$ | $1.0003$ |
Comparisons of the mean computational times over $10$ runs, with rank $r = 10$ and $q = 1$, for Algorithm 3, both with and without use of fast matrix-matrix multiplication, $\text{FBHMRSVD}$ and $\text{RSVD}$, respectively, as compared to direct calculation using the partial $\text{SVD}$. Algorithm 3 is evaluated using both the $\text{1DFFT}$ and $\text{2DFFT}$ for fast calculation. $/$ denotes that an "out of memory" error is reported
| $\mathbf{X}$ | $\mathbf{T}$ | $\text{FBHMRSVD}$ (2DFFT) | $\text{FBHMRSVD}$ (1DFFT) | $\text{RSVD}$ | $\text{SVD}$ |
| --- | --- | --- | --- | --- | --- |
| $81 \times 81$ | $1681 \times 1681$ | $0.023$ | $0.038$ | $0.13$ | $0.24$ |
| $115 \times 115$ | $3364 \times 3364$ | $0.064$ | $0.079$ | $0.41$ | $0.92$ |
| $141 \times 141$ | $5041 \times 5041$ | $0.058$ | $0.129$ | $0.78$ | $1.95$ |
| $171 \times 171$ | $7396 \times 7396$ | $0.059$ | $0.14$ | $1.45$ | $3.93$ |
| $201 \times 201$ | $10201 \times 10201$ | $0.13$ | $0.31$ | $2.80$ | $7.61$ |
| $311 \times 311$ | $24336 \times 24336$ | $0.47$ | $0.64$ | $14.09$ | $41.18$ |
| $401 \times 401$ | $40401 \times 40401$ | $1.46$ | $1.15$ | $111.14$ | $206.79$ |
| $601 \times 601$ | $90601 \times 90601$ | $1.85$ | $2.17$ | $/$ | $/$ |
| $1001 \times 1001$ | $251001 \times 251001$ | $2.44$ | $3.58$ | $/$ | $/$ |
| $2001 \times 2001$ | $1002001 \times 1002001$ | $13.93$ | $26.13$ | $/$ | $/$ |
The parameters that define the geologic models in Figures 3(a) and 6(a)
| Geologic model | Shape | Central position | (length, width, depth extent)/radius | Density (g/cm$^3$) | Magnetization (A/m) |
| --- | --- | --- | --- | --- | --- |
| model-$1$ | Block | $(700,400,600)$ | $(300,400,200)$ | $0.5$ | $8000$ |
| model-$2$ | Sphere | $(250,600,700)$ | $200$ | $0.4$ | $7000$ |
| model-$3$ | Block | $(500,500,40)$ | $(50,20,40)$ | $0.5$ | $5000$ |
| model-$4$ | Block | $(500,475,40)$ | $(10,30,40)$ | $0.5$ | $5000$ |
| model-$5$ | Sphere | $(300,200,40)$ | $20$ | | $5000$ |
| model-$6$ | Sphere | $(600,800,40)$ | $20$ | | $5000$ |
| model-$7$ | Sphere | $(200,200,40)$ | $20$ | $0.7$ | |
| model-$8$ | Block | $(800,800,40)$ | $(80,80,40)$ | $0.5$ | |
Comparisons of $\text{FNCLRMD_PFS}$ and $\text{LRMD_PFS}$ for separations of data of increasing size. $/$ denotes that either the computational time is too high to perform the experiment, or an "out of memory" error is reported. Matrices $\mathbf{X}$ and $\mathbf{T}$ are square with sizes $P = Q$ and $\hat{K} = \hat{L}$, respectively. Reported are the mean times and $\text{RMSE}$ over $10$ runs
| $P$ | $\hat{K}$ | $r^*$ | $\beta$ | $\text{FNCLRMD_PFS}$ RMSE (nT) | $\text{FNCLRMD_PFS}$ Time (s) | $\alpha$ | $\text{LRMD_PFS}$ RMSE (nT) | $\text{LRMD_PFS}$ Time (s) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $101$ | $2601$ | $6$ | $0.0156$ | $3.78$ | $4.11$ | $0.002$ | $16.39$ | $17.03$ |
| $141$ | $5041$ | $6$ | $0.0130$ | $3.52$ | $3.00$ | $0.001$ | $15.10$ | $109.85$ |
| $171$ | $7396$ | $6$ | $0.0088$ | $3.63$ | $2.87$ | $0.0008$ | $14.46$ | $410.89$ |
| $201$ | $10201$ | $6$ | $0.0062$ | $3.59$ | $6.75$ | $0.0005$ | $13.80$ | $862.96$ |
| $241$ | $14641$ | $6$ | $0.0044$ | $3.56$ | $15.06$ | $/$ | $/$ | $/$ |
| $311$ | $24336$ | $6$ | $0.0026$ | $3.61$ | $27.00$ | $/$ | $/$ | $/$ |
| $401$ | $40401$ | $6$ | $0.0014$ | $3.65$ | $37.81$ | $/$ | $/$ | $/$ |
| $601$ | $90601$ | $6$ | $0.0007$ | $3.64$ | $107.43$ | $/$ | $/$ | $/$ |
| $1001$ | $251001$ | $6$ | $0.0002$ | $3.75$ | $171.93$ | $/$ | $/$ | $/$ |
| $2001$ | $1002001$ | $6$ | $0.00002$ | $4.19$ | $1305.42$ | $/$ | $/$ | $/$ |
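The separation compared above rests on an AltProj-style robust PCA split of the trajectory matrix into a low-rank part (regional anomaly) plus a sparse part (residual anomaly). The sketch below is a generic alternating-projection RPCA on a dense matrix, not the paper's Algorithm 4: the threshold rule, iteration count, and the role of `beta` here are assumptions for illustration only.

```python
import numpy as np

def altproj_rpca(M, r, beta, iters=50):
    """Split M into low-rank L plus sparse S by alternating projections:
    truncated rank-r SVD of M - S gives L, hard thresholding of the
    residual M - L gives S. Generic sketch in the spirit of AltProj."""
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]           # rank-r projection
        R = M - L
        thresh = beta * np.max(np.abs(R))         # simple threshold rule (assumption)
        S = np.where(np.abs(R) > thresh, R, 0.0)  # keep only large residual entries
    return L, S

# toy check: exact low-rank plus a few large sparse spikes
rng = np.random.default_rng(3)
L0 = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 40))
S0 = np.zeros((40, 40))
S0.flat[rng.choice(1600, 30, replace=False)] = 10.0
L_hat, S_hat = altproj_rpca(L0 + S0, r=5, beta=0.1)
```

In the paper, the expensive truncated SVD in each iteration is exactly the step replaced by $\text{FBHMRSVD}$, so the trajectory matrix never needs to be formed.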
Acronyms used throughout
| Acronym | Description |
| --- | --- |
| $\text{FBHMRSVD}$ | fast block Hankel matrix randomized SVD algorithm |
| $\text{FBHMMM}$ | fast block Hankel matrix-matrix multiplication algorithm |
| $\text{FBHMVM}$ | fast block Hankel matrix-vector multiplication algorithm |
| $\text{FNCLRMD_PFS}$ | fast non-convex low-rank matrix decomposition algorithm for potential field separation |
| $\text{EALM}$ | exact augmented Lagrange multiplier method |
| $\text{IALM}$ | inexact augmented Lagrange multiplier method |
| $\text{LRMD_PFS}$ | low-rank matrix decomposition for potential field separation |
| $\text{RPCA}$ | robust principal component analysis |
| $\text{RSVD}$ | randomized singular value decomposition |
| $\text{SVD}$ | singular value decomposition |
| $\text{RMSE}$ | root mean square error |
| $\text{RTP}$ | reduction to the pole |
Notation used throughout
| Notation | Description |
| --- | --- |
| $\mathbf{J}$ | exchange matrix |
| $\mathbf{X}$ | 2D gridded potential field data matrix |
| $\mathbf{T}_j$ | Hankel matrix constructed from the $j$th column of $\mathbf{X}$ |
| $\mathbf{T}$ | trajectory matrix of $\mathbf{X}$ |
| $\mathbf{X}_1,\cdots,\mathbf{X}_Q$ | first to $Q$th columns of $\mathbf{X}$, respectively |
| $\mathbf{U},\mathbf{V},\boldsymbol{\Sigma}$ | $\text{SVD}$ of $\mathbf{T}$, $\mathbf{T}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T$ |
| $\mathbf{U}_r,\mathbf{V}_r,\boldsymbol{\Sigma}_r$ | rank-$r$ partial $\text{SVD}$ of $\mathbf{T}$ using $\text{FBHMRSVD}$ |
| $\mathbf{X}_D$, $\mathbf{X}_S$ | data matrices of regional and residual anomalies, respectively |
| $\mathbf{T}_D$, $\mathbf{T}_S$ | trajectory matrices of $\mathbf{X}_D$ and $\mathbf{X}_S$, respectively |
| $\mathbf{X}_D^*$, $\mathbf{X}_S^*$ | approximations of $\mathbf{X}_D$ and $\mathbf{X}_S$ using $\text{FNCLRMD_PFS}$, respectively |
| $\mathbf{u}_1,\mathbf{u}_2,\cdots$ | left singular vectors of $\mathbf{T}$, $\mathbf{U}=[\mathbf{u}_1,\mathbf{u}_2,\cdots]$ |
| $\mathbf{v}_1,\mathbf{v}_2,\cdots$ | right singular vectors of $\mathbf{T}$, $\mathbf{V}=[\mathbf{v}_1,\mathbf{v}_2,\cdots]$ |
| $x_{mn}$ | element in the $m$th row and $n$th column of $\mathbf{X}$ |
| $P$, $Q$ | $\mathbf{X}$ is of size $P \times Q$ |
| $K$, $L$ | $\mathbf{T}_j$ is of size $K \times L$ |
| $\hat{K}$, $\hat{L}$ | $\mathbf{T}$ is a block Hankel matrix with $\hat{K} \times \hat{L}$ blocks |
| $P_C$, $Q_C$ | $\mathbf{C}$ is of size $P_C \times Q_C$ |
| $\sigma_1, \sigma_2, \cdots$ | singular values of $\mathbf{T}$, $\boldsymbol{\Sigma}=\mathrm{diag}(\sigma_{1},\sigma_{2},\cdots)$ |
| $r$ | desired rank parameter in $\text{FBHMRSVD}$ |
| $p$ | oversampling parameter in $\text{FBHMRSVD}$ |
| $q$ | power iteration parameter in $\text{FBHMRSVD}$ |
| $r^*$ | desired rank parameter in $\text{FNCLRMD_PFS}$ |
| $\beta$ | thresholding parameter in $\text{FNCLRMD_PFS}$ |
| $\alpha$ | weighting parameter in $\text{LRMD_PFS}$ |
| $\Vert\cdot\Vert_p$, $\Vert\cdot\Vert_*$ | $\ell_p$ and nuclear norms, respectively |
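Using the notation above, the trajectory matrix $\mathbf{T}$ is a block Hankel matrix whose $(i,j)$ block is the Hankel matrix built from column $i+j$ of $\mathbf{X}$, so that $\mathbf{T}$ has $\hat{K} \times \hat{L}$ blocks of size $K \times L$ with $K + L - 1 = P$ and $\hat{K} + \hat{L} - 1 = Q$. A small explicit construction, for checking dimensions only; in the paper $\mathbf{T}$ is never formed, and the window choices here are arbitrary:

```python
import numpy as np

def hankelize(c, K):
    """K x (len(c)-K+1) Hankel matrix with H[i, j] = c[i + j]."""
    L = len(c) - K + 1
    return c[np.add.outer(np.arange(K), np.arange(L))]

def trajectory_matrix(X, K, Khat):
    """Block Hankel trajectory matrix of the P x Q grid X:
    block (i, j) is the Hankel matrix of column i + j of X."""
    P, Q = X.shape
    Lhat = Q - Khat + 1
    blocks = [hankelize(X[:, q], K) for q in range(Q)]
    return np.block([[blocks[i + j] for j in range(Lhat)]
                     for i in range(Khat)])

X = np.arange(16.0).reshape(4, 4)       # toy 4 x 4 grid
T = trajectory_matrix(X, K=2, Khat=2)   # (K*Khat) x (L*Lhat) = 4 x 9
```

The block Hankel symmetry (block $(0,1)$ equals block $(1,0)$) is what the fast FFT-based multiplications exploit, and it is why storing the $PQ$ grid entries suffices in place of the full $K\hat{K} \times L\hat{L}$ matrix.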