
Fast non-convex low-rank matrix decomposition for separation of potential field data using minimal memory

  • * Corresponding author: Rosemary A. Renaut

The first author is supported by the National Key R & D Program of China 2018YFC1503705; the second author is supported by NSF grant DMS 1913136; the third author is supported by the National Key R & D Program of China 2018YFC1503705 and the Hubei Subsurface Multi-scale Imaging Key Laboratory (China University of Geosciences) SMIL-2018-06.

  • A fast non-convex low-rank matrix decomposition method for potential field data separation is presented. The singular value decomposition of the large trajectory matrix, which is also a block Hankel matrix, is obtained using a fast randomized singular value decomposition algorithm in which fast block Hankel matrix-vector multiplications are implemented with minimal memory storage. This fast block Hankel matrix randomized singular value decomposition algorithm is integrated into the $\text{Altproj}$ algorithm, a standard non-convex method for solving the robust principal component analysis optimization problem. The improved estimation of the partial singular value decomposition avoids explicit construction of the trajectory matrix in the robust principal component analysis optimization problem. Hence, gravity and magnetic data matrices of large size can be handled, and potential field data separation is achieved with improved computational efficiency. The presented algorithm is also robust, and hence algorithm-dependent parameters are easily determined. The performance of the algorithm, with and without the efficient estimation of the low-rank matrix, is contrasted for the separation of synthetic gravity and magnetic data matrices of different sizes. These results demonstrate that the presented algorithm is not only computationally more efficient but also more accurate. Moreover, far larger problems can now be solved. As an example, for the adopted computational environment, matrices of sizes larger than $ 205 \times 205 $ generate "out of memory" exceptions without the improvement, whereas a matrix of size $ 2001\times 2001 $ can now be processed in $ 1062.29 $ s. Finally, the presented algorithm is applied to separate real gravity and magnetic data from the Tongling area, Anhui province, China. Areas that may exhibit mineralization are inferred from the separated anomalies.

    Mathematics Subject Classification: Primary: 65F22, 65F55; Secondary: 86A20, 86A22.
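
The central computational device is that products with the block Hankel trajectory matrix $\mathbf{T}$ can be evaluated directly from the gridded data $\mathbf{X}$ with two-dimensional FFTs, so that $\mathbf{T}$ is never formed or stored. The following is a minimal sketch of that idea in Python/NumPy (an editor's illustration in the notation of Table 7, not the authors' implementation; the helper names are hypothetical, and the ordering of the long vectors must match however the rows and columns of $\mathbf{T}$ are enumerated):

```python
import numpy as np

def bh_matvec(X, v, K, K_hat):
    """Compute T @ v without forming the block Hankel trajectory matrix T.
    X is the P x Q data grid; v has length L*L_hat, with L = P-K+1, L_hat = Q-K_hat+1."""
    P, Q = X.shape
    L, L_hat = P - K + 1, Q - K_hat + 1
    V = np.zeros((P, Q))
    V[:L, :L_hat] = v.reshape(L, L_hat)      # zero-pad the reshaped vector to P x Q
    # Circular cross-correlation of X with V via the FFT; the leading K x K_hat
    # block has no wrap-around, so it equals T @ v reshaped to K x K_hat.
    C = np.fft.ifft2(np.fft.fft2(X) * np.conj(np.fft.fft2(V))).real
    return C[:K, :K_hat].ravel()

def bh_rmatvec(X, u, K, K_hat):
    """Compute T.T @ u by the same correlation trick, with the window roles swapped."""
    P, Q = X.shape
    L, L_hat = P - K + 1, Q - K_hat + 1
    U = np.zeros((P, Q))
    U[:K, :K_hat] = u.reshape(K, K_hat)
    C = np.fft.ifft2(np.fft.fft2(X) * np.conj(np.fft.fft2(U))).real
    return C[:L, :L_hat].ravel()
```

Each product costs $\mathcal{O}(PQ\log PQ)$ flops and only $\mathcal{O}(PQ)$ storage, which is the origin of the per-step costs quoted for $\text{FBHMRSVD}$ in Table 1.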

  • Figure 1.  Experiments for Algorithm 3 for $ r = 10 $ with $ q = 0 $, $ 1 $, $ 2 $, and increasing $ P = Q $. Each experiment is repeated $ 50 $ times for each parameter setting. Let $ C_q $ denote the clock time in seconds of the algorithm for a given $ q $. Figure 1(a) shows the mean of $ C_q $ over the $ 50 $ experiments and Figure 1(b) shows the ratios $ C_1/C_0 $, $ C_2/C_0 $ and $ C_2/C_1 $

    Figure 2.  Experiments for Algorithm 3 with $ q = 0 $, $ 1 $, $ 2 $ and increasing $ r $, $ r = 1:4:49 $, and $ \mathbf{X} $ of size $ 141 \times 141 $. Each experiment is repeated $ 50 $ times for each $ r $ and $ q $. The box plots for the computational times are reported in seconds

    Figure 3.  Figures 3(a) and 3(b) are the geologic models and the forward magnetic field, respectively

    Figure 4.  These results show tests of the parameters for Algorithm 4. Figures 4(a) and 4(b) are the $\text{RMSE}$ and the computational times for the separations of the data in Figure 3 with different $ \beta $; Figures 4(c) and 4(d) are the $\text{RMSE}$ and the computational times of the separations of the data in Figure 3 with different $ r^{*} $. These are results over $ 50 $ runs in each case
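
To see where the two parameters enter, recall from the abstract that Algorithm 4 embeds the fast partial SVD inside an $\text{Altproj}$-type alternation: a low-rank projection recovers the regional component (rank bounded by $ r^* $), and hard thresholding of the remainder recovers the residual component (threshold scaled by $ \beta $). The sketch below is a generic AltProj-style iteration in that spirit only; it is not the authors' Algorithm 4, `partial_svd` merely stands in for $\text{FBHMRSVD}$, and the threshold schedule is an assumption:

```python
import numpy as np

def partial_svd(A, k):
    """Placeholder for FBHMRSVD: here simply a truncated dense SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

def altproj_split(T, r_star, beta, inner_iters=20):
    """Generic AltProj-style split T ~ L + S (editor's sketch, not Algorithm 4):
    L is low rank (regional part) and S is sparse (residual part)."""
    S = np.zeros_like(T)
    for k in range(1, r_star + 1):            # stage-wise rank increase, as in AltProj
        for _ in range(inner_iters):
            U, s, Vt = partial_svd(T - S, k)  # rank-k projection of T - S
            L = (U * s) @ Vt
            R = T - L
            # Hard thresholding; beta * s[-1] is a simplified stand-in for the
            # decaying AltProj threshold schedule.
            S = np.where(np.abs(R) > beta * s[-1], R, 0.0)
    return L, S
```

In $\text{FNCLRMD_PFS}$ the matrix handed to the partial SVD is the block Hankel trajectory matrix, which is never stored; the randomized SVD needs only the FFT-based products sketched after the abstract.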

    Figure 5.  Figures 5(a) and 5(b) are the synthetic regional and residual anomalies for the models in Figure 3, respectively; Figures 5(c) and 5(d) are the separated regional and residual anomalies, respectively, for data of size $ 201 \times 201 $ obtained using Algorithm 4 with $ \beta = 0.0062 $ and $ r^* = 6 $; Figures 5(e) and 5(f) are the separated regional and residual anomalies, respectively, obtained using $\text{LRMD_PFS}$ with $ \alpha = 0.0005 $

    Figure 6.  Figures 6(a) and 6(b) are the geologic models and the forward gravity field, respectively

    Figure 7.  Figures 7(a) and 7(b) are the synthetic regional and residual anomalies for the models in Figure 6, respectively; Figures 7(c) and 7(d) are the separated regional and residual anomalies, respectively, for data of size $ 201 \times 201 $ obtained using Algorithm 4 with $ \beta = 0.0005 $ and $ r^* = 6 $; Figures 7(e) and 7(f) are the separated regional and residual anomalies, respectively, obtained using $\text{LRMD_PFS}$ with $ \alpha = 0.0007 $

    Figure 8.  Figures 8(a) and 8(b) are the maps of the Bouguer gravity and the $\text{RTP}$ magnetic anomalies of the study area in Tongling

    Figure 9.  Figures 9(a) and 9(b) are the separated regional and residual gravity anomalies of the study area, respectively; Figures 9(c) and 9(d) are the separated regional and residual magnetic anomalies of the study area, respectively

    Figure 10.  Predictions of the distributions of areas that may have skarns or ore bodies based on the separated high-gravity and high-magnetic fields

    Figure 11.  Demonstrating non-monotonic increase in computational time using Algorithm 1 for $ P\times Q $ between $ 2^{15} $ and $ 80000 $

    Table 1.  Computational cost measured in terms of floating point operations and storage of floating point entries at each step of Algorithm 3 implemented with ($\text{FBHMRSVD}$), and without ($\text{RSVD}$), the use of $\text{FBHMMM}$ for multiplications with $ \mathbf{T} $. Here $ \ell = \mathcal{O}(r) $, and $ \ell = 2r $ when $ p = r $

    | Step | $\text{FBHMRSVD}$: cost in flops | $\text{FBHMRSVD}$: cost in storage | $\text{RSVD}$: cost in flops | $\text{RSVD}$: cost in storage |
    | 4, 9 | $ \mathcal{O}(\ell PQ\log_{2}{PQ}) $ | $ PQ+\ell (K \hat{K}+L \hat{L} ) $ | $ 2\ell K \hat{K} L \hat{L} $ | $ K \hat{K} L \hat{L} +\ell (K \hat{K}+ L \hat{L} ) $ |
    | 5, 10 | $ 2\ell^2(L \hat{L} -\ell /3) $ | $ 2\ell L \hat{L} $ | $ 2\ell^2(L \hat{L} -\ell /3) $ | $ 2\ell L \hat{L} $ |
    | 7, 13 | $ \mathcal{O}(\ell PQ\log_{2}{PQ}) $ | $ PQ+\ell (K \hat{K}+ L \hat{L} ) $ | $ 4\ell K \hat{K} L \hat{L} $ | $ K \hat{K} L \hat{L} +\ell (K \hat{K}+ L \hat{L} ) $ |
    | 8 | $ 2\ell^2(K \hat{K}-\ell /3) $ | $ 2\ell K \hat{K} $ | $ 2\ell^2(K \hat{K}-\ell /3) $ | $ 2\ell K \hat{K} $ |
    | 14 | $ \mathcal{O}(\ell^3) $ | $ 2\ell^2+\ell $ | $ \mathcal{O}(\ell^3) $ | $ 2\ell^2+\ell $ |
    | 15 | $ \ell r(2\ell +3K \hat{K}) $ | $ r(K \hat{K}+L \hat{L} )+2\ell^2 $ | $ \ell r(2\ell +3K \hat{K}) $ | $ r(K \hat{K}+L \hat{L} )+2\ell^2 $ |
    |  | $ +\ell L \hat{L} +r+\ell $ |  | $ +\ell L \hat{L} +r+\ell $ |  |
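
The step costs in Table 1 follow the standard randomized range-finder pattern: sketch with a Gaussian test matrix, orthonormalize, optionally repeat $ q $ power iterations, then solve a small problem. A minimal matrix-free version built on the `bh_matvec`/`bh_rmatvec` helpers sketched after the abstract is given below; it is an editor's illustration in the spirit of Algorithm 3, and its step numbering and small-problem handling need not match the authors' algorithm:

```python
import numpy as np

def fbhm_rsvd(X, K, K_hat, r, p=10, q=1, seed=0):
    """Randomized rank-r partial SVD of the block Hankel trajectory matrix of X,
    using only FFT-based products (editor's sketch in the spirit of Algorithm 3)."""
    rng = np.random.default_rng(seed)
    P, Q = X.shape
    L, L_hat = P - K + 1, Q - K_hat + 1
    n, ell = L * L_hat, r + p
    # Sketch: Y = T @ Omega, formed one column at a time via the fast product.
    Omega = rng.standard_normal((n, ell))
    Y = np.column_stack([bh_matvec(X, Omega[:, j], K, K_hat) for j in range(ell)])
    Qm, _ = np.linalg.qr(Y)                       # thin QR of the sketch
    for _ in range(q):                            # power iterations sharpen the subspace
        Z = np.column_stack([bh_rmatvec(X, Qm[:, j], K, K_hat) for j in range(ell)])
        Qn, _ = np.linalg.qr(Z)
        Y = np.column_stack([bh_matvec(X, Qn[:, j], K, K_hat) for j in range(ell)])
        Qm, _ = np.linalg.qr(Y)
    # Small problem: B = Qm.T @ T, built column-wise from T.T applied to Qm.
    B = np.column_stack([bh_rmatvec(X, Qm[:, j], K, K_hat) for j in range(ell)]).T
    W, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Qm @ W
    return U[:, :r], s[:r], Vt[:r, :]
```

Every multiplication with $ \mathbf{T} $ or $ \mathbf{T}^T $ reduces to FFTs on a $ P\times Q $ array, so the dominant cost is $ \mathcal{O}(\ell PQ\log_{2}{PQ}) $ per pass, consistent with the $\text{FBHMRSVD}$ column of Table 1.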

    Table 2.  Comparisons of the rank $ r $ relative errors using Algorithm 3 to estimate the rank $ r $ partial SVD of $ \mathbf{X} $, with the mean reported over $ 50 $ runs in each case

    |  | Rank $ r $ relative error $ (r=10) $ |  |  |
    | Matrix size $ \mathbf{X} $ | $ q=0 $ | $ q=1 $ | $ q=2 $ |
    | $ 51 \times 51 $ | $ 1.6056 $ | $ 1.0022 $ | $ 1.0000 $ |
    | $ 81 \times 81 $ | $ 1.6977 $ | $ 1.0028 $ | $ 1.0002 $ |
    | $ 115 \times 115 $ | $ 1.4600 $ | $ 1.0150 $ | $ 1.0006 $ |
    | $ 141 \times 141 $ | $ 1.9577 $ | $ 1.0185 $ | $ 1.0006 $ |
    | $ 171 \times 171 $ | $ 1.5904 $ | $ 1.0038 $ | $ 1.0001 $ |
    | $ 201 \times 201 $ | $ 1.4598 $ | $ 1.0063 $ | $ 1.0003 $ |

    Table 3.  Comparisons of the mean computational times over $ 10 $ runs, with rank $ r = 10 $ and $ q = 1 $, for Algorithm 3, both with and without use of fast matrix-matrix multiplication, $\text{FBHMRSVD}$ and $\text{RSVD}$, respectively, as compared to direct calculation using the partial $\text{SVD}$. Algorithm 3 is evaluated using both the $\text{1DFFT}$ and $\text{2DFFT}$ for fast calculation. $ / $ denotes that an "out of memory" error is reported

    | Matrix sizes |  | Time (seconds) |  |  |  |
    | $ \mathbf{X} $ | $ \mathbf{T} $ | $ \mathtt{FBHMRSVD (2DFFT)} $ | $ \mathtt{FBHMRSVD (1DFFT)} $ | $ \mathtt{RSVD} $ | $ \mathtt{SVD} $ |
    | $ 81 \times 81 $ | $ 1681 \times 1681 $ | $ 0.023 $ | $ 0.038 $ | $ 0.13 $ | $ 0.24 $ |
    | $ 115 \times 115 $ | $ 3364 \times 3364 $ | $ 0.064 $ | $ 0.079 $ | $ 0.41 $ | $ 0.92 $ |
    | $ 141 \times 141 $ | $ 5041 \times 5041 $ | $ 0.058 $ | $ 0.129 $ | $ 0.78 $ | $ 1.95 $ |
    | $ 171 \times 171 $ | $ 7396 \times 7396 $ | $ 0.059 $ | $ 0.14 $ | $ 1.45 $ | $ 3.93 $ |
    | $ 201 \times 201 $ | $ 10201 \times 10201 $ | $ 0.13 $ | $ 0.31 $ | $ 2.80 $ | $ 7.61 $ |
    | $ 311 \times 311 $ | $ 24336 \times 24336 $ | $ 0.47 $ | $ 0.64 $ | $ 14.09 $ | $ 41.18 $ |
    | $ 401 \times 401 $ | $ 40401 \times 40401 $ | $ 1.46 $ | $ 1.15 $ | $ 111.14 $ | $ 206.79 $ |
    | $ 601 \times 601 $ | $ 90601 \times 90601 $ | $ 1.85 $ | $ 2.17 $ | $ / $ | $ / $ |
    | $ 1001 \times 1001 $ | $ 251001 \times 251001 $ | $ 2.44 $ | $ 3.58 $ | $ / $ | $ / $ |
    | $ 2001 \times 2001 $ | $ 1002001 \times 1002001 $ | $ 13.93 $ | $ 26.13 $ | $ / $ | $ / $ |

    Table 4.  The parameters that define the geologic models in Figures 3(a) and 6(a)

    | Geologic model | Shape | Central position | Model parameters: (length, width, depth extent)/radius | Density (g/cm$ ^3 $) | Magnetization (A/m) |
    | model-$ 1 $ | block | $ (700,400,600) $ | $ (300,400,200) $ | $ 0.5 $ | $ 8000 $ |
    | model-$ 2 $ | sphere | $ (250,600,700) $ | $ 200 $ | $ 0.4 $ | $ 7000 $ |
    | model-$ 3 $ | block | $ (500,500, 40) $ | $ (50, 20, 40) $ | $ 0.5 $ | $ 5000 $ |
    | model-$ 4 $ | block | $ (500,475, 40) $ | $ (10, 30, 40) $ | $ 0.5 $ | $ 5000 $ |
    | model-$ 5 $ | sphere | $ (300,200, 40) $ | $ 20 $ |  | $ 5000 $ |
    | model-$ 6 $ | sphere | $ (600,800, 40) $ | $ 20 $ |  | $ 5000 $ |
    | model-$ 7 $ | sphere | $ (200,200, 40) $ | $ 20 $ | $ 0.7 $ |  |
    | model-$ 8 $ | block | $ (800,800, 40) $ | $ (80, 80, 40) $ | $ 0.5 $ |  |

    Table 5.  Comparisons of the computational times and $\text{RMSE}$ of $\text{FNCLRMD_PFS}$ and $\text{LRMD_PFS}$. $ / $ denotes that either the computational time is too high to perform the experiment, or an "out of memory" error is reported. Matrices $ \mathbf{X} $ and $ \mathbf{T} $ are square with sizes $ P = Q $ and $ \hat{K} = \hat{L} $, respectively. Reported are the mean times and $\text{RMSE}$ over $ 10 $ runs

    | Matrix sizes |  | $\text{FNCLRMD_PFS}$ |  |  |  | $\text{LRMD_PFS}$ |  |  |
    | $ P $ | $ \hat{K} $ | $ r^* $ | $ \beta $ | $\text{RMSE}$ (nT) | Time (s) | $ \alpha $ | $\text{RMSE}$ (nT) | Time (s) |
    | $ 101 $ | $ 2601 $ | $ 6 $ | $ 0.0156 $ | $ 3.78 $ | $ 4.11 $ | $ 0.002 $ | $ 16.39 $ | $ 17.03 $ |
    | $ 141 $ | $ 5041 $ | $ 6 $ | $ 0.0130 $ | $ 3.52 $ | $ 3.00 $ | $ 0.001 $ | $ 15.10 $ | $ 109.85 $ |
    | $ 171 $ | $ 7396 $ | $ 6 $ | $ 0.0088 $ | $ 3.63 $ | $ 2.87 $ | $ 0.0008 $ | $ 14.46 $ | $ 410.89 $ |
    | $ 201 $ | $ 10201 $ | $ 6 $ | $ 0.0062 $ | $ 3.59 $ | $ 6.75 $ | $ 0.0005 $ | $ 13.80 $ | $ 862.96 $ |
    | $ 241 $ | $ 14641 $ | $ 6 $ | $ 0.0044 $ | $ 3.56 $ | $ 15.06 $ | $ / $ | $ / $ | $ / $ |
    | $ 311 $ | $ 24336 $ | $ 6 $ | $ 0.0026 $ | $ 3.61 $ | $ 27.00 $ | $ / $ | $ / $ | $ / $ |
    | $ 401 $ | $ 40401 $ | $ 6 $ | $ 0.0014 $ | $ 3.65 $ | $ 37.81 $ | $ / $ | $ / $ | $ / $ |
    | $ 601 $ | $ 90601 $ | $ 6 $ | $ 0.0007 $ | $ 3.64 $ | $ 107.43 $ | $ / $ | $ / $ | $ / $ |
    | $ 1001 $ | $ 251001 $ | $ 6 $ | $ 0.0002 $ | $ 3.75 $ | $ 171.93 $ | $ / $ | $ / $ | $ / $ |
    | $ 2001 $ | $ 1002001 $ | $ 6 $ | $ 0.00002 $ | $ 4.19 $ | $ 1305.42 $ | $ / $ | $ / $ | $ / $ |

    Table 6.  Acronyms used throughout

    | Acronym | Description |
    | $\text{FBHMRSVD}$ | fast block Hankel matrix randomized SVD algorithm |
    | $\text{FBHMMM}$ | fast block Hankel matrix-matrix multiplication algorithm |
    | $\text{FBHMVM}$ | fast block Hankel matrix-vector multiplication algorithm |
    | $\text{FNCLRMD_PFS}$ | fast non-convex low-rank matrix decomposition algorithm for potential field separation |
    | $\text{EALM}$ | exact augmented Lagrange multiplier method |
    | $\text{IALM}$ | inexact augmented Lagrange multiplier method |
    | $\text{LRMD_PFS}$ | low-rank matrix decomposition for potential field separation |
    | $\text{RPCA}$ | robust principal component analysis |
    | $\text{RSVD}$ | randomized singular value decomposition |
    | $\text{SVD}$ | singular value decomposition |
    | $\text{RMSE}$ | root mean square error |
    | $\text{RTP}$ | reduction to the pole |

    Table 7.  Notation used throughout

    | Notation | Description |
    | $\mathbf{J}$ | exchange matrix |
    | $\mathbf{X}$ | 2D gridded potential field data matrix |
    | $\mathbf{T}_j$ | Hankel matrix constructed from the $j$th column of $\mathbf{X}$ |
    | $\mathbf{T}$ | trajectory matrix of $\mathbf{X}$ |
    | $\mathbf{X}_1,\cdots,\mathbf{X}_Q$ | first to $Q$th columns of $\mathbf{X}$, respectively |
    | $\mathbf{U},\mathbf{V},\boldsymbol{\Sigma}$ | $\text{SVD}$ of $\mathbf{T}$, $\mathbf{T}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T$ |
    | $\mathbf{U}_r,\mathbf{V}_r,\boldsymbol{\Sigma}_r$ | rank-$r$ partial $\text{SVD}$ of $\mathbf{T}$ using $\text{FBHMRSVD}$ |
    | $\mathbf{X}_D$, $\mathbf{X}_S$ | data matrices of regional and residual anomalies, respectively |
    | $\mathbf{T}_D$, $\mathbf{T}_S$ | trajectory matrices of $\mathbf{X}_D$ and $\mathbf{X}_S$, respectively |
    | $\mathbf{X}_D^*$, $\mathbf{X}_S^*$ | approximations of $\mathbf{X}_D$ and $\mathbf{X}_S$ using $\text{FNCLRMD_PFS}$, respectively |
    | $\mathbf{u}_1,\mathbf{u}_2,\cdots$ | left singular vectors of $\mathbf{T}$, $\mathbf{U}=[\mathbf{u}_1,\mathbf{u}_2,\cdots]$ |
    | $\mathbf{v}_1,\mathbf{v}_2,\cdots$ | right singular vectors of $\mathbf{T}$, $\mathbf{V}=[\mathbf{v}_1,\mathbf{v}_2,\cdots]$ |
    | $x_{mn}$ | element in the $m$th row and $n$th column of $\mathbf{X}$ |
    | $P$, $Q$ | $\mathbf{X}$ is of size $P \times Q$ |
    | $K$, $L$ | $\mathbf{T}_j$ is of size $K \times L$ |
    | $\hat{K}$, $\hat{L}$ | $\mathbf{T}$ is a block Hankel matrix with $\hat{K} \times \hat{L}$ blocks |
    | $P_C$, $Q_C$ | $\mathbf{C}$ is of size $P_C \times Q_C$ |
    | $\sigma_1$, $\sigma_2$, $\cdots$ | singular values of $\mathbf{T}$, $\boldsymbol{\Sigma}=\mathtt{diag}(\sigma_{1},\sigma_{2},\cdots)$ |
    | $r$ | desired rank parameter in $\text{FBHMRSVD}$ |
    | $p$ | oversampling parameter in $\text{FBHMRSVD}$ |
    | $q$ | power iteration parameter in $\text{FBHMRSVD}$ |
    | $r^*$ | desired rank parameter in $\text{FNCLRMD_PFS}$ |
    | $\beta$ | thresholding parameter in $\text{FNCLRMD_PFS}$ |
    | $\alpha$ | weighting parameter in $\text{LRMD_PFS}$ |
    | $\|\cdot\|_p$, $\|\cdot\|_*$ | $\ell_p$ and nuclear norms, respectively |
