April  2020, 14(2): 267-290. doi: 10.3934/ipi.2020012

Enhanced image approximation using shifted rank-1 reconstruction

Florian Bossmann and Jianwei Ma

Harbin Institute of Technology, Department of Mathematics, 150001 Harbin, China

Received: February 2019. Revised: December 2019. Published: February 2020.

Low-rank approximation has been studied extensively in the past. It is best suited to reproducing rectangular structures in the data. In this work we introduce a generalization using "shifted" rank-$ 1 $ matrices to approximate $ \mathit{\boldsymbol{{A}}}\in \mathbb{C}^{M\times N} $. These matrices are of the form $ S_{\mathit{\boldsymbol{{\lambda}}}}(\mathit{\boldsymbol{{u}}}\mathit{\boldsymbol{{v}}}^*) $, where $ \mathit{\boldsymbol{{u}}}\in \mathbb{C}^M $, $ \mathit{\boldsymbol{{v}}}\in \mathbb{C}^N $ and $ \mathit{\boldsymbol{{\lambda}}}\in \mathbb{Z}^N $. The operator $ S_{\mathit{\boldsymbol{{\lambda}}}} $ circularly shifts the $ k $-th column of $ \mathit{\boldsymbol{{u}}}\mathit{\boldsymbol{{v}}}^* $ by $ \lambda_k $.
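The model above can be sketched in a few lines of numpy. This is a minimal illustration of the definition only (the helper name `shifted_rank1` is ours, not from the paper): build the rank-1 outer product $\mathit{\boldsymbol{{u}}}\mathit{\boldsymbol{{v}}}^*$ and circularly shift each column by its entry of $\mathit{\boldsymbol{{\lambda}}}$.

```python
import numpy as np

def shifted_rank1(u, v, lam):
    """Build S_lambda(u v*): the k-th column of the outer product
    u v* is circularly shifted by lam[k] entries."""
    A = np.outer(u, np.conj(v))              # rank-1 matrix u v*
    for k in range(A.shape[1]):
        A[:, k] = np.roll(A[:, k], lam[k])   # circular shift of column k
    return A

u = np.array([1.0, 2.0, 3.0, 4.0])           # object, M = 4
v = np.array([1.0, 0.5])                     # intensities, N = 2
lam = np.array([0, 1])                       # per-column shifts
A = shifted_rank1(u, v, lam)
# column 0 is u itself; column 1 is 0.5*u shifted down by one entry
```

With $\lambda = 0$ for all columns this reduces to an ordinary rank-1 matrix, which is why the construction is a strict generalization of low-rank approximation.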

Such shifts appear naturally in applications where an object $ \mathit{\boldsymbol{{u}}} $ is observed in $ N $ measurements at different positions indicated by the shift $ \mathit{\boldsymbol{{\lambda}}} $, with the vector $ \mathit{\boldsymbol{{v}}} $ giving the observation intensity. This model holds for seismic waves recorded at $ N $ sensors at different times $ \mathit{\boldsymbol{{\lambda}}} $. Other examples are a car moving through a video, changing its position $ \mathit{\boldsymbol{{\lambda}}} $ in each of the $ N $ frames, or non-destructive testing based on ultrasonic waves that are reflected by defects inside the material.

The main difficulty of the problem stated above lies in finding a suitable shift vector $ \mathit{\boldsymbol{{\lambda}}} $; once the shift is known, a simple singular value decomposition can be applied to reconstruct $ \mathit{\boldsymbol{{u}}} $ and $ \mathit{\boldsymbol{{v}}} $. We propose a greedy method to reconstruct $ \mathit{\boldsymbol{{\lambda}}} $. By formulating the problem in the Fourier domain, a shifted rank-$ 1 $ approximation can be calculated in $ O(NM\log M) $ operations. Convergence to a locally optimal solution is guaranteed. Furthermore, we give a heuristic strategy for the initial guess that shows good results in the numerical experiments.
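To see where the $O(NM\log M)$ cost comes from: for a fixed estimate of $\mathit{\boldsymbol{{u}}}$, the correlations of a data column with all $M$ circular shifts of $\mathit{\boldsymbol{{u}}}$ form a circular cross-correlation, which one FFT product evaluates in $O(M\log M)$; doing this for each of the $N$ columns gives the stated total. The sketch below illustrates this general principle only — it is not the paper's exact greedy update, and the name `best_shifts` is hypothetical.

```python
import numpy as np

def best_shifts(A, u):
    """For each column a_k of A, return the circular shift lam_k that
    maximizes Re <a_k, S_{lam_k} u>.  All M shift correlations of one
    column come from a single FFT product, so the cost is O(M log M)
    per column and O(N M log M) for the whole matrix."""
    U = np.fft.fft(u)
    lam = np.empty(A.shape[1], dtype=int)
    for k in range(A.shape[1]):
        # circular cross-correlation of column k with u over all shifts
        corr = np.fft.ifft(np.fft.fft(A[:, k]) * np.conj(U))
        lam[k] = int(np.argmax(corr.real))
    return lam

# a column that is u shifted by 2 should be assigned lam = 2
u = np.array([3.0, 1.0, 0.0, 0.0])
A = np.roll(u, 2).reshape(-1, 1)
lam = best_shifts(A, u)
```

A naive search over all $M$ shifts with explicit inner products would cost $O(M^2)$ per column; the FFT formulation is what makes the per-iteration cost quasi-linear in $M$.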

We validate our approach in several numerical experiments on different kinds of data and compare the technique to shift-invariant dictionary learning algorithms. Furthermore, we provide examples from applications, including object segmentation in non-destructive testing and seismic exploration as well as object tracking in video processing.

Citation: Florian Bossmann, Jianwei Ma. Enhanced image approximation using shifted rank-1 reconstruction. Inverse Problems and Imaging, 2020, 14 (2) : 267-290. doi: 10.3934/ipi.2020012
References:
[1]

P. Amestoy, C. Ashcraft, O. Boiteau, A. Buttari, J.-Y. L'Excellent and C. Weisbecker, Improving multifrontal methods by means of block low-rank representations, SIAM J. Sci. Comput., 37 (2015), A1451–A1474. doi: 10.1137/120903476.

[2]

S. Bhojanapalli, B. Neyshabur and N. Srebro, Global optimality of local search for low rank matrix recovery, Advances in Neural Information Processing Systems, 29 (2016), 3873-3881.

[3]

M. Brand, Fast low-rank modifications of the thin singular value decomposition, Linear Algebra Appl., 415 (2006), 20-30.  doi: 10.1016/j.laa.2005.07.021.

[4]

J.-F. Cai, E. J. Candès and Z. Shen, A singular value thresholding algorithm for matrix completion, SIAM J. Optim., 20 (2010), 1956-1982. doi: 10.1137/080738970.

[5]

E. J. Candès, X. Li, Y. Ma and J. Wright, Robust principal component analysis?, Journal of the ACM (JACM), 58 (2011), 11. doi: 10.1145/1970392.1970395.

[6]

M. ChuR. E. Funderlic and R. J. Plemmons, Structured low rank approximation, Linear Algebra Appl., 366 (2003), 157-172.  doi: 10.1016/S0024-3795(02)00505-0.

[7]

M. A. Davenport and J. Romberg, An overview of low-rank matrix recovery from incomplete observations, IEEE J. Selected Topics Signal Process., 10 (2016), 608-622.  doi: 10.1109/JSTSP.2016.2539100.

[8]

L. Fang, J. Xu, H. Hu, Y. Chen, P. Shi, L. Wang and H. Liu, Noninvasive imaging of epicardial and endocardial potentials with low rank and sparsity constraints, IEEE Trans. Biomedical Engineering. doi: 10.1109/TBME.2019.2894286.

[9]

A. Frieze, R. Kannan and S. Vempala, Fast Monte-Carlo algorithms for finding low-rank approximations, J. ACM, 51 (2004), 1025-1041. doi: 10.1145/1039488.1039494.

[10]

D. Goldfarb and Z. T. Qin, Robust low-rank tensor recovery: Models and algorithms, SIAM J. Matrix Anal. & Appl., 35 (2014), 225-253.  doi: 10.1137/130905010.

[11]

G. H. Golub and C. F. Van Loan, Matrix Computations, 4th edition, The Johns Hopkins University Press, Baltimore, 2013.

[12]

L. Grasedyck, D. Kressner and C. Tobler, A literature survey of low-rank tensor approximation techniques, GAMM-Mitt., 36 (2013), 53-78. doi: 10.1002/gamm.201310004.

[13]

D. Gross, Recovering low-rank matrices from few coefficients in any basis, IEEE Trans. Inform. Theory, 57 (2011), 1548-1566. doi: 10.1109/TIT.2011.2104999.

[14]

C. J. Hillar and L.-H. Lim, Most tensor problems are NP-hard, J. ACM, 60 (2013), 45. doi: 10.1145/2512329.

[15]

H. Ji, C. Liu, Z. Shen and Y. Xu, Robust video denoising using low rank matrix completion, IEEE Conf. on Computer Vision and Pattern Recognition, 1791–1798.

[16]

P. Jain, P. Netrapalli and S. Sanghavi, Low-rank matrix completion using alternating minimization, Proc. of the 45th ACM Symposium on Theory of Computing, 665–674. doi: 10.1145/2488608.2488693.

[17]

Y. Jia, S. Yu, L. Liu and J. Ma, Orthogonal rank-one matrix pursuit for 3D seismic data interpolation, Journal of Applied Geophysics, 132 (2016), 137-145.

[18]

P. Jost, P. Vandergheynst, S. Lesage and R. Gribonval, MoTIF: An efficient algorithm for learning translation invariant dictionaries, IEEE ICASSP 2006 Proceedings, 5. doi: 10.1109/ICASSP.2006.1661411.

[19]

E. Liberty, F. Woolfe, P. G. Martinsson, V. Rokhlin and M. Tygert, Randomized algorithms for the low-rank approximation of matrices, Proc. Nat. Acad. Sci. USA, 104 (2007), 20167-20172. doi: 10.1073/pnas.0709640104.

[20]

G. Liu, Z. Lin and Y. Yu, Robust subspace segmentation by low-rank representation, Proc. of the 27th International Conference on Machine Learning, ICML-10 (2010), 663-670.

[21]

X. Liu, G. Zhao, J. Yao and C. Qi, Background subtraction based on low-rank and structured sparse decomposition, IEEE Trans. Image Process., 24 (2015), 2502-2514. doi: 10.1109/TIP.2015.2419084.

[22]

C. Lu, J. Tang, S. Yan and Z. Lin, Generalized nonconvex nonsmooth low-rank minimization, IEEE Conf. on Computer Vision and Pattern Recognition, 4130–4137. doi: 10.1109/CVPR.2014.526.

[23]

J. Ma, Three-dimensional irregular seismic data reconstruction via low-rank matrix completion, Geophysics, 78 (2013), V181–V192. doi: 10.1190/geo2012-0465.1.

[24]

B. Mailhé, S. Lesage, R. Gribonval, F. Bimbot and P. Vandergheynst, Shift-invariant dictionary learning for sparse representations: extending K-SVD, 16th IEEE European Signal Processing Conf., 1–5.

[25]

R. Otazo, E. J. Candès and D. K. Sodickson, Low-rank plus sparse matrix decomposition for accelerated dynamic MRI with separation of background and dynamic components, Magnetic Resonance in Medicine, 73 (2015), 1125-1136. doi: 10.1002/mrm.25240.

[26]

S. Oymak, A. Jalali, M. Fazel, Y. C. Eldar and B. Hassibi, Simultaneously structured models with application to sparse and low-rank matrices, IEEE Trans. Inform. Theory, 61 (2015), 2886-2908. doi: 10.1109/TIT.2015.2401574.

[27]

H. Park, L. Zhang and J. B. Rosen, Low rank approximation of a Hankel matrix by structured total least norm, BIT, 39 (1999), 757-779. doi: 10.1023/A:1022347425533.

[28]

M. Rahmani and G. K. Atia, High dimensional low rank plus sparse matrix decomposition, IEEE Trans. Signal Process., 65 (2017), 2004-2019.  doi: 10.1109/TSP.2017.2649482.

[29]

B. Recht, M. Fazel and P. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev., 52 (2010), 471-501. doi: 10.1137/070697835.

[30]

M. D. Rodriguez, J. Ahmed and M. Shah, Action MACH: A spatio-temporal maximum average correlation height filter for action recognition, IEEE 2008 Conf. Computer Vision and Pattern Recognition, 1–8. doi: 10.1109/CVPR.2008.4587727.

[31]

C. Rusu, B. Dumitrescu and S. Tsaftaris, Explicit shift-invariant dictionary learning, IEEE Signal Process. Letters, 21 (2014), 6-9. doi: 10.1109/LSP.2013.2288788.

[32]

X. Shen and Y. Wu, A unified approach to salient object detection via low rank matrix recovery, IEEE Conf. on Computer Vision and Pattern Recognition, 853–860.

[33]

X. Shen and Y. Wu, A unified approach to salient object detection via low rank matrix recovery, IEEE Conf. on Computer Vision and Pattern Recognition, 853–860.

[34]

F. Shi, J. Cheng, L. Wang, P. T. Yap and D. Shen, LRTV: MR image super-resolution with low-rank and total variation regularizations, IEEE Trans. Medical Imaging, 34 (2015), 2459-2466. doi: 10.1109/TMI.2015.2437894.

[35]

P. J. Shin, P. E. Larson, M. A. Ohliger, M. Elad, J. M. Pauly, D. B. Vigneron and M. Lustig, Calibrationless parallel imaging reconstruction based on structured low-rank matrix completion, Magnetic Resonance in Medicine, 72 (2014), 959-970. doi: 10.1002/mrm.24997.

[36]

A. Sobral, T. Bouwmans and E. Zahzah, LRSLibrary: Low-rank and sparse tools for background modeling and subtraction in videos, in Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image and Video Processing, CRC Press, Taylor and Francis Group, 2015. doi: 10.1201/b20190-24.

[37]

K. Soomro and A. R. Zamir, Action recognition in realistic sports videos, in Computer Vision in Sports, Springer, 181–208.

[38]

P. Sprechmann, A. M. Bronstein and G. Sapiro, Learning efficient sparse and low rank models, IEEE Trans. Pattern Anal. and Machine Intell., 37 (2015), 1821-1833. doi: 10.1109/TPAMI.2015.2392779.

[39]

N. Srebro and T. Jaakkola, Weighted low-rank approximations, Proc. of the 20th International Conference on Machine Learning, ICML-03 (2003), 720-727. 

[40]

M. Tao and X. Yuan, Recovering low-rank and sparse components of matrices from incomplete and noisy observations, SIAM Journal on Optimization, 21 (2011), 57-81.  doi: 10.1137/100781894.

[41]

J. J. Thiagarajan, K. N. Ramamurthy and A. Spanias, Shift-invariant sparse representation of images using learned dictionaries, IEEE Workshop on Machine Learning for Signal Processing, 145–150.

[42]

I. Tosic and P. Frossard, Dictionary learning, IEEE Signal Process. Magazine, 28 (2011), 27-38. 

[43]

S. Tu, R. Boczar, M. Simchowitz, M. Soltanolkotabi and B. Recht, Low-rank solutions of linear matrix equations via Procrustes flow, Proc. of the 33rd International Conference on Machine Learning, ICML-48 (2016), 964-973.

[44]

M. Udell, C. Horn, R. Zadeh and S. Boyd, Generalized low rank models, Found. Trends Machine Learning, 9 (2016), 1-118.

[45]

A. E. Waters, A. C. Sankaranarayanan and R. Baraniuk, SpaRCS: Recovering low-rank and sparse matrices from compressive measurements, Advances in Neural Information Processing Systems, 1089–1097.

[46]

J. Wright, A. Ganesh, S. Rao, Y. Peng and Y. Ma, Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization, Advances in Neural Information Processing Systems, 2080–2088.

[47]

J. Ye, Generalized low rank approximations of matrices, Mach. Learn., 61 (2005), 167-191. 

[48]

X. Ye, J. Yang, X. Sun, K. Li, C. Hou and Y. Wang, Foreground-background separation from video clips via motion-assisted matrix restoration, IEEE Trans. Circuits and Systems for Video Technology, 25 (2015), 1721-1734.

[49]

C. Zhang, J. Liu, C. Liang, Z. Xue, J. Pang and Q. Huang, Image classification by non-negative sparse coding, correlation constrained low-rank and sparse decomposition, Computer Vision and Image Understanding, 123 (2014), 14-22.

[50]

T. Zhang, J. M. Pauly and I. R. Levesque, Accelerating parameter mapping with a locally low rank constraint, Magnetic Resonance in Medicine, 73 (2015), 655-661.

[51]

Z. Zhang and H. Liu, Nonlocal total variation based dynamic PET image reconstruction with low-rank constraints, Physica Scripta, 94 (2019), 065202.

[52]

G. Zheng, Y. Yang and J. Carbonell, Efficient shift-invariant dictionary learning, Proc. 22nd ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining, 2095–2104.

[53]

G. Zhou, A. Cichocki and S. Xie, Fast nonnegative matrix/tensor factorization based on low-rank approximation, IEEE Trans. Signal Process., 60 (2012), 2928-2940. doi: 10.1109/TSP.2012.2190410.

[54]

T. Zhou and D. Tao, GoDec: Randomized low-rank & sparse matrix decomposition in noisy case, Proc. of the 28th International Conference on Machine Learning, 33–40.


Figure 1.  Input data: Cartoon-like, natural, ultrasonic and seismic images
Figure 2.  (a) Singular value ratio to $ \||\hat{\mathit{\boldsymbol{{A}}}}|\|_2 $ after the individual steps of Algorithm 3. (b) Average approximation error over all kinds of input data
Figure 3.  Reconstruction of Lena image using 1, 5 and 10 shifted rank-$ 1 $ matrices
Figure 4.  Approximation error of all algorithms for different kinds of input data plotted against the storage costs
Figure 5.  Sparse approximation of image "phantom" using Wavelets (left), SR1 (middle) and UC-DLA (right)
Figure 6.  (a) Average runtime of data approximation against the number of rows. (b) Average runtime of matrix vector multiplication using different number of shifted rank-$ 1 $ matrices
Figure 7.  Separation of an ultrasonic image in two signals (top and bottom) using SR1 (left), MoTIF (middle) and UC-DLA (right)
Figure 8.  Identified earth layer reflection in noisy seismic image
Figure 9.  Tracked route in original video (top) and reconstructed singular vectors $ \mathit{\boldsymbol{{u}}}^1 $, $ \mathit{\boldsymbol{{u}}}^2 $ (middle); as comparison the reconstructed background and person using MAMR is shown (bottom)
Figure 10.  First and last frame of the soccer video clip
Figure 11.  Tracked objects in soccer clip (top): advertising banners, referee and time stamp. As comparison the reconstructed background and foreground using MAMR is shown (bottom)
Table 1.  Mean number of iterations for different kinds of input data
data global local total
orthogonal 139 21 160
natural 137 24 161
cartoon 91 15 106
seismic 95 16 111
ultrasound 95 18 113